Building your own Command Line Interface (CLI)

In my new position as an Architect at Centene working on the Unified Member View (UMV) team, I came into a situation where the team has multiple console applications that do similar things but no common way of executing them, or even of knowing what to pass into them other than asking someone on the team or digging headlong into the code to discover which values are valid, required or optional. I spent some time googling with Bing and stumbled upon a solution called CommandLine. They have two versions of the library available via NuGet: the original one and the new one, which is the one I am leveraging and will be talking about here. It is still in beta but works the way I would want a command line parser to work (much the same way the DotNet and Git CLIs work). Below are some tips and tricks you can take from my experience in developing a CLI.

Naming your Executable

When I created the console application I named it CLI, so the name of my executable was CLI.exe. This was not what I wanted, nor did it really make sense as the entry point into our CLI. The one thing that I did like about the name CLI was how short it was, just like Git. So the first thing I did was rename the "Assembly Name" in the application properties to something more meaningful, and since this was for the UMV team we chose umv.exe, though I kept the default namespace consistent with the entire solution. Quick, simple, easy and to the point.


Choosing the right Verbs

As you read the documentation for Command Line you see that there are many different ways to parse the command line, and for our use case, since we have lots of different entry points, we chose to leverage the Verbs option and create a custom class to parse and map to a specific handler for each option. One of the things we do is create vendor extract files, so again naming these verbs became important. For our vendor extracts we chose to name the verb "extract", so typing at the command line became fluid: "umv extract". Setting this up was simple:


[Verb("extract", HelpText = "Create a vendor extract file")]
internal class VendorExtractOptions
{
    // More to come on the properties
}

Options, Options, Options

The last thing you will need to set up are the options for your verb. The nice thing about Command Line is that it gives you the choice of a single character as the switch or a string for the name. For our example we have extracts for many vendors, so we chose 'v' and "vendor" as our possible options. Our command line then looks like this: "umv extract -v VendorName". This new CLI ended up being used for all our vendor extracts and became the contract we used in our scheduling application, Cisco Tidal, allowing us to change how we do our vendor extracts without having to change the calling application. It became the interface for extracts.

internal class VendorExtractOptions
{
    [Option('v', "vendor", HelpText = "The name of the vendor extract to create", Required = true)]
    public string Vendor { get; set; }

    [Option('a', "audit", HelpText = "Indicate if this is to create an audit version of the file")]
    public bool Audit { get; set; }
}
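With the verb and options classes in place, wiring them up in Main is only a few lines. Here is a sketch assuming the beta package's generic ParseArguments/MapResult API; RunExtract is a hypothetical handler, not part of the library:

```csharp
using System;
using CommandLine; // the new (beta) CommandLine NuGet package

internal class Program
{
    private static int Main(string[] args)
    {
        // Map each verb class to its handler; on parse errors the
        // library prints help text and we return a non-zero exit code.
        return Parser.Default.ParseArguments<VendorExtractOptions>(args)
            .MapResult(
                (VendorExtractOptions opts) => RunExtract(opts),
                errs => 1);
    }

    // Hypothetical handler for the "extract" verb.
    private static int RunExtract(VendorExtractOptions opts)
    {
        Console.WriteLine($"Creating extract for {opts.Vendor} (audit: {opts.Audit})");
        return 0;
    }
}
```

Additional verbs just become additional type parameters to ParseArguments and additional lambdas in MapResult.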

I highly recommend that if you are needing to build a CLI or even if you want to just parse your command line arguments in a consistent way you investigate using Command Line for your implementations.

The Next Chapter of my Career

For the past 5 years I have been working at Swank Motion Pictures, and during that time I have developed great friendships with some very talented co-workers. I have grown with each challenge given to me and was able to develop software that I am very proud of, including Swank HealthCare's LMS. I came in right when the LMS was in its first month of development and was able to guide a small team of developers as we created our first SaaS application. The LMS was a multi-tenant application that leveraged sub-domains for per-site implementations. It was completely hosted in Windows Azure using SQL Azure and Web Roles. It is by far my favorite application to develop and deliver to date. I have now left Swank and headed out for new opportunities at Centene as a Microsoft Architect. I will try to post more often, but that depends on time.

Threading Helper

Over the years I have made the applications I worked on multi-threaded to help increase performance. Back in the early days of .NET I worked with a very talented developer and we came up with a producer-consumer queue thread implementation that really helped manage workload. It allowed us to add work to the queue, remove work from the queue using a set number of threads, and process the work in a somewhat reusable but not really generic manner. Since then Microsoft has introduced Parallel, Task and other multi-threading models that let us developers write multi-threaded code more easily, but I missed my days of being able to really control the throughput of the applications I was working on. There were also some developers asking me to single-thread some of the multi-threaded parts of the application for debugging purposes. So with the help of Google'ing on Bing I found a really interesting implementation of the TaskScheduler class called the LimitedConcurrencyLevelTaskScheduler. This allowed me to do what I wanted to do but, well, this was my first implementation and my code may be difficult to read...

// The number of threads and the delay between calls change depending on what time of day it is
var config = LimitedConcurrencyLevelTaskSchedulerConfigurationHelper.GetConfiguration();
var limitedConcurrencyLevelTaskScheduler = new LimitedConcurrencyLevelTaskScheduler(config.NumberOfThreads, config.DelayBetweenCalls);
var taskFactory = new TaskFactory(limitedConcurrencyLevelTaskScheduler);

// someListOfData stands in for whatever collection is being processed
Task.WaitAll(someListOfData.Select(item => taskFactory.StartNew(() =>
{
    // Doing my work here
})).ToArray());

I had that sprinkled throughout my code and I was getting tired of having to see all that code, so like all good developers I encapsulated that behavior and came up with the following

TaskManager.Process(someListOfData, x =>
{ /* Do something with an item from the list */ });

That simple implementation would run an action for each item in the list. If I wanted to single-thread that work, all I have to do is add an additional parameter:

TaskManager.Process(someListOfData, x =>
{ /* Do something with an item from the list */ }, ThreadingOption.Single);

If I wanted to throttle how many threads run at once, all I have to do is:

TaskManager.Process(someListOfData, x =>
{ /* Do something with an item from the list */ }, ThreadingOption.Limited, 5);

This made my code cleaner and also allowed me to quickly single-thread a task if I needed to for testing. It is a work in progress; I have placed some example code on GitHub and am looking for feedback.
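My actual implementation is on GitHub, but as a rough sketch of the shape of the API, built here on Parallel.ForEach rather than the LimitedConcurrencyLevelTaskScheduler, and with ThreadingOption.Multi as an assumed default, it could look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public enum ThreadingOption { Multi, Single, Limited }

public static class TaskManager
{
    // Runs an action for each item in the list, honoring the requested threading option.
    public static void Process<T>(IEnumerable<T> items, Action<T> action,
        ThreadingOption option = ThreadingOption.Multi, int maxThreads = 4)
    {
        switch (option)
        {
            case ThreadingOption.Single:
                // One thread: easy to step through in a debugger.
                foreach (var item in items) action(item);
                break;
            case ThreadingOption.Limited:
                // Throttle to at most maxThreads concurrent items.
                Parallel.ForEach(items,
                    new ParallelOptions { MaxDegreeOfParallelism = maxThreads },
                    action);
                break;
            default:
                // Let the runtime decide the degree of parallelism.
                Parallel.ForEach(items, action);
                break;
        }
    }
}
```

The nice part of this shape is that the threading decision is a single argument at the call site, so switching a block to single-threaded for debugging is a one-word change.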

SQL Data Versioning

Over the years I have had various data versioning schemes. The one that I like best is placing triggers on tables for updates and deletes, taking snapshots of the modified data and placing it into another audit database. When we went to Azure this would not work, because you cannot have linked resources in Azure. This meant that we would have to place the audit data in the same database as the live data, and our DBA did not like that, and neither did I really. So we leveraged Entity Framework to perform change tracking, capture the data on a per-property (column) basis and pump that data to our audit database. This works great, but getting the data back out in a manner that makes sense is less than desirable; nonetheless it works.

Now for the great news: temporal tables in SQL Server 2016 and Azure SQL.

We have had our applications database in SQL AZURE for the past 4 years and over those 4 years the growth of new features has been amazing. When I read about this I was super excited. I no longer had to do this type of custom work myself, I could get rid of the Entity Framework audit implementation and go with something out of the box and it would deliver better audit information.

I then started reading about what I had to do to set this up and, wow, was it simple. Here is an example of how simple it is to set this up on a table. 

First I created a new SCHEMA called History
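In T-SQL that is a one-liner (the schema name matches the history queries below):

```sql
CREATE SCHEMA History;
```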


Then I needed to alter a table to allow for the temporal table

ALTER TABLE dbo.Branch ADD
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN
        CONSTRAINT p_ValidFromConstraint DEFAULT '2016.03.24',
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN
        CONSTRAINT p_ValidToConstraint DEFAULT CONVERT(DATETIME2, '9999.12.31 23:59:59'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

ALTER TABLE dbo.Branch
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = History.Branch));

I could then perform some DML, updating and deleting data in that table 

UPDATE dbo.Branch SET Name = 'fasdfas' WHERE BranchId = 2
DELETE FROM dbo.Branch WHERE BranchId = 2

I could then query the data and the history data

SELECT * FROM dbo.Branch
SELECT * FROM History.Branch
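Temporal tables also let you ask what a row looked like at a point in time with the FOR SYSTEM_TIME clause (the timestamp here is illustrative):

```sql
SELECT * FROM dbo.Branch
    FOR SYSTEM_TIME AS OF '2016-03-25'
WHERE BranchId = 2;
```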

In SQL Server Management Studio 2016 you get a nice display of the history data.

So you then keep reading and you come to find out that temporal tables do not support CASCADE DELETES!?!?!

Then you read the limitations page, and well I guess they are working on it.

With that said we decided to hold off on using this and we will keep monitoring the progress of this feature and we hope to be able to use this in the near future. 



Moving from Blogger to BlogEngine.Net

Well, I started my blog back up using Google's Blogger and found myself not digging it all that much, so I went ahead and got my domain back up and running and will start using this location for my blogging needs.   

I have been thinking about what I want to be working on and have found the following items of interest. Look forward (that is really only for me right now) to some posts on the following things I have found interesting....

  • SQL Data Versioning
  • Entity Framework Core
  • Unit testing your code.

Enums and conditional logic

I have learned many a way to perform conditional logic using enums, and when one of my junior developers asked me this past week "why were you doing this?" I decided to go ahead and document what I was doing to help better explain myself.

The first thing is the setup of the enum. Take the following simple setup

// Enum with Flags attribute
[Flags]
public enum SimpleExample
{
    None = 0,
    One = 1,
    Two = 1 << 1,
    Three = 1 << 2,
    Four = 1 << 3,
    All = One | Two | Three | Four
}

// Logic statement
var flag = SimpleExample.One;
if ((flag & SimpleExample.One) != 0)
{
    // Do something when the One bit is set
}

// Command line passing the enum (more on this under "Ones and Zeros" below)

Nothing too outlandish here, but the developer asked the following questions:
  • What is the "Flags" attribute?
  • What is with the << (bit shifting) behind Two, Three and Four?
  • What is with All = One | Two | Three | Four?
  • What is with the if statement?
  • What is with the 1's and 0's we are passing into the command line?
He did google these on Bing before coming to me, but most of the results only contained part of the answer and I wanted to help him out and put the answers to all his questions in one spot. So below are the answers to those questions.

Flags Attribute

Simply put, it marks the enumeration as a bit field, which in turn allows us to combine the values of the enum, giving us more flexibility for the underlying value of the enum instance. Read what Microsoft has to say about this here.

Bit Shifting

We then need to set the value of each enum member, and we could do this many ways. We could just assign an integer value to each one. We do this for "None" and "One", and we could continue, picking 2, 4, 8, 16, 32, 64, 128, etc. for each member, but we could also use some bit shifting to get us to those numbers, i.e. 1 << 1. What?!? What is that notation? This is really simple once you get your head around it. What we want to do is create a binary representation of the value 2. This is easy, it is "10", not ten but one-zero. We attain that by moving the value one over one space to the left using the bit shift left operator <<, giving us an integer value in memory of two. We can then do the same thing again, performing 1 << 2, giving us "100", not one hundred but one-zero-zero, which is the binary representation of the integer value 4, and so on. Just a simple trick to get your results.

Bitwise | (OR) and & (AND) Operations

Again we are just using what is available to us in the framework. Like most languages, C# has the bitwise operators | (OR) and & (AND), which allow us to easily combine and check values using simple binary operations. So if you took the binary value for one, "1", and performed the OR operation on it with the binary value of two, "10", you would get the binary value "11", which is three. Then you can check whether a value is set using the & operation. Here is a simple example.


// A flag has both the One and Two Flag set to it
var flag = SimpleExample.One | SimpleExample.Two;

// We would see here that the integer value of our flag is 3

Now if you want to see if the flag value contains the Two flag, we use the & operator: combine the flag with the Two enum and, if the resulting value is not 0, the flag has the Two bit location set. I know you could also use the "HasFlag" method, but if you do a simple performance check you will see that it takes about 10 times longer evaluating with "HasFlag" vs the simple binary & operation.

if ((flag & SimpleExample.Two) != 0)

Ones and Zeros

The last question had to do with sending a console application a parameter of ones and zeros. Simply put, this was just the binary representation of the OR'd values of the enum that we would use on startup.

// Again a flag has both the One and Two Flag set to it
var flag = SimpleExample.One | SimpleExample.Two;

// The string value of those ones and zeros
var binaryFlagString = (Convert.ToString((int)flag, 2));

// Would give you “11”, again not eleven but one - one which is really 3.

Of course I could have just passed in an int value, but it was easier for me to cross-reference a list of enums to their locations in a representation like "0110110000111"….. I hope you get my point. I would know that the 4th location from the right being set meant that the application had Four selected. It was easier for me to read that value than some integer value.
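Going the other way, the console application can turn that binary string back into the enum at startup. A minimal sketch, reusing the SimpleExample enum from above (the FlagParser helper name is mine, purely for illustration):

```csharp
using System;

[Flags]
public enum SimpleExample
{
    None = 0,
    One = 1,
    Two = 1 << 1,
    Three = 1 << 2,
    Four = 1 << 3,
    All = One | Two | Three | Four
}

public static class FlagParser
{
    // Parses a command line argument like "11" (binary) back into the enum.
    // Convert.ToInt32 with base 2 does the heavy lifting.
    public static SimpleExample Parse(string binaryFlagString)
    {
        return (SimpleExample)Convert.ToInt32(binaryFlagString, 2);
    }
}
```

So FlagParser.Parse("11") gives you SimpleExample.One | SimpleExample.Two, the same flag value we built with the | operator earlier.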

At the end of the day we all have our tips and tricks, and I hope this helps someone out there in the ether of the internet.

Performance Issues, Documentation and solutions

I have been using Fluent Validation (FV) for one of my websites since 2010. It allowed me to encapsulate all my validation logic into simple and concise classes for all my business and data rules. I started to experience weird performance issues on my production website that is using FV and I was not sure what the culprit was, so I installed JetBrains dotTrace and, much to my surprise, there were a huge number of calls to "RuleFor", which is the method you call to create the rules inside your derived class of AbstractValidator. Since my application is a web application leveraging the Ninject IoC container, I was creating new instances of the validators for every request. Some of the calls would save multiple object graphs, in turn creating multiple validators for that request. It wasn't until I doubled my number of clients that I noticed this performance issue. In just thinking about it logically I knew that I had to implement some kind of singleton pattern, but there was no really clear documentation for this solution. It was only after googling around that I found an old IoC example, posted back in 2010, that was using StructureMap with Fluent Validation, and I noticed those examples were all leveraging the singleton pattern. It took me some digging around but I found what I needed. I have since filed an issue with the owner of Fluent Validation asking for better documentation, and we will see where that goes, but the moral to this story is.....
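In Ninject terms, the fix boils down to binding each validator in singleton scope so the RuleFor calls in its constructor run only once, not once per request. A sketch of that registration (the Customer model and validator here are illustrative stand-ins, not my actual classes):

```csharp
using FluentValidation; // AbstractValidator<T>, IValidator<T>
using Ninject;          // StandardKernel, Bind, InSingletonScope

// Illustrative model and validator; the RuleFor calls in the
// constructor are the expensive part we only want to run once.
public class Customer
{
    public string Name { get; set; }
}

public class CustomerValidator : AbstractValidator<Customer>
{
    public CustomerValidator()
    {
        RuleFor(c => c.Name).NotEmpty();
    }
}

public static class ValidationModule
{
    public static void Register(IKernel kernel)
    {
        // One validator instance for the lifetime of the application,
        // instead of one per request (Ninject's default transient scope).
        kernel.Bind<IValidator<Customer>>()
              .To<CustomerValidator>()
              .InSingletonScope();
    }
}
```

Since validators hold no per-request state, sharing one instance across requests is safe and makes the RuleFor cost a one-time startup cost.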

Proactively load test your application. I have load tested many an application over my career for scalability, and much to my surprise it was not until I had issues in my own application that I started to load test it. Just running unit tests is not enough. This goes for all applications you develop, not just the ones you get paid to develop.

Taking the next step

Over the years I have blogged in various locations but have not been diligent about maintaining them or posting relevant information. That stops today! I want to do more and help others out with what I have learned and what I learn on a daily basis. So what does that mean? For today it means "How do you better yourself using other people's experiences?"

Follow leaders in your field. For me, I like to follow people like Scott Hanselman and Jon Gallant and their blogs. I listen to podcasts from Leo Laporte and Paul Thurrott (but that has ended). I have also been following live streams over the past year and even using Channel 9. But these are just some of the various people and media I follow. I love using Twitter to get my daily news feed. I find people that are leaders in my field and see what they do, who they follow, and how they relate to me and what I do. If I find what they do and say valuable I continue to follow them; otherwise I stop, and maybe I will check back on their site/podcast sometime in the future.

Find someone to mentor you, someone that will push and challenge you. I like to think that if you are comfortable then you are not pushing yourself. You should be constantly pushing yourself, learning, growing, gaining new experiences. I like to use sites like CodeKata and Project Euler to help sharpen my thinking and to think about things that maybe my job does not require me to think about. Whatever it is, find someone or something to push you to better yourself. This can apply to every aspect of your life.

Lastly, just take time to think. I do some of my best thinking when I step away, when I am able to clear my head, take a moment to myself and focus my energy. Don't be afraid to get up and walk around and think. You will be amazed at what this can do for you.

With all that said I am planning out what my next posts will be for the next few months and look forward to putting down in this blog some helpful information for those that come and read it.