Just A Programmer
We're just programmers

9 Apr 2014

The case for open sourcing the SQL Saturday Website

My name is Justin Dearing. I write software for a living. I also write software for free as a hobby and for personal development. When I’m not writing code, I speak at user groups, events and conferences about code and code-related topics. One such event is SQL Saturday. I haven’t spoken in a while because I became a dad in June. However, my daughter is 9 months old now and the weather is warm, so I feel comfortable attending a regional SQL Saturday or two.

So last night I submitted to SQL Saturday Philadelphia. The submission process (I mean the mechanical process of using the website to submit my abstract) was annoying, as usual. What really got me going, though, was when I realized two things:

  • My newlines were not preserved, so the asterisks that were supposed to mark my bullet points no longer appeared at the beginning of lines.
  • I could not edit my submission once submitted.

I like bullet points, a lot. However, I digress. In response to my anger, I complained on Twitter that the site should be open sourced, so that I, the end user, could create a better experience for myself and my fellow SQL Saturday speakers.

I got three retweets. At least I wasn’t completely alone in my sentiment. I complained again in the morning, started a conversation, and eventually Tim sent this out:

So the site was being rewritten, but it would not be open sourced.

Should I have been happy at that point, or at least patiently awaited the changes? One could presume that session editing and submission would be improved. At the very least, things would get progressively better as the code was revised. If the federal government could pull off the ObamaCare site, with some hiccups, why can’t a group of DBAs launch a much smaller website with much simpler requirements and lower load?

I’d be willing to bet they will. I’d be willing to bet that this site will suck a lot less than the old site, and that it will continue to improve. I’m sure smart people are working on it, and a passionate BoD is guiding the process. At the very least, I’ll withhold judgement until the new site is live.

Despite my confidence in the skills of the unknown (to me) parties working on the site, there are only so many hours in the day and only so many things a team of finite size can do. However, a sizable minority of PASS’s membership are .NET developers. Many of them speak at SQL Saturdays. They have to submit to the site. Some of them will no doubt be annoyed by some aspect of the site. Some of them might fix that annoyance, or scratch their itch in OSS parlance, if the site were open source and there were a process to accept pull requests.

I’m not describing a hypothetical nirvana. I’ve seen the process I describe work, because I’ve submitted a lot of patches to a lot of OSS projects. I’ve submitted a patch to sp_blitz (not actually open source, as Brent will be the first to state) and Brent accepted it. I’ve contributed to NancyFX. I once contributed a small patch to PHP to make it consume WCF services better. I’ve contributed to several other OSS projects as well.

Perhaps you’re saying SQL Server is a Microsoft product, not some hippie Linux thing. Perhaps you share the same sentiment as Noel McKinney:

However, as I pointed out to Noel, the mothership’s (i.e., Microsoft’s; editor’s note: Noel has stated to me he meant Microsoft) beliefs are not anti-OSS. Microsoft has fully embraced open source. You can become an MVP purely for OSS work, without any speaking or forum contributions. One of the authors of NancyFX is an example of such a recipient. F#, ASP.NET and Entity Framework are all open source. Just this week Microsoft open sourced Roslyn. As a matter of fact, I’ve even submitted a patch to the NuGet Gallery website, which is operated by Microsoft and owned by the Outercurve Foundation. The patch was accepted, and my code, along with the code of others, was pushed to nuget.org. So I’ve already submitted source code for a website owned and operated by an independent organization set up by Microsoft, they’ve already accepted it, and the world seems a slightly better place as a result.

So I ask the PASS BoD to consider releasing the SQL Saturday website source code on GitHub, and I ask the members of PASS to ask their BoD to release the source code as well.

16 Feb 2014

Creating a minimally viable CentOS OpenLogic rapache instance

Recently I’ve been dealing with R and rapache at work. R is a language for statisticians. rapache is an Apache module for executing R scripts in Apache; it’s like mod_perl or mod_php for R. I’ve been writing simple RESTful scripts that return graphics and JSON, and calling them from static HTML pages. I’ve also been using my MSDN Azure subscription for R self-study at home. In the spirit of my last post, I’ve posted the setup notes here to get you started with a new Azure VM for running an rapache instance. Azure uses a special cloud-enabled version of CentOS 6.3 called OpenLogic. However, it seems to work similarly to the vanilla CentOS 6.4 instances I’ve used at work, so everything should apply there. If something doesn’t work, leave a comment.

  • First, CentOS is very conservative, but Fedora maintains EPEL to give you a more modern set of RPMs.
    • rpm -Uvh http://epel.mirror.freedomvoice.com/6/i386/epel-release-6-8.noarch.rpm
  • Now let's install the packages we need. The kernel will be updated, so we will need to reboot.
    • yum update -y
    • yum install -y vim-X11 vim-enhanced xauth R terminator xterm rxvt httpd git httpd-devel gcc cairo cairo-devel libXt-devel
    • yum groupinstall -y fonts
    • ldconfig
    • shutdown -r now
  • Now, as a regular user, let's compile rapache.
    • mkdir ~/src
    • cd ~/src
    • git clone https://github.com/jeffreyhorner/rapache.git
    • cd rapache
    • ./configure && make && sudo make install
  • Now let's configure rapache. Create a file called /etc/httpd/conf.d/rapache.conf with the following:
# rapache configuration by Justin Dearing <zippy1981@gmail.com>
LoadModule R_module modules/mod_R.so
<Location /RApacheInfo>
 SetHandler r-info
</Location>
AddHandler r-script .R
RHandler sys.source
  • Now restart apache with service httpd restart. Make sure it's working by running
    elinks http://localhost/RApacheInfo (yum install -y elinks if you don't have elinks).

Azure doesn’t configure swap space by default. You are absolutely going to need some swap space if you’re using an extra small instance. A good howto for that is here.

10 Nov 2013

Split testing using nginx proxy cache

My company recently discovered the joys of using nginx as a reverse proxy cache server. This allowed us to significantly reduce the load on our application servers. Of course, as soon as we got this setup working nicely, a request for A/B testing came down the pipeline.

There are some obstacles to conducting A/B testing while using nginx as a reverse proxy cache server.

Obstacle 1: Lack of "sticky" sessions in the free nginx product. While there is support for session affinity as part of the nginx commercial subscription, that product didn’t suit our needs. Without sticky sessions, each page load would potentially go to a different upstream server. This would render many tests unusable and would make the site feel disjointed.

Obstacle 2: Since pages are being cached by nginx, all requests received the same cached response. This meant we couldn’t serve different versions of the same page to different test groups.

Obstacle 3: To keep code complexity down, we didn’t want to have to modify our application to be aware of other tests we were performing.

We were able to overcome these obstacles using only the default modules that were part of nginx 1.4.x.

The following are snippets of our server config. The file exists entirely in the nginx http context. I won’t go into the configuration of nginx outside of this file as that information is readily available elsewhere. I’m going to jump around a bit to ease in explanation. The file will be shown in its entirety at the bottom.

upstream upstreamServerA {
    server upstreamServerA.net;
}

upstream upstreamServerB {
    server upstreamServerB.net;
}

The first thing to do is define our upstream server groups. In this setup we have defined two server groups (upstreamServerA and upstreamServerB), each with a single server. Each upstream server group represents a version of the site we are testing. We could increase the number of tests by adding more upstream server groups. The server definition is shown with a standard .net domain name for ease of reading; this should be the IP address or location of your application server.

split_clients "seedString${remote_addr}${http_user_agent}${date_gmt}" $upstream_variant {
    50%               upstreamServerA;
    50%               upstreamServerB;
}

Here we make use of one of nginx’s default modules, ngx_http_split_clients_module. The idea is to set up the split percentages for our tests. What’s actually happening is that nginx creates a string composed of the seed string "seedString" concatenated with the client IP address, the client’s user agent, and the current time. Nginx then hashes this string into a number. The lower 50% of the number range gets assigned upstreamServerA and the upper 50% gets assigned upstreamServerB. The result is saved in the $upstream_variant variable. This value is only used for each client’s first request.

map $cookie_sticky_upstream $upstream_group {
    default             $upstream_variant;
    upstreamServerA     upstreamServerA;
    upstreamServerB     upstreamServerB;
}

With this segment we check for the presence of a cookie named "sticky_upstream" in the client request. The goal is to set the variable named $upstream_group based on this cookie. If the value of the cookie is "upstreamServerA" we set $upstream_group to "upstreamServerA", and we do the same for "upstreamServerB". If the value of the cookie is neither of these, or if the cookie is not present, we use the value of the $upstream_variant variable defined in the previous segment.

Now we can define our server context.

server {
    listen       80;
    server_name  upstreamServer.com;

    location / {
        #Snipped for brevity
    }

    location /admin {
        #Snipped for brevity
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

We are defining two locations here: "/" and "/admin". We treat "/admin" differently because we want all admin requests to go to a single upstream server. This may not be needed in all setups, but I thought I’d show how to accomplish it.

The first thing we want to do in the "location /" context is to set the "sticky_upstream" cookie.

add_header Set-Cookie "sticky_upstream=$upstream_group;Path=/;";

This will make all subsequent requests from the client "stick" to the same upstream server group.

proxy_pass http://$upstream_group;

Now we tell nginx to use the value of the $upstream_group variable as the upstream server group.

proxy_cache_key "$scheme$host$request_uri$upstream_group";

This allows us to cache responses based on the $scheme, $host, $request_uri and (the important bit for this post) the $upstream_group, so that we have a separate cache for each test.

As I discussed briefly, what if we want to send all admin interactions to a single upstream server group? Let’s look at the "location /admin" context:

set $upstream_admin upstreamServerB;
add_header Set-Cookie "sticky_upstream=$upstream_admin;Path=/;";

proxy_pass http://$upstream_admin;

We define the variable $upstream_admin and set it to "upstreamServerB", then set the client’s "sticky_upstream" cookie equal to it. The final bit is to tell nginx to use the value of $upstream_admin as the upstream server.

The file in its entirety can be found below:

upstream upstreamServerA {
    server upstreamServerA.net;
}

upstream upstreamServerB {
    server upstreamServerB.net;
}

#split clients by the following percentages
#  according to remote IP, user agent, and date
split_clients "seedString${remote_addr}${http_user_agent}${date_gmt}" $upstream_variant {
    50%               upstreamServerA;
    50%               upstreamServerB;
}

#override if "sticky_upstream" cookie is present in request
#  this assures client sessions are "sticky"
#  this also allows us to manually set an upstream with a cookie
map $cookie_sticky_upstream $upstream_group {
    default             $upstream_variant;    #no cookie present, use result of split_clients
    upstreamServerA     upstreamServerA;      #use cookie value
    upstreamServerB     upstreamServerB;      #use cookie value
}

server {
    listen       80;
    server_name  upstreamServer.com;

    location / {
        #Set the client cookie so they always get the same upstream server
        #  during this session
        add_header Set-Cookie "sticky_upstream=$upstream_group;Path=/;";

        #Set the upstream server group as defined in the above map
        proxy_pass http://$upstream_group;
        proxy_redirect          off;

        # Cache
        proxy_cache one; #use the "one" cache
        proxy_cache_valid  200 302  60m;
        proxy_cache_valid  404      1m;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;

        # Don't cache if our_auth cookie is present
        proxy_no_cache $cookie_our_auth;
        proxy_cache_bypass $cookie_our_auth;

        proxy_set_header        X-Real-IP       $remote_addr;
        proxy_set_header        Host            $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;

        #Set cache key based on scheme, host, request uri and upstream group
        proxy_cache_key "$scheme$host$request_uri$upstream_group";
    }

    location /admin {
        set $upstream_admin upstreamServerB;
        add_header Set-Cookie "sticky_upstream=$upstream_admin;Path=/;";

        proxy_pass http://$upstream_admin;
        proxy_redirect  off;

        proxy_set_header        X-Real-IP       $remote_addr;
        proxy_set_header        Host            $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
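To sanity check the split and the stickiness from a client machine, you can make two requests in the same web session and inspect the sticky_upstream cookie. Here is a rough PowerShell sketch (the hostname is the placeholder from the config above; point it at your own server):

$session = New-Object Microsoft.PowerShell.Commands.WebRequestSession
# First request: nginx picks an upstream group via split_clients and sets the cookie
$null = Invoke-WebRequest -Uri 'http://upstreamServer.com/' -WebSession $session
$cookie = $session.Cookies.GetCookies('http://upstreamServer.com/')['sticky_upstream']
"Assigned upstream group: $($cookie.Value)"
# Second request: the cookie pins us to the same group; X-Cache-Status shows
# whether that group's cache served the response
$second = Invoke-WebRequest -Uri 'http://upstreamServer.com/' -WebSession $session
"X-Cache-Status: $($second.Headers['X-Cache-Status'])"

If the cookie value stays the same across requests and X-Cache-Status goes from MISS to HIT, the sticky split is behaving as intended.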

19 May 2013

Creating a minimally viable CentOS instance for SSH X11 Forwarding

I recently needed to set up a CentOS 6.4 VM for Java development. I wanted to be able to run Eclipse STS on said VM and display the X11 windows remotely on my Windows 7 desktop via XMing. I saw no reason for the CentOS VM to have a local X11 server, and I’m quite comfortable with the Linux command line. So I decided to share briefly how to go from a CentOS minimal install to something actually useful for getting work done.

  • /usr/bin/man The minimal install installs man pages, but not the man command. This is an odd choice. yum install man will fix that.
  • vim There is a bare-bones install of vim included by default that is only accessible via vi. If you want a more robust version of vim, yum install vim.
  • X11 forwarding You need the xauth package and fonts. yum install xauth will allow X11 forwarding to work. yum groupinstall fonts will install a set of fonts.
  • A terminal For absolute minimal viability, yum install xterm will give you a terminal. I prefer terminator, which is available through rpmforge.
  • RPMforge (now RepoForge) CentOS is based on Red Hat Enterprise Linux. Therefore it focuses on being a good production server, not a developer environment. You will probably need RPMforge to get some of the packages you want. The directions for adding RPMforge to your yum repositories are here.
  • terminator This is my terminal emulator of choice. Once you've added RPMforge, yum install terminator.
  • gcc, glibc, etc. Honestly, you can usually live without these if you stick to precompiled RPMs and you’re not using gcc for development. If you need to build a kernel module, yum install kernel-devel gcc make should get you what you need.

From here, you can install the stuff you need for your development environment for your language, framework, and scm of choice.

10 May 2013

When your PowerShell cmdlet doesn’t return anything, use -PassThru

The other day I was mounting an ISO in Windows 8 via the Mount-DiskImage command. Since I was mounting the disk image in a script, I needed to know the drive letter it was mounted to so the script could access the files contained within. However, Mount-DiskImage was not returning anything. I didn’t want to go through the hack of listing drives before and after I mounted the disk image, or explicitly assigning the drive letter. Both would leave me open to race conditions if another drive was mounted by another process while my script ran. I was at a loss for what to do.

Then, I remembered the -PassThru parameter, which I am quite fond of using with Add-Type. See, certain cmdlets, like Mount-DiskImage and Add-Type, don’t return pipeline output by default. For Add-Type, this makes sense. You rarely want to see a list of the types you just added, unless you’re exploring the classes in a DLL from the command line. However, for Mount-DiskImage, defaulting to no output was a questionable decision IMHO.

Now in the case of Mount-DiskImage, -PassThru doesn’t return the drive letter. However, it does return an object that you can pipe to Get-Volume, which does return an object with a DriveLetter property. To figure that out, I had to ask on Stack Overflow.
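For example, here is the whole round trip as a short sketch (the ISO path is just a placeholder):

# Mount an ISO and grab the drive letter in one pipeline.
$iso = 'C:\ISOs\MyImage.iso'
$letter = (Mount-DiskImage -ImagePath $iso -PassThru | Get-Volume).DriveLetter
"Mounted $iso as drive ${letter}:"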

tl;dr: If your PowerShell cmdlet doesn’t return any output, try -PassThru. If you need the drive letter of a disk image mounted with Mount-DiskImage, pipe the output through Get-Volume.

For a more in-depth treatise on -PassThru, check out this Scripting Guy article by Ed Wilson (blog|twitter).

10 May 2013

Getting the Drive Letter of a disk image mounted with WinCdEmu

In my last post, I talked about mounting disk images in Windows 8. Both Windows 8 and Server 2012 include native support for mounting ISO images as drives. However, in prior versions of Windows you needed a third-party tool to do this. Since I have a preference for open source, my tool of choice before Windows 8 was WinCdEmu. Today, I decided to see if it was possible to determine the drive letter of an ISO mounted by WinCdEmu with PowerShell.

A quick search of the internet revealed that WinCdEmu contained a 32 bit command line tool called batchmnt.exe, and a 64 bit counterpart called batchmnt64.exe. These tools were meant for command line automation. While I knew there would be no .NET libraries in WinCdEmu, I did have hope there would be a COM object I could use with New-Object. Unfortunately, all the COM objects were for Windows Explorer integration and popped up GUIs, so they were inappropriate for automation.

Next I needed to figure out how to use batchmnt. For this I used batchmnt64 /?.

C:\Users\Justin>"C:\Program Files (x86)\WinCDEmu\batchmnt64.exe" /?
BATCHMNT.EXE - WinCDEmu batch mounter.
Usage:
batchmnt <image file> [<drive letter>] [/wait] - mount image file
batchmnt /unmount <image file>         - unmount image file
batchmnt /unmount <drive letter>:      - unmount image file
batchmnt /check   <image file>         - return drive letter as ERORLEVEL
batchmnt /unmountall                   - unmount all images
batchmnt /list                         - list mounted

C:\Users\Justin>

Mounting and unmounting are trivial. The /list switch produces some output that I could parse into a PSObject if I so desired. However, what I really found interesting was batchmnt /check. The process returns the drive letter as the ERRORLEVEL, which means the ExitCode of the batchmnt process. If you’ve ever programmed in a C-like language, you know your main function can return an integer. Traditionally 0 means success and a non-zero number means failure. However, in this case 0 means the image is not mounted, and a non-zero number is the ASCII code of the drive letter. Getting that code in PowerShell is simple:

$proc = Start-Process  -Wait `
    "C:\Program Files (x86)\WinCDEmu\batchmnt64.exe" `
    -ArgumentList '/check', '"C:\Users\Justin\SQL Server Media\2008R2\en_sql_server_2008_r2_developer_x86_x64_ia64_dvd_522665.iso"' `
    -PassThru;
[char] $proc.ExitCode

The Start-Process cmdlet normally returns immediately without output. The -PassThru switch makes it return information about the process it created, and -Wait makes the cmdlet wait for the process to exit, so that information includes the exit code. Finally, to turn that ASCII code into the drive letter, we cast with [char].
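If you find yourself doing this a lot, the pattern wraps neatly into a function. A small sketch (the batchmnt64.exe path assumes the default WinCDEmu install location):

function Get-WinCdEmuDriveLetter {
    param([Parameter(Mandatory=$true)][string] $ImagePath)
    $batchmnt = 'C:\Program Files (x86)\WinCDEmu\batchmnt64.exe'
    $proc = Start-Process -Wait -PassThru $batchmnt -ArgumentList '/check', "`"$ImagePath`""
    # An exit code of 0 means the image is not mounted;
    # anything else is the ASCII code of the drive letter.
    if ($proc.ExitCode -eq 0) { return $null }
    [char] $proc.ExitCode
}

Get-WinCdEmuDriveLetter 'C:\Users\Justin\SQL Server Media\2008R2\en_sql_server_2008_r2_developer_x86_x64_ia64_dvd_522665.iso'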

2 May 2013

Setting the Visual Studio TFS diff and merge tools with PowerShell

I recently wrote this script to let me quickly change the diff and merge tools TFS uses from PowerShell. I plan to make it a module and add it to the StudioShell Contrib package by Jim Christopher (blog|twitter). For now, I share it as a gist and place it on this blog.

The script supports Visual Studio 2008-2012 and a number of third-party diff tools (the gist has the full list).
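For context (this is not the gist itself), Visual Studio keeps these per-file-extension diff and merge overrides in the registry under the TeamFoundation\SourceControl hive. A rough sketch, assuming Visual Studio 2012 and using the .cs extension and WinMerge purely as examples, looks like this:

$key = 'HKCU:\Software\Microsoft\VisualStudio\11.0\TeamFoundation\SourceControl\DiffTools\.cs\Compare'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
# Command is the diff EXE; Arguments uses the TFS placeholders (%1 = original file, %2 = modified file)
Set-ItemProperty -Path $key -Name Command   -Value 'C:\Program Files (x86)\WinMerge\WinMergeU.exe'
Set-ItemProperty -Path $key -Name Arguments -Value '%1 %2'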

Enjoy!

6 Jan 2013

Announcing SevenZipCmdLine.MSBuild

This was a quick and dirty thing born out of necessity and the need to make zip files of PoshRunner so I could make its Chocolatey package.

I made MSBuild tasks for creating 7zip and zip files out of the $(TargetDir) of an MSBuild project. There is a NuGet package for it. Simply include it in your project via NuGet and build from the command line with the following:

%windir%\microsoft.net\framework\v4.0.30319\msbuild __PROJECT_FOLDER__\__PROJECT_FILE__ /t:SevenZipBin,ZipBin

This will create project.zip and project.7z in __PROJECT_FOLDER__\bin\Target. To see how to override some of the defaults, look at this msbuild file in PoshRunner.
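If you would rather kick the build off from PowerShell than cmd.exe, the equivalent call (with a hypothetical project path) is:

& "$env:windir\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" .\MyProject\MyProject.csproj '/t:SevenZipBin,ZipBin'

Note the quotes around the targets argument; an unquoted comma would be parsed by PowerShell as an array and split into two arguments.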

Source code is available via a github repo, and patches are welcome!

5 Jan 2013

PoshRunner now on SourceForge and Chocolatey

I’ve been periodically hacking away at PoshRunner. I have lots of plans for it. Some of these are rewriting parts of it in C++, allowing you to log output to MongoDB, and total world domination! However, today’s news is not as grand.

The first piece of news is that I made a PoshRunner SourceForge project to distribute the binaries. To download the latest version, click here. Secondly, there is now a PoshRunner Chocolatey package, so you can install it via Chocolatey. Finally, there is not a lot of documentation on PoshRunner.exe, so here is the output of poshrunner -help.

Usage: poshrunner.exe [OPTION] [...]

Options:
   --appdomainname=NAME                                     Name to give the AppDomain the PowerShell script executes in.
   --config=CONFIGFILE                                      The name of the app.config file for the script. Default is scriptName.config
   -f SCRIPT, --script=SCRIPT                               Name of the script to run.
   -h, --help                                               Show help and exit
   --log4netconfig=LOG4NETCONFIGFILE                        Override the default config file for log4net.
   --log4netconfigtype=LOG4NETCONFIGTYPE                    The type of Log4Net configuration.
   --shadowcopy                                             Enable Assembly ShadowCopying.
   -v, --version                                            Show version info and exit
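
For example, a hypothetical invocation (the script and config names are made up) that runs a script in its own AppDomain with its own app.config:

poshrunner.exe -f .\MyScript.ps1 --config .\MyScript.ps1.config --appdomainname MyScriptDomain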
26 Dec 2012

“Forking” a long-running command to a new tab with ConEmu. The magic of -new_console:c

Here’s a quick tip I thought I’d share after being quite rightly told to RTFM by the author of ConEmu.

Suppose you are running FarManager from ConEmu and want to update all your chocolatey packages. You can do so with the command cup all. However, that will block your FarManager session until cup all completes. You have four options to fix this:

  1. You can start a new tab in ConEmu with the menu. This is undesirable because you’re obviously a command line guy.
  2. You can press Shift+Enter after the cup all command. This is undesirable because, unless you configure ConEmu to intercept every new command window, a regular console window will appear. Also, the console will close automatically upon completion.
  3. You can type cup all & pause and hit Shift+Enter to allow the window to stay open; or
  4. You can type cup all -new_console:c to open a new tab that will execute the command, and not close upon completion.

Obviously I recommend option 4.