My company recently discovered the joys of using nginx as a reverse proxy cache server. This allowed us to significantly reduce the load on our application servers. Of course, as soon as we got this setup working nicely, a request for A/B testing came down the pipeline.
There are some obstacles to conducting A/B testing while using nginx as a reverse proxy cache server.
Obstacle 1: Lack of "sticky" sessions in the free nginx product. While there is support for session affinity as part of the nginx commercial subscription, that product didn’t suit our needs. Without sticky sessions, each page load would potentially go to a different upstream server. This would render many tests unusable and would make the site feel disjointed.
Obstacle 2: Since pages are being cached by nginx, all requests receive the same cached response, which means you can’t serve different versions of the same page.
Obstacle 3: To keep code complexity down, we didn’t want to modify our application to be aware of the tests we were performing.
We were able to overcome these obstacles using only the default modules that were part of nginx 1.4.x.
The following are snippets of our server config. The file exists entirely in the nginx http context. I won’t go into the configuration of nginx outside of this file, as that information is readily available elsewhere. I’m going to jump around a bit to make the explanation easier to follow. The file will be shown in its entirety at the bottom.
The first thing is to define our upstream server groups. In this setup we have defined two server groups (upstreamServerA and upstreamServerB), each with a single server. Each upstream server group represents a version of the site we are testing. We could increase the number of tests by adding more upstream server groups. The server definition is shown with a standard .net domain name for ease of reading; in practice this should be the IP address or location of your application server.
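Here is a sketch of what those upstream blocks might look like (the example.net hostnames are placeholders for your application servers):

upstream upstreamServerA {
    server a.example.net;
}

upstream upstreamServerB {
    server b.example.net;
}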
Here we make use of one of nginx’s default modules: ngx_http_split_clients_module. The idea here is to set up the split percentages for our tests. What’s actually happening is that nginx creates a string composed of the seed string "seedString" concatenated with the client IP address, the client’s user agent, and the current time. Nginx then hashes this string into a number. The lower 50% of the number range gets assigned upstreamServerA and the upper 50% gets assigned upstreamServerB. The result is saved into the $upstream_variant variable. This segment is only used for each client’s first request.
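A sketch of that split_clients block (I’m using the core $remote_addr, $http_user_agent, and $time_iso8601 variables for the IP address, user agent, and time; the seed string is arbitrary):

split_clients "seedString${remote_addr}${http_user_agent}${time_iso8601}" $upstream_variant {
    50% upstreamServerA;
    *   upstreamServerB;
}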
With this segment we are going to check for the presence of a cookie named "sticky_upstream" in the client request. The goal here is to set the variable named $upstream_group based on this cookie. If the value of the cookie is "upstreamServerA" we set $upstream_group to "upstreamServerA". We do similarly if the value is "upstreamServerB". If the value of the cookie is neither of these, or if the cookie is not present, we use the value of the $upstream_variant variable as we defined in the previous segment.
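A sketch using nginx’s map module ($cookie_sticky_upstream is how nginx exposes the client’s sticky_upstream cookie):

map $cookie_sticky_upstream $upstream_group {
    upstreamServerA upstreamServerA;
    upstreamServerB upstreamServerB;
    default         $upstream_variant;
}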
Now we can define our server context.
We are defining two locations here: "/" and "/admin". We treat "/admin" differently, as we want all admin requests to go to a single upstream server. This may not be needed in all setups, but I thought I’d show how to accomplish it.
The first thing we want to do in the "location /" context is to set the "sticky_upstream" cookie.
This will make all subsequent requests from the client "stick" to the same upstream server group.
Now we tell nginx to use the value of the $upstream_group variable as the upstream server group.
This segment allows us to cache responses based on the $scheme, $host, $request_uri and (the important bit for this post) the $upstream_group, so that each test variant gets its own cache.
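Putting those three pieces together, the "location /" block might look like this (a sketch; the cookie Path attribute and the appCache zone name are my own placeholders):

location / {
    add_header Set-Cookie "sticky_upstream=$upstream_group;Path=/;";
    proxy_cache appCache;
    proxy_cache_key "$scheme$host$request_uri$upstream_group";
    proxy_pass http://$upstream_group;
}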
As I discussed briefly, what if we want to send all admin interactions to a single upstream server group? Let’s look at the "location /admin" context:
We define the variable $upstream_admin and set it to "upstreamServerB", then set the client’s "sticky_upstream" cookie equal to it. The final bit is to tell nginx to use the value of $upstream_admin as the upstream server group.
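A sketch of that admin location (cookie attributes as above):

location /admin {
    set $upstream_admin upstreamServerB;
    add_header Set-Cookie "sticky_upstream=$upstream_admin;Path=/;";
    proxy_pass http://$upstream_admin;
}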
The file in its entirety can be found below:
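Assembled from the snippets above, it might look like this (the hostnames, cache path and zone name, listen port, and seed string are placeholders):

proxy_cache_path /var/cache/nginx keys_zone=appCache:10m;

upstream upstreamServerA {
    server a.example.net;
}

upstream upstreamServerB {
    server b.example.net;
}

split_clients "seedString${remote_addr}${http_user_agent}${time_iso8601}" $upstream_variant {
    50% upstreamServerA;
    *   upstreamServerB;
}

map $cookie_sticky_upstream $upstream_group {
    upstreamServerA upstreamServerA;
    upstreamServerB upstreamServerB;
    default         $upstream_variant;
}

server {
    listen 80;

    location / {
        add_header Set-Cookie "sticky_upstream=$upstream_group;Path=/;";
        proxy_cache appCache;
        proxy_cache_key "$scheme$host$request_uri$upstream_group";
        proxy_pass http://$upstream_group;
    }

    location /admin {
        set $upstream_admin upstreamServerB;
        add_header Set-Cookie "sticky_upstream=$upstream_admin;Path=/;";
        proxy_pass http://$upstream_admin;
    }
}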
I recently needed to set up a CentOS 6.4 VM for Java development. I wanted to be able to run Eclipse STS on said VM and display the X11 windows remotely on my Windows 7 desktop via XMing. I saw no reason for the CentOS VM to have a local X11 server, and I’m quite comfortable with the Linux command line. I decided to share briefly how to go from a CentOS minimal install to something actually useful for getting work done.
- /usr/bin/man: The minimal install includes man pages, but not the man command. This is an odd choice. yum install man will fix that.
- vim: There is a bare-bones build of vim included by default that is only accessible via vi. If you want a more robust version of vim, yum install vim.
- X11 forwarding: You need the xauth package and fonts. yum install xauth will allow X11 forwarding to work, and yum groupinstall fonts will install a set of fonts.
- A terminal: For absolute minimal viability, yum install xterm will give you a terminal. I prefer terminator, which is available through rpmforge.
- RpmForge (now RepoForge): CentOS is based on Red Hat Enterprise Linux, so it focuses on being a good production server, not a developer environment. You will probably need rpmforge to get some of the packages you want. The directions for adding RpmForge to your yum repositories are here.
- terminator: This is my terminal emulator of choice. Once you’ve added rpmforge, yum install terminator will get it for you.
- gcc, glibc, etc.: Honestly, you can usually live without these if you stick to precompiled rpms and you’re not using gcc for development. If you need to build a kernel module, yum install kernel-devel gcc make should get you what you need.
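Combined, the whole bootstrap might look something like this (assuming the rpmforge repo has already been added):

yum install man vim xauth xterm terminator
yum groupinstall fonts
yum install kernel-devel gcc make   # only if you need to build kernel modules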
From here, you can install the stuff you need for your development environment for your language, framework, and scm of choice.
The other day I was mounting an ISO in Windows 8 via the Mount-DiskImage command. Since I was mounting the disk image in a script, I needed to know the drive letter it was mounted to so the script could access the files contained within. However, Mount-DiskImage was not returning anything. I didn’t want to go through the hack of listing drives before and after I mounted the disk image, or explicitly assigning the drive letter. Both would leave me open to race conditions if another drive was mounted by another process while my script ran. I was at a loss for what to do.
Then, I remembered the -PassThru parameter, which I am quite fond of using with Add-Type. See, certain cmdlets, like Mount-DiskImage and Add-Type, don’t return pipeline output by default. For Add-Type, this makes sense. You rarely want to see a list of the types you just added, unless you’re exploring the classes in a DLL from the command line. However, for Mount-DiskImage, defaulting to no output was a questionable decision IMHO.
Now in the case of Mount-DiskImage, -PassThru doesn’t return the drive letter. However, it does return an object that you can pipe to Get-Volume which does return an object with a DriveLetter property. To figure that out, I had to ask on stackoverflow.
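A minimal sketch of the whole round trip (the ISO path is a placeholder):

# Mount the image, pass the DiskImage object through, and resolve its volume
$image = Mount-DiskImage -ImagePath 'C:\isos\example.iso' -PassThru
($image | Get-Volume).DriveLetter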
tl;dr: If your PowerShell cmdlet doesn’t return any output, try -PassThru. If you need the drive letter of a disk image mounted with Mount-DiskImage, pipe the output through Get-Volume.
In my last post, I talked about mounting disk images in Windows 8. Both Windows 8 and 2012 include native support for mounting ISO images as drives. However, in prior versions of Windows you needed a third party tool to do this. Since I have a preference for open source, my tool of choice before Windows 8 was WinCdEmu. Today, I decided to see if it was possible to determine the drive letter of an ISO mounted by WinCdEmu with PowerShell.
A quick search of the internet revealed that WinCdEmu contained a 32 bit command line tool called batchmnt.exe, and a 64 bit counterpart called batchmnt64.exe. These tools were meant for command line automation. While I knew there would be no .NET libraries in WinCdEmu, I did have hope there would be a COM object I could use with New-Object. Unfortunately, all the COM objects were for Windows Explorer integration and popped up GUIs, so they were inappropriate for automation.
Next I needed to figure out how to use batchmnt. For this I used batchmnt64 /?.
C:\Users\Justin>"C:\Program Files (x86)\WinCDEmu\batchmnt64.exe" /?
BATCHMNT.EXE - WinCDEmu batch mounter.
Usage:
  batchmnt <image file> [<drive letter>] [/wait] - mount image file
  batchmnt /unmount <image file>                 - unmount image file
  batchmnt /unmount <drive letter>:              - unmount image file
  batchmnt /check <image file>                   - return drive letter as ERORLEVEL
  batchmnt /unmountall                           - unmount all images
  batchmnt /list                                 - list mounted

C:\Users\Justin>
Mounting and unmounting are trivial. The /list switch produces some output that I could parse into a PSObject if I so desired. However, what I really found interesting was batchmnt /check. The process returns the drive letter as the ERRORLEVEL, i.e. the ExitCode of the batchmnt process. If you have ever programmed in a C-like language, you know your main function can return an integer. Traditionally 0 means success and a nonzero number means failure. However, in this case 0 means the image is not mounted, and a nonzero number is the ASCII code of the drive letter. Getting that code in PowerShell is simple:
$proc = Start-Process -Wait `
    "C:\Program Files (x86)\WinCDEmu\batchmnt64.exe" `
    -ArgumentList '/check', '"C:\Users\Justin\SQL Server Media\2008R2\en_sql_server_2008_r2_developer_x86_x64_ia64_dvd_522665.iso"' `
    -PassThru;
[char] $proc.ExitCode
The Start-Process cmdlet normally returns immediately without output. The -PassThru switch makes it return information about the process it created, and -Wait makes the cmdlet wait for the process to exit, so that information includes the exit code. Finally, to turn that ASCII code into the drive letter, we cast it with [char].
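Wrapped up as a reusable helper, the technique might look like this (a sketch; the function name is my own and the batchmnt64.exe path is assumed):

function Get-WinCdEmuDriveLetter {
    param([string]$IsoPath)
    $batchmnt = 'C:\Program Files (x86)\WinCDEmu\batchmnt64.exe'
    # /check exits with 0 if the image is not mounted,
    # otherwise with the ASCII code of the drive letter
    $proc = Start-Process -Wait -PassThru $batchmnt -ArgumentList '/check', ('"{0}"' -f $IsoPath)
    if ($proc.ExitCode -eq 0) { return $null }
    [char] $proc.ExitCode
}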
I recently wrote this script to let me quickly change the diff and merge tools TFS uses from PowerShell. I plan to make it a module and add it to the StudioShell Contrib package by Jim Christopher (blog|twitter). For now, I share it as a gist and place it on this blog.
The script supports Visual Studio 2008-2012 and the following diff tools:
I made MSBuild tasks for creating 7zip and zip files out of the $(TargetDir) of an MSBuild project. There is a nuget package for it. Simply include it in your project via nuget and build from the command line with the following command:
%windir%\microsoft.net\framework\v4.0.30319\msbuild __PROJECT_FOLDER__\__PROJECT_FILE__ /t:SevenZipBin,ZipBin
This will create the archives in __PROJECT_FOLDER__\bin\Target. To see how to override some of the defaults, look at this msbuild file in PoshRunner.
Source code is available via a github repo, and patches are welcome!
I’ve been periodically hacking away at PoshRunner. I have lots of plans for it, including rewriting parts of it in C++, allowing you to log output to MongoDB, and total world domination! However, today’s news is not as grand.
The first piece of news is I made a PoshRunner sourceforge project to distribute the binaries. To download the latest version, click here. Secondly, there is now a PoshRunner chocolatey package, so you can install it via chocolatey. Finally, there is not a lot of documentation on PoshRunner.exe, so here is the output of poshrunner -help.
Usage: poshrunner.exe [OPTION] [...]

Options:
  --appdomainname=NAME        Name to give the AppDomain the PowerShell script executes in.
  --config=CONFIGFILE         The name of the app.config file for the script. Default is scriptName.config
  -f SCRIPT, --script=SCRIPT  Name of the script to run.
  -h, --help                  Show help and exit
  --log4netconfig=LOG4NETCONFIGFILE
                              Override the default config file for log4net.
  --log4netconfigtype=LOG4NETCONFIGTYPE
                              The type of Log4Net configuration.
  --shadowcopy                Enable Assembly ShadowCopying.
  -v, --version               Show version info and exit
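For example, a typical invocation (the script name is a placeholder) might be:

poshrunner.exe -f MyScript.ps1 --shadowcopy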
Suppose you are running FarManager from ConEmu and want to update all your chocolatey packages. You can do so with the command cup all. However, that will block your FarManager session until cup all completes. You have four options to fix this:
- You can start a new tab in ConEmu with the menu. This is undesirable because you’re obviously a command line guy.
- You can press Shift+Enter after the cup all command. This is undesirable because, unless you configure ConEmu to intercept every new command window, a regular console window will appear. Also, the console will close automatically upon completion.
- You can type cup all & pause and hit Shift+Enter to allow the window to stay open. Or
- You can type cup all -new_console:c to open a new tab that will execute the command, and not close upon completion.
Obviously I recommend option 4.
In the past I’ve written about using the Windows Registry to reference assembly paths in Visual Studio. In it I made reference to the seminal article New Registry syntax in MSBuild v3.5, which is the dialect Visual Studio 2008 speaks. That syntax has served me well until recently.
See, fate led me to writing a small C++/CLI program. In it I had to refer to some .NET assemblies that were not installed in the GAC. They were, however, installed as part of a software package that wrote its install path to the registry. So I figured out which value it wrote the install directory to and referenced it in the .vcxproj file using $(Registry:HKEY_LOCAL_MACHINE\Software\Company\Product@TargetDir). Unfortunately, it didn’t work!
I did some troubleshooting and discovered it worked when I built the vcxproj from the command line with msbuild.exe. It seemed logical to blame it on the fact that I was using C++. Devenv.exe (the Visual Studio executable) must have been treating .vcxproj files differently than csproj and vbproj files. Then suddenly it dawned on me: the problem was I was running on a 64 bit version of Windows! This was a relief, because it meant that .vcxproj files were not special or subject to unique bugs.
To make a long story short, Visual Studio is a 32 bit application, and by default when a 32 bit process interacts with the registry on a 64 bit version of Windows, HKEY_LOCAL_MACHINE\Software gets redirected to HKEY_LOCAL_MACHINE\Software\Wow6432Node. This MSDN article explains the gory details.
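You can see the redirection for yourself from PowerShell; here is a quick sketch using the .NET 4 RegistryView API:

# Ask for the 64-bit view explicitly, bypassing the Wow6432Node
# redirection a 32-bit process would normally get.
$hklm64 = [Microsoft.Win32.RegistryKey]::OpenBaseKey('LocalMachine', 'Registry64')
$hklm64.OpenSubKey('SOFTWARE\Microsoft\Windows NT\CurrentVersion').GetValue('ProductName')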
At first it seemed the only workaround was a custom MSBuild task like the MSBuild Extension Pack. I complained on twitter to Scott Hanselman (blog|twitter). He replied with this article talking about how page faults, addressable memory space, etc. were not an issue. That article made some good points. However, it didn’t address my (at the time) very real and legitimate concern. Scott said he’d ask around internally if I filed a connect bug, and got David Kean (blog|twitter) involved in the conversation. I filed a connect bug. Then David pointed out a link to the MSBuild 4.0 property function GetRegistryValueFromView.
<Target Name="BeforeBuild">
  <!-- Read the registry using the native MSBUILD 3.5 method:
       http://blogs.msdn.com/b/msbuild/archive/2007/05/04/new-registry-syntax-in-msbuild-v3-5.aspx -->
  <PropertyGroup>
    <MsBuildNativeProductId>$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion@ProductId)</MsBuildNativeProductId>
    <MsBuildNativeProductName>$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion@ProductName)</MsBuildNativeProductName>
    <MsBuild4NativeProductId>$([MSBuild]::GetRegistryValueFromView('HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion', 'ProductId', null, RegistryView.Registry64))</MsBuild4NativeProductId>
    <MsBuild4NativeProductName>$([MSBuild]::GetRegistryValueFromView('HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion', 'ProductName', null, RegistryView.Registry64))</MsBuild4NativeProductName>
  </PropertyGroup>

  <!-- Lets use the MSBuild Extension Pack (still no joy):
       http://www.msbuildextensionpack.com/help/184.108.40.206/html/9c8ecf24-3d8d-2b2d-e986-3e026dda95fe.htm -->
  <MSBuild.ExtensionPack.Computer.Registry TaskAction="Get" RegistryHive="LocalMachine"
      Key="SOFTWARE\Microsoft\Windows NT\CurrentVersion" Value="ProductId">
    <Output PropertyName="MsBuildExtProductId" TaskParameter="Data" />
  </MSBuild.ExtensionPack.Computer.Registry>
  <MSBuild.ExtensionPack.Computer.Registry TaskAction="Get" RegistryHive="LocalMachine"
      Key="SOFTWARE\Microsoft\Windows NT\CurrentVersion" Value="ProductName">
    <Output PropertyName="MsBuildExtProductName" TaskParameter="Data" />
  </MSBuild.ExtensionPack.Computer.Registry>

  <!-- And now RegistryView: http://msdn.microsoft.com/en-us/library/microsoft.win32.registryview.aspx -->
  <MSBuild.ExtensionPack.Computer.Registry TaskAction="Get" RegistryHive="LocalMachine"
      Key="SOFTWARE\Microsoft\Windows NT\CurrentVersion" Value="ProductId" RegistryView="Registry64">
    <Output PropertyName="MsBuildExt64ProductId" TaskParameter="Data" />
  </MSBuild.ExtensionPack.Computer.Registry>
  <MSBuild.ExtensionPack.Computer.Registry TaskAction="Get" RegistryHive="LocalMachine"
      Key="SOFTWARE\Microsoft\Windows NT\CurrentVersion" Value="ProductName" RegistryView="Registry64">
    <Output PropertyName="MsBuildExt64ProductName" TaskParameter="Data" />
  </MSBuild.ExtensionPack.Computer.Registry>

  <!-- All messages are of high importance so Visual Studio will display them by default. See:
       http://stackoverflow.com/questions/7557562/how-do-i-get-the-message-msbuild-task-that-shows-up-in-the-visual-studio-proje -->
  <Message Importance="High" Text="Using Msbuild Native: ProductId: $(MsBuildNativeProductId) ProductName: $(MsBuildNativeProductName)" />
  <Message Importance="High" Text="Using Msbuild v4 Native: ProductId: $(MsBuild4NativeProductId) ProductName: $(MsBuild4NativeProductName)" />
  <Message Importance="High" Text="Using Msbuild Extension Pack: ProductId: $(MsBuildExtProductId) ProductName: $(MsBuildExtProductName)" />
  <Message Importance="High" Text="Using Msbuild Extension Pack: ProductId: $(MsBuildExt64ProductId) ProductName: $(MsBuildExt64ProductName)" />
</Target>
That MSBuild code has been tested via this github project on two machines running Visual Studio 2010 SP1. One runs Windows XP SP3 32 bit and the other runs Windows 8 64 bit. I’ve verified that $([MSBuild]::GetRegistryValueFromView('HKEY_LOCAL_MACHINE\SOFTWARE\whatever', 'value', null, RegistryView.Registry64)) will give you the same value as you see in regedit.exe.
Yes, MSBuild 4.0, and therefore Visual Studio 2010, solved this problem and I simply didn’t google hard enough for the answer. However, I googled pretty hard, and I’m pretty good at googling. I didn’t think I was particularly rash in “pulling the Hanselman card.” The best I can do is write this blog post, comment on other blogs, and answer questions on StackOverflow to fill the internet with references to the MSBuild 4.0 syntax.
Recently I’ve decided to purchase a Visual Studio 2012 Professional MSDN subscription. There are several reasons for this. First of all, my Visual Studio 2012 30 day trial ran out and I absolutely need the non-express edition of it for a side project. Secondly, I’d like to be able to test poshrunner on older versions of Windows. Thirdly, having access to checked builds of Windows would allow me to learn more in my Windows Internals study group.
I started my journey to an MSDN subscription on Saturday December 8th 2012. I was able to access my benefits Thursday December 12th. The four day journey was not pleasant.
On Saturday I sat down credit card in hand and placed my order. I didn’t save the receipt (stupid I know). I got no confirmation email, and I did not see an authorization on my credit card. I waited. On Sunday I got notification that my order was pending. Perhaps they wanted to verify I wasn’t a software pirate. It seemed annoying that this wasn’t an instant process, but I remained patient and understanding. Then Tuesday I woke up to an email stating that my order was canceled.
MSDN customer support hours are from 5:30 to 17:30 PST. I am on EST, so I had to wait until 8:30 to call. I was already in the office at that time. I was told the bank did not accept my charge, but that if I placed the order again in 48 hours, the security check would be overridden and I would be able to download the software instantaneously. I tried buying the MSDN license again. It failed, but instantaneously. I called my bank. I was told both authorizations were successful on their end. So I called Microsoft again. They claimed a system glitch prevented them from accepting the payment. The specific phrase “system glitch” was used consistently by several MSDN customer support representatives over several phone calls to describe instances when my bank authorized a charge but Microsoft rejected it. I never uttered that phrase once. I’m suspicious this is a common enough occurrence that there are procedures and guidelines in place documenting the “system glitch”.
At this point they asked if I placed the second order from a computer on the same network as the first. I said no. The first order was placed at home and the second order was placed in the office. I was told to try again from the same network. I don’t have remote access to my home computer (take away my geek card) so I had to wait till I got home. I asked what would happen if it didn’t work when I tried again. I was told the only other option was to place the order over the phone, and that phone orders take three business days to process. I didn’t get home until after midnight so I didn’t try Tuesday night.
Wednesday I awoke and attempted to place the order. It failed. I went into the office, called customer support and attempted a phone order. It failed, because my bank decided three identical charges for $1,305.41 (Microsoft collects sales tax in NY on top of the $1,199 base price) seemed suspicious. Luckily I was able to fix that by responding to a text message CitiBank sent me. A chat session and a call later, the purchase seemed to have been resolved. I would have my subscription on Monday.
Thursday I got a call saying my order was canceled. However, T-Mobile dropped the call before I could deal with it. When I had some free time I called CitiBank. The first operator gave me some free airline miles and transferred me to Ashley, the fraud department specialist. Ashley assured me Microsoft could bang my credit card as often and as many times as they wanted to. I then called MSDN support and talked to Chris.
I summarized the situation for Chris. I told him I didn’t want to wait another three days for a phone order. He said he had no power to deal with that. He determined my order from Wednesday was still going through. After putting me on hold a few times, he said he would get me a welcome email that would let me download my MSDN products in 30 minutes. I got his name and a case number and he did just that. I got a call back to ensure I was able to access my download, and everything worked just fine. I’m a little curious as to why his tune changed and he was able to get me my subscription number in thirty minutes though.
First of all I have to thank CitiBank for their actions. At no point did they do anything wrong or fail to do anything. Secondly, the customer service staff at MSDN were very professional and understanding, despite my growing irateness. However, the fact is they were never able to tell me why my order was canceled. If they had at some point explained that I was flagged as a pirate, or something else, I’d be a bit more understanding. Thirdly, why does the process take so long? I was able to buy a new car in about an hour. It took a few days for delivery because the package I wanted wasn’t on the lot. However, it took less than four days for the car to be driven off the lot (by someone else, because it was the car I learned stick on).
The MSDN subscription sales model seems to make sense for businesses purchasing volume licenses. They take checks, and you can talk to a real person. It’s not at all optimized for the person who wants to buy one MSDN license “right now”. People like me are on the lower end of the income bracket for Microsoft, but we are also the ones who are either really passionate hobbyists, entrepreneurs, or the people on the fence. While I’m still going to develop on the Microsoft stack for years, this experience has left a bad taste in my mouth for their purchase process, compared with, for example, JetBrains or RedGate.
In the end, the real issue was the lack of transparency. It’s generally safe to assume that when you are buying software for online delivery, you will have it within an hour. If Microsoft made it clear it’s not as simple for them, first-time subscribers like me would be a little more understanding.