How to open async calls in a new tab instead of new window within an AngularJS app

I recently wanted to generate a PDF on demand and show it in a new browser tab.
Sounds trivial; at first it wasn’t for me :) I tried several “solutions”, and along the way my Google search terms got better and better. With “open window new tab without popup blocker async” I finally found a nice and easy solution in this thread. The trick is to keep a reference to the new window and change the location of that window when your asynchronous call completes.

$scope.generatePdf = function () {
  var tabWindowId = window.open('about:blank', '_blank');

  $http.post('/someUrl', data).then(function (response) {
    tabWindowId.location.href = response.headers('Location');
  });
};

If you want to see it in action, open this plunker. While testing this plunker, it seems that openWindow will open a tab as long as the async call is quick enough (less than a second). The setTimeout is therefore set to 1001.

I hope you found this solution quicker than I did. Please let me know if you have any questions or suggestions, either via @leifhanack or in a comment here.

Managing multiple SSH keys

Recently I wanted to connect to remote servers using different SSH keys. With the right ~/.ssh/config file this is easy and convenient.


IdentityFile ~/.ssh/%h/%r/id_rsa
IdentityFile ~/.ssh/%h/id_rsa
IdentityFile ~/.ssh/id_rsa

%h and %r are placeholders for the host and the remote user. ssh foo@bar will first check whether ~/.ssh/bar/foo/id_rsa exists, then ~/.ssh/bar/id_rsa, and finally ~/.ssh/id_rsa.


Host github
        HostName 123.45.678.90
        User myuser
        IdentityFile ~/.ssh/123.45.678.90/id_rsa

Instead of ssh myuser@123.45.678.90 the above config allows you to simply type

ssh github

You no longer need to remember all your IPs and remote users. For me this is a big time saver.

Please let me know if you have questions or suggestions, either via @leifhanack or in a comment here.

A Continuous Deployment Pipeline with Gradle and Docker

This series of posts will show you some aspects of our continuous deployment pipeline for one of our products. It is built, tested and deployed to our servers by using Gradle, while the application itself runs inside Docker containers.

We want to show you how we use Gradle to implement a complete pipeline with minimal dependency on command line tools. We’ll also describe how to perform rollouts to production without the need for shell scripts or even remote shell access, by using the Docker remote API. All details regarding our AngularJS frontend, test concepts for multi-product compatibility and detailed code examples will be explained in upcoming posts. This post starts with a bird’s-eye view of our pipeline.


Our deployment pipeline is divided into six build goals, combined in a TeamCity Build Chain:

  • build, publish
  • e2e test
  • contract test
  • build image
  • deploy on dev
  • deploy on prod

Every push to our shared Git repository triggers a new build, and the result is automatically deployed to production.

The first step builds a multi-module project and produces two Spring Boot jar files for our backend and frontend webapps. Both jars are published to our Nexus artifact repository. Building a Spring Boot application with Gradle is straightforward; you’ll find examples in the Spring Boot guides. The gradle-grunt-plugin helps us build and unit-test the AngularJS frontend by delegating build steps to the Grunt task runner.
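
To give a rough idea, such a module build can look like the following sketch. Plugin version, artifact path, and Nexus URL are assumptions for illustration, not our actual configuration:

```groovy
// build.gradle (sketch) for one of the Spring Boot webapp modules
buildscript {
    repositories { mavenCentral() }
    dependencies {
        // version is an assumption
        classpath 'org.springframework.boot:spring-boot-gradle-plugin:1.1.9.RELEASE'
    }
}

apply plugin: 'java'
apply plugin: 'spring-boot'
apply plugin: 'maven-publish'

publishing {
    publications {
        webapp(MavenPublication) {
            // publish the repackaged Spring Boot jar
            artifact file("$buildDir/libs/${project.name}.jar")
        }
    }
    repositories {
        maven { url 'https://nexus.example.com/content/repositories/releases' } // assumption
    }
}
```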

Our e2e-test build step runs some integration tests on our frontend to ensure that it is compatible with our backend. The next step runs so-called contract tests: cross-product tests which ensure that our new release still plays well with the other services on our platform.

The fourth step builds a Docker image containing both frontend and backend webapps and pushes it to a private Docker registry. After that, we pull the newly built image to our development and production stages and run container instances. In order to maximize product availability, both stages use blue-green deployment.

Gradle and Groovy power

As already mentioned, the complete pipeline is implemented using Gradle. Running the build and publish tasks is quite trivial; some code snippets will be shown in the following posts. The integration of our frontend build using the gradle-grunt-plugin was straightforward, too, although we added some configuration to let Gradle know about Grunt’s inputs and outputs. That way, we enable Gradle to use its cache and skip up-to-date tasks when there aren’t any code changes.
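
A minimal sketch of such a configuration is shown below; the task name follows the gradle-grunt-plugin’s `grunt_<task>` convention, while the directory names are assumptions:

```groovy
// build.gradle (sketch): delegate the frontend build to the Grunt task runner
apply plugin: 'grunt'

// declare inputs/outputs so Gradle can treat the Grunt task as up-to-date
// when no frontend sources changed
grunt_build {
    inputs.dir  file('src/frontend')  // assumption: frontend sources live here
    outputs.dir file('dist')          // assumption: Grunt writes its result here
}

// make the frontend part of the regular build
build.dependsOn grunt_build
```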

Running the e2e-tests and contract-tests wasn’t possible with existing plugins, so we had to create some special tasks. Since Gradle lets us write native Groovy code, we didn’t need to create dedicated shell scripts, but can execute commands as simply as "command".execute(). That way we perform the following steps to run our e2e-tests with Protractor:

  • start selenium-server
  • start e2e-reverse-proxy
  • start frontend and backend
  • run protractor e2e-tests
  • tear down
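
The steps above can be wired up in a Gradle task using plain Groovy process execution. The following is a condensed sketch; the commands, paths, and the proxy script are assumptions, not our actual setup:

```groovy
// sketch: orchestrate the e2e run without shell scripts, using "command".execute()
task e2eTest {
    doLast {
        def processes = []
        try {
            processes << 'node_modules/.bin/webdriver-manager start'.execute()   // start selenium-server
            processes << 'node e2e-proxy/server.js'.execute()                    // start e2e-reverse-proxy (assumption)
            processes << 'java -jar frontend/build/libs/frontend.jar'.execute()  // start frontend
            processes << 'java -jar backend/build/libs/backend.jar'.execute()    // start backend

            // (waiting until all apps respond would go here)

            def protractor = 'node_modules/.bin/protractor protractor.conf.js'.execute()
            protractor.waitFor()
            if (protractor.exitValue() != 0) {
                throw new GradleException('e2e-tests failed')
            }
        } finally {
            processes.each { it.destroy() }  // tear down all started processes
        }
    }
}
```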

In contrast to the e2e-tests, where we only check our frontend and backend application, we have some contract-tests to check our interaction with other services. Our backend interacts with other products of our platform, and we want to be sure that after deploying a new release of our product, it still works together with the current versions of those products. Our contract-tests are implemented with the Spock framework and TestNG and live in a submodule of our product. A dedicated contract-tester module in a project of its own performs all necessary steps to fetch the external webapps in their released versions, run them, and perform our contract-tests against these temporary instances. As with the e2e-tests, all steps are implemented in Gradle, but this time we could use plugins like the Gradle Cargo plugin and the Gradle Download Task, as well as Gradle’s built-in test runner and dynamic dependency resolution for our contract-tests artifact:

  • collect participating product versions
  • download each product’s webapp from Nexus
  • start the participating webapps and infrastructure services
  • run contract-tests
  • tear down

Gradle and Docker

With our artifacts being tested, we package them into Docker images, push the images to our private registries and run fresh containers on our servers. Docker allows us to describe the image contents in plain-text Dockerfiles, so that we can keep all build instructions in our Git repository. Before using a Gradle Docker plugin, we used Gradle to orchestrate Docker clients, which had to be installed on our TeamCity agents and the application servers. As described above, we used the Groovy command executor to access the Docker command line interface. We’re now in a transition to using only the Docker remote API, so that we don’t need a Docker client on every build server, but only need to point the plugin to any Docker enabled server.

Building and distributing our images, followed by starting the containers is only one part of our deployment. In order to implement continuous delivery without interrupting availability of our product, we implemented blue-green deployment. Therefore, our Gradle deployment script needs to ask our reverse proxy in front of our application servers for a deployable stage (e.g. green), perform the Docker container tasks and toggle a switch from the current to the new stage, e.g. from blue to green:

  • get the deployable stage
  • pull the new image from the Docker registry
  • stop and remove the old container
  • run a new container based on the new image
  • cleanup (e.g. remove unused images)
  • switch to the new stage with the fresh container
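
The container-related steps above can be sketched against the Docker remote API without any Docker client binary. The endpoints below are part of the remote API; the host, port, image, and container names are assumptions for illustration:

```groovy
// sketch: minimal helper to call the Docker remote API over plain HTTP
def docker = { String method, String path ->
    def conn = new URL("http://docker-host:2375${path}").openConnection()
    conn.requestMethod = method
    conn.responseCode
}

// pull the new image from the Docker registry
docker 'POST', '/images/create?fromImage=registry.example.com/ourproduct&tag=latest'
// stop and remove the old container (container name is an assumption)
docker 'POST', '/containers/ourproduct-green/stop'
docker 'DELETE', '/containers/ourproduct-green?v=1'
// creating and starting the fresh container would follow via
// POST /containers/create and POST /containers/{id}/start
```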


With this brief overview you should have an impression of the key elements of our pipeline. In the upcoming posts we’ll dive into each of these build steps, provide some code examples and discuss our experience regarding the chosen technologies and frameworks in context of our server setups.

If you’d like to know about specific details, please leave a comment or contact us via Twitter @gesellix, so that we can include your wishes in the following posts. Even if you’d like us to talk about non-technical aspects, e.g. our experience introducing the above technologies to our teams, just ask!

Dozer Plugin for IntelliJ IDEA

In several projects we make intensive use of the mapping framework Dozer. Almost four years ago a plugin for IntelliJ IDEA was developed that strongly supports us while mapping. It offers code completion and error highlighting in Dozer’s XML mapping files.

With the move to IDEA 13 it became necessary to adapt the plugin to the new version of the IDE. In the course of this adaptation we decided to publish the plugin and put the source code under an open source license. The sources can be found on Hypoport’s GitHub page. The “binary” can be obtained via the plugin repository browser in IDEA or downloaded from the JetBrains plugin page.

Running Multiple Local Tomcats with Cargo and Gradle

We are currently using Cargo in combination with Gradle to implement consumer based tests for one of our projects. In order to do so, we created a Gradle script to deploy a service web app into a Tomcat container via Cargo and pass its URL to our consumer test suite.

As long as we ran the build script locally, everything worked fine. But we noticed that it failed every once in a while when running on certain build agents in our TeamCity build pipeline. The failures were always caused either by missing write permissions to the /tmp/cargo directory or by the Tomcat port already being in use.

So we took a closer look at the unreliable agents and realized to our surprise that they shared the same machine. Up until this point we just assumed that every build agent had its dedicated environment, so we didn’t really worry about things like conflicting ports or shared files.

Being fairly new to Gradle, Cargo and especially the Gradle version of the Cargo plugin, it took me some time to figure out how to isolate our Cargo run from the outside world. In the rest of this article I’m going to show you how I did it.

The Situation

There are two major problems we need to take care of. The first one is pretty obvious: all network ports need to be determined dynamically. This is a best-practice for build scripts that are shared between different environments anyway, so it is a welcome improvement.

The second problem is a bit more surprising. Cargo uses the system’s temporary directory as its default working directory. Most of the time this will simply be /tmp. At least it was on our build server. Unless this path is changed, all Cargo runs will work on the same directory and consequently interfere with each other. So we need to figure out how to change this path.

Changing the Ports

As I mentioned before, I’m fairly new to Gradle, so I was pleasantly surprised to find out that it comes with a class called AvailablePortFinder. As the name suggests, this little helper allows you to conveniently find available ports. Great! That’s exactly what we need in order to instruct Cargo to use different ports when firing up Tomcat. However, there is a small caveat regarding its use, coming directly from the Gradle guys:

If possible, it’s preferable to let the party creating the server socket select the port (e.g. with new ServerSocket(0)) and then query it for the port chosen. With this class, there is always a risk that someone else grabs the port between the time it is returned from getNextAvailable() and the time the socket is created.

Unfortunately that’s not an option for code we don’t control, so we have to live with the small risk that someone else could grab the port before our Tomcat can occupy it.

Now how many ports do we need to change and how do we tell Cargo to do so? In case of Tomcat the answer turns out to be three: the HTTP port, the AJP port and the RMI port.

A look into the Cargo documentation and this blog post reveals the properties we can use to change these ports:

  • cargo.servlet.port for HTTP
  • cargo.tomcat.ajp.port for AJP
  • cargo.rmi.port for RMI

They can be configured in the cargo.local.containerProperties section of the Cargo configuration. The resulting build script should look similar to this:

def availablePortFinder = AvailablePortFinder.createPrivate()
def tomcatDownloadUrl = 'http://.../'

cargo {
    containerId = 'tomcat7x'
    deployable {
        // ...
    }
    local {
        installer {
            installUrl  = tomcatDownloadUrl
            downloadDir = file("$buildDir/download")
            extractDir  = file("$buildDir/extract")
        }
        containerProperties {
            property 'cargo.servlet.port', availablePortFinder.nextAvailable
            property 'cargo.tomcat.ajp.port', availablePortFinder.nextAvailable
            property 'cargo.rmi.port', availablePortFinder.nextAvailable
        }
    }
}

cargoStartLocal.finalizedBy cargoStopLocal

This successfully solves the port problem. So let’s move on to the next one.

Changing the Working Directory

Changing the working directory turned out to be a bit tricky. In theory it can be changed via the two configuration properties homeDir and configHomeDir in the local Cargo configuration. But for some reason changing the directory to a location in my $buildDir resulted in the following errors:

Directory '/my/project/home/build/cargo' specified for property 'homeDir' does not exist.
Directory '/my/project/home/build/cargo' specified for property 'configHomeDir' does not exist.

It looks like Cargo doesn’t automatically create these directories, so we have to do it manually by running a custom task right before cargoStartLocal:

def cargoHome = "$buildDir/cargo"

cargo {
    containerId = 'tomcat7x'
    local {
        homeDir         = file(cargoHome)
        configHomeDir   = file(cargoHome)
    }
}

task createCargoHome {
    doLast {
        if (!file(cargoHome).exists() && !file(cargoHome).mkdirs()) {
            println "Failed to create directory '${cargoHome}'"
        }
    }
}

// This will create the Cargo home directory before Cargo runs
cargoStartLocal.dependsOn createCargoHome

That’ll do it! Cargo will now create all its files in the project build directory, so it won’t interfere with other builds anymore. Here you can find an example build script which combines both solutions and adds some more context.

I hope this article saves you the time to figure this out all by yourself. If you have any questions or ideas how to improve this solution please contact me at @SQiShER or leave a comment.

How to make Puppet and Facter work on Docker enabled hosts

Docker provides a lightweight virtual environment by using Linux containers (LXC). We are establishing Docker in one of our projects to implement continuous delivery. For host management we use Puppet, which itself relies on some facts provided by their tool Facter.

Our Puppet modules make use of the ipaddress fact, determined by a built-in Facter script. We regard the ipaddress as the public IP address of the host. As described at, Facter doesn’t always collect the public IP address of the eth0 interface, but uses the IP address of docker0 on Docker hosts.

Finding the public IP address isn’t trivial, because it is highly environment-specific, so Facter cannot always provide the best result. Daniel Pittman of Puppet Labs describes the problem in a forum post. We’ll show you two code examples of how to find the best IP address for your specific needs. Other ideas are mentioned in a Puppet Labs forum answer.

Custom Facts

With Facter you can define custom facts, implemented in your preferred language. Custom facts help you define completely new facts or use existing facts. Since Docker adds a completely new network interface, we added three custom facts ipaddress_primary, macaddress_primary, and netmask_primary.

All of our dockerized hosts had the public interface named eth0, so we only had to get the fact named ipaddress_eth0 as our primary IP address. As fallback we used the original ipaddress:

Facter.add("ipaddress_primary") do
    setcode do
        if Facter.value('ipaddress_eth0')
            Facter.value('ipaddress_eth0')
        else
            Facter.value('ipaddress')
        end
    end
end
The same logic has been used for the netmask and macaddress facts. In order to distribute the new facts on our hosts, we added the files in our Puppet sources at /modules/module_name/lib/facter/ipaddress_primary.rb. We could now use the new facts in our Puppet modules, just like the original ipaddress fact.

For consistency, we should have changed all existing Puppet modules to use the new ..._primary facts. Since we only wanted to update the dockerized hosts and their modules, we tried to override the original fact. Some posts describe how to implement fact overrides by only using the same fact name, but that didn’t work for us. So we tried another way of overriding existing facts by using environment variables.

Environment variables

The Puppet CookBook describes how to override existing facts. You simply add an environment variable with the prefix FACTER_, followed by the name of the fact you’d like to override. In our example, the variable is named FACTER_ipaddress and is set to the public IP address of eth0.


Adding the environment variables on our Docker hosts through our Puppet modules looks as follows:

  augeas { "environments":
    context => "/files/etc/environment",
    changes => [
      "set FACTER_ipaddress ''",
      "set FACTER_netmask ''",
    ],
    onlyif  => 'match /files/etc/environment/FACTER_ipaddress/* size == 0',
  }

The overrides through environment variables have been added only to our dockerized modules, so we didn’t have to update all other hosts.

Another way to make Facter work together with Docker would have been to change the docker0 interface name. But as mentioned above, keep in mind that Docker wasn’t the main issue; it is the generic way Facter determines the ipaddress fact. Facter cannot know what you expect in your environment, so you have to describe your specific needs in explicit facts.

If you have found another way of overriding facts or if this post was helpful to you, we’d like to know. Just leave a comment or get in contact @gesellix!

Apache RewriteRule – Rewriting URLs With Already Encoded QueryStrings

Recently we renamed a URL which was publicly available. The system uses Apache httpd, so it was quite easy to create a RewriteRule:

RewriteRule ^/oldname/(.*) /newname/$1 [R,L]

Unfortunately that didn’t work as expected. A URL like myserver/oldname?myprop=name with spaces will be encoded as myserver/oldname?myprop=name%20with%20spaces. With the above RewriteRule, the rewritten URL becomes myserver/newname?myprop=name%2520with%2520spaces. It got encoded twice!

To fix this, you need the right keywords and Google. Searching for mod_rewrite url encode revealed that adding the NE flag (for noescape) does the trick:

RewriteRule ^/oldname/(.*) /newname/$1 [R,NE,L]