How to open async calls in a new tab instead of new window within an AngularJS app

I recently wanted to generate a PDF on the user's demand and show it in a new browser tab.
Sounds trivial, but at first it wasn't for me :) I tried different “solutions”, and along the way my Google search terms got better and better. With “open window new tab without popup blocker async” I finally found a nice and easy solution in this thread. The trick is to remember the reference to the new window and to change the location of that window when your asynchronous call completes.

$scope.generatePdf = function () {
  // Open the tab synchronously, i.e. still within the user's click handler,
  // so the popup blocker doesn't kick in. Keep the reference to the new window.
  var tabWindowId = window.open('about:blank', '_blank');

  $http.post('/someUrl', data).then( function (response) {
    // Point the already opened tab to the generated PDF once the async call completes.
    tabWindowId.location.href = response.headers('Location');
  });
};

If you want to see it in action, open this plunker. While testing this plunker, it seems that openWindow will open a tab as long as the async call is quick enough (less than a second); the setTimeout is therefore set to 1001 ms.

I hope you found this solution quicker than I did. Please let me know if you have any questions or suggestions, either via @leifhanack or a comment here.

Managing multiple SSH keys

Recently I wanted to connect to remote servers using different SSH keys. With the right ~/.ssh/config file this is easy and comfortable.

Easy

IdentityFile ~/.ssh/%h/%r/id_rsa
IdentityFile ~/.ssh/%h/id_rsa
IdentityFile ~/.ssh/id_rsa

%h and %r are placeholders for the host and the remote user. ssh foo@bar will first check whether ~/.ssh/bar/foo/id_rsa exists, then ~/.ssh/bar/id_rsa, and finally ~/.ssh/id_rsa.

Comfortable

Host github
        HostName 123.45.678.90
        User myuser
        IdentityFile ~/.ssh/123.45.678.90/id_rsa

Instead of ssh myuser@123.45.678.90 the above config allows you to simply type

ssh github

You no longer need to remember all your IPs and remote users. For me this is a big time saver.

Please let me know if you have questions or suggestions, either via @leifhanack or a comment here.

A Continuous Deployment Pipeline with Gradle and Docker

This series of posts will show you some aspects of our continuous deployment pipeline for one of our products. The product is built, tested and deployed to our servers using Gradle, while the application itself runs inside Docker containers.

We want to show you how we use Gradle to implement a complete pipeline with minimal dependency on command line tools. We’ll also describe how to perform rollouts to production without the need for shell scripts or even remote shell access, by using the Docker remote API. All details regarding our AngularJS frontend, test concepts for multi-product compatibility and detailed code examples will be explained in upcoming posts. This post starts with a bird’s-eye view of our pipeline.

Overview

Our deployment pipeline is divided into six build goals, combined in a TeamCity Build Chain:

  • build, publish
  • e2e test
  • contract test
  • build image
  • deploy on dev
  • deploy on prod

Every push to our shared Git repository triggers a new build, and the result is automatically deployed to production.

The first step builds a multi-module project and produces two Spring Boot jar files for our backend and frontend webapps. Both jars are published to our Nexus artifact repository. Building a Spring Boot application with Gradle is straightforward; you'll find examples in the Spring Boot guides. The gradle-grunt-plugin helps us build and unit test the AngularJS frontend by delegating build steps to the Grunt task runner.

Our e2e-test build step runs some integration tests on our frontend to ensure that it is compatible with our backend. The next step runs so-called contract tests: cross-product tests that ensure our new release still plays well with the other services on our platform.

The fourth step builds a Docker image containing both frontend and backend webapps and pushes it to a private Docker registry. After that, we pull the newly built image to our development and production stages and run container instances. In order to maximize product availability, both stages use blue-green deployment.

Gradle and Groovy power

As already mentioned, the complete pipeline is implemented using Gradle. Running the build and publish tasks is quite trivial; some code snippets will be shown in the following posts. Integrating our frontend build via the gradle-grunt-plugin was straightforward, too, although we added some configuration to let Gradle know about Grunt's inputs and outputs. That way, Gradle can use its cache and skip up-to-date tasks when there are no code changes.
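
A minimal sketch of that configuration, assuming the plugin's generated grunt_build task and hypothetical frontend directories (not our actual build file):

grunt_build {
    // tell Gradle which files Grunt reads and writes, so the task can be
    // skipped as UP-TO-DATE when nothing changed
    inputs.dir  file('src/main/frontend')
    inputs.file file('Gruntfile.js')
    outputs.dir file('dist')
}

// make the webapp assembly wait for the frontend build
jar.dependsOn grunt_build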

Running the e2e-tests and contract-tests wasn't possible with existing plugins, so we had to create some custom tasks. Since Gradle lets us write plain Groovy code, we didn't need dedicated shell scripts; we can execute commands as simply as "command".execute(). That way we perform the following steps to run our e2e-tests with Protractor (a sketch follows the list):

  • start selenium-server
  • start e2e-reverse-proxy
  • start frontend and backend
  • run protractor e2e-tests
  • tear down
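
A rough sketch of such a task; the commands, paths and ports are assumptions, and the reverse proxy and the two webapps are started the same way:

task e2eTest {
  doLast {
    // start the selenium server in the background
    def selenium = "java -jar lib/selenium-server-standalone.jar -port 4444".execute()
    try {
      // run protractor and stream its output into the Gradle log
      def protractor = "node_modules/.bin/protractor protractor.conf.js".execute()
      protractor.waitForProcessOutput(System.out, System.err)
      if (protractor.exitValue() != 0) {
        throw new GradleException('e2e tests failed')
      }
    } finally {
      // tear down
      selenium.destroy()
    }
  }
}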

In contrast to the e2e-tests, which only check our frontend and backend applications, the contract-tests check our interaction with other services. Our backend interacts with other products on our platform, and we want to be sure that after deploying a new release of our product, it still works together with the current versions of those products. Our contract-tests are implemented with the Spock framework and TestNG and live in a submodule of our product. A dedicated contract-tester module in its own project performs all steps necessary to find and run the external webapps in their released versions and to execute our contract-tests against these temporary instances. As with the e2e-tests, all steps are implemented in Gradle, but this time we could use plugins like the Gradle Cargo plugin and the Gradle Download Task, as well as Gradle's built-in test runner and dynamic dependency resolution for our contract-tests artifact (a sketch of the resolution part follows the list):

  • collect participating product versions
  • download each product’s webapp from Nexus
  • start the participating webapps and infrastructure services
  • run contract-tests
  • tear down
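
A minimal sketch of the dynamic dependency resolution, with made-up coordinates; downloading and starting the partner webapps is handled by separate tasks not shown here:

configurations { contractTests }

dependencies {
  // dynamic version: always resolve the newest contract-tests artifact from Nexus
  contractTests 'com.example.ourproduct:contract-tests:+'
}

// fetch the resolved jar into the build directory, e.g. to run its tests
// against the temporarily started webapps
task fetchContractTests(type: Copy) {
  from configurations.contractTests
  into "$buildDir/contract-tests"
}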

Gradle and Docker

With our artifacts tested, we package them into Docker images, push the images to our private registries and run fresh containers on our servers. Docker allows us to describe the image contents in plain-text Dockerfiles, so we can keep all build instructions in our Git repository. Before using a Gradle Docker plugin, we used Gradle to orchestrate Docker clients, which had to be installed on our TeamCity agents and on the application servers. As described above, we used the Groovy command executor to access the Docker command line interface. We're now transitioning to using only the Docker remote API, so that we don't need a Docker client on every build server, but only need to point the plugin at any Docker-enabled host.
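
To illustrate what "remote API only" means, here is a minimal sketch of pulling an image by talking HTTP to the Docker daemon directly; host, port and image name are made up, and in practice the plugin performs calls like this for us:

// hypothetical Docker host exposing the remote API on port 2375
def dockerHost = 'http://docker-host.example.com:2375'
def image = 'registry.example.com/our-product'

// POST /images/create?fromImage=...&tag=... makes the daemon pull the image
def connection = new URL("$dockerHost/images/create?fromImage=$image&tag=latest").openConnection()
connection.requestMethod = 'POST'
connection.doOutput = true
connection.outputStream.close()        // empty request body
assert connection.responseCode == 200
println connection.inputStream.text    // pull progress is streamed as JSON lines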

Building and distributing our images and starting the containers is only one part of our deployment. To implement continuous delivery without interrupting the availability of our product, we use blue-green deployment. Our Gradle deployment script therefore asks the reverse proxy in front of our application servers for the deployable stage (e.g. green), performs the Docker container tasks and finally toggles the switch from the current to the new stage, e.g. from blue to green (a sketch follows the list):

  • get the deployable stage
  • pull the new image from the Docker registry
  • stop and remove the old container
  • run a new container based on the new image
  • cleanup (e.g. remove unused images)
  • switch to the new stage with the fresh container
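
A skeleton of that sequence as a Gradle task; the proxy endpoints are made up, and the post helper only stands in for the Docker remote API and plugin calls we actually use:

// helper for simple POST requests without a body (illustration only)
def post = { String url ->
  def connection = new URL(url).openConnection()
  connection.requestMethod = 'POST'
  connection.doOutput = true
  connection.outputStream.close()
  assert connection.responseCode in [200, 201, 204]
}

task deployOnStage {
  doLast {
    def docker = 'http://docker-host.example.com:2375'   // hypothetical hosts
    def proxy  = 'http://proxy.example.com'

    // ask the reverse proxy which stage is currently idle ('blue' or 'green')
    def stage = new URL("$proxy/deployable-stage").text.trim()

    // pull the new image and replace the container named after the stage
    post "$docker/images/create?fromImage=registry.example.com/our-product&tag=${project.version}"
    post "$docker/containers/our-product-${stage}/stop"
    // removing the old container, creating and starting the new one and the image
    // cleanup work the same way (DELETE /containers/{id}, POST /containers/create, ...)

    // finally flip the proxy switch so traffic hits the fresh container
    post "$proxy/switch-to/${stage}"
  }
}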

Summary

With this brief overview you should have an impression of the key elements of our pipeline. In the upcoming posts we'll dive into each of these build steps, provide some code examples and discuss our experience with the chosen technologies and frameworks in the context of our server setups.

If you'd like to know specific details, please leave a comment or contact us via Twitter @gesellix, so that we can address your wishes in the following posts. Even if you'd like us to talk about non-technical aspects, e.g. our experience introducing the above technologies to our teams, just ask!

Dozer Plugin for IntelliJ IDEA

In some of our projects we make intensive use of the mapping framework Dozer. Almost four years ago a plugin for IntelliJ IDEA was developed that supports us a lot with these mappings. It provides code completion and error highlighting in Dozer's XML mapping files.

With the move to IDEA 13 it became necessary to adapt the plugin to the new version of the IDE. In the course of this adaptation we decided to publish the plugin and to put its source code under an open source license. The sources can be found on Hypoport's GitHub page. The binary can be obtained via the plugin repository browser in IDEA or downloaded from the JetBrains plugin page.

Running Multiple Local Tomcats with Cargo and Gradle

We are currently using Cargo in combination with Gradle to implement consumer-based tests for one of our projects. In order to do so, we created a Gradle script that deploys a service webapp into a Tomcat container via Cargo and passes its URL to our consumer test suite.

As long as we ran the build script locally, everything worked fine. But we noticed that it failed every once in a while when running on certain build agents in our TeamCity build pipeline. The failures were always caused either by missing write permissions on the /tmp/cargo directory or by the Tomcat port already being in use.

So we took a closer look at the unreliable agents and realized to our surprise that they shared the same machine. Up until this point we just assumed that every build agent had its dedicated environment, so we didn’t really worry about things like conflicting ports or shared files.

Being fairly new to Gradle, Cargo and especially the Gradle version of the Cargo plugin, it took me some time to figure out how to isolate our Cargo run from the outside world. In the rest of this article I’m going to show you how I did it.

The Situation

There are two major problems we need to take care of. The first one is pretty obvious: all network ports need to be determined dynamically. This is a best practice for build scripts that are shared between different environments anyway, so it is a welcome improvement.

The second problem is a bit more surprising. Cargo uses java.io.tmpdir as its default working directory. Most of the time this will simply be /tmp; at least it was on our build server. Unless this path is changed, all Cargo runs will work on the same directory and consequently interfere with each other. So we need to figure out how to change this path.

Changing the Ports

As I mentioned before, I'm fairly new to Gradle, so I was pleasantly surprised to find out that it comes with a class called AvailablePortFinder. As the name suggests, this little helper allows you to conveniently find available ports. Great! That's exactly what we need in order to instruct Cargo to use different ports when firing up Tomcat. However, there is a small caveat regarding its use, coming directly from the Gradle guys:

If possible, it’s preferable to let the party creating the server socket select the port (e.g. with new ServerSocket(0)) and then query it for the port chosen. With this class, there is always a risk that someone else grabs the port between the time it is returned from getNextAvailable() and the time the socket is created.

Unfortunately that’s not an option for code we don’t control, so we have to live with the small risk that someone else could grab the port before our Tomcat can occupy it.

Now how many ports do we need to change, and how do we tell Cargo to do so? In the case of Tomcat the answer turns out to be three: the HTTP port, the AJP port and the RMI port.

A look into the Cargo documentation and this blog post reveals the properties we can use to change these ports:

  • cargo.servlet.port for HTTP
  • cargo.tomcat.ajp.port for AJP
  • cargo.rmi.port for RMI

They can be configured in the cargo.local.containerProperties section of the Cargo configuration. The resulting build script should look similar to this:

def availablePortFinder = AvailablePortFinder.createPrivate()
def tomcatDownloadUrl = 'http://.../apache-tomcat-7.0.50.zip'

cargo {
    containerId = 'tomcat7x'
    deployable {
        ...
    }
    local {
        ...
        installer {
            installUrl  = tomcatDownloadUrl
            downloadDir = file("$buildDir/download")
            extractDir  = file("$buildDir/extract")
        }
        containerProperties {
            property 'cargo.servlet.port', availablePortFinder.nextAvailable
            property 'cargo.tomcat.ajp.port', availablePortFinder.nextAvailable
            property 'cargo.rmi.port', availablePortFinder.nextAvailable
        }
    }
}

cargoStartLocal.finalizedBy cargoStopLocal

This successfully solves the port problem. So let's move on to the next one.

Changing the Working Directory

Changing the working directory turned out to be a bit tricky. In theory it can be changed via the two configuration properties homeDir and configHomeDir in the local Cargo configuration. But for some reason changing the directory to a location in my $buildDir resulted in the following errors:

Directory '/my/project/home/build/cargo' specified for property 'homeDir' does not exist.
Directory '/my/project/home/build/cargo' specified for property 'configHomeDir' does not exist.

It looks like Cargo doesn’t automatically create these directories, so we have to do it manually by running a custom task right before cargoStartLocal:

def cargoHome = "$buildDir/cargo"
...
cargo {
    containerId = 'tomcat7x'
    ...
    local {
        homeDir         = file(cargoHome)
        configHomeDir   = file(cargoHome)
    }
}

task createCargoHome() {
  doLast {
    if (!file(cargoHome).exists() && !file(cargoHome).mkdirs()) {
      println "Failed to create directory '${cargoHome}'"
    }
  }
}

// This will create the Cargo home directory before Cargo runs
cargoStartLocal.dependsOn createCargoHome
...

That’ll do it! Cargo will now create all its files in the project build directory, so it won’t interfere with other builds anymore. Here you can find an example build script which combines both solutions and adds some more context.

I hope this article saves you the time to figure this out all by yourself. If you have any questions or ideas how to improve this solution please contact me at @SQiShER or leave a comment.

How to make Puppet and Facter work on Docker enabled hosts

Docker provides a lightweight virtual environment by using Linux containers (LXC). We are establishing Docker in one of our projects to implement continuous delivery. For host management we use Puppet, which itself relies on some facts provided by Puppet Labs' tool Facter.

Our Puppet modules make use of the ipaddress fact, determined by a built-in Facter script. We regard the ipaddress as the public IP address of the host. As described at gesellix.net, Facter doesn’t always collect the public IP address of the eth0 interface, but uses the IP address of docker0 on Docker hosts.

Finding the public IP address isn't trivial, because it is a very environment-specific piece of information, so Facter cannot always provide the best result. Daniel Pittman of Puppet Labs describes the problem in a forum post. We'll show you two code examples of how to find the best IP address for your specific needs. Other ideas are mentioned in a Puppet Labs forum answer.

Custom Facts

With Facter you can define custom facts, implemented in your preferred language. Custom facts let you define completely new facts or build on existing ones. Since Docker adds an additional network interface, we added three custom facts: ipaddress_primary, macaddress_primary, and netmask_primary.

All of our dockerized hosts have their public interface named eth0, so we only had to use the fact named ipaddress_eth0 as our primary IP address. As a fallback we use the original ipaddress:

Facter.add("ipaddress_primary") do
    setcode do
        if Facter.value('ipaddress_eth0')
            Facter.value('ipaddress_eth0')
        else
            Facter.value('ipaddress')
        end
    end
end

The same logic is used for the netmask and macaddress facts. In order to distribute the new facts to our hosts, we added the files to our Puppet sources at /modules/module_name/lib/facter/ipaddress_primary.rb. We could now use the new facts in our Puppet modules, just like the original ipaddress fact.

For consistency, we should have changed all existing Puppet modules to use the new ..._primary facts. Since we only wanted to update the dockerized hosts and their modules, we tried to override the original fact instead. Some posts describe how to override a fact by simply reusing the same fact name, but that didn't work for us. So we chose another way of overriding existing facts: environment variables.

Environment variables

The Puppet CookBook describes how to override existing facts. You simply set an environment variable with the prefix FACTER_ followed by the name of the fact you'd like to override. In our example, the result looks like this:

FACTER_ipaddress="192.168.42.42"
FACTER_netmask="255.255.255.0"

Adding the environment variables on our Docker hosts through our Puppet modules looks as follows:

  augeas { "environments":
    context => "/files/etc/environment",
    changes => [
      "set FACTER_ipaddress '192.168.42.42'",
      "set FACTER_netmask '255.255.255.0'",
      ],
    onlyif  => 'match /files/etc/environment/FACTER_ipaddress/* size == 0',
  }

The overrides through environment variables have been added only to our dockerized modules, so we didn’t have to update all other hosts.

Another way to make Facter work together with Docker would have been to change the docker0 interface name. But as mentioned above, keep in mind that Docker wasn't the main issue; it is Facter's generic way of setting the ipaddress fact. Facter cannot know what you expect in your environment, so you have to describe your specific needs in explicit facts.

If you have found another way of overriding facts, or if this post was helpful to you, we'd like to know. Just leave a comment or get in contact via @gesellix!

Apache RewriteRule – Rewriting URLs With Already Encoded QueryStrings

Recently we renamed a URL which was publicly available. The system uses Apache httpd, so it was quite easy to create a RewriteRule:

RewriteRule ^/oldname/(.*) /newname/$1 [R,L]

Unfortunately that didn't work as expected. A URL like myserver/oldname?myprop=name with spaces will be encoded to myserver/oldname?myprop=name%20with%20spaces. With the above RewriteRule the rewritten URL becomes myserver/newname?myprop=name%2520with%2520spaces. It got encoded twice!

To fix this, you need the right keywords and Google. Searching for mod_rewrite url encode revealed that adding the NE flag (noescape) does the trick:

RewriteRule ^/oldname/(.*) /newname/$1 [R,NE,L]

Use MockInjector and package protected scope for dependencies to reduce boilerplate code

We were tired of writing so much boilerplate code for mocking dependencies in our unit tests.
That's why we wrote MockInjector, which automatically injects all mocks into our class under test.

Think of this class:

class MyClass {

  @Inject
  Foo foo;

  @Inject
  Bar bar;

  void doSomething() {
    foo.doSomething();
    bar.doAnything();
  }
}

If you want to test doSomething(), you need to mock Foo and Bar. The traditional way to do this with Mockito is:

class MyClassTest {

  MyClass objectUnderTest;
  Foo foo;
  Bar bar;

  @BeforeMethod
  public void setUp() {
    foo = mock(Foo.class);
    bar = mock(Bar.class);
    objectUnderTest = new MyClass();
    objectUnderTest.foo = foo;
    objectUnderTest.bar = bar;
  }
}

There is another way, using the Mockito annotations:

class MyClassTest {

  @InjectMocks
  MyClass objectUnderTest;
  @Mock
  Foo foo;
  @Mock
  Bar bar;

  @BeforeMethod
  public void setUp() {
    initMocks(this);
  }
}

It's always the same thing we do: declare all dependencies, create mocks for them, inject the mocks into the class under test.
We wanted to do this in one statement. Here it is:

class MyClassTest {

  MyClass objectUnderTest;

  @BeforeMethod
  public void setUp() {
    objectUnderTest = MockInjector.injectMocks(MyClass.class);
  }
}

MockInjector.injectMocks() finds all annotated dependencies and injects Mocks for them.
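
Roughly, such a helper can be implemented with a few lines of reflection. The following is only an illustrative sketch, not the actual MockInjector source:

// illustrative sketch only, not the actual MockInjector implementation
static <T> T injectMocks(Class<T> classUnderTest) throws Exception {
  T instance = classUnderTest.newInstance();
  for (java.lang.reflect.Field field : classUnderTest.getDeclaredFields()) {
    if (field.isAnnotationPresent(javax.inject.Inject.class)) {
      // replace every @Inject-annotated field with a Mockito mock
      field.setAccessible(true);
      field.set(instance, org.mockito.Mockito.mock(field.getType()));
    }
  }
  return instance;
}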

There are no dependency variables in the test class, which saves us two lines of code for each dependency.

But how do you stub and verify the interactions with the mocks?
We just use package protected scope for our dependencies.

@Test
public void doSomething_calls_foo_and_bar() {
  // when
  objectUnderTest.doSomething();

  // then
  verify(objectUnderTest.foo).doSomething();
  verify(objectUnderTest.bar).doAnything();
}

Discussion:

You just reduced boilerplate code in the setup and added boilerplate code in the test methods (verify(objectUnderTest.foo) instead of verify(foo)).

Good point, but there is one difference: the code in the test setup needs to be written manually, while the added code in the test method is mostly written by IDE autocompletion.

Package protected scope is not really encapsulation. Anyone can mess up the dependencies by putting a class in the same package.

If you are afraid of other people breaking your encapsulation by working around package protected scope, you should not use MockInjector and package protected scope.
In our team there is a convention to treat package protected fields like private fields. Since we usually do not share our code beyond our team, we have never experienced any problems of this kind. Package protected is private enough to say: “Don't change it unless you know what you are doing.” If a team member wants to change something there, he will not need to work around the package protected scope; he can just edit the class itself and make public whatever he needs.

But there are more advantages:

  1. Think about the next iteration of MyClass: We need to add a new dependency “baz”. We start by writing a red unit test (compilation error):
    @Test
    public void doSomethingElse() {
      given(objectUnderTest.baz)
    }
    

    Now we can use the quickfix feature of our IDE to introduce the new field and then add the @Inject annotation. No need to modify the test setup. Everything works out of the box.

  2. We often refactor our code to meet new requirements. That includes renaming classes and fields. In the past we often forgot to rename the mock variables in the test setup. Then it was hard to understand what the test really did, until you realized that you just needed to rename the variable holding the mock. Since there is no field for the mock anymore, there is nothing to rename :-)
  3. If we forget to annotate a dependency, MockInjector will not mock it and the test will fail. No surprises with NullPointerExceptions in production.

Is it production ready?

We have been working successfully with MockInjector for several years. Besides, it's not production code, it's test code: if MockInjector makes your red tests green, it obviously works.

Do I need a special test or dependency injection framework to use it?

No. There is a dependency on javax.inject.Inject, but you can use it with the Spring Framework or Google Guice, too. You can use it with any test framework you want.

Where can I get it?

MockInjector is available at https://github.com/hypoport/MockInjector

Package protected fields in combination with MockInjector for testing is the easiest and most straightforward way of coding we have found. Give it a try!

One more thing for IntelliJ IDEA users:
With this file template for new class files you do not even need to write “injectMocks” manually.

#parse("File Header.java")
#if (${PACKAGE_NAME} != "")package ${PACKAGE_NAME};#end

#if ($NAME.endsWith("Test"))
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.hypoport.mockito.MockInjector.injectMocks;
#end

#parse("Type Header.java")
public class ${NAME} {

#if ($NAME.endsWith("Test"))

  $NAME.replace("Test", "") $NAME.substring(0, 1).toLowerCase()$NAME.replace("Test", "").substring(1);

  @BeforeMethod
  public void setUp() throws Exception {
    $NAME.substring(0, 1).toLowerCase()$NAME.replace("Test", "").substring(1) = injectMocks($NAME.replace("Test", "") .class);
  }

#end
}

AngularJS 1.2 update hints

As you might have noticed, AngularJS 1.2.2 (and meanwhile 1.2.3) has been released lately, without a dedicated announcement. Our update from AngularJS 1.2-rc2 went quite smoothly; only two hints might be noteworthy in addition to the official migration guide.

  1. With the current version the AngularJS team has fixed some issues regarding the isolate scope, as described in the changelog for the 1.2.0 release and in the relevant GitHub issues #1924 and #2500.
    In directive tests you might currently access the isolate scope of an element via element.scope(). With the current release you have to use the newly introduced function element.isolateScope() (see the sketch after this list). Just a simple find and replace task :-)

  2. Don't forget to check third party libraries for compatibility updates. We used the Angular-UI Select2 directive in an older release, 0.0.2. Running our e2e-tests produced some strange and non-deterministic error messages in PhantomJS, but not in Chrome or other standard browsers. The errors seemed to be triggered by directive priority changes in AngularJS 1.2.0, and updating to the current release 0.0.4 of ui-select2, which sets a fixed priority, made the errors go away.
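
A minimal before/after sketch of the directive-test change from the first hint (the directive, its scope property and the test setup are made up):

// hypothetical directive test, e.g. with Jasmine and angular-mocks
var element = $compile('<my-widget value="outerValue"></my-widget>')($rootScope);
$rootScope.$digest();

// up to 1.2-rc2 the isolate scope was returned by scope():
// var isolateScope = element.scope();

// since the 1.2 release you have to use the dedicated function:
var isolateScope = element.isolateScope();
expect(isolateScope.value).toBe($rootScope.outerValue);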

Passing Functions to AngularJS Directives

I recently built a custom directive that needed to call a function in its parent scope and pass some internal variables back as arguments. To be more specific: an editable label, a simple text that turns into an input field when it's clicked and turns back into plain text when the focus leaves the input field. On top of that, I needed it to invoke a callback function if, and only if, the value of the input field has actually changed.

The main part of the directive was pretty straightforward to build. But implementing the callback to report the change back to the parent scope took me some time. The way it works turned out to be rather unintuitive, but it is extremely powerful once you get the hang of it. In this article I'm going to show you how it's done.

A Simplified Scenario

Let's say we have a function foo in our $scope:

$scope.foo = function (newValue, oldValue) {
    console.log('Value changed: ' + oldValue + ' to ' + newValue);
}

And a custom directive called bernd:

angular.module('app', []).directive('bernd', function () {
    return {
        restrict: 'E',
        scope: {
            callback: '&'
        }
    };
});

As you can see, the directive has an isolated scope and provides an attribute callback, which will be bound to the scope as an expression thanks to the &. This is very important, since it basically tells AngularJS to wrap the expression passed via the attribute into a magical function that allows us to do some fancy things. But more about that later.

How to Pass the Function

Now let's see how we can pass foo to bernd. The most intuitive way would be to simply pass the function like any other variable:

<bernd callback="foo"/><!-- Be aware: this doesn't work -->

Unfortunately that's not how it works. Instead we need to pass the function as if we wanted to invoke it. Remember how I told you about the magical wrapper function AngularJS slaps around the function we pass? At a certain point said wrapper will actually evaluate the given expression. So passing the function like this brings us one step closer:

<bernd callback="foo(newValue, oldValue)"/>

That's actually all you need to do from a client perspective. But of course our directive has to call the function at some point for this to work. And that's where it gets interesting:

var before = 4;
var after = 8;
$scope.callback({
    newValue: after,
    oldValue: before
});

Instead of passing the arguments to the callback directly, we create a map of argument names to values. Why is that? Remember that we pass foo as an expression in which we actually invoke it. AngularJS simply wraps this expression into a magical little function, which is responsible for providing the environment the expression needs before actually evaluating it. In this case, our expression expects two variables to be present: newValue and oldValue. The map we give to the wrapper function simply maps variable names to the values we want to make available as local variables.
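
To put the pieces together, here is a minimal sketch of how the directive could wire this up; the template and the blur handler are simplified assumptions, not the original editable-label implementation:

angular.module('app', []).directive('bernd', function () {
    return {
        restrict: 'E',
        scope: {
            callback: '&'
        },
        // simplified template: a single input instead of the full editable label
        template: '<input ng-model="value" ng-blur="valueChanged()">',
        link: function (scope) {
            var original = scope.value;
            scope.valueChanged = function () {
                if (scope.value !== original) {
                    // invoke the wrapped expression and provide its local variables
                    scope.callback({ newValue: scope.value, oldValue: original });
                    original = scope.value;
                }
            };
        }
    };
});

With <bernd callback="foo(newValue, oldValue)"></bernd> in the markup, foo gets called with the directive's internal values whenever the input loses focus with a changed value.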

Some More Details

Internally, AngularJS uses the $parse service to wrap the given expression into a function that allows it to change the context in which the expression is evaluated to the parent scope and to pass local variables as a map (locals). The magical wrapper function simply reduces this to passing the locals. So it's actually not that magical after all.

This means that the variable names used in the expression are actually part of your directive's public API. They are basically internal variables that you make available to the outside world. You don't actually pass a function to your directive, but simply a piece of code to be executed in the parent scope, which gets access to some predefined local variables. Once you look at it like this, it's actually a pretty simple concept.

Conclusion

Figuring this all out took me a while, though. Reading the documentation more thoroughly would’ve certainly helped, as the section about the directive definition object clearly states:

[...] Often it’s desirable to pass data from the isolated scope via an expression and to the parent scope, this can be done by passing a map of local variable names and values into the expression wrapper fn. For example, if the expression is increment(amount) then we can specify the amount value by calling the localFn as localFn({amount: 22}).

But somehow I always skipped over this part, as I expected to be able to pass and invoke a function reference similar to the two-way binding. I was looking for the wrong thing in all the right places. On the bright side, I learned a lot more about the way AngularJS works internally and hopefully created a helpful tutorial for those of you who are trying to figure out the same thing.

Please let me know if this was helpful to you. Either via comment or Twitter @SQiShER.