Alignment through Consensus

Making decisions in a classic hierarchical environment is simple and efficient. The manager decides, to the best of their knowledge, on behalf of the team, so that it can concentrate on execution.

When it comes to decisions in an agile context with self-organized teams, everyone has lately been talking about "Alignment and Autonomy" – something like a shared direction while at the same time preserving decision-making autonomy within the teams.

So how is leadership supposed to provide direction without prescribing the exact path and solution?

As an answer, in this blog post I'll make a case for good old consensus. In the end, it is in my opinion the best way to achieve a shared direction. On top of that, consensus ensures that decisions are made sustainably by the whole team. As a result, everyone sticks to them in the long run, and you finally get peace from the recurring discussions about the same old topics. That makes the occasionally higher effort of consensus-based decision-making well worth it.

A few tips and tricks can help along the way:

  • Full transparency of all tasks and their status – hold regular status meetings with everyone!
  • Everyone may get involved in any task – always build your teams/working groups with the question "…and who else wants to join in?"
  • Take every objection seriously, even late in the decision process, and work it into the consensus.
  • If you cannot reach consensus, involve more colleagues and opinions – use the diversity in your organization!
  • Try to trace emotions immediately, all the way into the personality – train all employees in coaching skills!
  • Ignore hierarchies, whether formally defined or systemically lived – involve all colleagues as equals!
  • Define self-organization as the highest value in leadership – force your managers to hold back and wait!
  • Count your employees' overwhelm at too much freedom as a success – stick to your principles!
  • For advanced practitioners: agree on the values by which you act, at work and in private, in consensus with all colleagues!

But consensus also has natural enemies. These are the ideas of Alignment and Autonomy described at the beginning, which have to do with enthusiasm, vision, purpose, direction and so on. These principles operate on the meta level, are hard for managers to implement, and are therefore dangerous. In the worst case, nobody in your organization will strive for consensus anymore. That must be prevented.

Dangers even lurk at the level of operating systems for organizations: the newest, currently hyped alternative goes by the name of "holacracy" or "sociocracy". As a colleague put it: "What should I expect from people who advocate something ending in '-cracy'?" He's right: this calls for completely new thinking, and the whole thing even fits perfectly with all that agile stuff. You don't want to get involved in that; it turns everything upside down and defines a new way of working together! It doesn't have to come to that!

Long live consensus!

Stable regards,
Evil Coach

Microxchg Special: Seneca Node.js μServices Framework and Docker Orchestration

On Wednesday, Feb 11, the evening before microxchg – the microservices conference – Hypoport is hosting the microservices meetup Berlin with talks from two of the conference speakers.

Richard Rodger will talk about the Seneca Node.js μServices framework.

Peter Rossbach will talk about Docker orchestration.

Please register for the microservices meetup Berlin.

See you there, Leif

Continuous Deployment with Gradle and Docker – Part 3

This is the third part of our series about the deployment of a JVM and AngularJS based application using Gradle as build tool and Docker as deployment vehicle. You’ll find an overview of all posts in the introductory post.

As seen in the overview, the next step in our deployment pipeline performs so-called e2e tests. The common Gradle project setup has already been described in part 2, so we can start with the Gradle script for the e2e test submodule. To follow the descriptions below, you can use the code examples in the GitHub project’s branch “part3”.

E2E Test Basics

We already compared different levels of code and application testing in a dedicated post, where e2e tests were described as a way to test from a user’s perspective. You’ll see that the concepts described there haven’t changed, but the tooling has been greatly improved. Though the AngularJS e2e testing guide still mentions the deprecated Angular Scenario Runner, it recommends the new test runner Protractor as the way to go.

If you’re already familiar with WebDriver or Selenium tests, Protractor will feel very familiar. You can think of Protractor as an AngularJS-specific extension on top of a JavaScript implementation of WebDriver. In fact, you could use the native WebDriverJs tool to write e2e tests, but Protractor lets you hook into AngularJS-specific events and element locators, so that you can focus on test code rather than on technical details.

Since e2e tests are executed from a user’s perspective, you run your application in a way similar to your production environment. With our example project, we need to run the Spring Boot frontend and backend applications. For a real project, you’ll probably need to mock external dependencies or use another database, so that your tests cannot be influenced by external changes or instabilities.

Example Project Setup

Looking at the example project, you’ll find a Gradle submodule for our e2e tests. The Gradle script should only need to run the application and the Protractor e2e tests. As simple as that seems, we also wanted the e2eTests task to run on our TeamCity agents with a dedicated build version. That means we wanted to pass an application version and let Gradle fetch and run the desired artifacts, instead of using the current build artifacts of the Gradle project. This allows us to parallelize the e2e tests with other build steps, which decreases the overall deployment time.

Selecting the application version

The mechanism to pass our application version from the first TeamCity build goal to the e2e test build goal works with text files written before the Gradle build starts. See the TeamCity Artifact Dependencies docs for details. In essence, our Gradle script expects a file containing the application version that the e2e tests have to test. The file is committed with a default version at /application-version.txt and needs to be overwritten with an existing version. It is read in the readVersionFile task before the application artifacts are downloaded.
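A minimal sketch of such a version lookup in Gradle might look like this (the task and file names follow the post, but the exact script in the repository may differ):

```groovy
// Sketch: read the version to test from a text file that TeamCity
// overwrites before the build (committed with a default at /application-version.txt).
task readVersionFile {
    doLast {
        def versionFile = rootProject.file('application-version.txt')
        // fail early instead of testing an arbitrary build
        assert versionFile.exists() : "expected ${versionFile} to contain the application version"
        project.ext.applicationVersion = versionFile.text.trim()
        logger.lifecycle("e2e tests will run against version ${project.ext.applicationVersion}")
    }
}
```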

Running our services as background tasks

Before running Protractor, we need to start a Selenium server and the application artifacts. Additionally, we run a reverse proxy to make both webapps available on a single port. All four services need to be started and kept running until the Gradle e2eTests task has finished.

There are several ways to run background tasks during a Gradle task. As you see in our script, we chose to manually manage our background services by calling the execAsync function. It manages the environments of our commands, allows optional logging of a command’s output, and allows waiting for an expected output line by adding a CountDownLatch.
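The core idea of such a helper can be sketched in plain Groovy: start a process, stream its output, and count down a latch once an expected log line appears. The method and parameter names below are illustrative, not the project's actual API:

```groovy
import java.util.concurrent.CountDownLatch

// Sketch: run a command in the background and release the latch
// as soon as `expectedOutput` shows up in the process output.
def execAsync(List<String> command, Map<String, String> env = [:],
              String expectedOutput = null, CountDownLatch latch = null) {
    def builder = new ProcessBuilder(command)
    builder.environment().putAll(env)
    builder.redirectErrorStream(true)
    def process = builder.start()
    Thread.start {
        process.inputStream.eachLine { line ->
            println line
            if (expectedOutput && latch && line.contains(expectedOutput)) {
                latch.countDown()
            }
        }
    }
    // keep a reference so a cleanup task can call destroy() later
    return process
}
```

The caller would then `await()` on the latch before continuing the Gradle task, e.g. until a Tomcat startup message has been logged.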

The startSeleniumServer task uses the nice selenium-server-standalone-jar Node module to fetch the required jar file, then runs it on the default port 4449 with a simple java -jar ... call.

Our reverse proxy, which combines the frontend and backend webapp ports on a single port, is implemented as a small Node module that delegates every request to the corresponding webapp. Requests whose first path segment is “example-backend” are delegated to the backend port; requests with the path segment “example” go to the frontend port. The important part is our default proxy port 3010, which needs to be used in the base URI of our e2e tests.

Both webapps are started with the mentioned CountDownLatch, which waits for a Tomcat-specific message to appear in the output before the Gradle task continues.

Running the e2e tests with Protractor

The last step in the e2eTests task triggers Protractor to perform our e2e tests. Protractor lets you provide your individual configuration in a JavaScript file. Our example configuration is taken from our TeamCity config. It sets some defaults like the application’s base URI, configures the specs pattern to tell Protractor where to find our tests, and also lets us hook into the Protractor lifecycle to prepare or enhance the browser object used in the test code. That’s the place where you might override or configure the Angular application, e.g. by disabling animations.

Some more technical hooks are extracted to our protractorUtils.js, where we add features like passing browser console logs through to our shell or taking browser screenshots of failed tests. These helper features need to be enabled in every test suite, so we add a globalSetup function to the browser object.

A TeamCity-specific part of our configuration enables better logging and a test count feature in the TeamCity builds. We only needed to add a TeamCity reporter, which is available as a Node module.

Protractor can now find our tests at the configured path. The e2e tests look similar to typical unit tests, but the first action in our e2e tests is executing globalSetup on the browser instance.

The actual tests use the convenience methods provided by Protractor to locate elements by their id or their Angular binding. We won’t go into a detailed description of the supported features here, but recommend browsing through the Protractor reference.

Task cleanup

When the e2eTests task is finished, we need to stop the background services. To accomplish this, we used the Gradle feature to finalize the e2eTests task with a stopProcesses task, which simply calls destroy() on our background threads. Only in rare cases, when the Gradle script is aborted by an exception so that Gradle doesn’t get a chance to call our stopProcesses task, do we end up with stale processes on our TeamCity agent. To kill those processes, we added a simple shell script to our TeamCity build goals that finds and kills all processes on our well-known ports (8080, 8090, 3010, 4449).
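The wiring can be sketched like this, assuming the background Process instances were collected in a list when they were started (the variable and task names here are illustrative):

```groovy
// Sketch: collected references to all background processes
ext.backgroundProcesses = []

task stopProcesses {
    doLast {
        backgroundProcesses.each { process ->
            process.destroy()
        }
    }
}

// whenever e2eTests ran (successfully or not), stop the services afterwards
e2eTests.finalizedBy stopProcesses
```

finalizedBy is the key: unlike a plain task dependency, the finalizer also runs when e2eTests fails, so the services are stopped in the normal failure case as well.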

Summary

A test run started with ./gradlew e2eTests in the project root looks like the screenshot below. You get some information about how many tests have passed; in case of a failure you get more details and even stack traces.

e2e tests output

Though e2e tests are easy to write, and Protractor’s features help with locating elements and minimizing timing issues, they can slow down your deployment pipeline a lot. In our case, the initial build/publish step takes less than two minutes and the other pipeline steps take roughly two minutes each, but the e2e test step needs at least seven minutes to execute 54 tests.

In fact, we regularly discuss how to minimize the number of e2e test cases, but since one usually needs a more integrative way to ensure everything works as expected, there will always be good use cases for e2e tests.

Our e2e tests focus on the user interface with mocked external dependencies. Another aspect of checking your application before deploying it to production is its interoperability with other services. This is where contract tests come into play, which are the topic of the next part of this series.

If you have questions so far, please use the comments feature below, or contact us via Twitter @gesellix.

Docker Meetup at Hypoport with “Why you’ll love managing containers with Docker” & “Docker on AWS” on Jan 19th

Docker Berlin is back! You can now follow us on Twitter, too: @DockerBerlin

To init and containerize the new year properly, we have two great speakers lined up at Hypoport on Jan 19th.

Johannes Ziemke [Docker Inc.] and Sascha Möllering [ZANOX.de AG]

Managing containers with Docker…and why you’ll love it – Johannes Ziemke

What are the challenges of today’s infrastructures, why are containers the right building blocks, and why is Docker the right tool to manage them?

Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.

I’ll talk about the challenges of managing complex infrastructures based on my past experiences: why live state mutation sucks and configuration management is doomed, why self-sufficient and lightweight containers are the answer, and how Docker manages them.

Last but not least, I’ll show how to build, ship and run containers, along with real-world examples of what Docker is already used for.

Speaker profile: http://de.linkedin.com/in/johannesziemke

Docker in the Cloud on AWS – Sascha Möllering

In this talk, I’ll describe how to leverage the potential of Docker and Amazon Web Services to deploy Docker Containers in AWS, connect to managed services from your application and implement the immutable server pattern.

Speaker profile: https://www.linkedin.com/pub/sascha-m%C3%B6llering/2b/268/403

Agenda:

6:00 – 6:30: Networking

6:30 – 7:15: Managing containers with Docker

7:15 – 7:30: Break

7:30 – 8:00: Docker in the Cloud on AWS

8:00 – open end: Networking

This meetup is kindly sponsored by Hypoport, which provides the venue, food and drinks! Please sign up on the meetup Docker Berlin event page.

Merry Xmas and a Happy New Year. See you there.

No More Monoliths! Microservices in Practice

At Hypoport AG we have already been through three different incarnations of modularization. Each incarnation brought us closer to the ideal of a flexible, maintainable architecture. And yet, after just a few years of product development we found once again: the application is full of unintended complexity, innovation has become difficult, and implementing functionality takes ever longer. The microservice architecture style promises lasting improvement by decomposing a system into small, independent services. We tried it and are thrilled.

The article (in German) appeared in Java Magazin 8.2014.

Read the article: http://jaxenter.de/artikel/nie-wieder-monolithen-176652

My Takeaways from manage agile 2014 in Berlin

manageagile14
Last week, the manage agile conference took place in Berlin from October 27th to 30th, 2014. Monday offered full-day workshops, Tuesday and Wednesday were filled with talks, and Thursday featured half-day workshops. For lack of topics that interested me, I skipped Monday and joined on Tuesday.
A few statements stuck with me that I heard, more or less, in every talk I attended (17 of them!).
The most striking statement was that the job description for managers needs to be rewritten. And of course there were many suggestions for making sure agility in a company isn’t merely “played”. First and foremost is answering the employee’s question of purpose: why should I work at this company and on this product? The answer then provides the alignment that makes goal-oriented, autonomous work in teams possible in the first place. Furthermore, the manager’s role is evolving more and more toward that of a coach. The motto is not doing it yourself, but enabling others to do it. That expectations and goals for employees should nevertheless still be agreed upon shows me very clearly that the agile wave of change has reached management and that we are in the middle of a transition to a new picture of leadership.
In fact, there was no doubt during the conference that agility as a topic for employees is settled – it has become mainstream.
There wasn’t much concrete help for the change-minded manager. Nevertheless, the talks pointed out several ways to try to reshape collaboration with concrete tools, e.g. peer feedback, Management 3.0, Design Thinking, Lean Change or Lean Startup. I think this could continue well next year. At least that would be my talk for next year ;-)
I got a bit of concrete material on Thursday: a half-day workshop on sociocracy. For me it was a welcome refresher and motivated me to try some of it at Hypoport. One participant even called it “agility for governance”. I wouldn’t go that far, but I find it fits very pleasantly into the agile mindset.
All in all, the conference was worth it for me, although two days of nonstop frontal presentations are really exhausting and no longer up to date. The organizers have noticed this too and want to make the format more open next time.
See you next year!

Continuous Deployment with Gradle and Docker – Part 2

After a rather long holiday break, we now continue our series about the continuous deployment pipeline with Gradle and Docker.

This post is about the first step, in which our build chain creates the Spring Boot packages and publishes them to our Nexus repository manager. As shown in the high-level overview below, it is only a small part of the complete pipeline:
Deployment Pipeline with Gradle and Docker

Gradle and Spring Boot provide a very convenient build and plugin system and work out of the box for standard builds. Yet the devil is in the details. Our project is a multi-module setup with the following subprojects:

  • backend
  • frontend
  • common
  • contract-test
  • e2e-test

The backend and frontend projects are our main modules, each deployed as a standalone application. They share the common project, which contains the security and web configuration. The contract-test and e2e-test projects contain more integrative tests and will be discussed in dedicated posts later.

We’ll now take a deep dive into our build scripts and module structure. You can find the example source code on GitHub, where we provide a minimal but working project with the important parts described here.

Gradle project setup

A build on our CI server TeamCity uses the Gradle wrapper to run the tasks build and publish. These tasks are called on the root level of our project. Our Gradle root project contains the common configuration, so the subprojects only need to configure minimal aspects or special plugins.

Shared dependency versions are defined in the root project, so that all subprojects use the same dependency versions. Gradle also allows you to define sets of dependencies, so that you can reference them as a complete package without knowing the details. We call these sets libraries; you can find an example in the root build.gradle along with its usage in the dependencies closure.
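Such a dependency set might be declared in the root build.gradle roughly like this (the library names and versions below are only placeholders, not the project's actual dependencies):

```groovy
// root build.gradle sketch: named sets of dependencies ("libraries")
ext.libraries = [
        spring : [
                "org.springframework.boot:spring-boot-starter-web:1.2.1.RELEASE",
                "org.springframework.boot:spring-boot-starter-actuator:1.2.1.RELEASE"
        ],
        testing: [
                "junit:junit:4.12",
                "org.mockito:mockito-core:1.10.19"
        ]
]

// a subproject can then pull in a whole set without knowing its details:
dependencies {
    compile libraries.spring
    testCompile libraries.testing
}
```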

Using a common definition of dependencies sometimes isn’t enough, because you also have to handle transitive dependencies. You have the option to manage transitive dependencies by manually excluding or even redefining them. Another option we often use is to override clashing dependency versions in the build script’s configurations. The resolutionStrategy can be configured to fail when version conflicts are detected. The example project shows how we manage our dependencies globally.
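A hedged sketch of such a global resolution strategy (the forced version is illustrative):

```groovy
// sketch: applied to all projects from the root build.gradle
allprojects {
    configurations.all {
        resolutionStrategy {
            // make version clashes visible instead of silently picking the newest
            failOnVersionConflict()
            // ...and explicitly pin the version we want for known clashes
            force 'org.slf4j:slf4j-api:1.7.10'
        }
    }
}
```

With failOnVersionConflict() the build breaks on an unexpected transitive version, which forces clashes to be resolved consciously via force or excludes rather than by Gradle's default newest-wins rule.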

Spring Boot configuration

Building a Spring Boot application with Gradle is simplified with the help of the Spring Boot Gradle Plugin. The plugin configures your build script so that running gradle build depends on the bootRepackage task.

You’ll see in the backend and frontend build.gradle scripts that we configure Gradle to replace a token in our source files with the artifactVersion. This token replacement sets the actual version in our application.properties file, which is used to configure Spring Boot. By adding a line like info.build.version=@example.version@ we enable the /info endpoint so that we can ask a running application about its version. The version will be used later in our deployment pipeline. Details on our artifact versioning scheme are described in the section about publishing below.
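Assuming the token is @example.version@ as in the property above, the replacement can be sketched with a processResources filter; the actual scripts may implement it differently:

```groovy
import org.apache.tools.ant.filters.ReplaceTokens

// sketch: replace @example.version@ in application.properties
// with the generated artifactVersion while copying resources
processResources {
    filesMatching('application.properties') {
        filter(ReplaceTokens, tokens: ['example.version': project.artifactVersion.toString()])
    }
}
```

ReplaceTokens uses @…@ delimiters by default, so only the token name between the @ signs is configured here.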

Performing Node.js build tasks

Our backend build isn’t very spectacular, but our frontend build needs some more explanation. We implemented our frontend with AngularJS, but use Spring Boot to deliver the static resources and to implement security. Before packaging the AngularJS resources in the frontend artifact, we let Gradle perform a grunt release task. Grunt is a Node.js based task runner, which lets us run unit tests, minify our frontend code and even images, and package everything. The result then needs to be copied to the public resources folder of Spring Boot.

Configuring a Node.js build in a platform-neutral way isn’t trivial, but the gradle-grunt-plugin and the gradle-node-plugin help a lot. Apart from delegating grunt release to the plugin, we also configure the corresponding grunt_release task with inputs and outputs in the Gradle build script. The inputs and outputs help Gradle decide whether the task needs to be executed: if there haven’t been any source changes and the output still exists, the task is considered up to date and is skipped.
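The incremental-build wiring can be sketched like this (the source and output paths are assumptions for illustration, not the project's actual layout):

```groovy
// sketch: make the grunt_release task incremental by declaring
// its inputs (frontend sources) and outputs (the built dist folder)
grunt_release {
    inputs.dir file('src/frontend')
    inputs.file file('Gruntfile.js')
    outputs.dir file('dist')
}
```

If neither the declared inputs nor the outputs changed since the last run, Gradle marks grunt_release as UP-TO-DATE and skips the comparatively slow Grunt invocation.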

Publishing and versioning Gradle artifacts

With both frontend and backend packaged as artifacts, we want to publish them to our Nexus artifact repository. Nexus needs the well-known triple of groupId, artifactId and version to identify an artifact. The Gradle maven-publish plugin can be conveniently configured to use the project’s group, name and version as Maven coordinates. As you can see in the example source code, we already configure the group in our root project. The subproject’s name fits our needs as artifactId, which leads us to the final property, the version.

We wanted the version to be unique and sortable by the artifact’s build time. We also didn’t want to maintain a version.txt in our project. Long story short, we defined our version to follow the scheme yyyy-MM-dd'T'HH-mm-ss_git-commit-hash. The part before the _ is the build timestamp and the part after it is the latest commit hash of the project’s git repository. That way we can quickly see when the artifact was built and from which commit in the project’s history.
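Generating such a version in Gradle can be sketched in a few lines; the real script may instead use a git plugin rather than shelling out:

```groovy
// sketch: build a version like 2015-02-12T10-03-22_c4f63ab
def gitCommitHash = 'git rev-parse --short HEAD'.execute().text.trim()
def buildTimestamp = new Date().format("yyyy-MM-dd'T'HH-mm-ss")
version = "${buildTimestamp}_${gitCommitHash}"
```

Because the timestamp comes first and uses fixed-width, zero-padded fields, a plain lexicographic sort of the published versions is also a chronological sort.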

The artifact version is generated on every build. Apart from updating our application.properties, we also use it to configure the publish task in our root project. The rest works out of the box; we only need to configure the Nexus publish URL with username and password.
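A minimal maven-publish configuration along these lines might look as follows (the repository URL and credential property names are placeholders, and the actual project may publish the repackaged Spring Boot jar differently):

```groovy
// sketch: publish the jar with the generated group/name/version coordinates
apply plugin: 'maven-publish'

publishing {
    publications {
        mavenJava(MavenPublication) {
            // group, name and version become the Maven coordinates
            from components.java
        }
    }
    repositories {
        maven {
            url 'https://nexus.example.com/content/repositories/releases'
            credentials {
                username nexusUsername   // e.g. provided via gradle.properties
                password nexusPassword
            }
        }
    }
}
```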

Build on a CI-Server

Our CI server TeamCity now only needs to execute gradlew clean build publish to compile, run all unit tests, package the Spring Boot applications, and publish them to the artifact repository. Yet that alone isn’t enough, because we also want to perform integration tests and deploy the applications to our internal and production stages.

TeamCity provides a feature to declare so-called build artifacts, which can be used by subsequent build goals in our build chain. We want the other build goals to know the application version, so we write it into a text file on the build agent and pass it to all build goals in our pipeline. Every build goal then uses the version to fetch the artifact from Nexus. The image below shows all build goals of our build chain:

Build Chain

The selected yellow box in the build chain corresponds to the build step described in this article. As promised, the next article in our series will describe in detail how we perform our integrative e2e and contract tests. Comments and feedback here or @gesellix are welcome!