Peer Feedback as topic at the Berlin Culture Hacking Meetup

Since I posted my first articles on Peer Feedback in 2014, I have noticed that people want to learn more about this approach. When I attended a workshop with Olaf last year, we talked about sharing my thoughts on the topic with the Berlin Culture Hacking Meetup. That was the starting point for inviting the culture hackers to Hypoport and hosting the event.

So last Friday I took the opportunity to present Peer Feedback to the community. We had enlightening discussions around the topic, and it was a really great evening. Thanks to all participants for their input and discussion!

Here are the slides: Peer Feedback – Culture Hacking 2015 – printable slides (German).

As a helpful starting point for your own peer feedback dialogue, you can use the Peer Feedback Sheet (currently only available in German).

Looking forward to the next Culture Hacking Meetup!

Andreas

GOTO Night at Hypoport: From the Monolith to Microservices – Randy Shoup

Hypoport is happy to host an interesting GOTO Night with Randy Shoup talking about “From the Monolith to Microservices”.

On behalf of GOTO Berlin and the Microservices Meetup Berlin, we welcome you to Hypoport.

Venue: Hypoport, Klosterstr. 71, 10179 Berlin
Date: April 20, 2015
Entrance: 18:30 / 6:30PM
Talk: 19:00 / 7PM

“From the Monolith to Microservices: Lessons from Google and eBay”
by Randy Shoup

Abstract:
Most large-scale web companies have evolved their system architecture from a monolithic application and monolithic database to a set of loosely coupled microservices. Using examples from Google, eBay, and other large-scale sites, this talk outlines the pros and cons of these different stages of evolution, and makes practical suggestions about when and how other organizations should consider migrating to microservices. It concludes with some more advanced implications of a microservices architecture, including SLAs, cost-allocation, and vendor-customer relationships within the organization.

Bio:
Randy has worked as a senior technology leader and executive in Silicon Valley at companies ranging from small startups to mid-sized companies to eBay and Google. In his consulting practice, he applies this experience to scaling the technology infrastructures and engineering organizations of his client companies. He served as CTO of KIXEYE, a 500-person maker of real-time strategy games for web and mobile devices. Prior to KIXEYE, he was Director of Engineering in Google’s cloud computing group, leading several teams building Google App Engine, the world’s largest Platform as a Service. Previously, he was CTO and Co-Founder of Shopilly, an ecommerce startup, and spent 6½ years as Chief Engineer and Distinguished Architect at eBay. Randy is a frequent keynote speaker and consultant in areas ranging from scalability and cloud computing, to analytics and data science, to engineering culture and DevOps. He is particularly interested in the nexus of people, culture, and technology.
Twitter: @randyshoup

Registration
Please register here. For any questions do not hesitate to contact Dajana Günther.

Microxchg Special: Seneca Node JS μServices Framework and Docker Orchestration

On Wednesday, Feb 11, the evening before microxchg (the microservices conference), Hypoport is hosting the Microservices Meetup Berlin with talks from two of the conference speakers.

Richard Rodger will talk about the Seneca Node JS μServices Framework

Peter Rossbach will talk about Docker Orchestration

Please register on the Microservices Meetup Berlin page.

See you there, Leif

Continuous Deployment with Gradle and Docker – Part 3

This is the third part of our series about deploying a JVM and AngularJS based application using Gradle as the build tool and Docker as the deployment vehicle. You’ll find an overview of all posts in the introductory post.

As seen in the overview, the next step in our deployment pipeline performs so-called e2e (end-to-end) tests. The common Gradle project setup has already been described in part 2, so we can start with the Gradle script for the e2e test submodule. To follow the descriptions below, you can use the code examples in the GitHub project’s branch “part3”.

E2E Test Basics

We already compared different levels of code and application testing in a dedicated post, where e2e tests were described as a way to test from a user’s perspective. The concepts described there haven’t changed, but the tooling has improved greatly. Though the AngularJS e2e testing guide still mentions the deprecated Angular Scenario Runner, it recommends the new test runner Protractor as the way to go.

If you’re already familiar with WebDriver or Selenium tests, Protractor will feel very natural. You can imagine Protractor as an AngularJS-specific extension on top of a JavaScript implementation of WebDriver. In fact, you could use the native WebDriverJs tool to write e2e tests, but Protractor lets you hook into AngularJS-specific events and element locators, so that you can focus on test code rather than technical details.

Since e2e tests are executed from a user’s perspective, you run your application in a way similar to your production environment. In our example project, we need to run the Spring Boot frontend and backend applications. For a real project, you’ll probably need to mock external dependencies or use a separate database, so that your tests can’t be influenced by external changes or instabilities.

Example Project Setup

Looking at the example project, you’ll find a Gradle submodule for our e2e tests. The Gradle script only needs to run the application and the Protractor e2e tests. Simple as that sounds, we also wanted the e2eTests task to run on our TeamCity agents against a dedicated build version. That means we wanted to pass an application version and let Gradle fetch and run the desired artifacts, instead of using the current build artifacts of the Gradle project. This allows us to parallelize the e2e tests with other build steps, which decreases the overall deployment time.

Selecting the application version

The mechanism that passes our application version from the first TeamCity build goal to the e2e test build goal relies on text files written before the Gradle build starts. See the TeamCity Artifact Dependencies docs for details. In essence, our Gradle script expects a file containing the application version that the e2e tests have to test. The file exists with a default version at /application-version.txt and needs to be overwritten with an existing version. It is read in readVersionFile before the application artifacts are downloaded.
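A minimal sketch of how such a task might look; the file location and task name are taken from the description above, while the property name and log message are our assumptions:

```groovy
// Read the version under test from the text file that TeamCity overwrites.
// Falls back to the default version committed at /application-version.txt.
task readVersionFile {
    doLast {
        def versionFile = file("${rootDir}/application-version.txt")
        project.ext.applicationVersion = versionFile.text.trim()
        logger.lifecycle("e2e tests will run against version ${project.ext.applicationVersion}")
    }
}
```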

Running our services as background tasks

Before running Protractor, we need to start a Selenium server and the application artifacts. Additionally, we run a reverse proxy to make both webapps available on a single port. All four services need to be started and kept running until the Gradle e2eTests task has finished.

There are several ways to run background tasks during a Gradle build. As you can see in our script, we chose to manage our background services manually by calling the execAsync function. It manages the environments of our commands, optionally logs a command’s output, and allows waiting for an expected output by means of a CountDownLatch.
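The following Groovy sketch shows how such an execAsync helper might look. It is an illustration of the idea, not the project’s actual implementation; the signature, timeout, and logging are our assumptions:

```groovy
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

// Start a command in the background, stream its output, and optionally
// block until an expected line appears (or a timeout is reached).
def execAsync(List<String> command, Map<String, String> env = [:], String expectedOutput = null) {
    def latch = expectedOutput ? new CountDownLatch(1) : null
    def builder = new ProcessBuilder(command).redirectErrorStream(true)
    builder.environment().putAll(env)
    def process = builder.start()
    Thread.start {
        process.inputStream.eachLine { line ->
            println line
            if (latch != null && line.contains(expectedOutput)) {
                latch.countDown()
            }
        }
    }
    latch?.await(2, TimeUnit.MINUTES)
    return process
}
```

The returned Process objects can be collected so that a cleanup task can destroy them once the e2e tests have finished.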

The startSeleniumServer task uses the nice selenium-server-standalone-jar Node module to fetch the required jar file, then runs it on the default port 4449 with a simple java -jar ... call.

Our reverse proxy, which combines the frontend and backend webapp ports on a single port, is implemented as a small Node module that delegates every request to the appropriate webapp. Requests whose first path segment is “example-backend” are forwarded to the backend port; requests with the path segment “example” go to the frontend port. The important part is our default proxy port 3010, which needs to be used in the base URI of our e2e tests.

Both webapps are started with the CountDownLatch mentioned above, which waits for a Tomcat-specific message to appear in the output before the Gradle task continues.

Running the e2e tests with Protractor

The last step of the e2eTests task triggers Protractor to perform our e2e tests. Protractor lets you provide your individual configuration in a JavaScript file. Our example configuration is taken from our TeamCity config. It sets some defaults like the application’s base URI, configures the specs pattern to tell Protractor where to find our tests, and also allows us to hook into the Protractor lifecycle to prepare or enhance the Browser object used in the test code. That’s the place where you might override or configure the Angular application, e.g. by disabling animations.

Some more technical hooks are extracted to our protractorUtils.js, where we add features like passing browser console logs to our shell or taking browser screenshots in case of failed tests. Those helper features need to be enabled in every test suite, so we add a globalSetup function to the Browser object.

A TeamCity-specific part of our configuration enables better logging and a test count feature in the TeamCity builds. We only needed to add a TeamCity reporter, which is available as a Node module.

Protractor can now find our tests at the configured path. The e2e tests look similar to typical unit tests, but the first action in our e2e tests is executing globalSetup on the Browser instance.

The actual tests use the convenience methods provided by Protractor to locate elements by their id or by their Angular binding. We won’t go into a detailed description of the supported features here, but recommend browsing through the Protractor reference.

Task cleanup

When the e2eTests task has finished, we need to stop the background services. To accomplish this, we use the Gradle feature of finalizing the e2eTests task with a stopProcesses task, which simply calls destroy() on our background processes. Only in rare cases, when the Gradle script aborts with an exception so that Gradle doesn’t get a chance to call our stopProcesses task, do we end up with stale processes on our TeamCity agent. To kill those processes, we added a simple shell script to our TeamCity build goals that finds and kills all processes on our well-known ports (8080, 8090, 3010, 4449).
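In a Gradle script this cleanup pattern might be wired up like the following sketch; the backgroundProcesses list is our assumption, while the finalizedBy wiring is standard Gradle:

```groovy
// Collect every Process started for the e2e run so they can be stopped later.
def backgroundProcesses = []

task stopProcesses {
    doLast {
        backgroundProcesses.each { process ->
            process.destroy()  // terminate Selenium, the proxy, and both webapps
        }
    }
}

// Run stopProcesses after e2eTests, even when the tests fail.
e2eTests.finalizedBy stopProcesses
```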

Summary

A test run started with ./gradlew e2eTests in the project root looks like the screenshot below. You get information about how many tests passed; in case of a failure you get more details and even stack traces.

e2e tests output

Though e2e tests are easy to write, and the Protractor features help with locating elements and minimizing timing issues, they can slow down your deployment pipeline considerably. In our case, the initial build/publish step needs less than two minutes and the other steps in our pipeline take roughly two minutes each, but the e2e test step needs at least seven minutes to execute 54 tests.

In fact, we often discuss how to minimize the number of e2e test cases, but since you usually need a more integrative way to ensure that everything works as expected, there will always be good use cases for e2e tests.

Our e2e tests focus on the user interface with mocked external dependencies. Another aspect to check before deploying your application to production is its interoperability with other services. This is where contract tests come into play, which are the topic of the next part of this series.

If you have questions so far, please use the comments feature below, or contact us via Twitter @gesellix.

Never Again Monoliths! Microservices in Practice

At Hypoport AG we have already been through three different incarnations of modularization. Each incarnation brought us closer to the ideal of a flexible, maintainable architecture. And yet, after only a few years of further product development, we found ourselves at the same point again: the application was full of unintended complexity, innovation had become difficult, and implementing new functionality took ever longer. The microservice architectural style promises lasting improvement by decomposing a system into small, independent services. We tried it out and we are thrilled.

The article (in German, “Nie wieder Monolithen! Micro Services in der Praxis.”) appeared in Java Magazin 8.2014.

Read the article: http://jaxenter.de/artikel/nie-wieder-monolithen-176652

Continuous Deployment with Gradle and Docker – Part 2

After a quite long holiday break we now continue our series about the Continuous Deployment Pipeline with Gradle and Docker.

This post is about the first step, in which our build chain creates the Spring Boot packages and publishes them to our Nexus repository manager. As shown in the high-level overview below, it is only a small part of the complete pipeline:
Deployment Pipeline with Gradle and Docker

Gradle and Spring Boot provide a very convenient build and plugin system and work out of the box for standard builds. Yet the devil is in the details. Our project is a multi-module setup with the following subprojects:

  • backend
  • frontend
  • common
  • contract-test
  • e2e-test

The backend and frontend projects are our main modules, each deployed as a standalone application. They share the common project, which contains the security and web config. The contract-test and e2e-test projects contain more integrative tests and will be discussed later in dedicated posts.

We’ll now take a deep dive into our build scripts and module structure. You can find the example source code on GitHub, where we provide a minimal, but working project with the important parts being described here.

Gradle project setup

A build on our CI server TeamCity uses the Gradle Wrapper by running the build and publish tasks. These tasks are called at the root level of our project. Our Gradle root project contains the common configuration, so the subprojects only need to configure minimal aspects or special plugins.

Shared dependency versions are defined in the root project, so that all subprojects use the same dependency versions. Gradle also allows you to define sets of dependencies, so that you can reference them as a complete package without knowing their details. We call these sets libraries, and you can find an example in the root build.gradle along with its usage in the dependencies closure.
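Such a set of dependencies might be defined like this; the set names, artifact coordinates, and versions are illustrative, not the project’s actual values:

```groovy
// Root build.gradle: named dependency sets ("libraries") shared by all subprojects.
ext.libraries = [
        springBoot: [
                'org.springframework.boot:spring-boot-starter-web:1.2.1.RELEASE',
                'org.springframework.boot:spring-boot-starter-actuator:1.2.1.RELEASE',
        ],
        testing   : [
                'junit:junit:4.12',
                'org.mockito:mockito-core:1.10.19',
        ],
]

// In a subproject's build.gradle, a whole set is referenced at once:
dependencies {
    compile libraries.springBoot
    testCompile libraries.testing
}
```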

Using a common definition of dependencies sometimes isn’t enough, because you also have to handle transitive dependencies. You have the option of managing transitive dependencies by manually excluding or even redefining them. Another option we often use is to override clashing dependency versions by configuring the build script’s configurations. The resolutionStrategy can be configured to fail when version conflicts are detected. The example project shows how we globally manage our dependencies.
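A sketch of such a resolutionStrategy configuration; the forced Guava version is only an example:

```groovy
// Root build.gradle: fail the build on conflicting transitive versions,
// and force a single version where a conflict is known and accepted.
configurations.all {
    resolutionStrategy {
        failOnVersionConflict()
        force 'com.google.guava:guava:18.0'  // illustrative forced version
    }
}
```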

Spring Boot configuration

Building a Spring Boot application with Gradle is simplified with the help of the Spring Boot Gradle Plugin. The plugin configures your build script so that running gradle build depends on the bootRepackage task.

You’ll see in the backend and frontend build.gradle scripts that we configure Gradle to replace a token in our source files with the artifactVersion. This token replacement sets the actual version in our application.properties file, which is used to configure Spring Boot. By adding a line like info.build.version=@example.version@ we enable the /info endpoint, so that we can ask a running application for its version. The version will be used later in our deployment pipeline. Details of our artifact versioning scheme are described in the section about publishing below.
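A sketch of how such a token replacement can be done with Gradle’s processResources task and Ant’s ReplaceTokens filter (an assumed approach, not necessarily the project’s exact code):

```groovy
// Replace @example.version@ in application.properties with the build's version
// while the resources are copied into the jar.
processResources {
    filesMatching('application.properties') {
        filter(org.apache.tools.ant.filters.ReplaceTokens,
               tokens: ['example.version': project.version.toString()])
    }
}
```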

Performing Node.js build tasks

Our backend build isn’t very spectacular, but our frontend build needs some more explanation. We implemented our frontend with AngularJS, but use Spring Boot to deliver the static resources and to implement security. Before packaging the AngularJS resources into the frontend artifact, we let Gradle perform a grunt release task. Grunt is a Node.js based task runner, which lets us run unit tests, minify our frontend code and even images, and package everything. Its result is then copied to the public resources folder of Spring Boot.

Configuring a Node.js build in a platform-neutral way isn’t trivial, but the gradle-grunt-plugin and the gradle-node-plugin help a lot. Apart from delegating the grunt release to the plugin, we also configure the corresponding grunt_release task with inputs and outputs in the Gradle build script. The inputs and outputs help Gradle decide whether the task needs to be executed: if there haven’t been any source changes and the output still exists, the task is regarded as up to date and skipped.
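The up-to-date check might be configured like the following sketch; the directory paths are assumptions:

```groovy
// Let Gradle skip grunt_release when the sources are unchanged
// and the previous output still exists.
grunt_release {
    inputs.dir 'src/angular'              // assumed frontend source directory
    outputs.dir "${buildDir}/frontend"    // assumed grunt output directory
}

// Make the Spring Boot resource packaging depend on the grunt build.
processResources.dependsOn grunt_release
```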

Publishing and versioning Gradle artifacts

With both frontend and backend packaged as artifacts, we want to publish them to our Nexus artifact repository. Nexus needs the well-known triple of groupId, artifactId, and version to identify an artifact. The Gradle maven-publish plugin can conveniently be configured to use the project’s group, name, and version as Maven coordinates. As you can see in the example source code, we already configure the group in our root project. The subproject’s name fits our needs as artifactId, which leads us to the final property: the version.

We wanted the version to be unique and sortable by the artifact’s build time, and we didn’t want to maintain a version.txt in our project. Long story short, we defined our version to follow the scheme yyyy-MM-dd'T'HH-mm-ss_git-commit-hash. The part before the _ is the build timestamp, and the second part is the latest commit hash of the project’s git repository. That way we can quickly see when an artifact was built and from which commit in the project’s history.
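Generating such a version in a Gradle script might look like this sketch, which derives both parts at build time (variable names are ours):

```groovy
// Generate a unique, time-sortable version: build timestamp + git commit hash.
def gitCommitHash = 'git rev-parse --short HEAD'.execute().text.trim()
def buildTimestamp = new Date().format("yyyy-MM-dd'T'HH-mm-ss")
version = "${buildTimestamp}_${gitCommitHash}"
```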

The artifact version is generated on every build. Apart from updating our application.properties, we also use the artifact version to configure the publish task in our root project. The rest works out of the box; we only need to configure the Nexus publish URL with username and password.
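A sketch of the publishing configuration with the maven-publish plugin; the property names for the Nexus URL and credentials are assumptions:

```groovy
apply plugin: 'maven-publish'

publishing {
    publications {
        mavenJava(MavenPublication) {
            // group, name, and version become the Maven coordinates.
            from components.java
        }
    }
    repositories {
        maven {
            url nexusPublishUrl        // assumed property, e.g. from gradle.properties
            credentials {
                username nexusUser     // assumed property
                password nexusPassword // assumed property
            }
        }
    }
}
```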

Build on a CI-Server

Our CI server TeamCity now only needs to execute the gradlew clean build publish tasks to compile, run all unit tests, package the Spring Boot applications, and publish them to the artifact repository. That alone wouldn’t be enough, though, because we also want to perform integration tests and deploy the applications to our internal and production stages.

TeamCity provides a feature to declare so-called build artifacts, which can be used by subsequent build goals in our build chain. We want the other build goals to know the application version, so we write it into a text file on the build agent and pass it to all build goals in our pipeline. Every build goal then uses the version to fetch the artifact from Nexus. The image below shows all build goals of our build chain:

Build Chain

The selected yellow box in the build chain corresponds to the build step described in this article. As promised, the next article in our series will describe in detail how we perform our integrative e2e and contract tests. Comments and feedback here or @gesellix are welcome!

Docker Global Hack Day #2 – Berlin Edition at Hypoport

We are proud to announce that we are part of the Docker Global Hack Day #2. Join other members of the Docker community to hack on Docker projects using the next big Docker release! You’re all invited to the Hypoport HQ in Berlin for a hacking session while sharing a meal or drink with fellow Dockers. This hackathon is your last chance to win a ticket to the sold-out DockerCon Europe. Please register via our meetup event page.

See you then.
Leif