GOTO Night: Data Modeling for Elasticsearch at Hypoport Oct. 22

Hypoport invites you to an interesting GOTO Night with Florian Hopf on behalf of GOTO Berlin.

The session will start at 7PM on October 22, 2015 at the Hypoport headquarters, Klosterstr. 71, 10179 Berlin. Please come by and pay us a visit. Food and drinks will complement the session.


One of the factors behind the huge success of Elasticsearch is that it is really easy to get started with. Format your data as JSON, push it to Elasticsearch, and instantly you have a full-blown search server available. But sometimes it is not that easy, and you have to tell Elasticsearch how to manage the data.

In this talk we will explore how Elasticsearch stores its data and learn about the basic principles of index based search. We will look at some of the algorithms and data structures that are responsible for making Elasticsearch as blazingly fast as it is and see examples of how we can benefit from them in our projects.


Florian works as a freelance software developer in Karlsruhe and likes to work on search systems built on Lucene, Solr and Elasticsearch. He helps to organize the local Java User Group as well as the Search Meetup in Karlsruhe and he blogs at

Twitter: @fhopf


Please register here!

JavaScript Forensics by Todd Gardner at Hypoport (Sep. 24th !!!)

Todd Gardner of TrackJS will be at Hypoport THIS Thursday Sep. 24th.

Something terrible happened here. Traces of errors litter the floor; memory leaking from cracks in the ceiling. Someone lost their object context in the corner. Everything reeks of jank. In this session, a JavaScript error tracking expert breaks down a series of common and complex crimes against web applications. You’ll leave the session armed with techniques and tools to detect, diagnose, and fix your JavaScript web applications. Bring your bugs and let’s fix up our web.


Get together at 6pm
Talk starts at 6:30pm

Before, during, and after the talk you can grab a drink and some food.

It would be great if you registered here.

Kind regards, Leif

“Entscheidungsfindung in der Gruppe” (Group Decision-Making) Meets a Great Response at the IS24 Pecha Kucha Night

Sound familiar? When it comes to making good decisions as a group, things sometimes get pretty complicated. While some colleagues are all fired up about a proposal, others already see risks or room for improvement, and before long there is a lively debate instead of an efficient decision. We have experienced this in various situations. Don't get me wrong: of course all of this feedback from the group should be taken into account. The real question is how we can systematically improve group decisions in order to decide efficiently together.

We found some exciting approaches in Holacracy, more precisely in the Integrative Decision Model (IDM) it contains. We tried this process out in various meetings, which gave us quite a few insights. In summary, when making decisions we now pay much more attention to the following:

  • Be clear about whether you are currently discussing and developing an idea or deciding on a concrete proposal.
  • Write your proposal down and keep it visible while the decision is being made.
  • Distinguish the feedback from the group into clarifying questions, reactions, and genuine objections.
  • Examine objections for their relevance and, together with the person who raised them, ask how they can be integrated into the proposal.

We recently presented our experience with the IDM at the IS24 Pecha Kucha Night, as part of Berlin DoSE. Apparently the topic is not only relevant for us: we received a great response and, along the way, our talk also took first place :)

Microgames for Wetware Developers by Julia Dellnitz & Stefan Zörner

Hypoport invites you to an interesting GOTO Night with Julia Dellnitz & Stefan Zörner on behalf of GOTO Berlin.

The session will start at 7PM on July 2, 2015 at our headquarters, Klosterstr. 71, 10179 Berlin. Please come by and pay them a visit. Food and drinks will complement the session.

Microgames are small decoupled learning entities about a specific topic such as software architecture. They help teams and companies to get up to speed in a specific field and can easily be integrated in their daily practices. Microgames implement the idea that the wetware of our brain develops best when we are alert, link our learning to our day-to-day work, learn with positive emotions and distribute small learning units over time.

Julia Dellnitz (Learnical) and Stefan Zörner (embarc) will provide a playful and interactive session with findings from neuroscience and practical examples from software architecture.
Join us and be ready to play!


Julia Dellnitz creates playful and interactive learning formats. Her passion is to support experts in developing and implementing innovative (IT) products and processes. She has managed large change and IT implementation initiatives over the last decade and has worked with over 4,500 people on learning and innovation topics, especially in international contexts.

Twitter: @learnical

Stefan Zörner came to embarc from Bayer AG via IBM and oose. He embodies twenty years of experience in IT and still looks eagerly ahead. He supports clients in solving architecture and implementation issues. In lively workshops he demonstrates practical design tools and spreads enthusiasm for real architecture work.

Twitter: @stefanzoerner

Continuous Deployment with Gradle and Docker – Production Deploy

This is the final part of the article series about our continuous deployment pipeline. The previous articles showed you how we build and publish our Spring Boot based application, how we perform AngularJS end-to-end tests with Protractor, and how we perform contract tests against external services, both as consumer and as provider.

What’s missing is a description of how we package our application in Docker images, and how we distribute and deploy them to our production servers.

Though we originally had a dedicated Docker build and push step in our pipeline, things have changed since last year. The Docker image build has been integrated into the very first step, so that we not only have the Spring Boot artifacts in our repository, but also the corresponding Docker image in our registry as early as possible.

The Docker build and push code isn’t very large, so this article will also show you how we use the Docker images to deploy and run our application on our production hosts. The example code is available at GitHub.

Docker for Application Packaging

You’ve certainly heard about Docker, so we won’t go into any detailed Docker concepts here. If you’re new to Docker and would like to learn some basics, please head over to the 10-minute tutorial at the official web site.

We’re using Docker to package, distribute, and run our application. Similar to the executable Spring Boot .jar files, Docker helps us wrap all runtime dependencies in so-called images and run image instances as Linux containers. The encapsulation provided by Docker containers allows developers to define a huge part of the runtime environment (like the Java runtime) instead of depending on the tools installed on the host.
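
As a rough illustration (not our actual production Dockerfile; the base image and jar name are assumptions), packaging an executable Spring Boot jar can be as simple as this:

# Sketch of a minimal Dockerfile for an executable Spring Boot jar.
# Base image and jar name are assumptions for illustration only.
FROM java:8-jre
# copy the Spring Boot artifact into the image
COPY example-backend.jar /app/example-backend.jar
# the embedded servlet container listens on 8080
EXPOSE 8080
# run the executable jar on container start
ENTRYPOINT ["java", "-jar", "/app/example-backend.jar"]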

With such an explicitly defined environment we can also expect the application to behave consistently on different hosts. Due to the simplicity of a reduced Docker image we also have a smaller scope to consider when changing or updating the environment. Even changing the hosts’ operating system from Oracle Linux to Ubuntu didn’t have any effect on our application.

The Docker daemon on our build infrastructure is usually available via its HTTP remote API, so that we can use any Docker client library instead of the Docker command line binary. Our Gradle scripts talk to the Docker daemon with the help of a Gradle Docker plugin. It adds several tasks to our build scripts which can be configured quite easily to create new Docker images and push them to our private Docker registry. We also use other tasks to pull and run Docker images in our contract test build step, as already described in part 4.

The Docker build task depends on some preparation tasks which copy the necessary application jar and the Dockerfile to a temporary directory. That directory serves as the build context, which is sent to the Docker daemon as the source for the final image.

Our Docker images are tagged with the same version as the application jar, which allows us to use the same version text file throughout the whole pipeline. The Gradle publish task is configured to automatically trigger the Docker image push task.
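
To illustrate how these pieces can fit together, here is a strongly simplified build.gradle sketch. It assumes the third-party gradle-docker-plugin by bmuschko as a stand-in; our actual scripts, plugin choice, task names, and property names may differ (and vary between plugin versions):

// build.gradle -- simplified sketch, not our production build script
// assumes the gradle-docker-plugin is on the buildscript classpath
apply plugin: 'java'
apply plugin: 'maven-publish'
apply plugin: 'com.bmuschko.docker-remote-api'

import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage
import com.bmuschko.gradle.docker.tasks.image.DockerPushImage

// copy the application jar and the Dockerfile into a temporary build context
task prepareDockerContext(type: Copy, dependsOn: jar) {
    from jar.archivePath
    from 'src/main/docker/Dockerfile'
    into "$buildDir/docker"
}

// build the image from the prepared context, tagged with the project version
task buildDockerImage(type: DockerBuildImage, dependsOn: prepareDockerContext) {
    inputDir = file("$buildDir/docker")
    tag = "registry.example.com/hypoport/example-backend:${project.version}"
}

// push the freshly built image to the private registry
task pushDockerImage(type: DockerPushImage, dependsOn: buildDockerImage) {
    imageName = "registry.example.com/hypoport/example-backend"
    tag = "${project.version}"
}

// publishing the jar also triggers the image push
publish.dependsOn pushDockerImage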

With the Docker images in our Docker registry we can complete our pipeline by deploying the application to our staging and production hosts.

Ansible for Application Deployment

Our tool of choice to orchestrate the deployments is Ansible. We use Ansible to provision and maintain our infrastructure, and it also allows us to perform ad hoc tasks like application deployments or cleanup tasks. Ansible uses tasks, roles, and playbooks to describe a desired system state.

Relevant in the context of our application deployment are details such as blue-green deployment and load balancing of the same application version across different hosts. We use HAProxy as a load balancer and as the switch between our blue and green versions in front of our application webapps. Our application isn’t aware of those aspects, which increases scalability and flexibility. So, the Ansible playbook has to decide which version (blue or green) needs to be replaced by the newly built release. In summary, the Ansible playbook needs to perform the following tasks (a simplified sketch follows the list):

  • determine which version to replace (blue or green)
  • pull the new Docker image to our hosts
  • stop and remove the old containers
  • run new container instances based on the new image
  • update the HAProxy config to route new requests to the new containers
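
To make this more concrete, here is a strongly simplified playbook sketch. The group names match the inventory described below, but the stage handling, variable names, and HAProxy details are invented for illustration and don't mirror our actual playbook:

# playbooks/deploy-example-backend.yml -- simplified sketch, not our real playbook
# the application version is expected as an extra variable, e.g. -e version=1.2.3
- hosts: example-backend-loadbalancer
  tasks:
    # assumption: the currently active stage is kept in a file on the loadbalancer
    - name: determine the active stage (blue or green)
      command: cat /etc/haproxy/active_stage
      register: active_stage
    - name: select the stage to be replaced
      set_fact:
        target_stage: "{{ 'green' if active_stage.stdout == 'blue' else 'blue' }}"

- hosts: example-backend
  roles:
    # the role pulls the new image, removes the old container, and runs the new one
    - role: docker-service
      container_name: "example-backend-{{ hostvars[groups['example-backend-loadbalancer'][0]].target_stage }}"
      image: "registry.example.com/hypoport/example-backend:{{ version }}"

- hosts: example-backend-loadbalancer
  tasks:
    # rewrite the HAProxy config so that new requests are routed to the new containers
    - name: switch haproxy to the new stage
      template: src=haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg
      notify: reload haproxy
  handlers:
    - name: reload haproxy
      service: name=haproxy state=reloaded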

In addition to these essentials, some bookkeeping and cleanup tasks are necessary.

Example Playbook and Tasks

We won’t share our complete Ansible repository here, so the examples won’t work out of the box, but to help you get an idea of what Ansible tasks can look like, please have a look at the ansible directory.

In the hosts directory you’ll only find an inventory file with our hosts and their aliases. Hosts can also be grouped together; here we added both loadbalanced webapp hosts to the example-backend group. The loadbalancer host has been aliased as example-backend-loadbalancer.
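
Such an inventory might look roughly like this (the host names are invented; only the group and the alias correspond to the description above):

# hosts/example -- sketch of an inventory file, host names are invented
example-backend-loadbalancer ansible_ssh_host=lb01.example.internal

[example-backend]
webapp01.example.internal
webapp02.example.internal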

Normally, you would find more hosts and different environments like development or production. The beauty of Ansible lies in the possibility to keep several internal, staging, and production inventories separated, while tasks are usually applicable to any host.

The library directory contains scripts or Ansible modules which can be executed in the context of a task. Since Ansible tasks should be declarative rather than imperative, we moved some shell commands that gather container runtime information to the library.

A good entry point when working with an Ansible project is the playbooks directory. Playbooks configure tasks, the affected hosts, and other environment-specific details. We added a playbook to deploy the example-backend on our webapp hosts and configure the loadbalancer to switch to the new target stage (blue or green). The other tasks in the playbook determine the active stage by asking the loadbalancer and set the relevant facts for the deployment task.

Most work is performed in the roles directory, though. You’ll see that the generic docker-service task configures an Ansible Docker module to communicate with a Docker daemon. The other steps only prepare the actual deployment: a new image is pulled from our registry, the previous image id of the old container is saved for a cleanup step at the end, and the old container is removed. Some steps have become obsolete since Ansible 1.9, which introduced a reloaded state that automatically replaces a container based on a new image.
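
The core of such a role boils down to a single task using Ansible's docker module. A rough sketch, assuming Ansible 1.9 and simplified parameters (container name, image, and port are placeholders passed in by the playbook):

# roles/docker-service/tasks/main.yml -- simplified sketch
# state=reloaded (since Ansible 1.9) pulls the image if necessary and replaces
# the running container whenever its image or configuration has changed
- name: run service container
  docker:
    name: "{{ container_name }}"
    image: "{{ image }}"
    state: reloaded
    pull: always
    ports:
      - "8080"
    restart_policy: always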

Ansible doesn’t only make sense as a task runner for deployments; we also use it to provision our hosts: there’s not much difference between a regular deployment and the very first deployment. In a microservice-oriented culture, adding new services with their satellites, loadbalancers, and pipelines needs to be simple and efficient. Ansible helps us to extract a common “dockerized service” role and only configure some service-specific values. That way our deployments become more declarative and maintainable.

Since we’re completely Docker-infected, our Ansible deployment project is also available to our CI as a Docker image. You can find our simple Dockerfile at the root of the ansible directory. In addition to our deployment tasks and playbooks, it only provides Ansible itself. On TeamCity our builds perform docker run commands like the one shown below. Access to our hosts is granted by volume-mounting ssh keys into the container:

docker run -it --rm -v ~/.ssh/id_rsa:/root/.ssh/id_rsa hypoport/ansible ansible-playbook -i inventory playbooks/deploy-example-backend.yml


We’ve now reached the end of the article series about our continuous deployment pipeline.

You learned how we build and package our application in Docker images, perform tests on different levels and with different scopes, and how we deploy new releases on our hosts.

Over the past year we learned a lot about other use cases and concepts in the DevOps universe. Docker helped us to define clear interfaces between our own and other services. Our Gradle scripts now focus on the build and publish tasks, while Ansible is our tool of choice for provisioning, deployment, and maintenance tasks.

Building pipelines hasn’t become trivial, but with the right tools we feel quite confident and can handle new requirements very easily.

Though many pipelines end with the deployment of a release to the production environment, the rollout of new features doesn’t end there. Dynamically updating releases without downtime requires feature toggles, backwards compatibility with other services, flexible database schemata, and good monitoring.

We’ll cover some aspects in future posts, so just keep following our blog.

For feedback or questions on this article series, please contact us @gesellix or @hypoport, or add a comment below. Thanks!