Microgames for Wetware Developers by Julia Dellnitz & Stefan Zörner

Hypoport invites you to an interesting GOTO Night with Julia Dellnitz & Stefan Zörner on behalf of GOTO Berlin.

The session will start at 7PM on July 2, 2015 at our headquarters, Klosterstr. 71, 10179 Berlin. Please come by and pay them a visit. Food and drinks will complement the session.

Abstract
Microgames are small, decoupled learning units about a specific topic such as software architecture. They help teams and companies get up to speed in a specific field and can easily be integrated into their daily practices. Microgames implement the idea that the wetware of our brain develops best when we are alert, link our learning to our day-to-day work, learn with positive emotions, and distribute small learning units over time.

Julia Dellnitz (Learnical) and Stefan Zörner (embarc) will provide a playful and interactive session with findings from neuroscience and practical examples from software architecture.
Join us and be ready to play!

Bio

Julia Dellnitz creates playful and interactive learning formats. Her passion is to support experts in developing and implementing innovative (IT) products and processes. She has managed large change and IT implementation initiatives over the last decade and has worked with over 4,500 people on learning and innovation topics – especially in international contexts.

Twitter: @learnical

Stefan Zörner's path led from Bayer AG via IBM and oose to embarc. He brings twenty years of experience in IT and still looks eagerly ahead. He supports clients in solving architecture and implementation issues. In lively workshops he demonstrates practical design tools and spreads enthusiasm for real architecture work.

Twitter: @stefanzoerner

Continuous Deployment with Gradle and Docker – Production Deploy

This is the final part of the article series about our continuous deployment pipeline. The previous articles showed you how we build and publish our Spring Boot based application, how we perform AngularJS end-to-end tests with Protractor, and how we perform contract tests against external services, both as consumer and as provider.

What’s missing is a description of how we package our application in Docker images, and how we distribute and deploy them to our production servers.

Though we originally had a dedicated Docker build and push step in our pipeline, things have changed since last year. The Docker image build has been integrated into the very first step, so that we not only have the Spring Boot artifacts in our repository, but also the corresponding Docker image in our registry as early as possible.

The Docker build and push code isn’t very large, so this article will also show you how we use the Docker images to deploy and run our application on our production hosts. The example code is available at GitHub.

Docker for Application Packaging

You’ve certainly heard about Docker, so we won’t go into any detailed Docker concepts here. If you’re new to Docker and would like to learn some basics, please head over to the 10-minute tutorial at the official docker.com web site.

We’re using Docker to package, distribute, and run our application. Similar to the executable Spring Boot .jar files, Docker helps us wrap all runtime dependencies in so-called images and run instances of those images as Linux containers. The encapsulation provided by Docker containers allows developers to define a huge part of the runtime environment (like the Java runtime) instead of depending on the tools installed on the host.

With such an explicitly defined environment we can also expect the application to behave consistently on different hosts. Due to the simplicity of a reduced Docker image we also have a smaller scope to consider when changing or updating the environment. Even changing the hosts' operating system from Oracle Linux to Ubuntu didn't have any effect on our application.

The Docker daemon on our build infrastructure is usually available via its HTTP remote API, so that we can use any Docker client library instead of the Docker command line binary. Our Gradle scripts handle the communication with the Docker daemon with the help of a Gradle Docker plugin. It adds several tasks to our build scripts which can be configured quite easily to create new Docker images and push them to our private Docker registry. We also use other tasks to pull and run Docker images in our contract test build step, as already described in part 4.
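
As a minimal sketch of such a setup, here is how the plugin and the remote API endpoint could be configured, using the bmuschko Docker remote API plugin as one possible choice (plugin version and host name are placeholders, not necessarily what we use):

    // build.gradle (sketch): talk to the Docker daemon via its HTTP remote API
    buildscript {
        repositories { jcenter() }
        dependencies { classpath 'com.bmuschko:gradle-docker-plugin:2.6.1' }
    }
    apply plugin: 'com.bmuschko.docker-remote-api'

    docker {
        // remote API endpoint of the Docker daemon on the build infrastructure
        url = 'http://docker-build-host.example.com:2375'
    }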

The Docker build task depends on some preparation tasks which copy the necessary application jar and the Dockerfile to a temporary directory. That directory serves as the build context, which is sent to the Docker daemon as the source for the final image.

Our Docker images are tagged with the same version as the application jar, which allows us to use the same version text file throughout the whole pipeline. The Gradle publish task is configured to automatically trigger the Docker image push task.
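
Building on the sketch above, the preparation, build, push, and publish wiring could look roughly like this (task names, paths, and the registry are placeholders rather than excerpts from our build):

    import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage
    import com.bmuschko.gradle.docker.tasks.image.DockerPushImage

    // copy the Spring Boot jar and the Dockerfile into a temporary build context
    task prepareBuildContext(type: Copy) {
        dependsOn assemble
        from jar.archivePath
        from 'src/main/docker/Dockerfile'
        into "$buildDir/docker"
    }

    // build the image from the prepared context, tagged with the application version
    task buildImage(type: DockerBuildImage) {
        dependsOn prepareBuildContext
        inputDir = file("$buildDir/docker")
        tag = "registry.example.com/example-backend:${project.version}"
    }

    // push the image to the private registry
    task pushImage(type: DockerPushImage) {
        dependsOn buildImage
        imageName = 'registry.example.com/example-backend'
        tag = project.version.toString()
    }

    // publishing the artifacts also triggers the image push (assumes maven-publish's publish task)
    publish.finalizedBy pushImage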

With the Docker images in our Docker registry we can complete our pipeline by deploying the application to our staging and production hosts.

Ansible for Application Deployment

Our tool of choice to orchestrate the deployments is Ansible. We use Ansible to provision and maintain our infrastructure, and it also allows us to perform ad hoc tasks like application deployments or cleanup tasks. Ansible uses tasks, roles, and playbooks to describe a desired system state.

Relevant in the context of our application deployment are details such as blue-green deployment and load balancing of the same application version on different hosts. We use HAProxy as the load balancer and as the switch between our blue and green versions in front of our application webapps. Our application isn't aware of those aspects, which increases scalability and flexibility. So, the Ansible playbook has to decide which version (blue or green) needs to be replaced by the newly built release. In summary, the Ansible playbook needs to perform the following tasks:

  • determine which version to replace (blue or green)
  • pull the new Docker image to our hosts
  • stop and remove the old containers
  • run new container instances based on the new image
  • update the HAProxy config to route new requests to the new containers

In addition to these essentials, some bookkeeping and cleanup tasks are necessary.
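
Our real playbook lives in the repository described in the next section; purely to illustrate the steps above (module parameters, paths, and variable names are invented for this sketch), a blue-green deployment play could look roughly like this:

    # deploy-example-backend.yml (illustrative sketch, not our actual playbook)
    - hosts: example-backend-loadbalancer
      tasks:
        # ask the loadbalancer which stage currently receives traffic,
        # and derive the target stage for the new release from it
        - name: determine active stage
          shell: cat /etc/haproxy/active_stage
          register: active_stage
        - set_fact:
            target_stage: "{{ 'green' if active_stage.stdout == 'blue' else 'blue' }}"

    - hosts: example-backend
      roles:
        # pull the new image, remove the old containers, run the new ones
        - role: dockerized-service
          service_name: example-backend
          image: registry.example.com/example-backend
          version: "{{ deploy_version }}"
          stage: "{{ hostvars['example-backend-loadbalancer']['target_stage'] }}"

    - hosts: example-backend-loadbalancer
      tasks:
        # switch HAProxy so that new requests reach the new containers
        - name: update loadbalancer config
          template: src=haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg
        - name: reload haproxy
          service: name=haproxy state=reloaded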

Example Playbook and Tasks

We won’t share our complete Ansible repository here, so the examples won't work out of the box, but to give you an idea of what Ansible tasks can look like, please have a look at the ansible directory.

In the hosts directory you'll only find an inventory file with our hosts and their aliases. Hosts can also be grouped together; here we added both load-balanced webapp hosts to the example-backend group. The loadbalancer host has been aliased as example-backend-loadbalancer.
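
A minimal inventory in that style might look like this (host names and addresses invented for illustration):

    # hosts inventory (illustrative sketch)
    example-backend-loadbalancer ansible_ssh_host=10.0.0.10

    [example-backend]
    webapp-host-1 ansible_ssh_host=10.0.0.11
    webapp-host-2 ansible_ssh_host=10.0.0.12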

Normally, you would find more hosts and different environments like development or production. The beauty of Ansible lies in the possibility to keep several internal, staging, and production inventories separated, while tasks are usually applicable to any host.

The library directory contains scripts or Ansible modules which can be executed in the context of a task. Since Ansible tasks should be declarative rather than imperative, we moved some shell commands that gather container runtime information into the library.

A good entrypoint when working with an Ansible project is the playbooks directory. Playbooks configure tasks, the affected hosts, and other environment-specific details. We added a playbook to deploy the example-backend on our webapp hosts and configure the loadbalancer to switch to the new target stage (blue or green). The other tasks in the playbook determine the active stage by asking the loadbalancer and set the relevant facts for the deployment task.

Most work is performed in the roles directory, though. You'll see that the generic docker-service task configures an Ansible Docker module to communicate with a Docker daemon. The other steps only prepare the actual deployment: a new image is pulled from our registry, the previous image id of the old container is saved for a cleanup step at the end, and the old container is removed. Some of these steps have become obsolete since Ansible 1.9, which introduced a reloaded state that automatically replaces a container based on a new image.
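
To illustrate the reloaded state, the core of such a generic task could shrink to something like this (parameters are placeholders, not our actual role):

    # a generic docker-service task (illustrative sketch)
    - name: run {{ service_name }} container from the new image
      docker:
        name: "{{ service_name }}-{{ stage }}"
        image: "{{ image }}:{{ version }}"
        pull: always
        # 'reloaded' (since Ansible 1.9) replaces the container when the image has changed
        state: reloaded
        ports:
          - "{{ service_port }}:8080"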

Ansible doesn’t only make sense as a task runner for deployments; we also use it to provision our hosts: there's not much difference between a regular deployment and the very first deployment. In a microservice-oriented culture, adding new services with their satellites, loadbalancers, and pipelines needs to be simple and efficient. Ansible helps us extract a common “dockerized service” role and only configure some service-specific values. That way our deployments become more declarative and maintainable.

Since we’re thoroughly Docker-infected, our Ansible deployment project is available to our CI as a Docker image. You can find our simple Dockerfile at the root of the ansible directory. In addition to our deployment tasks and playbooks, it only provides Ansible itself. On TeamCity our builds perform docker run commands like the one shown below. Access to our hosts is granted by volume-mounting ssh keys into the container:

docker run -it --rm -v ~/.ssh/id_rsa:/root/.ssh/id_rsa hypoport/ansible ansible-playbook -i inventory playbooks/deploy-example-backend.yml

Summary

We’ve now reached the end of the article series about our continuous deployment pipeline.

You learned how we build and package our application in Docker images, perform tests on different levels and with different scopes, and how we deploy new releases on our hosts.

Over the past year we learned a lot about other use cases and concepts in the DevOps universe. Docker helped us define clear interfaces between our services and others. Our Gradle scripts now focus on the build and publish tasks, while Ansible is our tool of choice for provisioning, deployment, and maintenance tasks.

Building pipelines hasn’t become trivial, but with the right tools we feel quite confident and can handle new requirements very easily.

Though many pipelines end with the deployment of a release in the production environment, the rollout of new features doesn't end there. Dynamically updating releases without downtime requires feature toggles, backwards compatibility with other services, flexible database schemata, and good monitoring.

We’ll cover some aspects in future posts, so just keep following our blog.

For feedback or questions on this article series, please contact us @gesellix or @hypoport, or add a comment below. Thanks!

Continuous Deployment with Gradle and Docker – Contract Tests

In part four of our series about our continuous deployment pipeline you'll learn how we perform contract tests to ensure our service stays compatible with other service producers and with our consumers as well.

Please read the introductory post to learn about the other articles and the overall context of our deployment pipeline.

This article contains both an introduction to contract testing and our own implementation of contract testers. If you're new to the contract testing concept, just read on. If you're already familiar with the overall concept and want to start with our code, you can skip to the Contract Test Orchestration section.

What are Contract Tests?

The comprehensive overview on Testing Strategies in a Microservice Architecture introduces contract testing as a complementary method to increase test coverage:

Apart from unit, integration, component, and end-to-end tests, the contract tests aim at checking service boundaries.

Every consumer defines a set of criteria or requirements which need to be fulfilled by a service producer. The sum of all requirements defines the overall service contract. With contract testing, consumers can check the producer’s contract or their own requirements before a new release is deployed in production.

The contract tests shouldn’t check the producer’s behaviour, but only verify that the API can be consumed. Checking behaviour would result in component tests, which should be performed on the producer’s side and are not the responsibility of its consumers.

Contract tests should be performed whenever either the consumer or the producer changes. While it should be easy for every consumer to perform their own contract tests, they should also provide a test package for the producer's pipeline or environment. That way the producer can preview its own changes and their effect on every consumer.

Overview Consumer/Producer/Contract Tests

The figure above shows an overview for a combination of consumer, producer and contract tests. Contract tests are shown as tests from the consumer’s perspective.

Contract Tests in Real Life

As easy as it sounds, performing contract tests in continuous deployment pipelines isn’t trivial.

In our case services are written in Java, so we write our contract tests as Java unit tests, using test runners like JUnit or TestNG, and execute them with shell or Gradle scripts (a minimal sketch of such a test follows the list below). Packaging such test classes as jar files and publishing them in an artifact repository is one of the simple parts. But making a producer available for contract tests can become very exciting and raises several questions, e.g.:

  • can the tests be performed against the production service?
  • what happens when tests need some setup (e.g. a user account) or need to have a valid session to consume the service API?
  • do the tests have an effect on the overall service availability?
  • what about database entries being generated by the tests – do we need a cleanup?
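
To make this more concrete, here is a minimal sketch of what such a consumer-side contract test might look like; it targets the GitHub status API like the stripped-down example project mentioned later in this article, and the producer.url property is only one possible way to inject the producer's address:

    import static org.testng.Assert.assertEquals;

    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.testng.annotations.Test;

    public class StatusApiContractTest {

        // the producer's base URL can be injected by the pipeline, e.g. from a properties file
        private final String baseUrl = System.getProperty("producer.url", "https://status.github.com");

        @Test
        public void statusEndpointCanBeConsumed() throws Exception {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(baseUrl + "/api/status.json").openConnection();
            connection.setRequestMethod("GET");

            // we only verify that the API can be consumed, not the producer's behaviour
            assertEquals(connection.getResponseCode(), 200);
        }
    }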

With both consumer and producer in one team, communication is easy when concepts or requirements are introduced or changed. When consumer and producer are split between different teams or even companies, it becomes more important to define a clear API. Consumer-Driven Contracts help the producer align their implementation with the consumers' needs.

To address the questions above, the producer might want to avoid letting all consumers perform tests against the production service – unless they want to stress test it. Often, the producer has a staging concept, with a mirror of the production system available for different kinds of tests. Such systems should behave as much as possible like the production system. An alternative to a dedicated test stage are services started on demand, only for the actual test run. Such systems should boot very fast to support a fast feedback cycle and increase the developers' acceptance.

Sometimes neither test stages nor ad hoc services are possible. Then one can change the production behaviour depending on the logged-in user role or based on request parameters. Nevertheless, the producer should still try to support the consumer in writing tests independent of producer-specific issues: request parameters that only toggle an under-test behaviour on the producer side would become part of the contract tests – and ultimately part of the contract. Changing the producer's test setup would then break the contract tests, though the API as the actual test subject could stay the same.

The questions above show that runtime dependencies on a database can make things complicated. Similar to the options we have with services, databases can be provided via staging, but can also be started as needed. Starting a database on demand often implies that it can be thrown away quite easily, so one doesn't need to care about cleanup.

Contract Tests in a CI Environment

In our team we have several combinations of our services being consumers of other services at our company and also being producers for other services. Our contract test setup isn't limited to our own pipeline; the other involved services need to perform the contract tests in their pipelines, too.

Contract tests can be triggered either when the producer or when the consumer changes. In case any tests fail, the newly built service shouldn't be deployed to production. Since APIs evolve over time, the contract tests also change over time, so that they always match a combination of consumer and producer versions. This combination needs to be considered in the pipelines of both the consumer and the producer:

  1. When our service as consumer changes, we need to perform our contract tests against all producers whose APIs we consume. We need to perform the tests against the productive versions of the producers, because we'd like to ensure that our newly built consumer will work on our production system together with the producers' services.
    Overview Contract Tests with a changed Consumer
  2. When a producer changes, it should trigger our contract tests to run against their newly built service. It should choose those tests which match our service in the production environment.
    Overview Contract Tests with a changed Producer

Both combinations are shown in very similar figures, where only service versions are changed (in blue), but the overall concept stays the same.

Contract Test Orchestration

We didn’t explain yet how a producer can find and execute the consumer's contract tests. We also didn't explain how a consumer can ask for a testable producer in cases where the production service shouldn't be used. In our case, we try to run a producer on demand, but sometimes we need to use staged services of other teams.

To solve those orchestration aspects, we introduced dedicated contract tester projects. Over time, we added a dedicated contract tester for every combination of consumer and producer in our responsibility. Both sides of a contract test need to collaborate so that neither of them is forced to know too many details about how to run a service or perform the contract tests. In our case we started with both consumer and producer being developed in our team, so we didn't differentiate between the two sides very much.

Initial Contract Tester Implementation

Our first iteration of a contract tester had cross-cutting knowledge of both consumer and producer, so we combined all necessary tasks in one Gradle script. To ease the use of the contract tester, we added two tasks which serve as entrypoints for the producer's and the consumer's pipelines, respectively. Only one of the two tasks should be run at a time; the entrypoints are named:

  • performContracttestsTriggeredByConsumer
  • performContracttestsTriggeredByProducer

Depending on which of the two tasks is run, the contract tester uses either the productive or the newly built version of consumer and producer. Running the performContracttestsTriggeredByConsumer task results in the following steps:

  1. resolve the productive version of a producer
  2. download or pull the productive producer
  3. download the consumer’s contract tests of the newly built version
  4. run the producer
  5. perform the contract tests
  6. tear down the producer and cleanup

Running the performContracttestsTriggeredByProducer task only changes the first three steps:

  1. resolve the productive version of a consumer
  2. download or pull the newly built producer
  3. download the consumer’s contract tests of the productive version

You’ll recognize that only the consumer's and the producer's versions are input values. We use the TeamCity Artifact Dependencies feature to pass versions of newly built artifacts to the contract test build. The productive versions need to be resolved in whatever way each service allows: sometimes we can perform a simple HTTP GET on a dedicated URL, sometimes “resolving” only means selecting a stage (dev or prod) where a service is always running.
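
As a rough sketch of this wiring (only the entrypoint task name is taken from our build; the step tasks are empty placeholders for the work described above):

    // contract tester build.gradle (sketch): steps chained as task dependencies
    task resolveProducerVersion { doLast { /* 1. resolve the productive producer version */ } }
    task pullProducer(dependsOn: resolveProducerVersion) { doLast { /* 2. pull the producer image */ } }
    task downloadContractTests(dependsOn: pullProducer) { doLast { /* 3. fetch the consumer's tests */ } }
    task runProducer(dependsOn: downloadContractTests) { doLast { /* 4. start the producer */ } }
    task performContractTests(dependsOn: runProducer) { doLast { /* 5. execute the tests */ } }
    task tearDownProducer { doLast { /* 6. stop the producer and clean up */ } }

    // entrypoint for the consumer's pipeline; the producer-triggered variant is wired analogously,
    // differing only in which versions are resolved and downloaded in steps 1-3
    task performContracttestsTriggeredByConsumer(dependsOn: performContractTests) {
        finalizedBy tearDownProducer
    }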

A stripped-down version of our initial contract tester implementation is available at GitHub. The example contract tests validate that the GitHub status API can be consumed. Though we won't discuss the actual code here, feel free to give it a try, and if you have problems running it with the example-project, please ask.

Current Contract Tester Implementation

While the first implementation is still being used and has already been copied for other combinations of producer and consumer, we recently had to provide one of our services as a producer to a consuming service of another team. The other team already had their own contract testing concept with their own “Consumer Driven Test Suite”. Our contract tester didn't need to know how to fetch and perform the contract tests anymore; it only needed to prepare a testable producer service.

We could now consider the consumer as an external service and agree on a clear definition of responsibilities. The common point was the CI server TeamCity with our individual pipelines, where both teams needed to add a contract test build goal. Build goals in TeamCity can perform several steps, in our case similar to a unit test:

  • setup/prepare
  • run/perform
  • tear down/cleanup

The image below shows an overview of our build steps. In the blue box (setup) and as cleanup (tear down) you’ll see the tasks which are implemented for the producing service. The red box (run) shows the task which actually performs the contract tests of the consuming service. Please see the “contracttester-new” subfolder for the actual code.

contract test tasks

The details in the blue box show how we set up our service. Since we use a Docker-based setup, our CouchDB database and our service are available as Docker images and can be pulled from a private registry. Both the CouchDB and the service images have tags, which allow us to fetch the desired version to run. Both images are used on our production server, so we can be quite sure that the service behaves like it does in production. The difference from the production environment is the use of mocks for external services, which shouldn't be relevant for contract tests.

After running CouchDB and our service, we collect the actual service URL and save it on the TeamCity agent in a well-known properties file (named url.properties in the green box of the image). The properties file is then used in the TeamCity run step as an input parameter to the contract tests so that they can reach the producer service without hard-coding any URL.

Running the tests in this case boils down to executing a TestNG runner with the contract tests and their dependencies on the classpath, which happens outside of our contract tester.

The cleanup task is very easy with a Docker-based setup: we only need to stop and remove the Docker containers. That way we don't even need to think about any database changes, because the database container is thrown away after every successful test run, too.

Gradle goodies

In addition to the contract tester concept, we used some more advanced Gradle features. Gradle isn't only a wrapper around Groovy; it also tries to enable you to use a more build-domain-specific language in your script files. We'll only show you some code pointers to the improvements of our most recent contract tester compared to the first iteration.

Build Sources

Gradle knows about a special directory buildSrc, which will be compiled (and tested) before running the build.gradle script. Classes in the buildSrc module are available on the build script classpath, which allows us to move implementation details to a plugin-like submodule.

In our example contract tester, you'll see that we moved a ProducerVersionResolver and a HealthCheckService to the build source module. Both classes are used in the build script: the former to find the correct service version, the latter to wait for the service startup before handing control back to the caller (i.e. TeamCity).
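
A condensed illustration of the mechanism (the waitForStartup method and the health URL are invented for this sketch; the real classes are in the linked example project):

    // buildSrc/src/main/groovy/HealthCheckService.groovy (illustrative sketch)
    class HealthCheckService {
        // polls the given URL until it responds with HTTP 200, or fails after maxSeconds
        static void waitForStartup(String url, int maxSeconds = 60) {
            for (int i = 0; i < maxSeconds; i++) {
                try {
                    def connection = (HttpURLConnection) new URL(url).openConnection()
                    if (connection.responseCode == 200) return
                } catch (IOException ignored) {
                    // service not reachable yet, try again
                }
                sleep 1000
            }
            throw new IllegalStateException("service at ${url} did not start within ${maxSeconds}s")
        }
    }

    // build.gradle: buildSrc classes are directly available on the build script classpath
    task runProducerContainer {
        doLast {
            // ... start the producer container, then wait until it answers ...
            HealthCheckService.waitForStartup('http://localhost:8080/health')
        }
    }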

You can consider the build source module a minimal plugin implementation without the hassle of publishing one. The Gradle user guide chapter on organizing build logic discusses the pros and cons of custom tasks, build sources, and dedicated plugins, and we recommend reading it if you're interested in keeping your build.gradle clear and readable.

Task Rules

In the new contract tester we again provided dedicated tasks as entrypoints. This time we didn't define both tasks explicitly, but used Gradle Task Rules to gain more flexibility and generate the necessary entrypoint on demand. The ./gradlew tasks command lists such rules in the Rules section:

Gradle Task Rules example

Gradle task rules behave similarly to normal tasks, so they can be called like, e.g., ./gradlew prepareContracttestsTriggeredByProducer. Adding such rules is quite simple: you only need to define a closure that handles newly requested task names. Depending on the requested task name, our handler dynamically adds a new task and configures it to wait for the running producer and finally save the mentioned url.properties file.
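
A condensed sketch of such a rule (the dependency wiring is omitted and the URL is a placeholder):

    // build.gradle (sketch): generate the entrypoint tasks on demand via a task rule
    tasks.addRule('Pattern: prepareContracttestsTriggeredBy<Trigger>') { String taskName ->
        if (taskName.startsWith('prepareContracttestsTriggeredBy')) {
            def trigger = taskName - 'prepareContracttestsTriggeredBy'   // e.g. 'Producer'
            task(taskName) {
                doLast {
                    // in the real build: wait for the running producer, then save its URL
                    // for the TeamCity run step (values below are placeholders)
                    file('url.properties').text = 'producer.url=http://localhost:8080\n'
                    logger.lifecycle "prepared contract tests triggered by ${trigger}"
                }
            }
        }
    }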

Splitting .gradle Scripts

Similar to the build sources directory, we tried to clean up our build.gradle by extracting some tasks into other .gradle files. Combined with the task rules, this also enables us to dynamically apply only the necessary files.

The build-setup-producer.gradle contains everything we need to run a producer service and is quite self-contained, so we might reuse the same file for other scenarios.

Depending on which pipeline triggered our contract tester, we apply a different build-triggered-by-....gradle script. It only determines the service version using the version resolver class and configures the runProducerContainer task to use the desired version.

Sadly, applying other .gradle files doesn't behave like a simple include, so we need to keep some little tweaks in mind. One example is the usage of plugins and how to apply them: you cannot use their plugin id, but have to apply their main class. Nevertheless, splitting the main build.gradle script makes sense when it comes to readability and reuse.
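
A small sketch of the resulting structure (property, file, and class names are illustrative):

    // build.gradle (sketch)
    apply from: 'build-setup-producer.gradle'

    // apply only the script matching the triggering pipeline
    if (project.hasProperty('triggeredBy')) {
        apply from: "build-triggered-by-${project.triggeredBy}.gradle"
    }

    // build-setup-producer.gradle (sketch): inside an applied script a plugin cannot be
    // applied via its id, but has to be applied via its main class (class name invented here)
    apply plugin: com.example.docker.DockerRemoteApiPlugin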

Summary and Outlook

If you read up to this point you’re probably curious about the next articles. As you already know, the next step in our deployment pipeline would be the Docker image build and afterwards the actual deployment on our servers.

Things have changed since last year; in particular, the Docker image build step has been integrated into the very first step: we not only publish our Spring Boot artifact, but also build and push our Docker image quite early. You can already see the necessary code in the example-backend Gradle script, but we'll go into detail in the next article.

We’re happy about feedback, questions, and suggestions! So feel free to add a comment, open a pull request with improvements, or contact us via Twitter @gesellix!

Peer Feedback as topic at the Berlin Culture Hacking Meetup

Since I posted my first articles on Peer Feedback in 2014, I have noticed that people want to learn more about this approach. When I had a workshop with Olaf last year, we talked about sharing my thoughts on this topic with the Berlin Culture Hacking MeetUp. This was the starting point for inviting the culture hackers to Hypoport and hosting the event.

So last Friday I took the opportunity to present Peer Feedback to the community. We had enlightening discussions around the topic and it was a really cool evening. Thanks to all participants for the input and discussion!

Here are the slides: Peer Feedback – Culture Hacking 2015 – printable slides (german).

As a helpful starting point for your individual peer feedback dialogue you can use the Peer Feedback Sheet (only in German right now).

Looking forward to the next Culture Hacking Meetup!

Andreas

GOTO Night at Hypoport: From the Monolith to Microservices – Randy Shoup

Hypoport is happy to host an interesting GOTO Night with Randy Shoup talking about “From the Monolith to Microservices”.

On behalf of GOTO Berlin and Microservices Meetup Berlin we welcome you at Hypoport.

Venue: Hypoport, Klosterstr. 71, 10179 Berlin
Date: April 20, 2015
Entrance: 18:30 / 6:30PM
Talk: 19:00 / 7PM

“From the Monolith to Microservices: Lessons from Google and eBay”
by Randy Shoup

Abstract:
Most large-scale web companies have evolved their system architecture from a monolithic application and monolithic database to a set of loosely coupled microservices. Using examples from Google, eBay, and other large-scale sites, this talk outlines the pros and cons of these different stages of evolution, and makes practical suggestions about when and how other organizations should consider migrating to microservices. It concludes with some more advanced implications of a microservices architecture, including SLAs, cost-allocation, and vendor-customer relationships within the organization.

Bio:
Randy has worked as a senior technology leader and executive in Silicon Valley at companies ranging from small startups, to mid-sized places, to eBay and Google. In his consulting practice, he applies this experience to scaling the technology infrastructures and engineering organizations of his client companies. He served as CTO of KIXEYE, a 500-person maker of real-time strategy games for web and mobile devices. Prior to KIXEYE, he was Director of Engineering in Google’s cloud computing group, leading several teams building Google App Engine, the world’s largest Platform as a Service. Previously, he was CTO and Co-Founder of Shopilly, an ecommerce startup, and spent 6 1/2 years as Chief Engineer and Distinguished Architect at eBay. Randy is a frequent keynote speaker and consultant in areas from scalability and cloud computing, to analytics and data science, to engineering culture and DevOps. He is particularly interested in the nexus of people, culture, and technology.
Twitter: @randyshoup

Registration
Please register here. For any questions do not hesitate to contact Dajana Günther.

Alignment through Consensus

Making decisions in a classic hierarchical environment is simple and efficient. The manager decides to the best of their knowledge on behalf of the team, so that the team can concentrate on execution.

When it comes to decisions in an agile context with self-organized teams, everybody suddenly talks about “alignment and autonomy”: something like a shared direction while at the same time preserving the teams' decision-making autonomy.

So how is leadership supposed to set a direction without prescribing the exact path and solution?

As an answer, in this blog post I'll make the case for good old consensus. In my opinion it is ultimately the best way to achieve a shared direction. On top of that, consensus ensures that decisions are made sustainably by the whole team. As a result, everyone sticks to them in the long run and you finally get peace from the recurring discussions about the same old topics. That alone is worth the occasionally higher effort of consensus-based decision making.

A few additional tips and tricks can help:

  • full transparency of all tasks and their status – hold regular status meetings with everyone!
  • everyone may get involved in every task – always build your teams and working groups with the question “…and who else wants to join in here?”
  • take all objections seriously, even late in the decision process, and work them in by consensus.
  • if you cannot reach consensus, involve more colleagues and opinions – use the diversity in your organization!
  • try to trace emotions immediately and all the way down to the personal level – train all employees in coaching skills!
  • ignore hierarchies, no matter whether formally defined or systemically lived – involve all colleagues as equals!
  • define self-organization as the highest value in leadership – force your managers to wait and see!
  • interpret employees being overwhelmed by too much freedom as a success – stick to your principles!
  • for advanced practitioners: agree on the values you act by, both at work and in private, by consensus with all colleagues!

But consensus also has natural enemies. These are the ideas of alignment and autonomy described at the beginning, which have to do with enthusiasm, vision, purpose, direction and so on. These principles work on the meta level, are hard for managers to implement, and are therefore dangerous. In the worst case nobody in your organization will strive for consensus anymore. That must be prevented.

Dangers even lurk at the level of operating systems for organizations: the newest and currently hyped alternative goes by the name of “holacracy” or also “sociocracy”. As a colleague put it: “What should I expect from people who advocate something with -cracy in its name?” He's right, because this calls for completely new thinking, and on top of that it fits perfectly with all that agile stuff. You don't want to get involved in that; it turns everything upside down and defines a new way of working together! It doesn't have to come to that!

Long live consensus!

Stable regards,
Evil Coach

Microxchg Special: Seneca Node JS μServices Framework and Docker Orchestration

On Wednesday, Feb 11, the evening before microxchg – the microservices conference – Hypoport is hosting the Microservices Meetup Berlin with talks from two of the conference speakers.

Richard Rodger will talk about the Seneca Node JS μServices Framework

Peter Rossbach will talk about Docker Orchestration

Please register on microservices meetup Berlin.

See you there, Leif