Author: JIT

“My start in professional life as a developer.”

Hey, my name is Tom and I have a problem… but let’s start at the beginning. It was a rainy November day 20 years ago when I first saw the light of day… oh, one moment, too far back?

Last summer I decided it was time to look for the first full-time job of my life. In the final stretch of my studies (or the last holidays of my life *sniff*) I was looking for a suitable position when a funny job posting caught my attention. It spoke of IT heroes and superpowers! My nerd heart began to beat faster, so I browsed the official JIT website and searched for other information about my potential boss (some might call it stalking).
After careful consideration I tried my luck and applied for the job. I knew that many employers insist on a completed degree, but my unconventional application hit the mark.

And why should I mislead you? Of course it was a big change from the dissolute student lifestyle to a full-time job. But the motivation is something else entirely. Apart from more money in my pocket, the basic attitude is still the same: constant improvement.

At university you only learn to work with legacy technologies to a limited extent – people often smile when the topic comes up. Alongside near-antique relics from a time when computers ran on steam, work with new technologies is not neglected either. The right mix is the key to success.

What I will never get used to is a certain volatility in work packages and task prioritization. Little by little, I’m sure, this will become easier for me.

What I really love about my job is the motley bunch of colleagues, who are more reminiscent of a big family (am I really the black sheep?). Hardly an hour goes by without a hearty laugh, and even the gloomiest day passes quickly when you work together towards a goal. No matter how hard a deadline is to meet, our (gallows) humor has never cost us a customer or a release date. Even the most monotonous tasks (yes, those exist too) become bearable when you share a room with people you like. (*)

(*) No, I was not forced to write this and ow… stop pinching!


PS: Help, I was forced to write that!

Microservices – the Insider Perspective. Part 5:


Lightweight BPM engines, especially designed to orchestrate Microservices, are the latest outcomes of the Microservice revolution. An excellent example for their anatomy and capabilities is Zeebe, recently introduced by the German BPM pioneer Camunda.

Camunda created this BPM engine with a total focus on Microservice orchestration, optimizing it for horizontal scalability, low latency, and high throughput.

Zeebe operates with event sourcing, writing all process changes as events. Its instances work together in a cluster. A replication mechanism provides fault tolerance in case of node failure.
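The event-sourcing idea behind this is simple: the engine never overwrites the current process state; it derives the state by replaying an append-only log of events. The following is a minimal conceptual sketch in plain Java – it is not Zeebe’s actual API, and all names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal event-sourcing sketch: process state is derived
// solely by replaying an append-only event log.
public class EventSourcedProcess {

    public enum Event { CREATED, PAYMENT_RECEIVED, SHIPPED, COMPLETED }

    private final List<Event> log = new ArrayList<>();

    // Every change is appended as an event; nothing is ever overwritten.
    public void apply(Event event) {
        log.add(event);
    }

    // The current state is a pure function of the event history.
    public Event currentState() {
        return log.isEmpty() ? null : log.get(log.size() - 1);
    }

    public List<Event> history() {
        return List.copyOf(log);
    }

    public static void main(String[] args) {
        EventSourcedProcess order = new EventSourcedProcess();
        order.apply(Event.CREATED);
        order.apply(Event.PAYMENT_RECEIVED);
        order.apply(Event.SHIPPED);
        System.out.println(order.currentState());   // SHIPPED
        System.out.println(order.history().size()); // 3
    }
}
```

Because the full history is retained, replicating the log across cluster nodes is all that is needed for fault tolerance: any replica can rebuild the same state.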

Zeebe can be integrated into existing event-based Microservice architectures for process-mining purposes. In such a pure observer role, the engine registers all events and correlates them with the business processes defined by BPMN.

This supports the fast and detailed monitoring of business processes, and their visualization, even – and this is new – in fully choreographed architectures.

Zeebe can also be actively integrated as an orchestration and/or communication layer, writing commands in event form either to an existing message bus or to Zeebe’s own messaging platform.

In short: we think Zeebe is very good – for all the reasons presented here, and because this engine proves once again that BPM and Microservices are not mutually exclusive, but in fact ideal companions.

Microservices – the Insider Perspective. Part 4:


The basic principle of Microservice architecture is: “Light and agile is good and efficient”.

It drove all efforts to streamline the choreography solutions described in Part 3 of this series. It also explains why many Microservice pioneers frowned upon using BPM engines to upgrade Microservice systems: too heavyweight, too much initial effort, tedious to launch – everything Microservices should not be. That’s what they said back then. Now they have changed their tune, and with good reason.

It quickly became clear that orchestrated systems with centralized control aren’t more complex, cumbersome or difficult than choreographed solutions – on the contrary: orchestrators literally sort out the inherent problems of choreographed systems:

Business processes no longer disappear into abstract code. All parameters are monitored centrally, in real time, visualized and modeled in BPMN. Logical errors are obvious at a glance.

Handling of timeouts and similar errors happens automatically.

The visualization makes the processes transparent for all stakeholders.

The current state of the art recommends choreographed systems only for simple standard processes, with the caveat that implementing additional new, more complex processes will create exponentially growing problems for the systems and their operators. In this context, it is also worth mentioning that it can make sense to mix choreography and orchestration in the same system.

The more so because BPM engines do not necessarily have to be heavyweight monolithic central authorities. On the contrary – it can make sense to integrate a trimmed-down engine directly into an individual Microservice. For example, a payment service in the context of an order service could use an on-board BPM engine to control the payment process.
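At its core, such an on-board engine can be little more than an ordered sequence of named steps that the service walks through. The following is a hypothetical sketch of that idea – all names are invented, and a real embedded engine (e.g. an embedded Camunda) would of course execute a BPMN model instead of a hard-coded step list:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of a trimmed-down "process engine" embedded in a
// payment service: an ordered map of named steps executed in sequence.
public class EmbeddedPaymentProcess {

    private final Map<String, Supplier<Boolean>> steps = new LinkedHashMap<>();

    public EmbeddedPaymentProcess addStep(String name, Supplier<Boolean> task) {
        steps.put(name, task);
        return this;
    }

    // Executes the steps in insertion order; returns the name of the
    // first failing step, or "COMPLETED" if the whole process succeeded.
    public String run() {
        for (Map.Entry<String, Supplier<Boolean>> step : steps.entrySet()) {
            if (!step.getValue().get()) {
                return step.getKey();
            }
        }
        return "COMPLETED";
    }

    public static void main(String[] args) {
        String result = new EmbeddedPaymentProcess()
                .addStep("validateCard", () -> true)
                .addStep("reserveAmount", () -> true)
                .addStep("capturePayment", () -> true)
                .run();
        System.out.println(result); // COMPLETED
    }
}
```

The point of the sketch: the process logic stays inside the payment service’s own boundary, so no heavyweight central authority is required for this local flow.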

Leading Microservice pioneers like Netflix have long recognized the value of these orchestrators and have created their own streamlined BPM engines. Conductor is a famous example.

Other new lightweight BPM engines like Camunda or Zeebe are now freely available. They run as an embedded part of a Microservice and can also be used in the cloud.

The growing popularity of such solutions has led to changes in licensing models, with fees calculated by transaction rates instead of counting servers or CPUs.

We will discuss the anatomy and benefits of a typical streamlined BPM engine in the next and final part of our blog series.

Microservices – the Insider Perspective. Part 3:


Welcome to part 3 of our blog series on Microservices – a young approach to IT architecture that replaces monolithic system dinosaurs with highly agile swarms of compact, autonomous services. The trick is to organize numerous individual services to create a meaningful, efficient, agile system.

No car can drive on one single wheel, and no business process consists of one single service: Microservices such as Billing, Payment and Shipping need to collaborate to create a value adding business process.

Such collaborations are either orchestrated or choreographed. What you need to know when making your choice:

Choreographed Microservices behave like dancing couples in a ballroom: each keeps its individual routine while reacting to and dodging the others, all without any centralized control. This is asynchronous, event-based communication:

Instead of directly contacting other services, choreographed Microservices write events into a message bus, e.g. Apache Kafka, which other services monitor for events relevant to them. For instance, if the Customer Service publishes the event ‘Customer Created’ while creating a new customer profile, the Loyalty Service responds with the corresponding next process steps.

This adds up to a step-by-step event chain that covers the entire business process. Central control systems become obsolete, and the coupling between individual services is as loose as possible – an ideal state in Microservice architecture.
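The mechanics of the ‘Customer Created’ example can be sketched with a trivial in-memory bus standing in for Kafka. The service names follow the example above; everything else is invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Choreography sketch: services never call each other directly;
// they publish events to a bus and subscribe to events they care about.
public class ChoreographyDemo {

    static class EventBus {
        private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

        void subscribe(String eventType, Consumer<String> handler) {
            subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
        }

        void publish(String eventType, String payload) {
            subscribers.getOrDefault(eventType, List.of())
                       .forEach(handler -> handler.accept(payload));
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        List<String> processLog = new ArrayList<>();

        // The Loyalty Service reacts to 'Customer Created' without the
        // Customer Service knowing anything about it.
        bus.subscribe("CustomerCreated",
                customerId -> processLog.add("loyalty account opened for " + customerId));

        // The Customer Service merely publishes the fact.
        bus.publish("CustomerCreated", "customer-42");

        System.out.println(processLog); // [loyalty account opened for customer-42]
    }
}
```

Note how the overall business process exists only implicitly, in the sum of the subscriptions – exactly the abstraction cost discussed below.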

So why not choreograph every Microservice architecture? Because achieving ideal states comes at a cost. Users of choreographed services pay with increased complexity due to asynchronous communication, with a level of abstraction that reflects business processes only implicitly as events or hides them completely in lines of code, and with challenges in process monitoring, because choreographed systems provide no information on running processes, error occurrences, etc. Monitoring has to be built separately as a costly custom solution.

Furthermore, choreographed systems will only partially fulfill the promise of fully autonomous individual services: the message input and output of the services will create an indirect coupling that requires adapting multiple services whenever implementing new requirements – in blatant contradiction to the Microservice philosophy.

Solving these shortcomings would require an event-command transformation outside the responsibility of the services involved. A separate service would be needed for this task, for example a BPM engine – which in turn would mark a first step towards central orchestration.

In view of these complications, it becomes clear why orchestrated Microservice systems are now widely popular: in contrast to choreography, they are equipped with a central control unit, which directs the overall flow like a conductor directs orchestra musicians. REST is a frequently used communication line between control units and services. Lightweight BPM engines such as Camunda, Zeebe or Conductor are suitable central orchestrators.
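The contrast to the choreography sketch above can be made explicit: an orchestrator owns the full process sequence and calls each service in turn. A minimal illustration with invented names – real engines like Camunda, Zeebe or Conductor execute modeled processes and typically reach the services via REST, which plain method calls stand in for here:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Orchestration sketch: one central component knows the complete
// process sequence and invokes each service step in order.
public class OrderOrchestrator {

    private final List<UnaryOperator<String>> services;

    public OrderOrchestrator(List<UnaryOperator<String>> services) {
        this.services = services;
    }

    public String execute(String order) {
        String state = order;
        for (UnaryOperator<String> service : services) {
            state = service.apply(state); // central control of the flow
        }
        return state;
    }

    public static void main(String[] args) {
        OrderOrchestrator orchestrator = new OrderOrchestrator(List.of(
                s -> s + " -> billed",
                s -> s + " -> paid",
                s -> s + " -> shipped"));
        System.out.println(orchestrator.execute("order-1"));
        // order-1 -> billed -> paid -> shipped
    }
}
```

Unlike in the choreographed version, the business process is visible in one place – which is precisely what makes central monitoring and visualization straightforward.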

Such orchestrated solutions initially had to struggle with prejudice. Now they are about to overtake the originally highly popular choreography solutions. Learn why – in Part 4 of our blog series.

Microservices – the Insider Perspective. Part 2:


Welcome to part 2 of our blog series on Microservices – a new approach to IT architecture that replaces monolithic system dinosaurs with highly agile swarms of compact, autonomous, collaborative services, with the coupling between services as loose as possible and cohesion within the individual service as strong as possible.

Such an architecture offers remedies for the immobility and uncontrollability of complex, organically grown systems. JIT has accompanied many clients during such projects and indeed recommends Microservices, provided that all technical and organizational prerequisites are really met.

Organizationally, all thinking should start with “Conway’s Law”. Although already half a century old, it is still valid: Melvin Conway teaches that whenever an organization designs a system, it will inevitably produce a copy of its team and communication structures. A company preparing to migrate to a Microservice architecture will therefore need to think deeply about business process anatomy and the distribution of tasks and responsibilities.

The more so because, to achieve minimized coupling between Microservices and maximized cohesion within each service, the services and the underlying data models must be modeled with great care along clear domain boundaries and responsibilities.

Domain-driven design (DDD) is the tool of choice for modeling of this type, especially because it uses bounded contexts to divide the business domain into clearly defined zones like, e.g., Billing, Payment, Shipping etc.

Each “bounded context” defines an explicit interface used by collaborating services from other zones.
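In code, such an explicit interface might look like the following hypothetical sketch. The method and type names are invented; the point is only that other contexts depend on the contract, never on Billing’s internal data model:

```java
import java.math.BigDecimal;

// Sketch: the Billing bounded context exposes one explicit interface.
// Other contexts (Payment, Shipping, ...) see only this contract.
public class BoundedContextDemo {

    interface BillingContext {
        BigDecimal invoiceTotal(String orderId);
    }

    // Internal implementation - invisible outside the Billing zone.
    static class BillingService implements BillingContext {
        @Override
        public BigDecimal invoiceTotal(String orderId) {
            return new BigDecimal("99.90"); // stand-in for real invoice logic
        }
    }

    public static void main(String[] args) {
        BillingContext billing = new BillingService();
        System.out.println(billing.invoiceTotal("order-7")); // 99.90
    }
}
```

Everything behind the interface can change freely – a direct expression of loose coupling between contexts and strong cohesion within them.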

If you manage to organize your team structure along such clearly defined domains, your reward will be a clear, efficient Microservice architecture. If not, then not.

With the organizational requirements come the technical prerequisites. The first of those is a high level of automation: Continuous integration/delivery is essential in Microservice-based systems.

Of equal importance is continuous monitoring of individual services as well as of the entire system. This will prove somewhat more complex than monitoring conventional applications, because it involves aggregating multiple servers, log files, and metrics such as CPU utilization or response times in one central location. Tools like Graylog, Graphite and Nagios will be helpful here.

Once you have checked all these preconditions off the list, the one remaining question is: do you want your future system to play like the Philharmonics? Or should it dance like Nijinsky? What does that mean, imply, and require? Read the answers in part 3 of our blog series.

Microservices – the Insider Perspective. Part 1:


Fourteen years of working in the IT industry have changed everything, starting with our perspective. Our experience has grown, and our naive enthusiasm for hypes and fashions has faded. Good – because IT is a tool, nothing more and nothing less, and the value of newly designed tools lies not in their being new, but in their capacity to solve problems.

Take Microservices as an example: they have been the talk of the industry for years, at times praised as the ultimate solution for really, really everything in IT. For JIT, they have gained steadily growing importance. Wherever we apply them, customer reactions are indeed enthusiastic – mostly because we recommend migration only if the problem justifies the effort and only after all organizational and technical requirements are covered.

We recommend Microservices for their huge potential in revitalizing IT landscapes that have grown organically over years and decades and have become too large, too complicated and immobile to adapt to changing environmental conditions. Jurassic Systems, so to speak.

The most glaring symptom of such an acute system growth crisis is a skyrocketing error rate, with varying teams tampering with an essentially unmanageable application, implementing new features that have little to do with each other. Which does not matter anyway, because nobody in the company really understands the business logic of the application anymore, at least not in its entirety. The escalating complexity makes interventions in development and maintenance difficult, expensive and risky: even the smallest changes cause unexpected effects in seemingly uninvolved, remote regions of the system. The growing risks inhibit necessary development steps. The company loses its agility, soon to be left behind by the market.

Microservices offer a perfect remedy here, being compact, autonomous services that collaborate with each other, minimizing the coupling between services while maximizing cohesion within each service. This IT architecture really fulfills the principle of single responsibility: “Do one thing and do it well”. Its advantages are obvious:

Compact, manageable services allow for easier, quicker, low-risk overhauls, throughout their whole lifecycles and by always the same team, marking a leap towards realizing the dev-ops vision. Implementing new services and modernizing individual services using specialized technologies is quite effortless. Performance-critical services are individually scalable. Ongoing, rapid, low-risk deployments reduce the time-to-market. Services can be linked to create new functionalities.

Figuratively speaking, Microservices replace cumbersome dinosaur systems with highly agile swarms of bees.

That’s what we tell our customers, but never without mentioning that this new agility carries a price tag – and no, we are not talking about initial investments, but rather a wholly changed risk landscape: migrations to Microservice architectures will always involve the characteristic challenges of complex distributed systems – think distributed transactions, ACID, and consistency. Consult L. Peter Deutsch’s classic “Eight Fallacies of Distributed Computing.”

Mind that we haven’t even begun discussing the organizational and technical requirements for Microservice migrations – but in part 2 of this blog series, we will do just that.


“Transparent fixed price offer”

When it comes to custom software development, many IT service providers struggle to avoid fixed-price offers. And the breakdown of the estimated effort is not always presented with the clarity the customer expects.

As for the billing method, projects are often settled based on actual expenditure – that is, time and materials. In plain language, this means the customer assumes the entire risk.

From the point of view of the IT service provider, this is the perfect solution – with one essential exception: it fails to meet the customer’s need for planning and budget security.

A closer look at fixed-price billing yields simple rules for the IT service provider that have a positive effect on risk minimization:

Regardless of the overall project size, it must be ensured that individual atomic tasks (for example, functional requirements) do not exceed a maximum amount of effort. If one does, the corresponding task must be split further.
The IT service provider must be given the opportunity to become familiar with the customer’s infrastructure, technologies and the people involved.
Communicate clearly about the quality of the underlying documents (specifications, etc.). If necessary, they must be revised. If the two parties cannot reach an agreement, this is the right moment to devote yourself to another project.
Do not complicate matters more than they already are.

In addition to the binding price, the traceability of the underlying expenses is of the highest priority for the customer.

Hand on heart: how often have you, as a decision maker, received multiple offers for the same request and been unable to compare them? On top of that, the cost allocation mentioned above often makes it impossible to calculate the ROI for individual items.

Again, there is a simple solution: an itemized list of the functional requirements with their respective costs, ideally traceable via an established metric such as the Function Point method.
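As a rough illustration of the idea, here is a simplified unadjusted function point count. The weights are the common IFPUG averages for the five element types; the counted requirement items are invented for illustration, and a real count would also classify each item’s complexity:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified unadjusted function point count using IFPUG average
// weights: EI=4, EO=5, EQ=4, ILF=10, EIF=7.
public class FunctionPointSketch {

    static final Map<String, Integer> WEIGHTS = Map.of(
            "EI", 4,   // external input
            "EO", 5,   // external output
            "EQ", 4,   // external inquiry
            "ILF", 10, // internal logical file
            "EIF", 7); // external interface file

    public static int count(Map<String, Integer> occurrences) {
        return occurrences.entrySet().stream()
                .mapToInt(e -> WEIGHTS.get(e.getKey()) * e.getValue())
                .sum();
    }

    public static void main(String[] args) {
        Map<String, Integer> requirement = new LinkedHashMap<>();
        requirement.put("EI", 2);  // two input masks
        requirement.put("EO", 1);  // one report
        requirement.put("ILF", 1); // one maintained data set
        System.out.println(count(requirement)); // 2*4 + 1*5 + 1*10 = 23
    }
}
```

Because the weights are public and standardized, the customer can trace every line item of the offer back to the requirements being priced – exactly the comparability discussed above.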


If, in addition to the required technical skill set, you pay attention to a few behaviors and rules, and also consider how an offer should be designed from the customer’s perspective, you have a good chance of winning the contract.


“Permit A38 – BPM has long since ceased to be a Gaulish village.”

What do Asterix and Obelix, permit A38 and BPM have in common? Read for yourself:

In the cult animated film “Asterix Conquers Rome”, Asterix and Obelix face the task of obtaining permit A38 from a Roman prefecture (read: a government office). This test is anything but easy: the two Gauls are constantly passed from department to department, but the required document remains out of reach. Only when Asterix resorts to cunning and demands the fictitious permit A39 does he manage to turn the resulting confusion to his advantage and finally obtain the required permit A38.

This story shows – albeit with some exaggeration – that as soon as an institution reaches a certain size, process automation is essential for handling its various processes efficiently. Compared to manual processing, the step towards automation offers the following advantages:

  • lower process costs
  • faster process execution
  • fewer process errors
  • better traceability

However, anyone who believes that automated process execution is relevant only for large companies with corresponding financial resources is wrong. Small and medium-sized enterprises (SMEs) can also benefit from process automation without major financial outlay. By means of so-called “human tasks”, automated processes can be selectively enriched with manual steps. In the process “Onboarding a new employee”, for example, individual steps such as registration with the social security agency, creation of a user account, and handover of key and access card are performed manually. Process automation, however, ensures that none of these steps is forgotten.
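The “human task” idea can be sketched as a checklist the engine tracks: automatic steps complete themselves, while manual steps stay open until a person confirms them – so nothing is forgotten. This is a conceptual sketch, not Camunda’s API; the task names follow the onboarding example above:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of a process mixing automatic steps and human tasks:
// the engine tracks completion, so no manual step can be forgotten.
public class OnboardingProcess {

    private final Map<String, Boolean> tasks = new LinkedHashMap<>();

    public OnboardingProcess() {
        tasks.put("create user account", true);            // automatic step
        tasks.put("register with social security", false); // human task
        tasks.put("hand over key and access card", false); // human task
    }

    public void completeHumanTask(String name) {
        tasks.replace(name, true);
    }

    // Open tasks are exactly the manual steps nobody has confirmed yet.
    public List<String> openTasks() {
        return tasks.entrySet().stream()
                .filter(e -> !e.getValue())
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        OnboardingProcess process = new OnboardingProcess();
        process.completeHumanTask("register with social security");
        System.out.println(process.openTasks());
        // [hand over key and access card]
    }
}
```

In a real engine, open human tasks would appear in a task list for the responsible employee – the principle of tracking them until confirmation is the same.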

Introducing BPM into the enterprise is easier than ever. Lightweight open-source solutions such as Camunda now offer a very low entry barrier. An experienced Java developer can set up a process engine for a pilot project in a very short time and without license costs. Subsequently, the processes to be automated are chosen in coordination with the business department. Only after a successful pilot phase does the use of paid features gain importance.

If the prefecture in the Asterix and Obelix story had already relied on BPM, obtaining permit A38 would have been easy. BPM would thus have made life easier even in ancient times!


“Best buddy”

What awaits a brand-new IT Hero on his first JIT workday? The answer can be found here:

Anticipation coupled with a bit of stage fright often accompanies the first working day in a new company. To ensure a successful start, new JIT employees can expect a fully prepared laptop, system access and workplace, as well as a personal buddy who will not leave their side during the first weeks.

The buddy’s task is to train the hero in the respective project. In addition to project-specific work processes and organizational structures, this above all includes imparting technical and domain-specific knowledge. The aim is to bring the new employee up to a high level of productivity as quickly as possible and to enable independent work.

The onboarding is roughly divided into the following phases:

1. High-level overview

The buddy sketches a picture of the domain. The system architecture, the tasks of the different components, and their interaction are explained. It has proven useful to present the actual use of the system from the end user’s point of view.

2. Low-level details of a component

To avoid information overload, the focus is placed on one component, whose functionality is examined in detail by means of the source code. Subsequent tasks are limited to this part of the code.

3. Cooperative work

Together with the buddy, the first tasks are performed as pair programming, i.e. two developers work at one computer. While one developer takes the active role and writes the source code, the second remains in an observer role: he outlines the solution and identifies potential problems. The roles are swapped regularly. This agile way of working not only accelerates knowledge transfer and team building, but also increases the quality of the code.

4. Autonomous work

After the first few weeks, tasks are solved on one’s own responsibility. The training wheels are not yet removed: the buddy keeps supporting in the background. Bug fixing is ideal for these tasks. After completing a task, a code review is performed together with the buddy, presenting the solution, suggesting improvements, and identifying any issues.

5. Expansion to other system parts

Further components are introduced continuously, and tasks targeting them are solved both via pair programming and independently. The buddy’s assistance is reduced step by step.

Everything learned is continuously written down in the form of glossaries, instructions and system descriptions. On the one hand this consolidates the new knowledge; on the other hand these resources serve as training material for future heroes, whose potential buddy is then the successfully trained employee.



“How to unite the strengths of scrum and function points in the fixed price model.”

More important than the right metric is the common estimation of complexity

Scrum is now widely used and well known in IT. Everyone knows the popular agile framework, everyone knows its roles, and everyone has at least heard of the Daily Stand-up.

But Scrum is not just a framework for project development. Scrum supports you in implementing the values of agile methods. These values were defined in the Agile Manifesto:

Individuals and interactions are more important than processes and tools
Functioning software is more important than comprehensive documentation
Cooperation with the customer is more important than contract negotiation
Responding to change is more important than following a plan

Scrum focuses on the individuals involved and on achieving a successful outcome. It avoids bureaucratic processes that come at the expense of employee motivation. Obstacles should be overcome as quickly as possible through intensive cooperation – both within the team and with external partners.

At JIT, we use a Scrum process for fixed-price projects whose requirements have been estimated using Function Points. This means we need to adapt the Scrum framework to fit our needs. In two ways, our method deviates somewhat from the usual case:

1) Fixed price projects with Scrum

Realizing fixed-price projects with the agile Scrum method is a contradiction at first sight. However, as customers often prefer the fixed price, we have found a way to combine both.

The use of Scrum in fixed-price projects can facilitate collaboration between customer and supplier. For this, it is necessary that the customer lives the Scrum process. However, this is the case with any agile project, regardless of contracts or payment methods. The agile values listed above must be lived throughout the project.

With our Scrum projects, we are involved in the customer’s development process; here we at JIT can play to our greatest strengths – close cooperation and direct communication with external partners. In addition, it is not necessary to estimate all the requirements for an application at once, because with Scrum the individual user stories are estimated one after the other. That is a lot easier.

2) Function-Point with Scrum

The most common estimation metric for Scrum is not Function Points but Story Points – yet Story Points are not mandatory in Scrum. Much more important than the metric is the comparative estimation of complexity. Therefore, when the team estimates the complexity of an elementary process or data set together, Story Points can be omitted.

A Scrum team must estimate together, set the Sprint Goal together, and then achieve it together. That’s what matters.

Conclusion: Function Point estimation and fixed prices can be profitably combined with Scrum, provided certain permissible adaptations are made. The key to the success of a Scrum project is that the team collectively lives the agile values. JIT fully lives up to this premise: we lived the agile values before Scrum was even introduced. Scrum has simply brought more order into our agile everyday life.