Author: JIT

address: Vorgartenstraße 206B, A-1020 Vienna

My Life with Isis (so far). Part 2:

“My Life with Isis (so far).”

Did I finally make it and write one of those “catchy” headlines? (*)

When the end product is GUI-driven, I can imagine that Isis may be a good fit in production as well. At the same time, a focus always entails compromise elsewhere.

A downside of every small framework is documentation. For giants like Spring, you can find, beyond the official docs, a myriad of blog articles, YouTube videos, a plethora of Stack Overflow questions, and so on. A lesser-known framework cannot possibly keep up with that.

Sadly, Apache Isis is no exception. More than once I nearly bit the table as I consulted the wise depths of the internet and was ultimately forwarded, yet again, to the (by then already memorized) “Beyond the Basics” page, which did not provide a solution to the problem at hand.

At its core, Apache Isis uses DataNucleus as its interface to the persistence layer. Again, its documentation can’t compete with that of an old bull like Hibernate.

Something that caused headaches was the lack of nested transactions. Exceptionally positive, on the other hand, was the response time on the official mailing list when we described a specific problem and our solution and asked whether it was suitable for use with Isis: within one or two hours we had an answer!

Another problem, one that only occurred in production, put a colleague to the test: a very big CLOB, which simply was never created in any testing environment, couldn’t be persisted. The documentation was silent on this topic. The workaround was to operate around the (by now dreaded) DataNucleus – which was totally fine in this case, as no domain object was involved.

During testing, the GUI was golden. Especially due to the integration nature of the project, one could easily verify data, and a few clicks could shorten or even save a long-running retest.

Before I started, the idea of naked objects was foreign to me and I was a sceptic, all the more so as I have moved towards functional programming in the last couple of years, where data and functions are (strictly) separated.

Does it really make sense to build “fat objects” again? Where to draw the (business) line? (Customers buy products: should there now be a buysProduct(product) method inside the customer, or a buys(customer) method inside the product?) If there are no – or at least few – services, how do you ensure single responsibility?

I put my concerns aside (as well as I could), and today I can say the experience was worth it: my scepticism didn’t disappear entirely, but neither was it all confirmed.

Like every tool, Apache Isis should be used for the tasks it was designed for. If the GUI is a minor concern, I wouldn’t use Apache Isis, as this would be like wearing roller skates while trekking through the woods: doable, of course, but I would sweat a lot and doubt it makes sense. (Apart from the impact on the environment ;))

To get a feeling for a business domain, however, or for fast iterations during rapid prototyping, I would short-list Apache Isis. Important to me would be knowing whether the prototype will really be dropped at the end, or whether the end product is GUI-driven.

(*) In case you’re part of a secret agency: Don’t shoot please!

My Life with Isis (so far). Part 1:

“My Life with Isis (so far).”

Did I finally make it and write one of those “catchy” headlines? (*)

There is little that makes my curious technician’s heart go boom-boom like the opportunity to learn something new. A couple of months back I got the opportunity to support a project using a framework and approach unknown to me.

Of course I’m talking about Isis, which was known to me only as an Egyptian goddess and, from the recent past… well, you know!

Apart from that, it’s also the name of a JVM-based Apache project. Apache Isis is based on the idea of naked objects. To summarize briefly, the three base principles are:

Firstly: all business logic is encapsulated in domain objects.

Secondly: the user interface should represent the domain objects directly.

Thirdly – and this is the speciality of this pattern: the user interface should be generated automatically from the domain objects.
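To make the three principles concrete, here is a minimal plain-Java sketch of such a domain object. All names are invented for illustration; in actual Apache Isis code, the class and its action methods would additionally carry framework annotations (from memory, along the lines of @DomainObject and @Action from the Isis applib – check the docs), which is what lets the UI be generated automatically.

```java
// Sketch of a naked-objects domain entity: all business logic lives on the
// object itself, and the framework derives a UI from its properties and
// methods. The Isis annotations are shown only as comments so the sketch
// stays plain, runnable Java.
import java.util.ArrayList;
import java.util.List;

public class Customer {                            // @DomainObject in Isis
    private final String name;
    private final List<String> orders = new ArrayList<>();

    public Customer(String name) { this.name = name; }

    public String getName() { return name; }           // rendered as a field
    public List<String> getOrders() { return orders; } // rendered as a table

    // Business logic as a method on the domain object itself; Isis would
    // render this as an action button in the generated UI.  // @Action
    public void buyProduct(String product) {
        if (product == null || product.isEmpty())
            throw new IllegalArgumentException("product required");
        orders.add(product);
    }
}
```

The point of the pattern is that nothing else is needed: no controller, no view templates – the generated UI follows the shape of the class.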

I see the biggest strength of Isis in the third point. Isis itself even promises “UI & REST for free” (and it delivers)!

With a very minor investment of work, you get an easy-to-use GUI that shows all the relevant data of a domain object and allows actions on it as well. Administration of these domain objects by human operators therefore becomes child’s play. This also shows in the typical use cases.

Especially for rapid prototyping this approach is ideal, as the focus can lie on the actual business model, and one does not “waste” precious time in each (modeling) iteration updating one’s GUI or database just to get a feel for the model and how it works.

(*) In case you’re part of a secret agency: Don’t shoot please!

My start in professional life as a developer

“My start in professional life as a developer.”

Hey, my name is Tom and I have a problem… but let’s start at the beginning. It was a rainy November day 20 years ago when I first saw the light of day… oh, one moment, too far back?

Last summer I decided it was time to look for the first full-time job of my life. In the final spurt of my studies (or the last holidays of my life, *sniff*) I was looking for a suitable position, and a funny job ad caught my attention: it spoke of IT heroes and superpowers! My nerd heart began to beat faster, so I browsed the official JIT website and searched for other information about my potential boss (some might call it stalking).
After careful consideration I tried my luck and applied. I knew that many employers insist on a finished degree, but my unconventional application didn’t miss its mark.

And why should I make you believe otherwise? Of course it was a big change from the dissolute student lifestyle to a full-time job. But the motivation is something else entirely: apart from more money in my pocket, the basic attitude is still the same – constant advancement.

At university you only marginally learn to work with legacy technologies – people often just smile when the topic comes up. In practice, alongside near-antique relics from a time when computers were run by steam, there is no shortage of work with new technologies either. The right mix is the key to success.

What I will never get used to is a certain volatility in work packages and the prioritisation of tasks. Little by little, I’m sure, this will become easier for me.


What I really love about my job is the motley bunch of colleagues, who feel more like a big family (am I really the black sheep?). Hardly an hour goes by without hearty laughter, and even a gloomy day passes quickly when you work together towards a goal. No matter how hard a deadline may be to meet, our (gallows) humour has not yet cost us a customer or a release date. Even the most monotonous tasks (yes, those exist too) become bearable when you sit in a room with people you like. (*)

(*) No, I was not forced to write this and ouch… stop pinching!


PS: Help, I was forced to write that!

Microservices – the Insider Perspective. Part 5:


Lightweight BPM engines, especially designed to orchestrate Microservices, are the latest outcomes of the Microservice revolution. An excellent example for their anatomy and capabilities is Zeebe, recently introduced by the German BPM pioneer Camunda.

Camunda created this BPM engine with a total focus on Microservice orchestration, optimizing it for horizontal scalability, low latency, and high throughput.

Zeebe operates with event sourcing, writing all process changes as events. Its instances work together in a cluster. A replication mechanism provides fault tolerance in case of node failure.
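The event-sourcing idea can be illustrated in a few lines of plain Java (a toy sketch, not Zeebe’s actual API): state is never stored directly, only derived by replaying an append-only event log – and replicating that log to peers is what gives a cluster its fault tolerance.

```java
// Minimal event-sourcing sketch: process state is never stored directly;
// it is always derived by replaying an append-only event log.
import java.util.ArrayList;
import java.util.List;

public class ProcessLog {
    private final List<String> events = new ArrayList<>();

    // Every state change is recorded as an immutable event.
    public void append(String event) { events.add(event); }

    // The current state is a pure function of the event history (replay).
    public String currentState() {
        String state = "CREATED";
        for (String e : events) {
            switch (e) {
                case "Activated": state = "ACTIVE"; break;
                case "Completed": state = "COMPLETED"; break;
            }
        }
        return state;
    }

    // Replication in a cluster boils down to copying this log to peer nodes.
    public List<String> snapshotForReplication() { return new ArrayList<>(events); }
}
```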

Zeebe can be integrated into existing event-based Microservice architectures for process-mining purposes. In such a pure observer role, the engine registers all events and correlates them with the business processes defined by BPMN.

This supports the fast and detailed monitoring of business processes, and their visualization, even – and this is new – in fully choreographed architectures.

Zeebe can also be actively integrated as an orchestration and/or communication layer, writing commands in event form either to an existing message bus or to Zeebe’s own messaging platform.

In short: we think Zeebe is very good – for all the reasons presented here, and because this engine proves once again that BPM and Microservices are not mutually exclusive, but in fact ideal companions.

Microservices – the Insider Perspective. Part 4:


The basic principle of Microservice architecture is: “Light and agile is good and efficient”.

It drove all efforts to streamline the choreography solutions described in Part 3 of this series. It also explains why many Microservice pioneers frowned upon using BPM engines to upgrade Microservice systems: too heavyweight. Too much initial effort. Tedious to launch. Everything Microservices should not be. That’s what they said back then. Now they have changed their tune, and with good reason.

It quickly became clear that orchestrated systems with centralized control aren’t more complex, cumbersome or difficult than choreographed solutions – on the contrary: orchestrators literally sort out the inherent problems of choreographed systems:

Business processes no longer disappear into abstract code. All parameters are monitored centrally and in real time, visualized and modeled in the BPMN context. Logical errors are obvious at a glance.

Handling of processing timeouts and similar errors happens automatically.

The visualization makes the processes transparent for all stakeholders.

The current state of the art recommends choreographed systems only for simple standard processes, with the caveat that implementing additional new, more complex processes will create exponentially growing problems for the systems and their operators. In this context, it is also worth mentioning that it can make sense to mix choreography and orchestration in the same system.

The more so because BPM engines do not necessarily have to be heavyweight monolithic central authorities. On the contrary – it can make sense to integrate a trimmed-down engine directly into an individual Microservice. For example, a payment service in the context of an order service could use an on-board BPM engine to control the payment process.

Leading Microservice pioneers like Netflix have long recognized the value of these orchestrators and have created their own streamlined BPM engines. Conductor is a famous example.

Other new lightweight BPM engines like Camunda or Zeebe are now freely available. They run as an embedded part of a Microservice and can also be used in the cloud.

The growing popularity of such solutions has led to changes in licensing models, with fees calculated by transaction rates instead of counting servers or CPUs.

We will discuss the anatomy and benefits of a typical streamlined BPM engine in the next and final part of our blog series.

Microservices – the Insider Perspective. Part 3:


Welcome to part 3 of our blog series on Microservices – a young approach to IT architecture that replaces monolithic system dinosaurs with highly agile swarms of compact, autonomous services. The trick is to organize numerous individual services to create a meaningful, efficient, agile system.

No car can drive on one single wheel, and no business process consists of one single service: Microservices such as Billing, Payment and Shipping need to collaborate to create a value adding business process.

Such collaborations are either orchestrated or choreographed. What you have to know when making your choice:

Choreographed Microservices behave like dancing couples in a ballroom: while keeping their individual routines, they react to each other and dodge each other, all without any centralized control. This is asynchronous, event-based communication:

Instead of directly contacting other services, choreographed Microservices will write relevant events into a message bus, e.g. Apache Kafka, which other services monitor for relevant events. For instance, if the Customer Service publishes the event ‘Customer Created’ while creating a new customer profile, the Loyalty Service will respond with the corresponding next process steps.

This adds up to a step-by-step event chain that covers the entire business process. Central control systems become obsolete and the coupling between individual services is as loose as ever possible – an ideal state in Microservice architecture.
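The ‘Customer Created’ example above can be sketched with a tiny in-memory bus standing in for Kafka; all class and event names here are invented for illustration:

```java
// Choreography sketch: services never call each other directly. They publish
// events to a bus and react only to events they have subscribed to.
// A map of listeners stands in for a real message bus such as Kafka.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // e.g. the Loyalty Service subscribing to "CustomerCreated"
    public void subscribe(String eventType, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    // e.g. the Customer Service publishing "CustomerCreated"
    public void publish(String eventType, String payload) {
        for (Consumer<String> h : subscribers.getOrDefault(eventType, List.of()))
            h.accept(payload);
    }
}
```

Note how loose the coupling is: the publisher has no idea who, if anyone, listens – which is exactly the strength and, as discussed below, the weakness of this style.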

So why not choreograph every Microservice architecture? Because achieving ideal states carries costs. Users of choreographed services pay with increased complexity due to asynchronous communication, with a total abstraction that either reflects business processes only implicitly as events or hides them completely in lines of code, and with challenges in process monitoring, because choreographed systems provide no information on running processes, error occurrence, etc. Monitoring has to be installed separately as costly custom solutions.

Furthermore, choreographed systems will only partially fulfill the promise of fully autonomous individual services: the message input and output of the services will create an indirect coupling that requires adapting multiple services whenever implementing new requirements – in blatant contradiction to the Microservice philosophy.

Solving these shortcomings would require an event-to-command transformation outside the responsibility of the services involved. A separate component would be needed for this task – for example a BPM engine – which in turn would mark a first step towards central orchestration.

In view of these complications, it becomes clear why orchestrated Microservice systems are now widely popular: in contrast to choreography, they are equipped with a central control unit, which directs the overall flow like a conductor directs orchestra musicians. REST is a frequently used communication line between control units and services. Lightweight BPM engines such as Camunda, Zeebe or Conductor are suitable central orchestrators.
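For contrast with the choreographed event chain, here is the orchestration idea as a minimal plain-Java sketch (invented names, not the API of any real BPM engine): the central unit owns the process sequence, while the services know nothing of each other.

```java
// Orchestration sketch: a central unit owns the process sequence and invokes
// each service in turn; the services do not know about each other.
import java.util.ArrayList;
import java.util.List;

public class OrderOrchestrator {
    // Stand-ins for Billing, Payment and Shipping Microservices; in a real
    // system each call would typically go over REST to a separate service.
    interface Service { String handle(String orderId); }

    private final List<Service> steps = new ArrayList<>();

    public void addStep(Service s) { steps.add(s); }

    // The whole business process is visible in one place -- the central
    // counterpart to an event chain scattered across choreographed services.
    public List<String> run(String orderId) {
        List<String> trace = new ArrayList<>();
        for (Service s : steps) trace.add(s.handle(orderId));
        return trace;
    }
}
```

Because the sequence lives in one component, it can be modeled, visualized and monitored centrally – which is precisely what a BPMN-based engine adds on top of this bare skeleton.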

Such orchestrated solutions initially had to struggle with prejudice. Now they are about to overtake the originally highly popular choreography solutions. Learn why – in Part 4 of our blog series.

Microservices – the Insider Perspective. Part 2:


Welcome to part 2 of our blog series on Microservices – a new approach to IT architecture that replaces monolithic system dinosaurs with highly agile swarms of compact, autonomous, collaborative services, with the coupling between services as loose as possible and cohesion within the individual service as strong as possible.

Such an architecture offers remedies for the immobility and uncontrollability of complex, organically grown systems. JIT has accompanied many clients during such projects and indeed recommends Microservices, provided that all technical and organizational prerequisites are really met.

Organizationally, all thinking should start with “Conway’s Law”. Although already half a century old, it is still valid: Melvin Conway teaches that whenever an organization designs a system, it will inevitably produce a copy of its own team and communication structures. A company preparing to migrate to a Microservice architecture will therefore need to think deeply about business process anatomy and the distribution of tasks and responsibilities.

The more so because, to achieve minimized coupling between Microservices and maximized cohesion within each service, the services and the underlying data models must be modeled with great care along clear domain boundaries and responsibilities.

Domain-driven design (DDD) is the tool of choice for this type of modeling, especially because it uses bounded contexts to divide the business domain into clearly defined zones such as Billing, Payment, Shipping, etc.

Each “bounded context” defines an explicit interface used by collaborating services from other zones.
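Sketched in Java (with invented names), a bounded context exposes exactly one explicit interface while keeping its internal model private:

```java
// DDD sketch: the Billing bounded context publishes one explicit interface;
// everything else (its internal invoice model) is an implementation detail
// hidden from collaborating contexts such as Ordering.
public class BillingContext {

    // The published interface of the Billing zone -- the only thing other
    // bounded contexts are allowed to depend on.
    public interface BillingApi {
        long invoiceAmountCents(String orderId);
    }

    // Internal model, invisible outside the context.
    private static class Invoice {
        final String orderId; final long cents;
        Invoice(String orderId, long cents) { this.orderId = orderId; this.cents = cents; }
    }

    public static BillingApi create() {
        Invoice fixed = new Invoice("A-1", 4200);  // toy data for the sketch
        return orderId -> orderId.equals(fixed.orderId) ? fixed.cents : 0L;
    }
}
```

Mapping this to Microservices is straightforward: the interface becomes the service’s API, and the hidden model stays behind it – which is what keeps coupling low and cohesion high.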

If you manage to organize your team structure along such clearly defined domains, your reward will be a clear, efficient Microservice architecture. If not, then not.

With the organizational requirements come the technical prerequisites. The first of those is a high level of automation: Continuous integration/delivery is essential in Microservice-based systems.

Of equal importance is continuous monitoring of individual services as well as the entire system. This will prove somewhat more complex than monitoring conventional applications, because it involves aggregating multiple servers, log files and metrics like CPU utilization or response times in one central location. Tools like Graylog, Graphite and Nagios will be helpful here.

Once you have checked all these preconditions off the list, the one remaining question is: do you want your future system to play like the Philharmonics? Or should it dance like Nijinsky? What does that mean, imply, and require? Read the answers in part 3 of our blog series.

Microservices – the Insider Perspective. Part 1:


Fourteen years of working in the IT industry have changed everything, starting with our perspective. Our experience has grown, and our naive enthusiasm for hypes and fashions has faded. Great, because IT is a tool, nothing more and nothing less, and the value of newly designed tools is not in their being new, but in their capacity to solve problems.

Take Microservices, for example: they have been the talk of the industry for years, at times praised as the ultimate solution for really, really everything in IT. For J-IT, they have gained steadily growing importance. Wherever we apply them, customer reactions are indeed enthusiastic, mostly because we recommend migration only if the problem justifies the effort and only after all organizational and technical requirements are covered.

We recommend Microservices for their huge potential in revitalizing IT landscapes that have grown organically over years and decades and have become too large, too complicated and immobile to adapt to changing environmental conditions. Jurassic Systems, so to speak.

The most glaring symptom of such an acute system growth crisis is a skyrocketing error rate, with varying teams tampering with an essentially unmanageable application, implementing new features that have little to do with each other. Which does not matter anyway, because nobody in the company really understands the business logic of the application anymore, at least not in its entirety. The escalating complexity makes coding interventions in development and maintenance difficult, expensive and risky: even the smallest changes cause unexpected effects in seemingly uninvolved, remote regions of the system. The growing risks inhibit necessary development steps. The company loses its agility, soon to be left behind by the market.

Microservices offer a perfect remedy here, being compact, autonomous services that collaborate with each other, minimizing the coupling between services while maximizing cohesion within each service. This IT architecture really fulfills the principle of single responsibility: “Do one thing and do it well”. Its advantages are obvious:

Compact, manageable services allow for easier, quicker, low-risk overhauls, throughout their whole lifecycles and by always the same team, marking a leap towards realizing the dev-ops vision. Implementing new services and modernizing individual services using specialized technologies is quite effortless. Performance-critical services are individually scalable. Ongoing, rapid, low-risk deployments reduce the time-to-market. Services can be linked to create new functionalities.

Figuratively speaking, Microservices replace cumbersome dinosaur systems with highly agile swarms of bees.

That’s what we tell our customers, but never without mentioning that this new agility carries a price tag, and no, we are not talking about initial investments, but rather a wholly changed risk landscape: migrations to Microservice architectures will always involve the characteristic challenges of complex distributed systems – think distributed transactions, ACID, and consistency. Consult the classic “Eight Fallacies of Distributed Computing”, commonly attributed to Peter Deutsch and his colleagues at Sun, building on an earlier list by Bill Joy and Tom Lyon.

Mind that we haven’t even begun discussing the organizational and technical requirements for Microservice migrations – but in part 2 of this blog series, we will do just that.


“Transparent fixed price offer”

When it comes to custom software development, many IT service providers struggle to avoid fixed-price offers. It is also the case that the breakdown of the effort is not always presented with the clarity the customer expects.

With regard to billing, projects are often charged based on actual expenditure – that is, time and materials. In plain language, this means the customer assumes the entire risk.

From the point of view of the IT service provider, this is the perfect solution – with the exception of one essential aspect: it fails to meet the customer’s need for planning and budget security.

If you take a closer look at billing on a fixed-price basis, a few simple rules emerge for the IT service provider that have a positive effect on risk minimization:

  • Regardless of the overall project size, it must be ensured that individual atomic tasks (for example, functional requirements) do not exceed a maximum amount of effort. If one does, the corresponding task must be split further.
  • The IT service provider must be given the opportunity to become familiar with the customer’s infrastructure, technologies and people involved.
  • Communicate clearly about the quality of the underlying documents (specifications, etc.). If necessary, a revision must be made; if the two parties cannot reach agreement, this is the right time to devote yourself to another project.
  • Do not complicate matters more than they already are.

In addition to the binding price, the traceability of the underlying expenses is of the highest priority for the customer.

Hand on heart: how often have you, as a decision maker, received multiple offers for the same request and been unable to compare them? On top of that, the cost allocation mentioned above often makes it impossible to calculate the ROI for individual positions.

Again, there is a simple solution: itemization according to functional requirements, each with its respective costs – ideally traceable via an established metric, such as the Function Point method.
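As a rough illustration of how such a metric makes offers comparable, here is a sketch of an unadjusted function point count using the standard IFPUG average weights; a real count also rates each function’s complexity (simple/average/complex) and applies a value adjustment factor, both omitted here.

```java
// Sketch of an unadjusted function point count using the IFPUG *average*
// complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7). Simplified: real
// counts classify each function's complexity and adjust the total.
public class FunctionPoints {
    public static int unadjusted(int externalInputs, int externalOutputs,
                                 int externalInquiries, int internalFiles,
                                 int externalInterfaceFiles) {
        return externalInputs * 4
             + externalOutputs * 5
             + externalInquiries * 4
             + internalFiles * 10
             + externalInterfaceFiles * 7;
    }
}
```

Because every offer item maps to counted functions, two offers for the same requirements become directly comparable per position – which is exactly the traceability the customer is after.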


If, in addition to the required technical skill set, you pay attention to a few behaviors and rules, and also consider how an offer should look from a customer’s perspective, then you have a good chance of winning the contract.


“Passierschein A38 – BPM is not a Gaulish village anymore.”

What do Asterix and Obelix, permit A38 and BPM have in common? Read for yourself:

In the cult animated film “Asterix Conquers Rome”, Asterix and Obelix are faced with the task of obtaining permit A38 from a Roman prefecture (read: a government office). This task is anything but easy: the two Gauls are constantly passed from department to department, yet the required document remains denied to them. Only when Asterix resorts to a ruse and demands the fictitious permit A39 does he manage to turn the resulting confusion to his advantage and finally obtain the required permit A38.

This story shows – albeit with exaggeration – that as soon as an institution reaches a certain size, process automation is essential to handle its various processes efficiently. The step towards automation offers the following advantages compared to manual processing:

  • lower process costs
  • faster process execution
  • fewer process errors
  • better traceability

However, anyone who believes that automatic process execution is relevant only for large companies with corresponding financial resources is wrong. Small and medium-sized enterprises (SMEs) can also benefit from process automation without incurring large costs. By means of so-called “human tasks”, automated processes can be selectively enriched with manual steps. In the process “Onboarding of a new employee”, for example, the individual steps – such as registration with social security, creation of a user account, and handover of key and access card – can still be carried out manually. Process automation, however, ensures that none of these steps is forgotten.
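The guarantee that no step is forgotten needs no heavyweight engine to illustrate. A few lines of plain Java (invented names, not Camunda’s API) show the principle: the process knows all its steps, human or automated, and only counts as finished once each one is confirmed.

```java
// Sketch of an automated process enriched with manual "human tasks":
// the process tracks completion of every step, so none can be forgotten.
import java.util.LinkedHashMap;
import java.util.Map;

public class OnboardingProcess {
    // step name -> done? (insertion order preserved, like a sequence flow)
    private final Map<String, Boolean> steps = new LinkedHashMap<>();

    public OnboardingProcess() {
        steps.put("register with social security", false); // human task
        steps.put("create user account", false);           // automated
        steps.put("hand over key and access card", false); // human task
    }

    public void complete(String step) {
        if (!steps.containsKey(step))
            throw new IllegalArgumentException("unknown step: " + step);
        steps.put(step, true);
    }

    // The process only finishes when every step, manual or automatic,
    // has been confirmed.
    public boolean finished() {
        return !steps.containsValue(false);
    }
}
```

A real engine such as Camunda adds the BPMN model, task lists for the people involved, and persistence on top – but the core guarantee is this checklist semantics.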

Introducing BPM into the enterprise is easier than ever. Lightweight open-source solutions such as Camunda now offer a very low entry barrier. An experienced Java developer can set up a process engine for a pilot project within a very short time and without license costs. Subsequently, in coordination with the business department, the decision is made on which processes to automate. Only after a successful pilot phase does the use of paid features gain importance.

If the prefecture in the Asterix and Obelix story had already relied on BPM, obtaining permit A38 would have been easy. Even in ancient times, BPM would have made life easier!