SOA example application

SOA describes a set of patterns for creating loosely coupled, standards-based business-aligned services that, because of the separation of concerns between description, implementation, and binding, provide a new level of flexibility.

Service Oriented Architecture terminology has spread in recent years, at least among people involved in Information Technology activities. The guidelines this methodology suggests are widely regarded as major success factors across distributed-system domains.
Just as the definition is clear and easy to understand, so its implementation in a real project can be intuitive, concise and elegant.

I have released an application demonstrating how SOA principles can be applied to a small project, making use of EIP (Enterprise Integration Patterns), IoC (Inversion of Control), and a build tool and scripting language such as Groovy.
I analyzed a simple business case: an entertainment provider who wants to dispatch rewards and bonuses to some of its customers, depending on the customer’s service subscriptions.
The process sequence is simple:

It is required to provide an implementation of a RewardsService. The service accepts as input a customer account number and a portfolio containing channel subscriptions. The Customer Status team is currently developing the EligibilityService, which accepts the account number as input.
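To make the contract concrete, here is a minimal sketch of what such a service interface might look like (a sketch only: the method name and types are my assumption, not taken from the released application):

interface RewardsService {
    // account number plus the channel subscriptions in the customer's portfolio;
    // returns the names of the granted rewards
    List<String> calculateRewards(String accountNumber, List<String> portfolioChannels)
}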

I set up an infrastructure to write acceptance tests for this first meaningful feature. This is what could be defined as a “walking skeleton”: a prototype whose essential property is that it can be built, deployed and tested right after being downloaded from GitHub.

RewardService is invoked by the client and calls, in turn, the eligibility service, which in this case is not implemented. As in many real scenarios that rely on external services, this proof of concept treats the eligibility service as a black box of which only the request/response interface is known.

The unit tests simulate the eligibility service’s behavior by mocking the endpoint through the Camel Testing Framework. However, if you want to run the application on your local machine, I set up, in a single line of code, a faux eligibility service that merely returns a positive response:

def alwaysEligible = {exchange -> if(exchange){exchange.getOut().setBody('CUSTOMER_ELIGIBLE')}} as Processor

The entry point is an HTTP RESTful interface built upon Apache CXF, and it is easily set up with a few lines of configuration. CXF is initialized by Spring in the following way:

jaxrs.'server'(id:'restService',address:'http://${http.host}:${http.port}') {jaxrs.'serviceBeans'{ ref(bean:'rewardService')} }

Services are connected by Apache Camel. RewardService contains only a reference to the ESB context, an instance of ProducerTemplate. Such a solution allows a complete separation between the linking system and the business services. The Camel context represents the SOA wiring, and is configured through a DSL as in the example below:

from('direct:rewards').to(eligibilityServiceEndpoint)
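
For illustration, here is a minimal sketch of how these pieces could be wired together and invoked; the 'direct:rewards' endpoint and the alwaysEligible processor come from the snippets above, while the rest is an assumption about the setup:

import org.apache.camel.Processor
import org.apache.camel.builder.RouteBuilder
import org.apache.camel.impl.DefaultCamelContext

def alwaysEligible = { exchange ->
    if (exchange) { exchange.out.setBody('CUSTOMER_ELIGIBLE') }
} as Processor

def context = new DefaultCamelContext()
context.addRoutes(new RouteBuilder() {
    void configure() {
        // the faux eligibility service stands in for the external black box
        from('direct:rewards').process(alwaysEligible)
    }
})
context.start()

// RewardService holds a ProducerTemplate and fires requests like this:
def template = context.createProducerTemplate()
assert template.requestBody('direct:rewards', 'ACCOUNT-123') == 'CUSTOMER_ELIGIBLE'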

Gradle archetype for Spring applications

I am releasing a Gradle archetype useful for creating Java/Groovy applications based on the Spring framework. Of course, it is not a real archetype, because Gradle offers no such mechanism. However, within very few steps you can create, edit and deploy a server application. It should be a most accommodating starting point for deployable software projects.

This release is an attempt to mitigate common issues in development life-cycle phases such as testing, running the application and deploying it in various environments. The archetype leverages Gradle’s flexible build process and a fully featured IoC (Inversion of Control) container, Spring.

This archetype is refined for creating application modules that link services through HTTP, JMS or any other connector type, and can be applied to satisfy these requirements:

  • Automatic testing, building and continuous integration.
  • A different configuration for each environment (development, integration, production).
  • Springframework-based system.
  • Groovy support.


The project consists of:

  • Utility classes for the given Spring context.
  • Grails-like DSL for Spring setup (beans.groovy); see the sketch after this list.
  • Logging and application configuration properties for each environment (development/integration/production).
  • Gradle config file.
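
As a taste of the beans.groovy DSL, a hypothetical wiring could look like this (bean names and classes are illustrative, not taken from the archetype):

beans {
    // a plain service bean with a property set inline
    rewardService(com.example.RewardService) {
        greeting = 'hello'
    }
    // a bean that references another bean
    rewardController(com.example.RewardController) {
        service = ref('rewardService')
    }
}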

Why Gradle?

Problems exist when using Maven in Groovy projects because of the gmaven plugin, which may indicate that it is not ready for the Groovy user community. Gradle, indeed, works perfectly on Groovy projects. It is so concise and flexible that you don’t just have a build system, you have a programming tool. When no plugin with the customized behaviour you need can be found in the registry, you may add custom tasks by writing Groovy code directly in the build.gradle descriptor, as in the sketch below. Gradle is a Swiss army knife for developers.
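
For instance, a custom task dropped straight into build.gradle might look like this (illustrative only, not part of the archetype):

// an ad-hoc task written in plain Groovy inside build.gradle
task stampVersion {
    doLast {
        def stamp = new File(buildDir, 'version.txt')
        stamp.parentFile.mkdirs()
        stamp.text = "${project.name} built on ${new Date()}"
    }
}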

Getting started

  • Run
    git clone git@github.com:gfrison/proto-app.git myApp

    where myApp is the name of your project.

  • Edit the ‘projectName’ property in ‘build.gradle’ with your project name.
  • Add classes, and wire them with Spring in ‘beans.groovy’.
  • You are now ready to test, run and deploy your project through a continuous integration system such as Jenkins.

If you have suggestions or pull requests on GitHub, I, the author, would be happy to consider them.

Software architect mistakes

I think that getting up in the morning and brewing a good cup of coffee is one of the best ways to start the day. You know, the heady fragrance that emanates from the machine pot: it’s delicious. When it’s ready, pour the coffee into a cup, add some sugar, and finally you have it; end of the coffee-making process.

Have you ever thought of designing a coffee-making process with some diagrams, or of doing the same for other banal activities such as taking a shower? Of course not.
For cases less trivial than these, including software project development, a minimal design effort can be quite useful and sometimes needed.
A question often arises: is an architecture design worth the time and effort invested in it? Well, you may answer this question first: are there risks in the project that could be minimized by an early design activity?

The more ambitious and challenging the project is, the higher the number of risks, and the more difficult it is to complete successfully.

How do you identify risks? The easiest place to start is with requirements, in whatever form they take, and to look for things that seem difficult to achieve.
Gathering requirements is fundamental for deciding what to do and how. However, problems sometimes arise at this starting point that lead to the ruin of the project. The following assumptions underestimate this key phase and shake the architect role to its foundations:

1. It’s someone else’s responsibility to do the requirements.

Domains drive the architecture choices, not vice-versa. Requirements can create architecture problems. At the very least, you need to assist the business analysts.

2. I learn the domain as I write the code; incrementally.

While prototyping pieces of software is a way to mitigate engineering risks and figure out the hardest problems, writing code can be a wasteful way to analyze a domain. Rather, it’s very cost-effective to model it in advance.

3. The requirements are already fully understood by the stakeholders.

Clear communication is critical between people and the role of a software architect can be a very difficult one when others don’t understand what you do and why.

4. Domains are irrelevant to architecture choice.

Developers may copy an architecture from a past project, perhaps just following the company standard, while ignoring the motivations behind the previous choices. They are then likely to be unaware of the qualities required in the current project.

5. I already know the requirements.

At the very least the documentation should be in your mind, but designers should use models to amplify their reasoning abilities and unfold aspects, not clearly visible, that carry risks of their own.

Cool DSL with Groovy

Domain model design has never been confused with ‘ease’. From the dawn of its conception, generating executable Unified Modeling Language (UML) diagrams meant sweat and frustration. Generating stubs and skeletons alone led one into the quagmires of Java Enterprise Edition. Yet it is inherently possible, and actually has been for ages, to design a programming language dedicated to solving specific domain problems; effortlessly and quickly.

DSL is not a foreign thing. Developers are steeped in DSLs, even if unwittingly. Frameworks are DSLs. A macro is also a DSL.
And when you write a function? That too is close to a DSL. But functions are often a dirty business: un-standardized and quirky, so that even domain experts face a daunting task when unraveling, and then being put to the task of updating, such rapscallion boilerplate (that is, if they do not themselves compound the problems).
A programmer worthy of his reputation would find it necessary to isolate and clean up all non-domain-specific terms within the DSL. This literally means rooting through and setting aside all the secondary-in-importance setup code, allowing the domain expert to establish or re-establish the primary foundation code.
This is most times done for large money for an end user: the helpless customer.

Given an appropriate DSL that fits their needs, customers could write all of the code that they need themselves, without having to be programmers.

Now let us imagine, or better, take for granted, that a good DSL experience can exist: that given an appropriate and standardized DSL environment, even the customer, with a few simple tools and grasp of concept, can self-write code, gracefully, and specific to their needs, without the cost and hassle of having to become, or running to, programmers. And that is where Groovy comes in.

What’s Groovy? It’s Java as it should be. Easy and intuitive, it offers new features still unknown to its parent (I’m still waiting for release 7), and it comes with the benefits that form the basis of the DSLs we will develop.

A fundamental feature is the MOP (Meta-Object Protocol), the ability to change the properties and the behaviour of objects at runtime. It allows us to respond to method calls that do not exist in the class; in other words, to “pretend” that these methods exist.
Another essential feature to consider is the Closure. It’s the real power of Groovy. The extreme flexibility of this kind of object/method allows us to change its behaviour by replacing its delegate, on the fly.
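
A tiny sketch of what the MOP makes possible; this toy class is my own example, not taken from any framework:

// methodMissing intercepts calls to methods the class does not declare
class DynamicFinder {
    def methodMissing(String name, args) {
        "you called ${name} with ${args.size()} argument(s)"
    }
}
def finder = new DynamicFinder()
assert finder.findByName('Bob') == 'you called findByName with 1 argument(s)'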

Closure delegate. Groovy has three variables inside each closure for defining different classes in its scope: this, owner, and delegate.
The this variable refers to the enclosing class. The owner variable is the enclosing object of the closure. The delegate variable is the same as the owner, unless the delegate is substituted.

class MyClass {
    def closure = {
        println "this class:" + this.class.name
        println "delegate class:" + delegate.class.name
        def nested = {
            println "owner nested class:" + owner.class.name
        }
        nested()
    }
}
def tryme = new MyClass().closure
tryme.delegate = this
tryme()
/*
output:
this class:MyClass
delegate class:Script1
owner nested class:MyClass$_closure1
*/

When a closure encounters a method call that it cannot handle itself, it automatically relays the invocation to its owner object. If this fails, it relays the invocation to its delegate. One of the reasons builders work the way they do is that they are able to assign the delegate of a closure to themselves, as in the sketch below.
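
A minimal builder sketch built on that principle (again a toy example of mine): the builder assigns itself as the closure’s delegate, so the bare method calls inside the closure end up recorded by it.

class TinyBuilder {
    def calls = []
    // any unknown method call becomes a recorded node
    def methodMissing(String name, args) { calls << name }
    def build(Closure spec) {
        spec.delegate = this  // calls unresolved by the owner fall through to this delegate
        spec()
        calls
    }
}
assert new TinyBuilder().build { html(); head(); body() } == ['html', 'head', 'body']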

Powered by Apache Mina

The application being discussed has to behave as follows: it performs client authentication, accomplishes request and response operations, and forwards notifications asynchronously to the client. The Mina framework fulfills these needs because it was created to be as flexible and easy-fitting as possible in an array of scenarios. Mina is structured in several layers that can briefly be broken down into: input parsing, execution of concatenated processes, and serialisation of related responses, if needed. The infrastructure takes care of I/O quirks, while it lets you write your business processes within it and handle session lifecycles through simple callbacks that you can manage in a few lines of code. It is majestic! You will love it.

In addition, I needed something that could help me write an application which implements many commands, preferably in an appealing way, where each piece of service could be isolated from the rest of the application.

The demultiplexer is a device with a single input and many outputs. Its role is to select the output line according to context rules. This approach is also implemented within Apache Mina for writing decoders, handlers and encoders. Apache Mina’s demux package includes: DemuxingIoHandler, DemuxingProtocolDecoder and the DemuxingProtocolEncoder.

Apache Mina Architecture
Filters
Filters are used for several purposes: I/O logging, performance tracking, thread pooling, overload control, blacklists, and so on. In a specific case I once had to configure two filters: one for user authentication, and the other for thread pooling. Once a user is logged in, the authentication filter removes itself from the client session’s filter list, putting in its place one that transforms the raw input into POJOs, as sketched below.
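A sketch of that self-removing authentication filter; the filter name and the credential check are assumptions:

import org.apache.mina.core.filterchain.IoFilter.NextFilter
import org.apache.mina.core.filterchain.IoFilterAdapter
import org.apache.mina.core.session.IoSession

class AuthFilter extends IoFilterAdapter {
    void messageReceived(NextFilter next, IoSession session, Object message) {
        if (isValidLogin(message)) {
            // logged in: this filter leaves the session's chain;
            // a codec filter turning raw input into POJOs would take its place here
            session.filterChain.remove('auth')
        }
        next.messageReceived(session, message)
    }
    private boolean isValidLogin(message) { true }  // stand-in credential check
}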
Decoder
The TCP server must implement a proprietary protocol: a simple ASCII protocol for exchanging commands. Command lengths are not fixed; commands are, however, sequences of characters much like SMTP commands. Therefore the CumulativeProtocolDecoder class is extended, enabling it to gather input up to the end of a command. It is then left to us to parse the bytes and create a simple Java bean. After that operation, the bean is transferred through the filter chain to be executed. A sketch follows.
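A sketch of such a decoder; the newline terminator and the bean parsing are assumptions about the proprietary protocol:

import org.apache.mina.core.buffer.IoBuffer
import org.apache.mina.core.session.IoSession
import org.apache.mina.filter.codec.CumulativeProtocolDecoder
import org.apache.mina.filter.codec.ProtocolDecoderOutput

class CommandDecoder extends CumulativeProtocolDecoder {
    protected boolean doDecode(IoSession session, IoBuffer buf, ProtocolDecoderOutput out) {
        int start = buf.position()
        while (buf.hasRemaining()) {
            if (buf.get() == 10) {  // LF assumed to terminate a command
                int end = buf.position()
                byte[] raw = new byte[end - start]
                buf.position(start)
                buf.get(raw)
                out.write(toCommandBean(new String(raw, 'US-ASCII').trim()))
                return true  // a whole command was consumed; Mina calls again if bytes remain
            }
        }
        buf.position(start)  // no complete command yet: wait for more bytes
        return false
    }
    // stand-in for the real parsing that builds a command bean
    private Object toCommandBean(String line) { line.tokenize() }
}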
Handler
One of the IoHandler implementations drew my attention while I was looking for something resembling the Command pattern. Each message coming from a client means a specific action, and I find it tedious to write a single handler that switches operations on the type of the incoming request. An elegant solution is provided by DemuxingIoHandler, which dispatches each request to the corresponding handler. The handlers have to implement MessageHandler<E>, where the generic type is the class of the object that the DemuxingIoHandler will submit to the handler, and they register themselves by invoking addReceivedMessageHandler(Class<E> type, MessageHandler<? super E> handler), as in the sketch below.
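
Wiring a handler for one command type might look like this; LoginCommand is a hypothetical bean produced by the decoder:

import org.apache.mina.handler.demux.DemuxingIoHandler
import org.apache.mina.handler.demux.MessageHandler

class LoginCommand {  // hypothetical command bean
    String user, password
}

def handler = new DemuxingIoHandler()
handler.addReceivedMessageHandler(LoginCommand, { session, message ->
    // each command type gets its own isolated piece of service logic
    session.write("WELCOME ${message.user}")
} as MessageHandler)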

The asynchronous nature of Mina allows a handful of threads to handle a huge number of clients. A further decoupling between I/O and business logic can be achieved with the ExecutorFilter, which takes charge of the messages after the NioProcessor.
Encoder
The transformation component works in the reverse direction compared to the decoder: it serialises the POJO response coming from the handler process to the output stream, toward the client. As with the handlers, it is possible to register a dedicated encoder for each response object. But why not send the IoBuffer straight from the handler that elaborates the request? Separation of concerns is the answer. The handler receives a command, a Java bean; it is processed, and the handler returns a response object, a POJO. It’s up to the encoder to transform the abstract response into a concrete message in the agreed TCP protocol.
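
The symmetric encoder side could be sketched like this; Response is again a hypothetical bean returned by a handler:

import org.apache.mina.core.buffer.IoBuffer
import org.apache.mina.core.session.IoSession
import org.apache.mina.filter.codec.ProtocolEncoderOutput
import org.apache.mina.filter.codec.demux.MessageEncoder

class Response {  // hypothetical response bean
    int code
    String text
}

class ResponseEncoder implements MessageEncoder<Response> {
    void encode(IoSession session, Response message, ProtocolEncoderOutput out) {
        // turn the abstract response into concrete protocol bytes
        out.write(IoBuffer.wrap("${message.code} ${message.text}\r\n".toString().getBytes('US-ASCII')))
    }
}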

Waltzing with the Tech crunch

The economic crisis we’re currently going through is teaching the Western countries, the Anglo-Saxon ones in particular, some lessons that our grandparents knew pretty well, although it seems we forgot them during the years of this financial bubble. Debt has several pros: it allows building, buying and investing and, when properly managed, it can ensure a safe return and fair growth of the economy. However, debt has an outstanding bad side: it must be paid back. It may be postponed, rolled over, shifted onto other (more or less aware) subjects, and its dreadful effects appear as bankruptcies, credit crunches, real-estate bubbles and recession.

With a view to software applications, a similar observation can be made in terms of ‘state of health’, which points to a family of properties that software should have in order to be easily changeable, so it can respond quickly to the evolution of requirements. Software that doesn’t enjoy good health is software that has become fossilised in its original architecture, kept as-is as long as possible, never revisited in the light of technological innovations and functional updates, but just patched with improvised and unconvincing surgery.

It’s suffering from what could be defined as an inability to bear the debt built up over time; but in this case we’re not dealing with financial debt, this is technical debt. Even though the term ‘technical debt’ sounds strange, it’s related to its financial fellow in many ways, and it is widespread in software development. The saying according to which the economy is based on credit (debt) finds support in the software world as well.

Developers who are reading this know well what I am talking about. You’re assigned to work at such-and-such XYZ firm from next Monday for at least 3 months, and when you start this new task you’ll be instructed about what to do.

The work mainly consists of implementing new features on top of the customer’s rock-solid application, a very remarkable system built some years ago to serve peculiar needs.

So far it has worked well, the owners say; you may just make it worse than it is now. Later on, you have no choice but to agree with them.

Expanding or changing the set of features without refactoring, if the system’s authors didn’t predict such an amendment, looks like seeding a crop without first ploughing the land. The harvest could be lost, couldn’t it?

You’ll be asked to complete your job updating the old system while keeping the structure as it is, avoiding breaking the fragile balance among components.

Just for your information, consultants were called in a few months ago for a similar task. They added such a mess to the code that you have to spend more of your time figuring out what they meant to do than working effectively on new things. Maybe the customer was disappointed by the way they conducted the development, and now it’s your turn.

Thus, in addition to the new enhancements, you should fix what your predecessors did.

This subject is hard to handle and quite unpleasant, in particular when the customer doesn’t want to hear talk of refactoring unless it causes no delay to the delivery, which is almost impossible; so the scheduled task proceeds as expected.

In short, it looks like going for a walk over broken glass swearing you won’t be injured.

To use the post’s subject, it looks like getting into debt again to cover the old one, adding short-term solutions when too many of these have been applied in the past.

Looking back at the past, I’m realising how predominant this kind of intervention is in the amount of work done; I don’t know how long I would have to wait, unemployed, if I wanted to work only on brand-new projects…

Sometimes I’d define myself as a debt collector, and I find it as uncomfortable as a lawyer or a doctor would feel about a criminal prosecution or a rescue surgery: it’s an exploitation of others’ misfortunes. It might be painful, but we make the customer feel better.

Recruiter advisory: Explicit lyrics

“I’m a people person, very personable. I absolutely insist on enjoying life. Not so task-oriented. Not a work horse. If you’re looking for a Clydesdale I’m probably not your man. Like I don’t live to work, it’s more the other way around. I work to live. Incidentally, what’s your policy on Columbus Day?”

You, Me and Dupree (2006) 

This is the interview every recruiter would want at 17:00 on a Friday: fast enough to let you step out soon for the forthcoming weekend, plain and clear in its outcome.

Usually it’s not such an easy job for the head hunter: selecting people and finding the right ones to slot into the pending position can be hard. The challenge is even more difficult when they seek to recruit through controversial methods, which can hardly achieve the wished-for result. As human beings, we have a natural tendency to think that our choices are rational, while we underestimate the effects of the undercurrents that, one way or the other, affect our decisions. We believe we are steady inside a boat in the middle of the sea, even as we are at the mercy of the waves. We are prone to being swayed.

Fortunately, the good recruiter studies books deeply and prepares himself in training workshops to get rid of those diversions, so that he can finally apply scientific methods to his job. Progress in this field can be checked when they act like CIA agents, asking questions such as “What will you do when you grow up?”, “When was the last time you were happy?” and again “What are your strengths and weaknesses?” rather than “Tell me about yourself; describe yourself in one word”. But, dear recruiter, I can’t describe myself in one word, unless it’s both hyphenated and a metaphor.

What comes first, if I’m talking with the interviewer for the job, is that I want to check whether his expectations match mine. I’m talking with you to show my professional skills, not to talk about my pastime hobbies, neither dance nor motorcycles.

All those requests never cease to astonish me with their futility; even if they make sense to the HR department, I’m not going to dig into the matter.


Behaviour interview

A much more effective approach is to conduct very structured interviews where the questions are focused on experience, skills and ability rather than on vague topics.

What recruiters sometimes try to follow is the behavioural-interview assertion, which declares that the most accurate predictor of future performance is past performance in similar situations. That would be enough to frighten any financial mentor, but it has a logical basis for candidates’ evaluation. Perhaps the behaviour of a single man is more easily predictable than a stock index. HR specialists claim that with this method of leading interviews it’s much more difficult to get responses that are untrue to the candidate’s character, because these should be detailed descriptions of past events, or experiences faced at work.

I agree with the idea that past experiences are indicative of how people react under certain circumstances, but I’d put less emphasis on it, first of all because challenges are always different. Whatever technological issue the company is facing right now, it will be far from any candidate’s experience; the recruiter may learn something else about the applicant. What someone has done shows the ability to execute; personality is important, intelligence naturally more so, but improvisation remains the key. Knowledge workers must adapt their knowledge to the situation, but if during the interview the candidate isn’t projected onto a real scenario to show his capabilities and past learned lessons, how can the recruiter actually form an objective opinion?

I was rarely asked for advice or opinions about real technological matters involved in development; is the reason that the recruiter doesn’t know much about what the new employee is going to solve?

Maybe sometimes interviews are not used for hiring people, but just to gather information on candidates, to create statistics on salaries and skills, and to estimate how long it takes to search the market for a special kind of professional, I guess.

Money

Recruiters may discard people based on salary; of course they can, especially if the point is that people are interchangeable, low-cost and easily replaceable like a natural resource. Usually this happens when the target candidate is junior.

Salary is a more complex issue the more senior the target is. Seniors want to discuss the context of the job before they ask about money. Answering the salary question in a phone screen, or in an interview before building rapport, drops me into disappointment, as the recruiter is telling me “We want the cheapest for your position”. That’s OK, but do you want to save money before you know what I have to offer? Or else, why are you looking for someone senior?

HR people usually match your CV keywords (better known as buzzwords) against their table axes to define your salary box, framing candidates in a very simple way. Although the salary offer is equal among peers, a fascinating metric highlights that 5% of programmers are 20x more productive than the other 95%. Now, let me know which section of this statistic is, in your opinion, discarded first.

Quiz

I don’t see anything wrong with multiple-choice interview questions; sometimes they are as fun as filling in crosswords, but some other times these questions upset me because I realize I have forgotten some exponential functions since school… Damn!

Although it would be helpful to filter out applicants without a basic education, in several years of work I have never needed exponential calculus to meet a business requirement.


What's the last row printed by this loop?
for(int i=0; i<30; i++){
   System.out.println("line: "+i);
}
a) line: 29
b) line: 30
c) none of the above


Requests like these end up annoying prospective employees; the company loses appeal, along with any willingness of the candidate to get hired by it.

“This is a newbie question”, I said; the recruiter answered that every technical employee in the firm had filled in such a questionnaire. Really? I don’t think any of the experienced programmers I know would waste time crisscrossing questions like that in a job interview unless they were hopelessly unemployed. And if the hiring manager is looking for an experienced developer, why ask these first-level programming questions? If the recruiter can’t read the resume, why would a hiring manager?

MDA on fire off the Shoulder of Orion

I must admit, my frustration has increased over the years. I mean, interacting with the computer in terms of boolean, long, void. I’d rather sit on a sofa and describe the program by voice or, better, get into a 3D virtual reality and cook software like a lunch meal in the kitchen, playing with spheres and arrows to design all the program’s doodles.
I can’t picture it as a possible scenario in the near future; as with science-fiction movies, we will have to wait much longer than the directors or writers envisaged to see even a minor part of the technological developments imagined so far actually implemented.
It’s the secret ambition to get free from textual code that has me debating how to easily abstract the definition of information systems. In some ways, I’ve managed to do something related to this, in a restricted domain.

When Model Driven Architecture turns out right
Once, during an interview, I was asked why I didn’t apply MDA to all the software projects I have led. The question could be taken as a starting point for discussing misconceptions about MDA and, more generally, about modelling and UML. Unfortunately these discussions are hindered by a phenomenon that a famous observer of human events (Mark Twain) revealed, and that I repropose here in IT terms:

People commonly use UML like a drunk uses a lamp post: for support rather than illumination.

The initial costs of MDA are pretty high; the return on investment starts when automatic code generation begins. The models are built on the meta-model, which defines the semantics of the system. Afterwards, the models are turned into code or other resources ready to be installed into the real system.
The initial development effort is focused on the meta-model and on the transformation tools. The investment is rewarded by the automatic transformations that replace the repetitive coding work of programmers; the deeper the amortization, the more profitable the investment.
The ROI increases with the number of model instances built, as well as with the simplicity (in other words, the ease of implementation) of the metamodel and the transformation tools. Where shall I apply this approach for best results? In a real-time video application? In a powerful compression API? I don’t think so. I guess high values of this coefficient would be found in SOA systems.
MDA is suitable for service domains, like banking middleware, where you may amortize the modelling system over hundreds of services, with many data structures and flow descriptors all conforming to a few abstract structures: the metamodel, actually.

Models are not code
Do you have to adopt UML to design metamodels? Definitely not. You may define simple data classes in any textual format. The matter arises when the metamodel grows in complexity and size, and when many aspects of the domain are schematized into it. Then you have to face UML: either you reinvent it, or you use it.
Perhaps tracing circles and arrows is more embarrassing than typing on a keyboard, but it can be necessary when structures become hinged and interrelated. Try to join a class diagram with a sequence diagram, then enclose it all in a composite for interaction with external parts, and do it without formal, conventional visual patterns and… let me know!
UML is complete and exhaustive; it’s so generalist it can be applied to describe any software model, but for describing, not for coding.
Many people mistake UML for a programming language. Wrong. It’s a tool for representing a system, a structure, a flow. As mathematics aims to formulate conjectures among countable entities, UML offers a way to define abstract entities. Such a high-level language is a mere conceptual schema: it defines components and services, not programs ready to run.

MDA layers
So what do you do with a picture full of bubbles and arrows? Is it enough to design some diagrams, push a button and voilà: you get a working system that fits your requirements? Nobody believes that, nor do I. The object model is unaware of the underlying system and of its implicit matters. As we know, the model declares the differences in structure and behaviour between one service and another; for all the remaining aspects the system applies common platform-specific behaviours. For instance, if you want to enclose a service in a transaction you may set up a ‘transaction’ stereotype in your own profile metamodel, which will be transformed into a Java annotation or XML attribute and then properly interpreted by the server framework. Once the MDA machinery is ready, one designs the models and transforms them into hardcore resources, the mythic code bullets, finally deployed onto the server.
This may appear too simple, and in order to ward off prejudicial comments I’ll tell you that real applications are much more complex than this simple vision.
You can’t update code bullets by hand, because the changes will be lost the next time you generate them automatically. Many exceptions have to be catered for, allowing, for example, generated code to be updated. I’ve used merging APIs like JMerge, and I found them useful to enrich the code without discontinuity between the model and the generated code.

7 steps to MDA revolution

I didn’t believe that such a successful project was so rare an event in the IT industry; that’s why I’ve never caught another chance to apply the learned lessons again. I thought that the experience accrued with Model Driven Architecture would be reusable in other circumstances, though I’ve never seen concepts such as executable UML or MDA either applied or mentioned in the commitments I’ve pursued since.
The idea for this project wasn’t conceived by external consultants thirsting to sell their cool technology; instead, it was born and grew up inside the development team. The architecture transition was gradual and, little by little, as new automation scenarios penetrated our excited minds, we moved as many development processes as possible under the MDA framework.
Despite my early impressions while considering whether to undertake the project, the upper management embraced it and laid down investments, counting on the benefits that this new approach would bring to development.
What is difficult to change is the modus operandi of a 300-employee company that offers banking services and applications, engaged by default in one of the most conservative fields in technology and development methodologies. It was a significant jump in service development and, as the PM remarked, “We are developing like dinosaurs; don’t you know what the hell happened to them?”, so the way to MDA was traced.

The issues we faced with the introduction of modelling notions could be defined as practical contingencies rather than theoretical or philosophical reasons, foremost the mess in the business layer. Over time it bred reliability and maintenance weaknesses, even security holes, which sounded very bad in a company with plenty of banks as customers.
The hundreds of use cases developed by dozens of engineers turning over throughout the months in the Java development area had reached critical mass, enough to trigger an explosion/implosion of the whole system. Meanwhile, the applications could stand up only at high maintenance costs and with lazy deliveries, due to the difficulty of integrating incoming services with the underlying system.
The application layer managed the data flows between clients at the top and feeds and legacy information systems at the bottom. On their way, the flows were affected by several mixes of business process rules hardcoded in obscure Java classes. Unfortunately, most of those shaky details were lost, because the development policy didn’t complain about missing documentation, and it was damned annoying to go back and take over old artefacts for maintenance or rule updates. Only skilful programmers could extricate the balled-up code. The critical mass had to drop and be brought to lower temperatures quickly. New developments and dozens of incoming features were planned, so a deep refactoring was a must; it could wait no more.

What did the domain layer implementation that surfaced the highlighted problems look like?
The developers’ effort was mainly focused on creating Java classes implementing a Command and defining the service to the framework through an XML descriptor. The input and output of such a command was a raw DOM argument, which was parsed to extract the input data needed by the business transaction; most of the coding concerned parsing it and filling the service’s response, which was a raw XML document too. I don’t think it is agreeable to spend most of the development effort merely on managing input/output data and mapping, but this was the daily job.
Applying MDA takes time; it was a one-year evolution, and it can be summarized as:

  1. XSD barriers. It was necessary to set some boundaries for developers, in order to get a minimum of control over the data flows. Each service had its own formal validation of input/output data, though no restrictions were placed on how to implement the services. Never again elements or attributes not defined in advance by commitments.
  2. Pojo. Replace the raw document with a simple POJO as the argument of the call-back methods; this couples formal validation with an easy approach to data manipulation. The XML-Java binding isn’t a hurdle: it is automatic, and many available libraries can accomplish this step.
  3. First hints with EMF. XSD files are models for XML data; EMF is a Java implementation of MOF, a general abstraction for writing all sorts of models. I won’t linger over it now, but it represented a jump towards service modelling. EMF is an open-source library enclosed in the Eclipse platform, easy to use and customizable; it aims to separate the abstract model from the ground.
  4. Choice of technology. Playing with EMF opened new horizons on modelling facilities. Hooked by this methodology for designing SOA applications, I realized EMF was not enough: UML (whose core principles are inherited from MOF) fits much better my purpose of designing the object model, defining the process flow and the user experience. UML offers diagrams that you can join together; static and dynamic models can describe most of an application’s structure and behaviour.
  5. Executable UML. What do you do with this bunch of diagrams if you can’t transform them into real artefacts and plug them into your SOA framework? Not much: keeping UML diagrams without related transformations and executions is merely fine for documentation, not much more than that. At that time the company joined the Rational beta program and I started to develop Eclipse-compliant plugins which leveraged the power of the UML2 Eclipse implementation.
  6. How to define data mapping? One of the main obstacles encountered was the data mapping between two different structures. It happens when you need to connect two or more components inside a service call, each of which has a different data structure. In this case UML doesn’t provide any help, and you have to customize the model with special stereotypes and profiles.
  7. Sequence and state diagrams. Class diagrams were used to generate Java classes, XSD files, COBOL copybooks. Sequence diagrams, on the other hand, describe the flow of processes and their business rules, even conditional instructions, which may be transformed into BPEL or custom service descriptors. State diagrams show their benefits in modelling the user experience and the steps to complete an operation; they easily track the state of sessions and can be transformed into the MVC system, as well as into whatever rich-client forms.

Have your say.

Chain of failures on blocking threads

I came back to Milano a little while ago and I’ve bumped into an API implementation in this new job. It will be a library that aims to interact with a remote application through a simple text-based protocol.
The typical process is a sequence of authorization, session initialization, command processing and session disposal, each step enclosed in an atomic request/response interaction. The simplest and most immediate approach is to write the protocol stubs and manage them through simple methods that elaborate such commands at a low level, handling TCP sockets and the client/server handshake with synchronous calls.
Sometimes the simplest way is the best, but not this time, especially within multi-layer structured systems, where every component depends on many others and any of them can fail.
This task rang an alarm bell for me, because of a recent project that looked like this one; I can still remember the effects of hangs and missed responses in a SOA context. Fortunately, the event happened during a load test:

The application was a client interacting with Openfire through XMPP. The investigation uncovered a bug that caused a deadlock in a connection pool under certain conditions. The consequences were easily predictable: fast resource exhaustion, soon causing an application breakdown. The application server was gone, but the client side was also unrecoverable, since the unresilient application architecture didn’t foresee hung requests.

What is unacceptable is the chain of failures that a problem like this can disseminate along the process path. What about combined systems where one side does not expect the other side to hang when it stops responding?
Dominoes are a pleasant show: you watch all the pieces tracing doodles during their fall. It’s funny, but only as long as it doesn’t look like your system at work.

Don’t play dominoes, be skeptical (and use the concurrent package)

Blocking threads may happen every time you attempt to get resources out of a connection pool, deal with caches or registered objects, or make calls to external systems, as in the unfortunate experience above. I mean: be distrustful of each component you inquire, decoupling systems as much as necessary to skirt failure propagation. If your component is properly protected from its neighbours, the probability of failure clearly drops.
What does this mean in practice?
If you’re dealing with sockets, you’re unaware of the peer’s status except when you send or receive bytes; so check connectivity by polling with fake sends, and use setSoTimeout(int timeout) to prevent blocking reads.
However, I find it much more effective to isolate the whole business unit in a single time-boxed job, because delays may also come from huge responses, such as unbounded result sets or file fetching.
If you allow clients to set timeouts, the request thread quits the operation when the call is not completed in time. Easy?
Concurrent programming is hard, it requires high skills, and rolling your own machinery is discouraged: don’t reinvent the wheel. The java.util.concurrent package helps you craft your code with timeout controls, as in the following example, where I’m encapsulating a unit of work (a login) in an ExecutorService.

public class Login implements Callable {

The Login action implements the Callable interface; unlike Runnable, it may throw checked exceptions when executed.

ExecutorService exe = Executors.newSingleThreadExecutor(); // any executor will do
Login login = new Login(user, password);
Future<?> res = exe.submit(login);
try {
  res.get(commandTimeout, TimeUnit.MILLISECONDS);
}
catch (TimeoutException e) {
  log.error("login timed out", e);
}
catch (InterruptedException e) {
  Thread.currentThread().interrupt(); // restore the interrupt flag
}
catch (ExecutionException e) {
  log.error("error on login", e);
}
finally {
  res.cancel(true); // interrupt the task if it is still running
}

The tip shows how to launch the Callable through ExecutorService.submit and wait for the task’s end through Future.get(long timeout, TimeUnit timeUnit). By specifying the timeout value, either the operation completes in time or a TimeoutException is thrown.

N.B.: when the timeout occurs, the ExecutorService doesn’t take care of the still-running thread on its own, so don’t forget to execute Future.cancel(true) in the finally block.
