Giancarlo Frison Discontinuities

Cool DSL with Groovy

Domain model design has never been synonymous with ease. From the dawn of its conception, generating executable Unified Modeling Language (UML) diagrams has meant sweat and frustration, and generating stubs and skeletons alone could lead you into the quagmires of Java Enterprise Edition. Yet it is possible, and has been for ages, to design a programming language dedicated to solving specific domain problems, effortlessly and quickly.

A DSL is not a foreign thing. Developers are steeped in DSLs, even if unwittingly: a framework is a DSL, a macro is a DSL, and even an ordinary function can be one. But such code is often a dirty business, non-standardized and quirky, so that even domain experts face a daunting task when they have to unravel and then update that boilerplate (if they do not compound the problems themselves). A programmer worth their reputation will isolate and clean up every term that is not specific to the domain, rooting through the code and setting aside the secondary setup logic so that the domain expert can concentrate on the parts that matter. This work is usually done at great cost to the end user, the helpless customer.

Given an appropriate DSL that fits their needs, customers could write all of the code that they need themselves, without having to be programmers.

Now let us imagine, or better, take for granted, that a good DSL experience can exist: that given an appropriate and standardized DSL environment, even customers, with a few simple tools and a grasp of the concepts, can gracefully write code specific to their needs, without the cost and hassle of becoming (or hiring) programmers. That is where Groovy comes in.
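
To give an idea of the target, here is a minimal sketch of what such a customer-facing DSL could look like in Groovy; the Order class and the order/product/quantity vocabulary are invented here purely for illustration:

class Order {
  String product
  int quantity
  def product(String name) { product = name }
  def quantity(int q) { quantity = q }
}

def order(Closure spec) {
  def o = new Order()
  spec.delegate = o
  spec.resolveStrategy = Closure.DELEGATE_FIRST
  spec()
  return o
}

// what the customer writes: plain domain vocabulary, no boilerplate
def purchase = order {
  product 'espresso machine'
  quantity 3
}
assert purchase.quantity == 3

The block between the braces is the only part the domain expert ever sees; everything else is the setup code that the programmer has isolated once and for all.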

What’s Groovy?

It’s Java as it should be. Easy and intuitive, it offers features still unknown to its parent (I’m still waiting for release 7), and it comes with exactly the benefits that form the basis of the DSLs we will develop.

A fundamental feature is the MOP (Meta-Object Protocol), the ability to change the properties and behaviour of objects at runtime. It allows us to respond to method calls that do not exist in the class, in other words to “pretend” that these methods exist. The other essential ingredient is the closure, the real power of Groovy. The extreme flexibility of this kind of object/method allows us to change its behaviour on the fly by replacing its delegate.
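
As a small taste of the MOP side, methodMissing is the standard Groovy hook for intercepting calls to methods that do not exist; the DynamicFinder class and its findBy naming convention below are made up for this example:

class DynamicFinder {
  def methodMissing(String name, args) {
    if (name.startsWith('findBy')) {
      def property = name[6..-1].toLowerCase()
      return 'querying by ' + property + ' = ' + args[0]
    }
    throw new MissingMethodException(name, DynamicFinder, args)
  }
}

def finder = new DynamicFinder()
// findByName() is never declared, yet the call is handled at runtime
assert finder.findByName('Giancarlo') == 'querying by name = Giancarlo'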

Closure delegate

Groovy has three variables inside each closure for referring to different objects in its scope: this, owner, and delegate. The this variable refers to the enclosing class instance. The owner is the enclosing object of the closure, which may be either the enclosing class or an enclosing closure. The delegate is the same as the owner, unless it is substituted.

class MyClass {
  def closure = {
    println "this class:"+this.class.name
    println "delegate class:"+delegate.class.name
    def nested = {
      println "owner nested class:"+owner.class.name
    }
    nested()
  }
}
def tryme = new MyClass().closure
tryme.delegate = this
tryme()

//output:
//this class:MyClass
//delegate class:Script1
//owner nested class:MyClass_closure1

When a closure encounters a method call that it cannot handle itself, it automatically relays the invocation to its owner object; if that fails, it relays it to its delegate. One of the reasons builders work the way they do is that they can assign the delegate of a closure to themselves.
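
The following toy builder shows that mechanism in isolation; TagBuilder is a made-up class (Groovy ships real builders such as groovy.xml.MarkupBuilder), but the delegate assignment inside methodMissing is exactly the trick described above:

class TagBuilder {
  StringBuilder out = new StringBuilder()
  def methodMissing(String name, args) {
    out << "<$name>"
    args.each { arg ->
      if (arg instanceof Closure) {
        arg.delegate = this                    // the builder assigns itself as delegate
        arg.resolveStrategy = Closure.DELEGATE_FIRST
        arg()
      } else {
        out << arg
      }
    }
    out << "</$name>"
  }
}

def builder = new TagBuilder()
builder.html {
  body {
    p 'hello'
  }
}
println builder.out

//output:
//<html><body><p>hello</p></body></html>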

Powered by Apache Mina

The application being discussed has to behave as follows: it performs client authentication, carries out request and response operations, and forwards notifications asynchronously to the client. The Mina framework fulfills these needs because it was created to be as flexible as possible across a wide array of scenarios. Mina is structured in several layers that can briefly be broken down into input parsing, execution of chained processes, and serialisation of the related responses, when needed. The infrastructure takes care of the I/O quirks and of the session lifecycle through simple callbacks, while letting you write your business logic in a few lines of code. It is majestic! You will love it.

In addition, I needed something that could help me write an application implementing many commands, preferably in an appealing way, where each piece of service could be isolated from the rest of the application.

The demultiplexer is a device with a single input and many outputs; its role is to select the output line according to context rules. The same approach is implemented within Apache Mina for writing decoders, handlers and encoders. Apache Mina’s demux package includes DemuxingIoHandler, DemuxingProtocolDecoder and DemuxingProtocolEncoder.

Filters

Filters are used for several purposes: I/O logging, performance tracking, thread pooling, overload control, blacklists, and so on. In one specific case I had to configure two filters: one for user authentication, and the other for thread pooling. Once a user is logged in, the authentication filter removes itself from the client session’s filter chain and is replaced by one that transforms the raw input into POJOs.
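
A possible sketch of such a self-removing filter with Mina 2 follows; AuthenticationFilter, the 'user' attribute and the login check are hypothetical, only IoFilterAdapter, IoSession and the filter-chain calls belong to Mina itself:

import org.apache.mina.core.filterchain.IoFilterAdapter
import org.apache.mina.core.filterchain.IoFilter.NextFilter
import org.apache.mina.core.session.IoSession

class AuthenticationFilter extends IoFilterAdapter {

  void messageReceived(NextFilter nextFilter, IoSession session, Object message) {
    if (isValidLogin(message)) {                 // placeholder credential check
      session.setAttribute('user', message)
      // from now on this session skips authentication entirely;
      // the real application also installed the codec filter in its place here
      session.getFilterChain().remove(this)
      nextFilter.messageReceived(session, message)
    } else {
      session.close(true)                        // reject unauthenticated traffic
    }
  }

  private boolean isValidLogin(Object message) {
    message.toString().startsWith('LOGIN')       // stand-in for the real check
  }
}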

Decoder

The TCP server must implement a proprietary protocol: a simple ASCII protocol for exchanging commands. The commands have no fixed length; they are character sequences much like those of SMTP. Therefore, the CumulativeProtocolDecoder class is extended so that it gathers input until the end of a command. It is then up to us to parse the bytes and create a simple Java bean. After that, the bean is passed along the filter chain to be executed.
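
Assuming, for the sake of the sketch, that every command is a single ASCII line terminated by a newline, and inventing a Command bean to carry the parsed result, the decoder could look roughly like this:

import org.apache.mina.core.buffer.IoBuffer
import org.apache.mina.core.session.IoSession
import org.apache.mina.filter.codec.CumulativeProtocolDecoder
import org.apache.mina.filter.codec.ProtocolDecoderOutput

// hypothetical bean handed over to the handlers
class Command {
  String name
  List<String> arguments
}

class CommandDecoder extends CumulativeProtocolDecoder {

  private static final byte EOL = 0x0A           // '\n', the assumed end-of-command marker

  protected boolean doDecode(IoSession session, IoBuffer buffer, ProtocolDecoderOutput out) {
    int eol = indexOf(buffer, EOL)
    if (eol < 0) {
      return false                               // no complete command yet, keep cumulating
    }
    byte[] raw = new byte[eol - buffer.position() + 1]
    buffer.get(raw)                              // consume up to and including the delimiter
    def tokens = new String(raw, 'US-ASCII').trim().tokenize(' ')
    if (tokens) {
      out.write(new Command(name: tokens.head(), arguments: tokens.tail()))
    }
    return true                                  // another command may already be buffered
  }

  private static int indexOf(IoBuffer buffer, byte delimiter) {
    for (int i = buffer.position(); i < buffer.limit(); i++) {
      if (buffer.get(i) == delimiter) return i
    }
    return -1
  }
}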

Handler

One of the IoHandler implementations drew my attention while I was looking for something resembling the Command pattern. Each message coming from a client corresponds to a specific action, and I find it tedious to write a single handler that switches operations on the type of the incoming request. An elegant solution is provided by DemuxingIoHandler, which dispatches each request to the corresponding handler. The handlers implement MessageHandler, whose generic type parameter is the class of the object the DemuxingIoHandler will submit to them, and register themselves by invoking:

addReceivedMessageHandler(Class<E> type, MessageHandler<? super E> handler)

The asynchronous nature of Mina allows a huge number of clients to be handled by a handful of threads. A further decoupling between I/O and business logic can be achieved with the ExecutorFilter, which takes charge of the messages after the NioProcessor.
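
Putting the pieces together, a hedged sketch of the handler side might be the following; LoginRequest, LoginResponse, LoginHandler and the port number are invented, while DemuxingIoHandler, MessageHandler, ExecutorFilter and NioSocketAcceptor come from Mina:

import java.net.InetSocketAddress
import org.apache.mina.core.session.IoSession
import org.apache.mina.filter.executor.ExecutorFilter
import org.apache.mina.handler.demux.DemuxingIoHandler
import org.apache.mina.handler.demux.MessageHandler
import org.apache.mina.transport.socket.nio.NioSocketAcceptor

// hypothetical request/response beans for one command of the protocol
class LoginRequest { String user; String password }
class LoginResponse { boolean granted }

// one handler per command type; DemuxingIoHandler routes by message class
class LoginHandler implements MessageHandler<LoginRequest> {
  void handleMessage(IoSession session, LoginRequest request) {
    boolean ok = request.password == 'secret'     // placeholder check
    session.write(new LoginResponse(granted: ok)) // travels back through the encoder
  }
}

def ioHandler = new DemuxingIoHandler()
ioHandler.addReceivedMessageHandler(LoginRequest, new LoginHandler())

// assumed server wiring: the protocol codec filter from the decoder/encoder sketches
// would be added first; the ExecutorFilter then hands decoded messages to a thread
// pool after the NioProcessor has done its work
def acceptor = new NioSocketAcceptor()
acceptor.filterChain.addLast('threads', new ExecutorFilter())
acceptor.handler = ioHandler
acceptor.bind(new InetSocketAddress(9123))        // assumed port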

Encoder

The transformation component works in reverse with respect to the decoder: it serialises the POJO response coming from the handler onto the output stream, toward the client. As with the handlers, it is possible to register a dedicated encoder for each response object. But why not send the IoBuffer straight from the handler that processes the request? Separation of concerns is the answer. The handler receives a command as a Java bean, processes it, and returns a response object, another POJO. It is up to the encoder to transform that abstract response into a concrete message of the agreed TCP protocol.
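
Under the same assumptions as the handler sketch above (LoginResponse being the hypothetical POJO returned by the handler), a dedicated encoder registered on the DemuxingProtocolEncoder might look like this:

import org.apache.mina.core.buffer.IoBuffer
import org.apache.mina.core.session.IoSession
import org.apache.mina.filter.codec.ProtocolEncoderOutput
import org.apache.mina.filter.codec.demux.DemuxingProtocolEncoder
import org.apache.mina.filter.codec.demux.MessageEncoder

class LoginResponse { boolean granted }           // the hypothetical bean from the handler sketch

class LoginResponseEncoder implements MessageEncoder<LoginResponse> {
  void encode(IoSession session, LoginResponse response, ProtocolEncoderOutput out) {
    // serialise the POJO to the assumed ASCII line protocol
    String line = (response.granted ? 'LOGIN OK' : 'LOGIN KO') + '\r\n'
    out.write(IoBuffer.wrap(line.getBytes('US-ASCII')))
  }
}

def encoder = new DemuxingProtocolEncoder()
encoder.addMessageEncoder(LoginResponse, new LoginResponseEncoder())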

Waltzing with the Tech Crunch

The economic crisis we are currently going through is teaching Western countries, the Anglo-Saxon ones in particular, some lessons that our grandparents knew well, although it seems we forgot them during the years of the financial bubble. Debt has several upsides: it allows building, buying and investing and, when properly managed, can ensure a safe return and fair growth of the economy. However, debt has one outstanding downside: it must be paid back. It can be postponed, rolled over or shifted onto other (more or less aware) subjects, and its dreadful effects show up as bankruptcies, credit crunches, real estate bubbles and recessions.

Looking at software applications, a similar observation can be made about their ‘state of health’, a family of properties that software should have in order to be easy to change, so that it can respond quickly to evolving requirements. Software that does not enjoy good health is software fossilised in its original architecture, kept as close as possible to how it was, never revisited in the light of technological innovations or functional updates, just patched with improvised and unconvincing surgery.

Such software suffers from what could be described as an inability to bear the debt built up over time; but in this case we are not dealing with financial debt, this is technical debt. Even though the term ‘technical debt’ sounds strange, it is related to its financial counterpart in many ways, and it is widespread in software development. The saying that the economy runs on credit (debt) finds support in the software world as well.

Developers reading this know well what I am talking about. You are assigned to work at some XYZ firm from next Monday for at least three months, and when you start the new engagement you will be told what to do.

The work mainly consists of implementing new features on top of the customer’s rock-solid application, a very remarkable system built some years ago to serve peculiar needs.

“So far it has worked well,” the owners said, “you can only make it worse than it is now.” Later on, you have no choice but to agree with them.

Expanding or changing the feature set without refactoring, when the system’s authors did not anticipate such a change, is like sowing a crop without ploughing the land first. The harvest could be lost, couldn’t it?

You will be asked to complete your job by updating the old system while keeping the structure as it is, avoiding breaking the fragile balance among its components.

Just for your information, consultants were called in a few months ago for a similar task. They added such a mess to the code that you have to spend more of your time figuring out what they meant to do than working effectively on new things. Maybe the customer was disappointed by the way they conducted the development, and now it is your turn.

Thus, in addition to the new enhancements, you must also fix what your predecessors did.

This subject is hard to handle and quite unpleasant, in particular when the customer does not want to hear about refactoring unless it causes no delay to the delivery, which is almost impossible, so the scheduled task proceeds as planned.

In short, it is like going for a walk over broken glass while swearing you will not get injured.

To use this post’s metaphor, it is like taking on new debt to cover the old one, adding yet more short-term solutions when too many of them have already been applied in the past.

Looking back, I realise how predominant this kind of intervention has been in the work I have done; I do not know how long I would have to wait, unemployed, if I wanted to work only on brand-new projects…

Sometimes I would define myself as a debt collector, and I find it as uncomfortable as a lawyer facing a criminal prosecution or a surgeon performing emergency surgery: it is making a living out of others’ misfortunes. It may be painful, but we do make the customer feel better.

Recruiter advisory Explicit lyrics

“I’m a people person, very personable. I absolutely insist on enjoying life. Not so task-oriented. Not a work horse. If you’re looking for a Clydesdale I’m probably not your man. Like I don’t live to work, it’s more the other way around. I work to live. Incidentally, what’s your policy on Columbus Day?”

You, Me and Dupree (2006)

This is the interview every recruiter would want at 17:00 on a Friday: fast enough to let you step out in time for the coming weekend, and plain and clear in its outcome.

Usually it is not such an easy job for the head hunter: selecting people and finding the right ones to slot into the open position can be hard. The challenge becomes even more difficult when they recruit through questionable methods, which can hardly achieve the desired result. As human beings, we have a natural tendency to think that our choices are rational, while we underestimate the undercurrents that, one way or another, affect our decisions. We believe we are standing steady in a boat in the middle of the sea, even though we are at the mercy of the waves. We are prone to being swayed.

Fortunately, the good recruiter studies books thoroughly and prepares in training workshops to get rid of those biases, so that he can finally apply scientific methods to his job. Progress in this field can be observed when recruiters act like CIA agents, asking questions such as “What will you do when you grow up?”, “When was the last time you were happy?” and “What are your strengths and weaknesses?” rather than “Tell me about yourself, describe yourself in one word”. But, dear recruiter, I can’t describe myself in one word, unless it’s both hyphenated and a metaphor.

What comes first, when I’m talking with an interviewer about a job, is checking whether his expectations match mine. I’m talking with you to show my professional skills, not to talk about my hobbies, be it dancing or motorcycling.

Those requests never cease to astonish me with their futility; even if they make sense to the HR department, I’m not going to dig into the matter.

Behavioural interview

A much more effective approach is to conduct very structured interviews where the questions are focused on experience, skills and ability rather than on vague generalities.

What recruiters sometimes try to follow is the behavioural interview’s core assertion: that the most accurate predictor of future performance is past performance in similar situations. That would be enough to frighten any financial advisor, but it has a logical basis for evaluating candidates; perhaps the behaviour of a single person is easier to predict than a stock index. HR specialists claim that with this way of conducting interviews it is much harder to get responses that are untrue to the candidate’s character, because the answers have to be detailed descriptions of past events and experiences faced at work.

I agree that past experiences are indicative of how people react under certain circumstances, but I would put less emphasis on that, first of all because challenges are always different. Whatever technological issue the company is facing right now will be far from any candidate’s experience, so the recruiter should try to learn something else about the applicant. What someone has done shows the ability to execute; personality is important, intelligence naturally more so, but improvisation remains the key. Knowledge workers must adapt their knowledge to the situation, and if during the interview the candidate is not projected onto a realistic scenario where they can show their capabilities and the lessons they have learned, how can the recruiter actually form an objective opinion?

I have rarely been asked for advice or opinions about the real technological matters involved in development. Is that because the recruiter does not know much about what the new employee is going to solve?

Maybe sometimes interviews are not used for hiring people at all, but just to gather information on candidates, to build statistics on salaries and skills, and to estimate how long it takes to find a particular kind of professional on the market, I guess.

Money

Recruiters may discard people based on salary, of course they can, especially when the assumption is that people are interchangeable, low-cost and easily replaceable, like a natural resource. Usually this happens when the target candidate is a junior.

Salary becomes a more complex issue the more senior the target is. Seniors want to discuss the context of the job before they talk about money. Being asked the salary question in a phone screen, or in an interview before any rapport has been built, drops me into disappointment: the recruiter is effectively telling me “we want the cheapest person for your position”. That’s fine, but do you really want to save money before you know what I have to offer? And if so, why are you looking for someone senior?

HR people usually match your CV keywords (better known as buzzwords) against the axes of their tables to define your salary box, framing candidates in a very simple way. Yet although the salary offer is equal among peers, a fascinating metric claims that 5% of programmers are 20x more productive than the other 95%. Now, tell me which part of that statistic gets discarded first.

Quiz

I don’t see anything wrong with multiple-choice interview questions; sometimes they are as fun as filling in crosswords, but at other times they upset me, as I realise I have forgotten some of the exponential functions I learned at school… Damn!

Although it may be helpful to filter out applicants without a basic education, in several years of work I have never needed exponential calculus to meet a business requirement.

What is the last line printed by this snippet?

for(int i=0; i<30; i++){
   System.out.println("line: "+i);
}

a) line: 29
b) line: 30
c) none of the above

Requests like these end up annoying prospective employees; the company loses its appeal, together with any willingness the candidate had to be hired by it.

This is a newbie question, and yet the recruiter told me that every technical employee in the firm had filled in such a questionnaire. Really? I don’t think any of the experienced programmers I know would waste time ticking off questions like that in a job interview unless they were hopelessly unemployed. And if the hiring manager is looking for an experienced developer, why ask these first-level programming questions? If the recruiter can’t read the resume, why would a hiring manager?

MDA on fire off the Shoulder of Orion

I must admit, my frustration has grown over the years. I mean interacting with the computer in terms of boolean, long and void. I would rather sit on a sofa and describe the program by voice or, better, step into a 3D virtual reality and cook up software like a lunch in the kitchen, playing with spheres and arrows to sketch the whole program. I can’t picture that as a plausible scenario in the near future; as with science-fiction movies, we will have to wait far longer than the directors and writers envisaged to see even a small part of the imagined technology actually implemented. It is out of this secret ambition to break free from textual code that I keep debating how to abstract the definition of information systems more easily. In some ways, I have managed to do something along these lines, in a restricted domain.

When Model Driven Architecture turns out right

Once, during an interview, I was asked why I had not applied MDA to all the software projects I had led. The question deserves to be taken as a starting point for discussing misconceptions about MDA and, more generally, about modeling and UML. Unfortunately these discussions are hindered by a phenomenon that a famous observer of human affairs (Mark Twain) described, and which I repropose here in IT terms:

People commonly use UML like a drunk uses a lamp post: for support rather than illumination.

The initial costs of an MDA approach are pretty high; the return on the investment starts when automatic code generation begins. The models are built on top of a meta-model, which defines the semantics of the system. The models are then transformed into code or other resources ready to be installed into the real system. The initial development effort is concentrated on the meta-model and on the transformation tools. The investment is repaid by the automatic transformations, which replace the repetitive coding work of programmers; the greater the amortization, the more profitable the investment. The ROI grows with the number of model instances built, and with the simplicity (in other words, the ease of implementation) of the meta-model and of the transformation tools. Where should I apply this approach for the best results? In a real-time video application? In a high-performance compression API? I don’t think so. I would expect high values of this coefficient in SOA systems. MDA is suitable for service domains, such as banking middleware, where you can amortize the modeling system across hundreds of services with many data structures and flow descriptors, all conforming to a few abstract structures: the meta-model, in fact.

Models are not code

Do you have to adopt UML to design meta-models? Definitely not. You can define simple data classes in any textual format. The problem arises when the meta-model grows in complexity and size, and when many aspects of the domain have to be schematized into it. Then you have to face UML: either you reinvent it, or you use it. Perhaps drawing circles and arrows feels more awkward than typing on a keyboard, but it may become necessary when structures grow hinged and interrelated. Try to join a class diagram with a sequence diagram, then enclose it all in a composite that interacts with external parts, and do it without formal, conventional visual patterns, and… let me know! UML is complete and exhaustive; it is so general that it can be used to describe any software model, but for describing, not for coding. Many people mistake UML for a programming language. Wrong. It is a tool for representing a system, a structure, a flow. Just as mathematics aims to formulate conjectures about countable entities, UML offers a way to define abstract entities. Such a high-level language is a mere conceptual schema: it defines components and services, not programs ready to run.

MDA layers

So what do you do with a picture full of bubbles and arrows? Is it enough to design some diagrams, push a button and voilà: out comes a working system that fits your requirements? Nobody believes that, and neither do I. The object model is unaware of the underlying system and of its implicit concerns. As we know, the model declares the structural and behavioural differences between one service and another; for all the remaining aspects the system applies common, platform-specific behaviours. For instance, if you want to enclose a service in a transaction you may define a ‘transaction’ stereotype in your own profile meta-model, which will be transformed into a Java annotation or an XML attribute and then properly interpreted by the server framework. Once the MDA toolchain is ready, you design the models and transform them into concrete resources, the mythical code bullets, finally deployed onto the server. It may sound too simple, and to forestall prejudiced comments I will tell you that real applications are much more complex than this simple vision. You cannot update the generated code by hand, because your changes will be lost the next time it is regenerated automatically. Many exceptions have to be considered, allowing, for example, generated code to be updated. I have used merging APIs like JMerge, and I found them useful for enriching the code without discontinuity between the model and the generated code.
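
To give a feel for the transformation step, here is a toy sketch rather than a real MDA toolchain: a tiny hand-written map stands in for the UML/XMI model, and a Groovy SimpleTemplateEngine template plays the role of the model-to-code transformer; TransferService and the @Transactional annotation are illustrative names only:

import groovy.text.SimpleTemplateEngine

// a deliberately tiny "model": one service description carrying a 'transaction' stereotype
def model = [
  name       : 'TransferService',
  stereotypes: ['transaction'],
  operations : [[name: 'transfer', params: 'String from, String to, long amount']]
]

// the transformation: a template that turns the model into a Java skeleton
def template = '''<% if ('transaction' in model.stereotypes) { %>@Transactional
<% } %>public class ${model.name} {
<% model.operations.each { op -> %>  public void ${op.name}(${op.params}) {
    // generated stub: hand-written parts would be merged back (e.g. with JMerge)
  }
<% } %>}
'''

println new SimpleTemplateEngine().createTemplate(template).make(model: model).toString()

Run as a script, it prints a TransferService skeleton annotated with @Transactional; a real MDA chain would read the model from a repository and feed a proper generator, but the principle of turning a stereotype into an annotation is the same.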