The application under discussion has to behave as follows: it performs client authentication, carries out request and response operations, and forwards notifications to the client asynchronously. The Mina framework fulfils these needs because it was designed to be as flexible as possible and to fit a wide array of scenarios. Mina is structured in several layers that can briefly be broken down into: input parsing, execution of chained processes and, when needed, serialisation of the related responses. The infrastructure takes care of the I/O quirks and manages session lifecycles through simple callbacks, while letting you write your business logic inside it with a few lines of code. It is majestic! You will love it.
In addition, I was looking for something that would help me write an application implementing many commands, preferably in an appealing way, where each piece of service is isolated from the rest of the application.
The demultiplexer is a device with a single input and many outputs; its role is to select the output line according to context rules. The same approach is implemented in Apache Mina for writing decoders, handlers and encoders: Mina's demux package includes DemuxingIoHandler, DemuxingProtocolDecoder and DemuxingProtocolEncoder.
Filters are used for several purposes: I/O logging, performance tracking, thread pooling, overload control, blacklists, and so on. In one specific case I had to configure two filters: one for user authentication and the other for thread pooling. Once a user logs in, the authentication filter removes itself from the client session's filter chain and substitutes a filter that transforms the raw input into POJOs.
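The self-removing filter idea can be sketched in plain Java, without Mina's actual IoFilter API; every class and method name below is invented for illustration only:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a self-removing authentication filter: once a login
// message passes, the filter takes itself out of the chain and installs a
// decoding filter in its place.
public class FilterChainSketch {

    interface Filter {
        // Returns the (possibly transformed) message, or null to swallow it.
        Object messageReceived(List<Filter> chain, Object message);
    }

    static class AuthFilter implements Filter {
        public Object messageReceived(List<Filter> chain, Object message) {
            if ("LOGIN secret".equals(message)) {
                chain.remove(this);                // the filter removes itself...
                chain.add(0, new DecoderFilter()); // ...and installs the decoder
            }
            return null; // login consumed; anything else is dropped until authenticated
        }
    }

    static class DecoderFilter implements Filter {
        public Object messageReceived(List<Filter> chain, Object message) {
            // stand-in for "raw input into POJOs": wrap the line in a command token
            return "CMD:" + message.toString().trim().toUpperCase();
        }
    }

    // Pushes a message through the chain, letting each filter transform it.
    static Object process(List<Filter> chain, Object message) {
        Object current = message;
        for (Filter f : new ArrayList<Filter>(chain)) {
            current = f.messageReceived(chain, current);
            if (current == null) return null;
        }
        return current;
    }

    public static void main(String[] args) {
        List<Filter> chain = new ArrayList<Filter>();
        chain.add(new AuthFilter());
        System.out.println(process(chain, "LOGIN secret")); // consumed by auth
        System.out.println(process(chain, "list users"));   // decoded by the new filter
    }
}
```

In real Mina the chain mutation happens on the IoSession's filter chain; the toy list here only mimics the mechanics.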
The TCP server must implement a proprietary protocol: a simple ASCII protocol for exchanging commands. Commands have no predefined length; they are sequences of characters, much like SMTP commands. Therefore the CumulativeProtocolDecoder class is extended, enabling it to gather input until the end of a command. It is then up to us to parse the bytes and build a simple Java bean. After that, the bean travels up the filter chain to be executed.
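The cumulative-decoding idea can be shown without Mina's IoBuffer machinery; this is a dependency-free sketch with invented names, not the actual CumulativeProtocolDecoder subclass: bytes arrive in arbitrary chunks, are buffered until the command terminator, and each complete line is parsed into a command bean.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Accumulates raw chunks until '\n' and turns each complete line
// into a simple command bean (verb + arguments).
public class LineDecoderSketch {

    static class Command {
        final String name;
        final List<String> args;
        Command(String name, List<String> args) { this.name = name; this.args = args; }
    }

    private final StringBuilder buffer = new StringBuilder();

    // Feed one chunk of bytes; returns the commands completed by this chunk.
    List<Command> decode(byte[] chunk) {
        buffer.append(new String(chunk, StandardCharsets.US_ASCII));
        List<Command> out = new ArrayList<Command>();
        int nl;
        while ((nl = buffer.indexOf("\n")) >= 0) {
            String line = buffer.substring(0, nl).trim();
            buffer.delete(0, nl + 1); // consume the decoded line
            if (!line.isEmpty()) {
                String[] tokens = line.split("\\s+");
                out.add(new Command(tokens[0],
                        Arrays.asList(tokens).subList(1, tokens.length)));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        LineDecoderSketch decoder = new LineDecoderSketch();
        // a command may arrive split across two TCP reads
        decoder.decode("AUTH us".getBytes(StandardCharsets.US_ASCII));
        List<Command> cmds = decoder.decode("er pass\n".getBytes(StandardCharsets.US_ASCII));
        System.out.println(cmds.get(0).name + " " + cmds.get(0).args);
    }
}
```

Mina's CumulativeProtocolDecoder does the same buffering for you; the subclass only has to say whether a complete message is available and, when it is, emit the decoded bean.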
One of the IoHandler implementations drew my attention while I was looking for something resembling the Command pattern. Each message coming from a client calls for a specific action, and I find it tedious to write a single handler that switches operations on the type of the incoming request. An elegant solution is provided by DemuxingIoHandler, which dispatches each request to the corresponding handler. The handlers implement MessageHandler, whose generic type is the class of the objects the DemuxingIoHandler will submit to that handler, and they register themselves by invoking addReceivedMessageHandler().
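A stripped-down, dependency-free sketch of what DemuxingIoHandler does under the hood: a registry from message class to handler, consulted for each incoming object. (Mina's real MessageHandler also receives the IoSession; that is left out here, and the names below are simplified.)

```java
import java.util.HashMap;
import java.util.Map;

// Minimal demultiplexing dispatcher: one handler per message class.
public class DemuxSketch {

    interface MessageHandler<T> {
        String handleMessage(T message);
    }

    private final Map<Class<?>, MessageHandler<?>> handlers =
            new HashMap<Class<?>, MessageHandler<?>>();

    <T> void addHandler(Class<T> type, MessageHandler<T> handler) {
        handlers.put(type, handler);
    }

    @SuppressWarnings("unchecked")
    String messageReceived(Object message) {
        MessageHandler<Object> h =
                (MessageHandler<Object>) handlers.get(message.getClass());
        if (h == null) {
            throw new IllegalStateException("No handler for " + message.getClass().getName());
        }
        return h.handleMessage(message);
    }

    // Two toy command beans, each bound to its own handler.
    static class LoginCommand { String user = "alice"; }
    static class QuitCommand { }

    public static void main(String[] args) {
        DemuxSketch demux = new DemuxSketch();
        demux.addHandler(LoginCommand.class, new MessageHandler<LoginCommand>() {
            public String handleMessage(LoginCommand m) { return "hello " + m.user; }
        });
        demux.addHandler(QuitCommand.class, new MessageHandler<QuitCommand>() {
            public String handleMessage(QuitCommand m) { return "bye"; }
        });
        System.out.println(demux.messageReceived(new LoginCommand()));
        System.out.println(demux.messageReceived(new QuitCommand()));
    }
}
```

The payoff is the same one the post describes: each service lives in its own small handler class instead of one giant switch.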
The asynchronous nature of Mina allows a handful of threads to handle a huge number of clients. A further decoupling between I/O and business logic can be achieved with the ExecutorFilter, which takes charge of the messages after the NioProcessor.
The encoder works in the reverse direction compared to the decoder: it serialises the POJO response coming from the handler to the output stream, toward the client. As with the handlers, it is possible to dedicate an encoder to each response type; but why not send the IoBuffer straight from the handler that processes the request? Separation of concerns is the answer. The handler receives a command (a Java bean), processes it, and returns a response object, again a POJO. It is up to the encoder to transform that abstract response into a concrete message in the agreed TCP protocol.
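The separation of concerns can be made concrete with a tiny sketch (the response type and protocol lines are invented, loosely SMTP-flavoured): the handler only produces the POJO, and the encoder alone knows the wire format.

```java
// The handler returns an abstract response POJO; the encoder renders it
// in the ASCII wire protocol. Neither knows about the other's concerns.
public class ResponseEncoderSketch {

    static class LoginResponse {
        final boolean ok;
        final String user;
        LoginResponse(boolean ok, String user) { this.ok = ok; this.user = user; }
    }

    // The encoder's whole job: abstract response in, concrete protocol line out.
    static String encode(LoginResponse response) {
        return (response.ok ? "250 WELCOME " + response.user : "535 DENIED") + "\r\n";
    }

    public static void main(String[] args) {
        System.out.print(encode(new LoginResponse(true, "alice")));
        System.out.print(encode(new LoginResponse(false, "mallory")));
    }
}
```

If the protocol ever changes, only the encoder moves; every handler keeps returning the same POJOs.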
The economic crisis we're currently going through is teaching the Western countries, in particular the Anglo-Saxon ones, some lessons our grandparents knew pretty well, although it seems we've forgotten them during these years of financial bubble. Debt has several upsides: it allows building, buying and investing and, when properly managed, can ensure a safe return and fair economic growth. Debt, however, has one outstanding bad side: it must be paid back. It can be postponed, rolled over, shifted onto other (more or less aware) parties, and its dreadful effects go by the names of bankruptcy, credit crunch, real-estate bubble and recession.
Looking at software applications, a similar observation can be made about their 'state of health', a family of properties a piece of software should have in order to be easily changeable, so that it can respond quickly to evolving requirements. Software in poor health is software that has fossilised into its original architecture, kept as-is for as long as possible, never revisited in the light of technological innovations or functional updates, merely patched with improvised and unconvincing surgery.
It suffers from what could be defined as an inability to bear the debt built up over time; in this case, though, we are not dealing with financial debt but with technical debt. Even though the term 'technical debt' may sound strange, it relates to its financial fellow in many ways, and it is widespread in software development. The saying that the economy runs on credit (debt) finds support in the software world too.
Developers reading this know well what I am talking about. You're assigned to work at firm XYZ from next Monday for at least three months, and when you start the new task you'll be instructed about what to do.
The work mainly consists of implementing new features on top of the customer's rock-solid application, a very remarkable system built some years ago to serve peculiar needs.
"So far, it has worked well", the owners say; "just don't make it worse than it is now". Later on, you have no choice but to agree with them.
Expanding or changing the set of features without refactoring, when the system's authors didn't anticipate such an amendment, is like sowing a crop without ploughing the land first. The harvest could be lost, couldn't it?
You'll be asked to complete your job by updating the old system while keeping the structure as it is, avoiding breaking the fragile balance among components.
Just for your information, consultants were called in a few months ago for a similar task. They added such a mess to the code that you spend more time figuring out what they meant to do than working effectively on new things. Maybe the customer was disappointed by the way they conducted the development, and now it's your turn.
Thus, in addition to the new enhancements, you have to fix what your predecessors did.
This subject is hard to handle and quite unpleasant, in particular when the customer doesn't want to hear about refactoring unless it causes no delay to the delivery, which is almost impossible, so the scheduled task proceeds as planned.
In short, it's like going for a walk over broken glass while swearing you won't get hurt.
To use this post's metaphor, it's like getting into new debt to cover the old one: adding yet another short-term solution where too many have already been applied in the past.
Looking back, I realise how predominant this kind of intervention is in the amount of work I've done; I don't know how long I would have to wait, unemployed, if I only wanted to work on brand-new projects…
Sometimes I'd define myself as a debt collector, and I find it as uncomfortable as a lawyer would feel about a criminal prosecution, or a doctor about rescue surgery: it's a living made from others' misfortunes. It may be painful, but we make the customer feel better.
“I’m a people person, very personable. I absolutely insist on enjoying life. Not so task-oriented. Not a work horse. If you’re looking for a Clydesdale I’m probably not your man. Like I don’t live to work, it’s more the other way around. I work to live. Incidentally, what’s your policy on Columbus Day?”
You, Me and Dupree (2006)
This is the interview every recruiter would want at 17.00 on a Friday: fast enough to let you step out soon for the forthcoming weekend, plain and clear in its outcome.
Usually it's not such an easy job for the head hunter: selecting people and finding the right one to slot into the open position can be hard. The challenge gets harder still when recruiters rely on questionable methods, which can hardly achieve the desired result. As human beings, we have a natural tendency to think our choices are rational, while we underestimate the undercurrents that, one way or another, affect our decisions. We believe we are standing steady in a boat in the middle of the sea, when we are actually at the mercy of the waves. We are prone to being swayed.
Fortunately, the good recruiter studies the books deeply and trains in workshops to get rid of those biases, so that he can finally apply scientific methods to his job. Progress in this field can be observed when they act like CIA agents, asking questions such as "What will you do when you grow up?", "When was the last time you were happy?" or "What are your strengths and weaknesses?" rather than "Tell me about yourself; describe yourself in one word". But, dear recruiter, I can't describe myself in one word, unless it's both hyphenated and a metaphor.
What comes first, when I'm talking with an interviewer about a job, is checking whether his expectations match mine. I'm talking with you to show my professional skills, not to talk about my hobbies, be it dance or motorcycling.
Those requests never cease to astonish me with their futility; even if they make sense to the HR department, I'm not going to dig into the matter.
A much more effective approach is to conduct well-structured interviews where the questions focus on experience, skills and ability rather than on vague topics.
What recruiters sometimes follow is the behavioural-interview assertion that the most accurate predictor of future performance is past performance in similar situations. It would be enough to frighten any financial mentor, but it has a logical basis for evaluating candidates; perhaps the behaviour of a single person is easier to predict than a stock index. HR specialists claim that with this way of leading interviews it is much harder to get responses that are untrue to the candidate's character, because the answers have to be detailed descriptions of past events or experiences faced at work.
I agree that past experience indicates how people react under certain circumstances, but I'd put less emphasis on it, first of all because challenges are always different. Whatever technological issue the company is facing right now will be far from any candidate's experience, so the recruiter should try to learn something else from the applicant. What someone has done shows the ability to execute; personality is important, intelligence naturally more so, but improvisation remains the key. Knowledge workers must adapt their knowledge to the situation, but if during the interview the candidate isn't projected onto a realistic scenario where he can show his capabilities and past lessons learned, how could the recruiter actually form an objective opinion?
I was rarely asked for advice or opinions on the real technological matters involved in the development; is that because the recruiter doesn't know much about what the new employee is going to solve?
Maybe sometimes interviews are not used for hiring people at all, but just to gather information on candidates, build statistics on salaries and skills, and estimate how long it takes to find a particular kind of professional in the market, I guess.
Recruiters may discard people based on salary; of course they can, especially if the assumption is that people are interchangeable, low-cost and easily replaceable, like a natural resource. Usually this happens when the target candidate is a junior.
Salary becomes a more complex issue the more senior the target is. Seniors want to discuss the context of the job before they talk about money. Being asked the salary question in a phone screen, or in an interview before any rapport is built, drops me into disappointment, as if the recruiter were telling me: "We want the cheapest person for this position". That's fine, but do you really want to save money before you know what I have to offer? And if so, why are you looking for someone senior?
HR usually matches your CV keywords (better known as buzzwords) against their table axes to define your salary box, framing candidates in a very simple way. Yet although the salary offer is equal among peers, a fascinating metric claims that 5% of programmers are 20x more productive than the other 95%. Now, guess which section of this statistic gets discarded first.
I don't see anything wrong with multiple-choice interview questions; sometimes they are as fun as filling in a crossword, but at other times they upset me because I realise I've forgotten some exponential functions since school… damn!
Although it may be helpful to filter out applicants lacking a basic education, in several years of work I have never needed exponential calculus to meet a business requirement.
These requests end up annoying prospective employees; the company loses appeal, along with any willingness of the candidate to get hired.
It was a newbie questionnaire, and the recruiter answered my objection by telling me that every technical employee in the firm had filled in such a questionnaire. Really? I don't think any of the experienced programmers I know would waste time ticking boxes like that in a job interview unless they were hopelessly unemployed; and if the hiring manager is looking for an experienced developer, why ask first-level programming questions? If the recruiter can't read the résumé, why would a hiring manager?
I must admit, my frustration has grown over the years. I mean, interacting with the computer in terms of boolean, long, void. I'd rather sit on a sofa and describe the program by voice or, better, step into a 3D virtual reality and cook software like a lunch in the kitchen, playing with spheres and arrows to sketch the whole program.
I can't picture this as a likely scenario in the near future; as with science-fiction movies, we will have to wait much longer than directors and writers envisaged to see even a minor part of the technological developments imagined so far actually implemented.
It is this secret ambition to break free from textual code that makes me debate how to easily abstract the definition of information systems. In some ways I have managed to do something related to this, within a restricted domain.
When Model Driven Architecture turns out right
Once, during an interview, I was asked why I hadn't applied MDA to all the software projects I had led. The question deserves to be taken as a starting point for discussing the misconceptions about MDA and, more generally, about modelling and UML. Unfortunately, these discussions are hindered by a phenomenon that a famous observer of human events (Mark Twain) revealed, and which I propose again in IT terms:
People commonly use UML like a drunk uses a lamp post: for support rather than illumination.
The initial costs of an MDA are pretty high; the return on investment starts when the automatic code generation begins. The models are built on the basis of a meta-model, which defines the semantics of the system. Afterwards, the models are transformed into code or other resources ready to be installed into the real system.
The initial development effort is focused on the meta-model and on the transformation tools. The investment is repaid by the automatic transformations that replace the programmers' repetitive coding work; the deeper the amortisation, the more profitable the investment.
The ROI increases with the number of model instances built, as well as with the simplicity (in other words, ease of implementation) of the meta-model and the transformation tools. Where should I apply this approach for best results? In a real-time video application? In a high-performance compression API? I don't think so. I would expect high values of this coefficient in SOA systems.
MDA is suitable for service domains, such as a banking middleware, where you can amortise the modelling system across hundreds of services with many data structures and flow descriptors, all conforming to a few abstract structures: the meta-model, in fact.
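The economics described above can be illustrated with a toy transformation (all names invented, far simpler than a real MDA toolchain): one template, applied to many model instances, replaces the repetitive hand-coding of each service.

```java
import java.util.Arrays;
import java.util.List;

// One "transformation" applied to many model instances: the more services
// are modelled, the more repetitive coding the generator replaces.
public class ServiceGeneratorSketch {

    static class ServiceModel {          // a tiny stand-in for a model instance
        final String name;
        final String inputType;
        final String outputType;
        ServiceModel(String name, String in, String out) {
            this.name = name; this.inputType = in; this.outputType = out;
        }
    }

    // The transformation: model in, Java source out.
    static String generate(ServiceModel m) {
        return "public class " + m.name + "Service {\n"
             + "    public " + m.outputType + " execute(" + m.inputType + " request) {\n"
             + "        // business logic plugged in elsewhere\n"
             + "        throw new UnsupportedOperationException();\n"
             + "    }\n"
             + "}\n";
    }

    public static void main(String[] args) {
        List<ServiceModel> models = Arrays.asList(
                new ServiceModel("Balance", "BalanceRequest", "BalanceReply"),
                new ServiceModel("Transfer", "TransferRequest", "TransferReply"));
        for (ServiceModel m : models) {
            System.out.println(generate(m));
        }
    }
}
```

The fixed cost lives in `generate`; each additional ServiceModel amortises it a little more, which is exactly why hundreds of conforming banking services pay off and a one-off video codec does not.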
Models are not code
Do you have to adopt UML to design meta-models? Definitely not. You can define simple data classes in any textual format. Trouble arises when the meta-model grows in complexity and size, and when many aspects of the domain are schematised into it. At that point you have to face UML: either you reinvent it, or you use it.
Perhaps drawing circles and arrows is more awkward than typing on a keyboard, but it becomes necessary when structures get hinged together and interrelated. Try to join a class diagram with a sequence diagram, then enclose the whole thing in a composite for interaction with external parties, and do it without formal, conventional visual patterns… then let me know!
UML is complete and exhaustive; it is so generalist that it can be applied to describe any software model. For describing, though, not for coding.
Many people mistake UML for a programming language. Wrong: it is a tool for representing a system, a structure, a flow. Just as mathematics aims to formulate conjectures among countable entities, UML offers a way to define abstract entities. Such a high-level language is a mere conceptual schema: it defines components and services, not programs ready to run.
So what do you do with a picture full of bubbles and arrows? Is it enough to design some diagrams, push a button and voilà, get a working system that fits your requirements? Nobody believes that, myself included. The object model is unaware of the underlying system and of its implicit matters. As we know, the model declares the differences in structure and behaviour between one service and another; for all the remaining aspects the system applies common, platform-specific behaviours. For instance, if you want to enclose a service in a transaction you can define a 'transaction' stereotype in your own profile meta-model, which will be transformed into a Java annotation or an XML attribute and then properly interpreted by the server framework. Once the MDA is in place, you design the models and transform them into concrete resources, the mythical code bullets, finally deployed onto the server.
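The stereotype-to-annotation step can be sketched as follows; the class names and the target annotation are hypothetical, standing in for whatever a real profile and server framework would define:

```java
import java.util.Arrays;
import java.util.List;

// A modeled service carries stereotypes; the transformation maps the
// 'transaction' stereotype onto an annotation in the generated source,
// to be interpreted later by the (hypothetical) server framework.
public class StereotypeTransformSketch {

    static class ModeledService {
        final String name;
        final List<String> stereotypes;
        ModeledService(String name, List<String> stereotypes) {
            this.name = name; this.stereotypes = stereotypes;
        }
    }

    static String generate(ModeledService s) {
        StringBuilder src = new StringBuilder();
        if (s.stereotypes.contains("transaction")) {
            src.append("@Transactional\n"); // platform-specific behaviour, not modeled logic
        }
        src.append("public class ").append(s.name).append("Service { }\n");
        return src.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate(new ModeledService("Payment", Arrays.asList("transaction"))));
    }
}
```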
It may look too simple, and in order to ward off prejudicial comments I'll say that real applications are much more complex than this simple vision.
You can't update code bullets by hand, because the changes will be lost the next time you generate them automatically. Many exceptions have to be accommodated, allowing, for example, hand-written additions to generated code. I have used merging APIs such as JMerge, and I found them useful to enrich the code without breaking the continuity between the model and the generated code.
I didn't realise that such a successful project was such a rare event in the IT industry, which is why I never got another chance to apply the lessons learned. I thought the experience accrued with Model Driven Architecture would be reusable in other circumstances, but I have never seen concepts like executable UML or MDA either applied or even mentioned in the engagements I have pursued since.
The idea for this project wasn't conceived by external consultants thirsting to sell their cool technology; it was born and grew up inside the development team. The architectural transition was gradual: little by little, as new automation scenarios penetrated our excited minds, we moved as many development processes as possible under the MDA framework.
Despite my early impressions when considering whether to undertake the project, the upper management embraced it and laid down investments, counting on the benefits this new approach would bring to development.
What is difficult to change is the modus operandi of a 300-employee company offering banking services and applications, engaged in one of the most conservative fields in terms of technology and development methodologies. It meant a significant jump in service development, and as the PM remarked:
“We are developing as dinosaurs, don’t you know what the hell happened to them?”
the way to MDA was traced.
The issues we faced with the introduction of modelling notions could be described as practical contingencies rather than theoretical or philosophical ones, foremost the mess in the business layer. Over time it had created reliability and maintenance weaknesses, even security holes, which sounded very bad in a company with plenty of banks as customers.
The hundreds of use cases developed by the dozens of engineers who had rotated through the Java development area over the months had reached critical mass, enough to trigger an explosion (or implosion) of the whole system. Meanwhile, the applications could only be kept alive through high maintenance costs and lazy deliveries, owing to the difficulty of integrating incoming services with the underlying system.
The application layer managed the data flows between the clients at the top and the feeds and legacy information systems at the bottom. On their way, the flows were reshaped by business process rules hard-coded in obscure Java classes. Unfortunately, most of those shaky details were lost, because the development policy never complained about missing documentation, and it was so damned annoying to go back and take over old artifacts to maintain or update the rules. Only skilful programmers could untangle the balled-up code. The critical mass had to drop and be brought to lower temperatures quickly: new developments and dozens of incoming features were planned, so a deep refactoring was a must; it could wait no more.
What did the domain layer implementation that raised these problems look like?
The developer's effort was mainly focused on creating Java classes implementing a Command and declaring the service to the framework through an XML descriptor. The input and output of such a command were raw DOM arguments: the input was parsed to extract the data needed by the business transaction, and most of the coding concerned parsing and filling the service's response, which was a raw XML document too. I don't find it agreeable to spend most of the development effort merely on managing input/output data and mapping, but this was the daily job.
Applying MDA takes time; it was a one-year evolution, and it can be summarised as follows:
XSD barriers. It was necessary to set some boundaries for developers, in order to get a minimum of control over the data flows. Each service got its own formal validation of input/output data, though no restrictions were placed on how the services were implemented. Never again elements or attributes not agreed upon in advance.
Pojo. Replace the raw document with a simple pojo as the argument of the callback methods; this step combines formal validation with an easy approach to data manipulation. The XML-to-Java binding is no hurdle: it is automatic, and many available libraries can accomplish this step.
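The boundary change in the 'Pojo' step can be sketched with the JDK's own DOM parser (the element names and the TransferRequest bean are invented): the XML is bound once into a pojo at the edge, and the business code never touches the raw document again.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Binds an incoming XML request into a pojo once, at the boundary,
// instead of handing every Command a raw DOM to pick apart.
public class XmlToPojoSketch {

    static class TransferRequest {      // the pojo the service callback receives
        String account;
        long amountCents;
    }

    static TransferRequest bind(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            TransferRequest req = new TransferRequest();
            req.account = doc.getElementsByTagName("account").item(0).getTextContent();
            req.amountCents = Long.parseLong(
                    doc.getElementsByTagName("amountCents").item(0).getTextContent());
            return req;
        } catch (Exception e) {
            throw new IllegalStateException("unparseable request", e);
        }
    }

    public static void main(String[] args) {
        TransferRequest req = bind(
                "<transfer><account>IT001</account><amountCents>1500</amountCents></transfer>");
        System.out.println(req.account + " " + req.amountCents);
    }
}
```

A real project would generate both the pojo and the binding from the XSD with a library rather than hand-roll DOM navigation; the hand-rolled version only shows where the seam sits.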
First hints of EMF. XSD files are models for XML data; EMF is a Java implementation of MOF, a general abstraction for writing all sorts of models. I won't linger over it now, but it represented the jump to service modelling. EMF is an open-source library included in the Eclipse platform, easy to use and customisable, which aims to separate the abstract model from the ground.
Choice of technology. Playing with EMF opened new horizons on modelling facilities. Hooked by this methodology for designing SOA applications, I realised EMF was not enough: UML (whose core principles are inherited from MOF) fits much better my purpose of designing the object model, defining the process flow and the user experience. UML offers diagrams you can join together; static and dynamic models can describe most of an application's structure and behaviour.
Executable UML. What do you do with this bunch of diagrams if you can't transform them into real artifacts and plug them into your SOA framework? Not much: keeping UML diagrams without the related transformations and executions is merely fine for documentation, and not much more. At that time the company joined the Rational beta programme, and I started to develop Eclipse-compliant plug-ins leveraging the power of the Eclipse UML2 implementation.
How to define data mapping? One of the main obstacles encountered was the data mapping between two different structures. It happens when you need to connect two or more components inside a service call, each with a different data structure. In this case UML doesn't provide any help, and you have to customise the model with special stereotypes and profiles.
Sequence and state diagrams. Class diagrams were used to generate Java classes, XSD files and COBOL copybooks. Sequence diagrams, on the other hand, describe the flow of processes and their business rules, even conditional instructions, which can be transformed into BPEL or custom service descriptors. State diagrams show their benefits in modelling the user experience and the steps needed to complete an operation: they easily track the state of sessions and are transformed into the MVC system, as well as into whatever rich-client forms.