
Deep Learning in Text Classification

In the Divine Comedy, Minos is a daemon appointed to guard the entrance of Hell. He listens to the sins of souls and indicates their destination by coiling his tail as many times as the number of the assigned circle. The figure is emblematic of machine learning classification, where an entity is identified as belonging to one category or another. Rather than condemning souls to endless pain, the harmless tool I am describing can judge whether a user’s utterance belongs to a specific intention, or to a limited range of emotions. Namely, it can serve intention recognition and sentiment analysis.

In the realm of conversational commerce, the examined sentence could be:

I want to buy some apples and pears

The system recognizes the search intention and presents the results.

Intention prediction is not an untackled problem, and the market offers plenty of services. There are many players, such as Google (Api.ai), Facebook (Wit.ai) and Microsoft (Luis.ai), just to mention a few, but this shouldn’t prevent further exploration of the topic, sometimes with unexpected positive surprises, as shown in the graph.

Minos Accuracy

The test was performed against real data used for training the deployed model of the chatbot system, so the results are relevant for the real working scenario; no cherry-picking in this case. The dataset consists of 300 training samples and 56 test samples across 25 classes.

Minos, the text classifier, uses an ensemble of machine learning models: it combines multiple classifiers to get a good prediction out of the utterances submitted to Charly. One of the models is based on Convolutional Neural Networks (CNN).
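
To illustrate the idea, here is a minimal sketch of an ensemble intent classifier, assuming scikit-learn; the training sentences, intent labels and chosen estimators are illustrative, not Minos internals.

```python
# A minimal sketch of an ensemble intent classifier, assuming scikit-learn.
# The training sentences, labels and estimators are illustrative.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["I want to buy some apples and pears", "where is my order?"]
intents = ["search", "order_status"]

# Soft voting averages the class probabilities of the member models,
# so each estimator must expose predict_proba.
ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[
            ("logreg", LogisticRegression(max_iter=1000)),
            ("nb", MultinomialNB()),
        ],
        voting="soft",
    ),
)
ensemble.fit(texts, intents)
print(ensemble.predict(["I want to buy some pears"]))  # hopefully: ['search']
```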

CNN in NLP

CNN is mostly applied to image recognition, thanks to its tolerance to translations (rotations, distortions) and to the compositionality principle (entities are composed of their constituents). Admittedly, CNN might appear counter-intuitive at first, because text looks very different from images:

  1. The order of the words in a text is not as important as the order of the pixels in an image.
  2. Humans perceive text sequentially, not in convolutions.

Invariance

Entities like images and texts should be compared at the right granularity. The smallest atomic element in text is the single character, rather than the word, just as the pixel is in images. The proportion is more like:

text : char = image : pixel

From this point of view, the order of characters in sentences is fundamental. Convolutions in text come in the form of:

single word => bi-grams (two adjacent words) => n-grams

like graphical features

lines, corners => mouths, eyes => faces

come out of portraits.

In a CNN, the pair adjective + noun, for example, can be recognized invariantly of its position, at the beginning or at the end of a sentence, exactly as a face is recognized wherever it is located in the picture.
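
As a hedged sketch of this architecture (in the style of Kim’s CNN for sentence classification), assuming Keras, with purely illustrative hyperparameters rather than the ones used by Minos:

```python
# A hedged sketch of a CNN text classifier, assuming Keras;
# hyperparameters are illustrative, not Minos' own.
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, MAX_LEN, NUM_CLASSES = 20000, 300, 50, 25

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)

# One convolution per n-gram width: filters of width 2 act as bi-gram
# detectors, width 3 as tri-gram detectors, and so on.
pooled = []
for width in (2, 3, 4):
    conv = layers.Conv1D(128, width, activation="relu")(x)
    # Max pooling keeps the strongest activation of each filter, wherever
    # the n-gram occurred in the sentence: position invariance.
    pooled.append(layers.GlobalMaxPooling1D()(conv))

x = layers.Concatenate()(pooled)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```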

Sequentiality

It might seem more intuitive to apply Recurrent Neural Networks (like LSTM, Attention or Seq2seq) for text classification, due to the sequential nature of those algorithms. I haven’t run any tests on them so far, but I would promptly play with TreeLSTM. CNN performs well, and one might recall George Box’s aphorism that essentially, all models are wrong, but some are useful, which fits the idea that the final outcome drives the decisions, and experimental results play an important role.

Word Embeddings

As in any NLP pipeline, in a CNN words are replaced by their corresponding semantic vectors. The most famous embeddings are Google’s word2vec, GloVe and FastText. I decided to make use of ConceptNet Numberbatch, which took first place in both subtasks at SemEval 2017 Task 2. Moreover, the vector file is very small (~250 MB) compared to Google News word2vec (~1.5 GB), and from an engineering point of view those numbers matter.
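
Loading the embeddings into the network could look like the following sketch, assuming gensim; the release file name and the toy vocabulary are assumptions:

```python
# Hedged sketch: load ConceptNet Numberbatch (distributed in word2vec text
# format) into an embedding matrix, assuming gensim. The release file name
# and the toy vocabulary are assumptions.
import numpy as np
from gensim.models import KeyedVectors

VOCAB_SIZE, EMBED_DIM = 20000, 300
vectors = KeyedVectors.load_word2vec_format("numberbatch-en-17.06.txt.gz")

word_index = {"apple": 1, "pear": 2}  # illustrative word -> row mapping
embedding_matrix = np.zeros((VOCAB_SIZE, EMBED_DIM))
for word, i in word_index.items():
    if word in vectors:
        embedding_matrix[i] = vectors[word]

# The matrix can then seed a (frozen) Embedding layer, e.g. in Keras:
# layers.Embedding(VOCAB_SIZE, EMBED_DIM, weights=[embedding_matrix],
#                  trainable=False)
```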

Minos is still experimental and not well tuned, so the doors are open for improvements. One aspect that shouldn’t be ignored when working with CNNs is catastrophic forgetting, an annoying phenomenon that irrevocably ruins the entire training.

Automated Question Answering using Semantic Networks

I recently worked on a small prototype that combines NLP analysis and semantic datasources for answering simple generic questions, by learning how to get the information from a fairly small amount of question/answer pairs.

Conversational interactions represent the core of any modern chatbot, and the ability to manage utterances and conversations is the strongest indicator of user satisfaction. A natural and spontaneous QA dialogue, such as every chatbot aims to engage in, will attempt to solve 3 fundamental issues:

  1. Classify utterances and extract dependencies between words.
  2. Integrate sources of knowledge.
  3. Infer transitive semantics (e.g., reconstruct what is implied but not written).

Neural networks are particularly effective in conversational modelling. Architectures like seq2seq have been shown to generate plausible-sounding conversational interactions by predicting sentences given the previous sentences in the dialogue. The end-to-end nature of those models represents one of their major strengths: they need no hand-crafted rules, since they are trained against large conversational datasets.
Nevertheless, these models can’t incorporate content in the form of factual information from sources other than the training corpora; the conversational analyzer (1) and the knowledge representation (2) are not distinguishable.

Split language and knowledge domains

The approach I’m going to describe allows the model to be versatile and applicable to different domains, since it clearly distinguishes (1) and (2) as interoperable components of a QA system.

The goal of the prototype is to obtain meaningful answers to simple questions by learning how to get the information instead of learning the information itself. Basically, the machine learning system won’t be instructed merely on which is the right answer, but rather on how to find it in a given datasource.

I was inspired by this paper that uses ConceptNet, an open-source semantic database which gathers information from several sources such as WordNet, DBpedia and Wiktionary. It helps computers understand the meaning of words by solving their analogies with other words. ConceptNet serves as the ontology source and, thanks to its graph-based nature, where entities are connected by semantic relations, also covers the inference service (3).
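
A minimal sketch of how such a lookup could work against the public ConceptNet API (the endpoint is real; the helper name, the omitted error handling and the sample output are assumptions):

```python
# Hedged sketch of a lookup against the public ConceptNet API; the helper
# name is made up, and paging/error handling are omitted.
import requests

def related_terms(term, relation, limit=10):
    """English terms reached from `term` through `relation`."""
    params = {"start": f"/c/en/{term}", "rel": relation, "limit": limit}
    edges = requests.get("http://api.conceptnet.io/query",
                         params=params).json()["edges"]
    return [e["end"]["label"] for e in edges]

print(related_terms("berlin", "/r/IsA"))     # e.g. a city, a capital, ...
print(related_terms("sea", "/r/RelatedTo"))  # e.g. water, ocean, ...
```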

Sentence classification and entity extraction

As can be intuitively deduced, the sentence structure is invariant with respect to the entities used for querying the datasource; therefore, the number of samples necessary for training can be much lower than in the previously mentioned seq2seq approach.

NLP dependency tree
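
The dependency extraction step could be sketched as follows, assuming Stanza, whose Universal Dependencies labels include nsubj and nmod; the example sentence matches the one used below:

```python
# A hedged sketch of the dependency extraction step, assuming Stanza.
import stanza

# stanza.download("en")  # needed once, before the first run
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")
doc = nlp("what is the colour of the sea?")

# Keep only the dependency slots the crawler cares about.
for word in doc.sentences[0].words:
    if word.deprel in ("nsubj", "nmod"):
        print(word.deprel, word.text)
# Expected output, roughly: nsubj colour / nmod sea
```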

Knowledge crawler

Given the entities extracted from the utterance and the expected answer (or a list of acceptable answers), the crawler should return the shortest navigable path in the knowledge graph that leads to the accepted answer.

Input: nsubj: colour, nmod: sea

Expected answer: blue

Compiled model: [{nsubj}:/r/IsA] and [{nmod}:/r/RelatedTo]
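
A hedged sketch of this compilation step, which probes a few candidate relations and keeps the first one connecting the expected answer to each extracted entity (the helper, the relation list and the output are assumptions, not the prototype’s actual code):

```python
# Hedged sketch of the compilation step: probe candidate relations and
# keep the first one linking the expected answer to each entity.
import requests

CANDIDATE_RELATIONS = ["/r/IsA", "/r/RelatedTo", "/r/dbpedia/capital"]

def sources_of(term, relation, limit=100):
    """Terms X such that ConceptNet contains an edge X --relation--> term."""
    params = {"end": f"/c/en/{term}", "rel": relation, "limit": limit}
    edges = requests.get("http://api.conceptnet.io/query",
                         params=params).json()["edges"]
    return {e["start"]["label"].lower() for e in edges}

def compile_model(entities, answer):
    model = {}
    for slot, term in entities.items():        # e.g. {"nsubj": "colour"}
        for rel in CANDIDATE_RELATIONS:
            if answer in sources_of(term, rel):
                model[slot] = rel              # first relation that works
                break
    return model

print(compile_model({"nsubj": "colour", "nmod": "sea"}, "blue"))
# Hoped-for output: {'nsubj': '/r/IsA', 'nmod': '/r/RelatedTo'}
```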

The outcome is correlated with the logic of the sentence, and different types of sentences are classified accordingly. However, similar grammatical structures can also carry different semantics, and they should be treated differently. In the case of

Question: what is the capital of Germany?

the crawler will find the query with the least number of joins required to get the expected results. It outputs a different model: {nmod}:/r/dbpedia/capital

In this case, a direct relation (dbpedia/capital) unambiguously describes the expected relation, and it is selected as the best alternative for answering that specific class of questions.

Run the model

Now let’s run the model by asking: what is the color of the sun?. The inference component will first classify the sentence and associate it with the first model compiled previously. The crawler will then search for entities that fulfill both relations (X IsA color and X RelatedTo sun) and get the result: yellow.
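
The inference step then reduces to intersecting the candidate sets produced by each compiled relation; a sketch, reusing the hypothetical sources_of() helper from above:

```python
# Hedged sketch of the inference step: intersect the candidate sets
# returned by each compiled relation (sources_of() as defined above).
def run_model(model, entities):
    candidates = [sources_of(entities[slot], rel)
                  for slot, rel in model.items()]
    return set.intersection(*candidates)

compiled = {"nsubj": "/r/IsA", "nmod": "/r/RelatedTo"}
print(run_model(compiled, {"nsubj": "color", "nmod": "sun"}))
# Hoped-for answer among the candidates: 'yellow'
```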

These are some of the possible outcomes:

>Tell me some cities in Italy in front of the sea
Venice, genoa

>What is the capital of Germany?
Berlin

>Is blue the color of the sea?
Yes

>Which lakes can you find in Italy?
Lago como, lago garda

Machine Comprehension on Chatbots

One of the most demanded features in chatbots is the ability to automatically provide helpful information. Users might ask how to pay for purchases online, how to return a defective item, when a purchase will be delivered, or simply about the opening hours of a shop.

One way to implement this feature is to train a sentence classifier on a predetermined set of questions the merchant is willing to answer. The system should be instructed with some examples such as: “which credit card do you accept?”, “How do I pay?”, “which payment do you support?” and so on. This simple technique requires a sequence of manual tasks for every conversational agent, such as setting up the training and inference pipeline for questions/answers, or reusing the Natural Language Understanding (NLU) system already adopted by the chatbot, if present.

A second, more intriguing and sophisticated approach leverages the advances in machine comprehension, which is the ability to read text and then automatically answer questions about it. The Stanford NLP Group created SQuAD, a dataset consisting of 107,785 question/answer pairs on 536 articles, for training and evaluating machine comprehension models. One example of article, question and expected answer is:

In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail…

Question: What causes precipitation to fall?

Answer: gravity

Understanding text is hard. It requires knowledge of the language and a representation of the topic. Those challenges can easily be compared with the linguistic and cultural barriers among people. For example, I can hardly understand a paper written in Chinese about the panda’s immune system, essentially because I don’t know the Chinese language and I don’t know anything about immunology. Similarly, a program can’t do better unless it masters these two aspects: language on one hand, and high-level concepts on the other.

For running an automatic FAQ responder I used one of the top-ten reading comprehension systems available, BiDAF (Bi-Directional Attention Flow). It doesn’t perform badly (81.5% F1) compared to human precision, which is 86.8%. I applied BiDAF to Charly, a chatbot for conversational commerce, for serving information extracted from a given text like this:

My phone number is +4911002233. I live in Munich, Germany. You can pay with your credit card, we accept: Visa, Mastercard, American Express, Maestro, Visa Debit. The delivery is twice per week on Tuesday and Saturday. If the purchase or a product is not good or you are unsatisfied, please return the product with the receipt within 30 days to the driver or call us on +4911002233.

This is how Charly answers:

Charly chatbot
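
Wiring a pretrained BiDAF model into such a flow could look like the following sketch, assuming AllenNLP; the archive URL points to a published model and may change over time, and the example question is illustrative:

```python
# Hedged sketch: querying a pretrained BiDAF model through AllenNLP.
# The archive URL points to a published model and may change; recent
# versions may also require the allennlp-models package.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/bidaf-model-2020.03.19.tar.gz"
)

passage = (
    "My phone number is +4911002233. I live in Munich, Germany. "
    "The delivery is twice per week on Tuesday and Saturday."
)
result = predictor.predict(passage=passage, question="When do you deliver?")
print(result["best_span_str"])  # e.g. "twice per week on Tuesday and Saturday"
```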

This approach is much more scalable than a classical question classifier. It allows automatic responses from text that could simply be scraped from the FAQ page of the customer’s website, or from a plain info text submitted by e-mail or a web form.