Archive

This talk considers the preference modeling problem and addresses the fact that pairwise comparison data often reflect irrational choice, e.g., intransitivity. Our key observation is that when two items are compared in isolation from other items, the comparison may be based on only a salient subset of their features. Formalizing this idea, I will introduce our proposal for a “salient feature preference model” and discuss sample complexity results for...
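A toy sketch of the key observation above: if each pairwise comparison is decided only by the single most salient feature (here, hypothetically, the coordinate where the two items differ most), intransitive cycles arise even though every individual comparison is deterministic. The items and the salience rule below are illustrative assumptions, not the talk's actual model.

```python
import numpy as np

def salient_compare(a, b):
    """Compare two items using only their most salient feature,
    taken here to be the coordinate where they differ the most."""
    a, b = np.asarray(a), np.asarray(b)
    salient = np.argmax(np.abs(a - b))
    return "a" if a[salient] > b[salient] else "b"

# Three hypothetical items, each described by three features.
A = [3, 1, 2]
B = [2, 3, 1]
C = [1, 2, 3]

print(salient_compare(A, B))  # "b": B wins on feature 1
print(salient_compare(B, C))  # "b": C wins on feature 2
print(salient_compare(C, A))  # "b": A wins on feature 0 -- a cycle B > A, C > B, A > C
```

No global ranking of A, B, C is consistent with these three outcomes, which is exactly the kind of intransitivity the abstract refers to.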


Explainability is a topic of growing importance in NLP. In this work, we provide a unified perspective of explainability as a communication problem between an explainer and a layperson about a classifier’s decision. We use this framework to compare several prior approaches for extracting explanations, including gradient methods, representation erasure, and attention mechanisms, in terms of their communication success. In addition, we reinterpret these methods...


Whereas physical obstacles in telecommunications were traditionally associated mostly with signal attenuation, their presence in 5G's millimeter-wave systems introduces complex, non-linear phenomena, including reflections and scattering. The result is a multipath propagation environment, shaped by the obstacles encountered during transmission, which implies a strong and highly non-linear relationship between a device's received radiation and its position. In this presentation, new ways to shape these signals will...


In order to make decisions, for instance when purchasing a product, people rely on rich and accurate descriptions, which entail multi-label retrieval processes. However, multi-label classification is challenged by high-dimensional and complex feature spaces and by its dependency on large and accurately annotated datasets. Deep learning approaches have brought a definite breakthrough in performance across numerous machine learning problems, and image classification was, undoubtedly, one of...


Attention mechanisms have become ubiquitous in NLP. Recent architectures, notably the Transformer, learn powerful context-aware word representations through layered, multi-headed attention. The multiple heads learn diverse types of word relationships. However, with standard softmax attention, all attention heads are dense, assigning a non-zero weight to all context words. In this work, we introduce the adaptively sparse Transformer, wherein attention heads have flexible, context-dependent sparsity patterns....
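The density/sparsity contrast above can be illustrated with sparsemax (Martins & Astudillo, 2016), a sparse alternative to softmax and a special case of the entmax family that the adaptively sparse Transformer builds on. The scores below are made up; the point is only that softmax keeps every weight strictly positive while sparsemax can assign exact zeros.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of a score vector z onto the probability
    simplex; unlike softmax, the result can contain exact zeros."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]          # scores in decreasing order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum  # which entries stay in the support
    k_max = k[support][-1]
    tau = (cumsum[k_max - 1] - 1) / k_max  # threshold shifting scores down
    return np.maximum(z - tau, 0.0)

scores = np.array([2.0, 1.0, 0.1, -1.0])
softmax = np.exp(scores) / np.exp(scores).sum()
print(softmax)            # every context word keeps non-zero weight
print(sparsemax(scores))  # [1. 0. 0. 0.] -- low-scoring words pruned exactly
```

With this particular gap between the top score and the rest, sparsemax concentrates all probability mass on a single word, whereas softmax spreads mass over all four.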


Duplicate detection is concerned with identifying pairs of attributes/records that refer to the same real-world object, and is thus a fundamental process for ensuring data quality in databases. Existing methods to detect duplicate attributes can leverage heuristic string similarity measures based on characters or short character sequences, phonetic encoding techniques that match strings based on the way they sound, or hybrid techniques that combine different approaches. However,...
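A minimal sketch of the two families of techniques named above, using Python's standard library: a character-sequence similarity via `difflib`, and Soundex as one classic phonetic encoding (the abstract does not commit to either; the example names and thresholds are illustrative).

```python
from difflib import SequenceMatcher

def char_similarity(a, b):
    """Heuristic character-sequence similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def soundex(name):
    """Classic Soundex phonetic code: initial letter plus three digits."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    encoded, last = name[0].upper(), codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != last:   # skip vowels and repeated codes
            encoded += code
        if ch not in "hw":          # 'h'/'w' do not separate equal codes
            last = code
    return (encoded + "000")[:4]

# "Smith" and "Smyth" differ character-wise but sound identical.
print(char_similarity("Smith", "Smyth"))   # 0.8
print(soundex("Smith"), soundex("Smyth"))  # S530 S530 -> phonetic match
```

Hybrid techniques would then combine such signals, e.g. flagging a pair as a duplicate when either the character similarity exceeds a threshold or the phonetic codes agree.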


Artificial neural networks, one of the most successful approaches to supervised learning, were originally inspired by their biological counterparts. However, the most successful learning algorithm for artificial neural networks, backpropagation, is considered biologically implausible. Many believe that the next generation of artificial neural networks should be built upon a better understanding of biological learning. So, for decades, neuroscience and machine learning communities have been trying...


In the past few years, deep generative models, such as generative adversarial networks, variational autoencoders, and their variants, have seen wide adoption for the task of modelling complex data distributions. In spite of the outstanding sample quality achieved by those methods, they model the target distributions implicitly, in the sense that the probability density functions induced by them are not explicitly accessible. This fact renders...


Visual attention mechanisms are widely used in multimodal tasks, such as image captioning and visual question answering (VQA), with the softmax attention mechanism being the standard choice. One drawback of softmax-based attention mechanisms is that they assign probability mass to all image regions, regardless of their adjacency structure and of their relevance to the text. To better link the image structure with the text, we replace the...
