Wednesday, January 17, 2018

My Story with Polar Codes Comes to an End

I had long been wondering whether polar codes are fractal, found out in 2015 that they are, and put a paper on arXiv. The story that began in 2010 finally comes to an end -- the paper is now published!

I submitted the paper to the IEEE Transactions on Information Theory, where it was under review for 18 months. I was told that the two reviewers thought the paper collects a few "interesting and cute mathematical observations", but that it does not tell us anything about the relevant code properties such as error probabilities or code complexity. I was further told that if I could "add a convincing application", then the paper could get to a second review round.

I withdrew the paper and submitted it to Entropy, an open-access journal published by MDPI, in October 2017. I received reviews within less than two months. Four of the five reviewers were inclined to accept the paper after some revision, while the fifth reviewer insisted on rejecting the paper (and probably still does) because of its missing connection to real-world codes. It is worth mentioning that the review process was not only much faster than the one at the IEEE Transactions on Information Theory, but that I also got high-quality reviews: Some of the reviewers went through the math in detail and made corrections and suggestions for improvement that led to generalizing some of the results. This shows that the need for a proper, high-quality review process cannot be used to justify long review cycles.

Friday, January 12, 2018

Reading Notes: Neural Networks and the Information Bottleneck Principle

There is a lot of recent hype regarding the connection between neural networks (NNs) and the information bottleneck principle. In this post, I try to summarize a few papers that have been published since 2015. A popsci treatment of this topic can be found here. All linked papers are available on arXiv or on OpenReview.

The reader should be familiar with the information bottleneck (IB) principle. Specifically, it assumes that we have a class random variable (RV) $Y$ and a (multi-dimensional) feature RV $X$, and that our goal is to compress $X$ to a "less complex" RV $\hat{Y}$ that shares much information with $Y$ but little with $X$ (compression versus preserving relevant information). Mathematically, one tries to minimize
$$
  I(X;\hat{Y}) - \beta I(Y;\hat{Y}) \text{ or equivalently } I(X;\hat{Y}) + \beta I(X;Y|\hat{Y})
$$
where $\beta$ is a parameter trading off these two goals. The first term in either expression is often called the rate $R$, while the second term in the latter expression is the information distortion $D_{IB}$. This allows one to draw the IB curve with $R$ on one axis and $D_{IB}$ on the other. This curve depends only on the joint distribution of $(X,Y)$ and is monotonically decreasing.
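To make these two terms concrete, here is a minimal numerical sketch (the joint distribution and the compressor $p(\hat{y}|x)$ are made-up toy values, and the function names are mine) that evaluates the rate $R = I(X;\hat{Y})$ and the distortion $D_{IB} = I(X;Y|\hat{Y})$ using the Markov chain $\hat{Y} - X - Y$:

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in nats for a joint distribution given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

# Toy joint distribution of (X, Y) and a stochastic compressor p(yhat|x).
p_xy = np.array([[0.25, 0.05],
                 [0.05, 0.25],
                 [0.20, 0.20]])
p_yhat_given_x = np.array([[1.0, 0.0],   # rows: x, columns: yhat
                           [0.0, 1.0],
                           [0.5, 0.5]])

# Joint of (X, Yhat): p(x, yhat) = p(x) p(yhat|x).
p_x = p_xy.sum(axis=1)
p_x_yhat = p_x[:, None] * p_yhat_given_x

# Joint of (Y, Yhat), using the Markov chain Yhat - X - Y.
p_y_yhat = p_xy.T @ p_yhat_given_x       # p(y, yhat) = sum_x p(x,y) p(yhat|x)

rate = mutual_information(p_x_yhat)                # R = I(X; Yhat)
relevance = mutual_information(p_y_yhat)           # I(Y; Yhat)
distortion = mutual_information(p_xy) - relevance  # D_IB = I(X; Y | Yhat)

beta = 2.0
ib_objective = rate - beta * relevance
```

Since $I(X;Y|\hat{Y}) = I(X;Y) - I(Y;\hat{Y})$ under this Markov chain and $I(X;Y)$ is fixed by the data, the two forms of the functional differ only by a constant, which is why they are equivalent.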

2015 - Tishby and Zaslavsky, "Deep Learning and the Information Bottleneck Principle". How it all started.

The authors claim that the goal of NNs is to compress the features as much as possible, with the constraint that the compression should be a minimal sufficient statistic of the class RV. The IB principle formulates this goal, hence they claim that it can be used to better understand NNs. Indeed, feed-forward NNs are a sequence of compression functions; the feature RV $X$ is compressed to the output of the first hidden layer $h_1$, which is then compressed to the output of the second hidden layer $h_2$, etc. Finally, the output of the last hidden layer, $h_m$, is compressed to $\hat{Y}$. The authors claim that the IB functional not only characterizes the NN as a whole, but each layer separately. I.e., each layer should minimize $I(h_i;h_{i-1})$ while simultaneously maximizing $I(Y;h_i)$. The former has an interpretation in terms of the minimum description length principle, while the latter bounds the classification error. With this in mind, one can evaluate the optimality of each layer separately by comparing its performance with the IB curve. Each layer can only increase $D_{IB}$, but the hope is that $R$ is reduced by compression.
The authors then discuss finite sample bounds, which are relevant in estimating the IB curve from a finite data set. Specifically, it turns out that the empirical IB curve is bounded from below by the decreasing true IB curve, but from above by a convex function. I.e., the actual value of $D_{IB}$ need not decrease with increasing rate $R$. This is exactly the phenomenon of overfitting: A large $R$ may lead to a small $D_{IB}$ on the training set, but to a much larger $D_{IB}$ on the test set. It may thus be beneficial to aim at obtaining a representation close to the minimum of the convex upper bound. There, the rate $R$ is smaller and leads to a slightly larger $D_{IB}$ on the training set, but the worst-case increase of $D_{IB}$ for the test set is bounded. The authors argue that compression is thus necessary for generalization. Based on that, they argue that the IB principle may be used for the design of NNs. Finally, the authors briefly mention that there is a connection between bifurcations of the IB curve corresponding to constraints on $\hat{Y}$ (e.g., cardinality of the support) and the structure of the NN.

Based on this paper, two directions of research have been initiated: Designing NNs using the IB principle, and understanding NNs using the IB principle.

Design of NNs

2016 - Khadivi, Tandon, Ramakrishnan, "Flow of Information in Feed-Forward Deep Neural Networks"

This paper proposes using the IB functional as a cost function for training a NN. Specifically, they decompose the rate $R$, which happens to be $H(\hat{Y})$ because the NN is deterministic, as a telescoping sum of information losses per layer. Moreover, they decompose the information distortion $D_{IB}$ as a sum of terms $I(h_{i-1};Y|h_{i})$; indeed, their equation (19) is precisely my Proposition 4 in this paper. Their goal is to maximize compression with a constrained distortion, as opposed to the variational formulation of Tishby. Strangely, though, their compression goal is to minimize $I(X;\hat{Y})+\sum_{i=1}^m I(X;h_i)$, i.e., they also want to compress the individual layer outputs separately. This leads to a stronger compression factor for early layers, or equivalently, to smaller values of $\beta$ for early layers.
Finally, the authors formulate the training problem not as finding weights, but first finding admissible functions between layers (where admissibility means that the function can be implemented using a NN layer of given structure). They then obtain the weights from the obtained function. This approach is rather strange and hard to implement in practice, I suppose, and that may be the reason why the authors focus on Boolean functions in their example section.
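The claim that the rate reduces to an entropy for a deterministic network is easy to check numerically. Below is a toy sketch (the two discrete "layers" and all names are mine, purely for illustration):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Empirical entropy (nats) of a sequence of discrete outcomes,
    assuming each sample is equally likely."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# A toy deterministic "network" on a discrete feature space.
xs = list(range(8))          # X uniform on {0, ..., 7}
h1 = [x // 2 for x in xs]    # first layer: 8 -> 4 values
yhat = [h % 2 for h in h1]   # second layer: 4 -> 2 values

H_x, H_h1, H_yhat = entropy(xs), entropy(h1), entropy(yhat)
# Each map is deterministic, so I(X; h_i) = H(h_i) - H(h_i | X) = H(h_i),
# and the entropies can only shrink from layer to layer.
```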

2016 - Alemi, Fischer, Dillon, Murphy, "Deep Variational Information Bottleneck"

The authors also propose using IB to train a NN, claiming that it is "natural and useful". However, since computing mutual information is "computationally challenging", they replace the terms in the IB functional by upper and lower bounds. Specifically, they replace the decoding distribution $P_{Y|\hat{Y}}$ by a surrogate in the term $I(\hat{Y};Y)$, and the marginal distribution $P_{\hat{Y}}$ by a surrogate in the term $I(\hat{Y};X)$. For example, they assume that $P_{Y|\hat{Y}}$ is a simple softmax and that $P_{\hat{Y}}$ is a spherical Gaussian. Moreover, they assume that the encoder that maps $X$ to $\hat{Y}$ is stochastic, i.e., it responds with a Gaussian RV for which the mean vector and covariance matrix are obtained from a neural network. This allows one to sample from the distribution of $\hat{Y}$, to compute gradients, and to use backpropagation. Note that this latter assumption is not one on the surrogate $P_{\hat{Y}|X}$, but rather one on the structure of the NN: Inside the NN there is a "bottleneck layer" that responds with $\hat{Y}$ (it may be the last layer), where the layer explicitly adds Gaussian noise.
They discover that the IB principle works as a decent regularizer in the sense of achieving good classification performance on MNIST data. They moreover plot the information plane (the IB curve) and discover that a poor choice of $\beta$, leading to low compression, leads to overfitting. An interesting figure shows a "two-dimensional bottleneck layer", i.e., $\hat{Y}$ has two components. For small compression $\mathbb{R}^2$ is clearly separated into ten sectors corresponding to the 10 classes of the MNIST data set, but at larger compression these regions tend to overlap, leading to misclassification but also to robustness to adversarial examples. For image classification tasks it was shown that the IB principle leads to NNs that are more robust to adversarial examples.
It is worth mentioning that the authors apply the IB principle to the NN as a whole, i.e., not to individual layers as Tishby suggested. They leave applying it to multiple (or all) layers for future work.
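A minimal numpy sketch may clarify the construction of the resulting loss; the toy batch and all names below are mine, and the softmax decoder and spherical Gaussian prior stand in for the surrogates mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

def vib_loss(mu, log_var, W_dec, labels, beta, rng):
    """One-sample Monte Carlo estimate of the variational IB objective:
    decoder cross-entropy plus beta * KL(encoder || N(0, I))."""
    # Reparameterized sample from the stochastic bottleneck layer.
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    # Softmax decoder (a stand-in for the surrogate P_{Y|Yhat}).
    logits = z @ W_dec
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    cross_entropy = -log_probs[np.arange(len(labels)), labels].mean()
    # Closed-form KL between N(mu, diag(sigma^2)) and the spherical prior N(0, I).
    kl = 0.5 * (mu**2 + np.exp(log_var) - 1.0 - log_var).sum(axis=1).mean()
    return cross_entropy + beta * kl, cross_entropy, kl

# Toy batch: 4 samples, 2-D bottleneck, 3 classes.
mu = rng.standard_normal((4, 2))
log_var = rng.standard_normal((4, 2)) * 0.1
W_dec = rng.standard_normal((2, 3))
labels = np.array([0, 1, 2, 0])
loss, ce, kl = vib_loss(mu, log_var, W_dec, labels, beta=1e-3, rng=rng)
```

In a real implementation, `mu`, `log_var`, and `W_dec` would of course be outputs and parameters of trained networks; the point here is only the shape of the objective.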

2017 - Kolchinsky, Tracey, Wolpert, "Nonlinear Information Bottleneck"

Although conceptually similar to the work of Alemi et al., these authors actually aim at implementing IB for non-Gaussian and non-discrete distributions (leading, generally, to a nonlinear compression function). Again, the authors argue that the IB functional is hard to compute and hence resort to bounds by replacing distributions by surrogate ones. Specifically, they assume a stochastic encoder $P_{\hat{Y}|X}$, which is based on a differentiable function (implemented using a neural network) to which spherical Gaussian noise is added. They moreover assume that the marginal distribution at the output of the deterministic neural network can be modeled by a Gaussian mixture. Using this marginal distribution and the conditional distribution, they present a non-parametric bound on the term $I(X;\hat{Y})$ (rather than a variational bound as in the previously summarized paper). They moreover implement the decoding function via a neural network and estimate the log decoding probability $\log P_{Y|\hat{Y}}$ by evaluating an "appropriately chosen neural network cost function" and by sampling from the "stochastic" bottleneck variable $\hat{Y}$. In other words, they use two neural networks to implement the encoding and decoding functions of the IB functional. Very interesting is their Figure 2, which shows how the bottleneck layer manages to separate AND compress the classes, such that data points from the same class almost fall together. If the parameter $\beta$ is set to infinity, then compression is not a goal anymore. The bottleneck layer then still manages to separate the classes, but each class corresponds to a somewhat spread-out cloud of data points. Data points are still distinguishable, which means that the bottleneck layer not only contains class information, but also additional information about the data points that is actually irrelevant.
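If I read their construction correctly, for mixture components that are spherical Gaussians with a common variance, a non-parametric bound on $I(X;\hat{Y})$ reduces to pairwise distances between the encoder means. A hedged sketch (toy data, my names; treat this as an illustration of the idea, not their exact estimator):

```python
import numpy as np

def mi_upper_bound(mu, sigma2):
    """Pairwise-KL upper bound on I(X; Yhat) for a mixture of spherical
    Gaussians with means mu[i] and common variance sigma2 (one mixture
    component per data point, equal weights)."""
    sq_dists = ((mu[:, None, :] - mu[None, :, :])**2).sum(axis=-1)
    kl = sq_dists / (2.0 * sigma2)   # KL between equal-variance spherical Gaussians
    # I(X; Yhat) <= -(1/N) sum_i log (1/N) sum_j exp(-KL_ij)
    n = len(mu)
    return float(-np.mean(np.log(np.exp(-kl).sum(axis=1) / n)))

rng = np.random.default_rng(1)
mu = rng.standard_normal((20, 2))
bound_narrow = mi_upper_bound(mu, sigma2=0.01)  # little noise: high rate
bound_wide = mi_upper_bound(mu, sigma2=10.0)    # much noise: strong compression
```

Larger noise variance (stronger compression) drives the bound towards zero, while a nearly noiseless encoder approaches $\log N$ for $N$ data points.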

2018 - ?, Parametric Information Bottleneck to Optimize Stochastic Neural Networks

This paper on OpenReview trains a neural network using the IB functional in every layer, i.e., in each layer they want to minimize $I(h_i;h_{i-1})$ and maximize $I(h_i;Y)$, with the trade-off determined by a layer-specific parameter $\beta_i$. They do not rely on a NN decoder as Alemi et al. did, but empirically estimate $I(X;h_i)$ and use a variational approximation for $I(Y;h_i)$. For this, they treat the subsequent layers of the NN as an approximation of $P_{Y|h_i}$. They furthermore estimate $I(h_i;h_{i-1})$ by Monte Carlo sampling. They show that their approach, minimizing the IB functional, performs well on the MNIST data set, where all layers had the same parameter $\beta$.

Understanding NNs

2017 - Shwartz-Ziv, Tishby, "Opening the Black Box of Deep Neural Networks via Information"

This paper probably kicked up most of the dust; a recording of a talk of Tishby's on this topic is available. In this paper, the authors train a NN with sigmoid activation functions and without regularization, and evaluate the IB functional for every layer throughout training. They observe that training splits into two phases. The first is a short phase in which the training error drops quickly; it is characterized by a stochastic gradient with a large mean and small variance ("drift phase") and by an increase of $I(h_i; Y)$ (the layers learn). Simultaneously, the terms $I(h_i; X)$ seem to increase, too (because the layers also learn something about the features). The second, much longer phase compresses the input information subject to the training error criterion (i.e., the training error does not change much). In this phase, the gradient has a small mean and a large variance ("diffusion phase"); $I(h_i; Y)$ increases further, while $I(h_i; X)$ decreases in the deeper layers: The deeper layers compress, in the sense that they forget input information not relevant for classification. What is interesting is that, without regularization, neither a decrease in weight norms nor sparsity is observable. Rather, the compression occurs because of the stochasticity of gradient descent. This suggests that, without regularization, there are multiple NNs with (possibly random) weights that have equal classification performance. It therefore suffices to find a NN in this equivalence class; interpreting the output of the sigmoid function as a probability (which is equivalent to adding noise?) leads to choosing a function with low complexity (rather than one which is random and has high complexity; cf. Section 2.4).
The authors furthermore indicate that the converged layers have IB functionals that lie close to the optimal curve for special values of $\beta$, where $\beta$ is smaller for deeper layers (more focus on compression). Furthermore, they observed that the compression phase is faster for NNs with more hidden layers. They also claim that the convergence of the layers to the IB functional can be used in training NNs.
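The information-plane quantities in these experiments are estimated by binning the bounded activations. A minimal sketch of such a plug-in estimator (toy data; the bin counts and names are my choices):

```python
import numpy as np
from collections import Counter

def discrete_mi(a, b):
    """Plug-in mutual information (nats) between two discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum(c / n * np.log(c * n / (pa[x] * pb[y]))
               for (x, y), c in pab.items())

def binned_mi(x_ids, activations, n_bins=30):
    """Estimate I(X; h) by discretizing the (bounded) activations into
    equal-width bins; x_ids are discrete sample identifiers."""
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    binned = np.digitize(activations, edges[1:-1])
    # Each joint bin pattern across the neurons is one discrete symbol.
    h_ids = [hash(tuple(row)) for row in binned]
    return discrete_mi(x_ids, h_ids)

# Toy check: a sigmoid layer with 5 neurons, driven by 100 distinct inputs.
rng = np.random.default_rng(2)
x_ids = list(range(100))
pre = rng.standard_normal((100, 5))
act = 1.0 / (1.0 + np.exp(-pre))   # sigmoid activations
mi_fine = binned_mi(x_ids, act, n_bins=30)
mi_coarse = binned_mi(x_ids, act, n_bins=2)
```

Coarse bins collapse many inputs into the same activation pattern and thus yield smaller estimates, which is exactly the sensitivity that makes the estimator choice contentious.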

2018 - ?, On the Information Bottleneck Theory of Deep Learning

This paper on OpenReview, where a long discussion between Tishby (a reviewer?) and the authors can be found, essentially refutes some of the claims of Tishby's paper. Specifically, they argue that compression in the diffusion phase does not appear for linear and ReLU activation functions. They argue moreover that the compression observed in Tishby's paper is due to the fact that the sigmoid nonlinearity is saturated, leading to small mutual information values when the latter is estimated via binning (to which Tishby responded that the authors don't know how to estimate mutual information). They showed that compression occurs independently of generalization, by showing that 1) a linear network generalizes but does not compress, and 2) an overfitted network compresses but does not generalize. Finally, they showed that similar results are obtained by batch gradient descent instead of stochastic gradient descent, which shows that compression is not achieved by stochasticity. The verdict on which of these papers is closer to the truth is still out. It is worth mentioning, though, that they discuss several problems with information-theoretic cost functions in Appendix C, where they also discuss the importance of noise.

Tuesday, October 31, 2017

Reading Notes: Co-Clustering

In co-clustering we have two sets of related entities, both of which we want to cluster -- the main goal is that the clusters of these two sets preserve the "meaningful" relationships between the original sets. As a running example, we assume that the two sets are a set of words $\mathcal{W}$ and a set of documents $\mathcal{D}$, and that the relation between these two is the number of times a word appears in a document. We furthermore assume that the relationship between these two sets is described by a $\mathcal{W}\times\mathcal{D}$ matrix $A$.
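For the information-theoretic methods below, $A$ is simply normalized to a joint distribution. A small sketch with a made-up count matrix (names are mine):

```python
import numpy as np

# Toy word-document count matrix A (rows: words, columns: documents).
A = np.array([[4, 0, 1],
              [3, 1, 0],
              [0, 5, 2],
              [1, 4, 3]], dtype=float)

P = A / A.sum()                      # joint distribution of (W, D)
p_w = P.sum(axis=1, keepdims=True)   # marginal of W
p_d = P.sum(axis=0, keepdims=True)   # marginal of D

# Mutual information I(W; D), summing only over the support of P.
mask = P > 0
I_WD = float((P[mask] * np.log(P[mask] / (p_w @ p_d)[mask])).sum())
```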

Co-clustering has tight connections to/applications in community detection in bipartite graphs.

Block Models

Block models assume that the entries of $A$ are independently drawn from distributions parameterized by the word and document clusters; for example, if $A$ is binary, then for each co-cluster, a different Bernoulli distribution is assumed, while for $A$ being a contingency table, a Poisson distribution with a parameter depending on the co-cluster is often assumed. Clustering can then be performed by finding a model that maximizes the likelihood of $A$, i.e., by applying a variant of classification EM. Relevant references can be found in the paper "Sparse Poisson Latent Block Model for Document Clustering (2017)" by Ailem et al. and mainly go back to Govaert and Nadif. Ailem et al. assumed that the block model is "block diagonal", i.e., that the words and documents are clustered in the same number of clusters, and they assumed that the off-diagonal blocks are parameterized with the same underlying parameter, leading to fewer parameters to be estimated by the EM-type algorithm.

Information-Theoretic Cost Functions

These cost functions are based on transforming $A$ to a joint probability distribution $P$ of two RVs $W$ and $D$ corresponding to the words and documents, i.e., $W$ has alphabet $\mathcal{W}$ and $D$ has alphabet $\mathcal{D}$. We denote the RVs corresponding to clusters by an overline, i.e., $\overline{W}$ and $\overline{D}$.
  • Not really simultaneous co-clustering, but still co-clustering: In "Document Clustering using Word Clusters via the Information Bottleneck Method (2000)" by Slonim and Tishby, the authors first clustered the words in order to maximize $I(\overline{W};D)$, and then used these word clusters in order to find document clusters such that $I(\overline{W};\overline{D})$ is maximized. They used the agglomerative information bottleneck method in each of these two steps. In "Iterative Double Clustering for Unsupervised and Semi-Supervised Learning (2002)", El-Yaniv and Souroujon improved the method by clustering the words (to clusters $\overline{W}^2$) again, but this time maximizing $I(\overline{W}^2;\overline{D}^1)$, where $\overline{D}^1$ are the document clusters obtained in the first iteration. They then obtained the document clusters by maximizing $I(\overline{D}^2;\overline{W}^2)$. They then repeated this procedure several times until convergence.
  • The work "A divisive information-theoretic feature clustering algorithm for text classification (2003)" by Dhillon et al. is not on co-clustering, but on word clustering in order to make document classification more robust. The goal of this technique is to find word clusters such that the mutual information $I(D;\overline{W})$, the mutual information between the word clusters and the documents, is maximized. The maximization procedure is similar to k-means, i.e., cluster memberships are computed, on the basis of which new cluster centroids are determined.
  • Based on the previous work, Dhillon et al. wrote the influential paper "Information-Theoretic Co-Clustering (2003)". Co-clustering was suggested to yield improved performance compared to one-way clustering, presumably because of an implicit regularization/dimensionality reduction. The goal was to maximize $I(\overline{D};\overline{W})$, i.e., to maximize the information the word clusters share with the document clusters. This is equivalent to approximating $P$ by a non-negative matrix trifactorization $\tilde{P}=A_W Q A_D$, where $Q$ is the joint probability distribution between $\overline{D}$ and $\overline{W}$ (it is hence a $\overline{\mathcal{W}}\times\overline{\mathcal{D}}$ matrix), and where the cost function is the Kullback-Leibler divergence $D(P\Vert \tilde{P})$. The authors then suggested a sequential optimization algorithm, i.e., one starts with a co-clustering, then one computes the block model, based on which all word clusters are updated, which then changes the block model before the document clusters are updated.
  • The paper "Information Bottleneck Co-Clustering (2010)" by Wang et al. follows the spirit of the information bottleneck method for multiple variables, i.e., using multi-information. The multi-information of a collection of RVs, described as a Bayesian network, is the sum of the mutual information between each RV and its parent RVs in this network. The goal is then to minimize the multi-information of the "input graph" (described by $P$ and the clustering functions) while at the same time maximizing the multi-information of the co-clustering to the target variables (described by $Q$ as in the previous section and the mutual information between document clusters and words/word clusters and documents). The cost function thus becomes
    $$ I(W;\overline{W}) + I(D;\overline{D}) + I(W;D) - \beta I(W;\overline{D}) - \beta I(D;\overline{W}) - \beta I(\overline{W};\overline{D}).$$ They minimize this cost function either using an agglomerative technique or by a Blahut-Arimoto-type fixed-point iteration in combination with some heuristic (continuation method) to escape local optima.
  • An information-theoretic cost function was also used by Bekkerman et al. in "Multi-Way Distributional Clustering via Pairwise Interactions (2005)", where they investigated simultaneous clustering of $m\ge 2$ sets (or $m$ RVs). Rather than using multi-information, they depended on a graph structure with $m$ vertices and an edge $e_{ij}$ if they wanted to maximize $I(\tilde{X}_i;\tilde{X}_j)$, i.e., the mutual information between the clusterings of the $i$-th and $j$-th RV. While they essentially proposed a simple sequential technique, they combined it with splits and merges, i.e., they started with an initial clustering solution in which $\tilde{X}_i=X_i$ for some $i$, while $\tilde{X}_j=const.$ for others.
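The trifactorization view of Information-Theoretic Co-Clustering can be checked numerically: for hard cluster assignments, Dhillon et al.'s key lemma states that $D(P\Vert \tilde{P}) = I(W;D) - I(\overline{W};\overline{D})$. A sketch with toy data (the matrix and assignments are made up, and the function name is mine):

```python
import numpy as np

def itcc_objective(P, w_clusters, d_clusters, kw, kd):
    """Build the trifactorization approximation P_tilde for hard cluster
    assignments and return it together with Q and the KL divergence D(P || P_tilde)."""
    Cw = np.eye(kw)[w_clusters]   # one-hot word-cluster indicator
    Cd = np.eye(kd)[d_clusters]   # one-hot document-cluster indicator
    Q = Cw.T @ P @ Cd             # joint distribution of (Wbar, Dbar)
    p_w, p_d = P.sum(axis=1), P.sum(axis=0)
    q_w, q_d = Q.sum(axis=1), Q.sum(axis=0)
    # p(w | wbar) and p(d | dbar): rescale the indicator columns.
    p_w_given = Cw * (p_w[:, None] / q_w[None, :])
    p_d_given = Cd * (p_d[:, None] / q_d[None, :])
    P_tilde = p_w_given @ Q @ p_d_given.T
    mask = P > 0
    kl = float((P[mask] * np.log(P[mask] / P_tilde[mask])).sum())
    return P_tilde, Q, kl

A = np.array([[4, 0, 1], [3, 1, 0], [0, 5, 2], [1, 4, 3]], dtype=float)
P = A / A.sum()
P_tilde, Q, kl = itcc_objective(P, w_clusters=[0, 0, 1, 1],
                                d_clusters=[0, 1, 1], kw=2, kd=2)
```

Minimizing the KL divergence over the assignments is thus the same as maximizing $I(\overline{W};\overline{D})$, which is what the sequential algorithm does.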

Spectral/Modularity-Based Methods

Spectral and modularity-based methods assume that the word-document matrix $A$ can be interpreted as the biadjacency matrix of a bipartite graph. In these techniques, one either tries to minimize some cut score, or one tries to maximize modularity, which is always defined relative to a null model. In these models, one always has $\overline{\mathcal{W}}=\overline{\mathcal{D}}$.
  • In "Co-clustering documents and words using bipartite spectral graph partitioning (2001)", Dhillon suggested co-clustering by cutting the graph, minimizing the normalized cut. Relaxing the cut criterion, for $\overline{\mathcal{W}}=\overline{\mathcal{D}}=2$, one can compute the second singular vectors of a normalization of $A$, stack them, and cluster the elements of this single vector into two clusters using k-means. By unstacking, these two clusters then correspond to the two word clusters and the two document clusters. This technique can be extended to $\overline{\mathcal{W}}=\overline{\mathcal{D}}=k$ by computing $\ell=\lceil\log_2 k\rceil$ singular vectors, stacking them to a matrix $Z$, and performing k-means for $k$ clusters on the $\ell$-dimensional rows of $Z$. There is a MATLAB implementation of this method.
  • The paper "Co-clustering for Binary and Categorical Data with Maximum Modularity (2011)" by Labiod and Nadif uses modularity as an objective function; modularity is maximized by solving the relaxed generalized eigenvalue problem and then clustering the components of the eigenvectors using k-means, just as Dhillon proposed. They focused on binary and categorical data.
  • Ailem et al. proposed modularity as an objective function in "Co-clustering document-term matrices by direct maximization of graph modularity (2015)". They proposed searching for modules in the bipartite graph, by alternatingly maximizing the modularity over word clusters and document clusters, i.e., they first fix the word clusters and maximize modularity over the document clusters, then fix document clusters to find optimal word clusters.
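A compact sketch of Dhillon's spectral method for a toy block-structured word-document matrix (the simple k-means routine and its deterministic farthest-first initialization are my simplifications, not from the paper):

```python
import numpy as np

def kmeans(Z, k, iters=50):
    """Plain Lloyd iterations with farthest-first initialization."""
    centers = [Z[0]]
    while len(centers) < k:
        d = np.min([((Z - c)**2).sum(1) for c in centers], axis=0)
        centers.append(Z[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((Z[:, None, :] - centers[None])**2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(axis=0)
    return labels

def spectral_cocluster(A, k):
    """Normalize A, take l = ceil(log2 k) singular vector pairs (skipping
    the first), stack them, and run k-means on the stacked rows."""
    d1, d2 = A.sum(axis=1), A.sum(axis=0)
    An = A / np.sqrt(np.outer(d1, d2))
    U, s, Vt = np.linalg.svd(An)
    l = int(np.ceil(np.log2(k)))
    Z = np.vstack([U[:, 1:1 + l] / np.sqrt(d1)[:, None],
                   Vt.T[:, 1:1 + l] / np.sqrt(d2)[:, None]])
    labels = kmeans(Z, k)
    return labels[:A.shape[0]], labels[A.shape[0]:]   # word / document clusters

# Two clearly separated word-document blocks, weakly connected.
A = np.array([[5, 4, 1, 0],
              [4, 5, 0, 1],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)
w_lab, d_lab = spectral_cocluster(A, k=2)
```

On this toy matrix, the method should recover words {0,1} with documents {0,1} as one co-cluster and the remaining rows/columns as the other.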

Other Methods

  • Co-clustering can be seen also as a non-negative matrix trifactorization problem; Sra and Dhillon present update rules for both Euclidean distance and Kullback-Leibler divergence in their report "Generalized nonnegative matrix approximations (2006)". The latter cost function is also used in Information-Theoretic Co-Clustering by Dhillon et al.
  • The paper "Co-clustering through optimal transport (2017)" by Laclau et al. formulates the co-clustering problem as the problem to transport the empirical probability mass from the words to the documents, the solution of which is a joint probability distribution $\tilde{P}$ approximating the empirical one. Entropically regularized transport yields again the Kullback-Leibler divergence as cost function, thus minimizing $D(\tilde{P}\Vert P)$, where $P$ is obtained from $A$ (note the difference to Information-Theoretic Co-Clustering), and where $\tilde{P}$ is connected to variational inference. The authors showed that co-clustering can be solved via the Sinkhorn-Knopp algorithm and suggested a heuristic to determine the number of clusters.
  • Not exactly a co-clustering method, but rather a meta-heuristic, was proposed by Cheng et al. in "Co-ClusterD: A Distributed Framework for Data Co-Clustering with Sequential Updates (2015)". They analyzed the "concurrent" update rule of, e.g., Information-Theoretic Co-Clustering of Dhillon et al. and replaced it by a "sequential" update rule: Essentially, they propose updating the statistics for a co-cluster as soon as a single element of a set changes its cluster membership, thus influencing the next cluster update immediately. Concurrent update rules, on the other hand, update the co-cluster statistics only after all words have been reassigned to word clusters (and similarly for document clusters). They then present a framework for performing co-clustering in a distributed fashion.
  • A hierarchical co-clustering method was introduced by Li et al. in "Hierarchical co-clustering: a new way to organize the music data (2012)". Their "divisive" version applies Dhillon's spectral method recursively, while their "agglomerative" version simply merges clusters in order to minimize cluster heterogeneity.
  • Banerjee et al. view co-clustering as a matrix approximation task in "A generalized maximum entropy approach to Bregman co-clustering and matrix approximation (2007)". Essentially, they view the co-clustering as a way to get a summary statistic of $A$ based on the co-clusters, and they proposed several different types of summary statistics (e.g., co-cluster means). They then show that, given $A$ and the type of summary statistic, the optimal summary statistic and the optimal co-clustering is obtained via minimizing any Bregman divergence, such as the squared Euclidean distance or the Kullback-Leibler divergence. Their algorithm is similar to the one proposed by Dhillon et al. for Information-Theoretic Co-Clustering.
  • In "Locally Discriminative Coclustering (2012)", Zhang et al. proposed a cost function mapping a word and a document to the same co-cluster if the corresponding entry in $A$ is large. In addition, they proposed also enforcing co-clusters that respect dependencies between words alone, and between documents alone. They showed that the complete problem can be relaxed to an eigenvalue problem.

Friday, October 6, 2017

Reading Notes: Community Detection in Bipartite Graphs

A bipartite graph consists of two sets $V_a$ and $V_b$ whose union forms the set of vertices, and a set of edges $E\subseteq V_a\times V_b$. The bipartite adjacency matrix $B$ contains a 1 in position $(a,b)$ iff $(a,b)\in E$; the adjacency matrix $A$ of the graph is given by

$$ A = \left[\begin{array}{cc} 0 & B \\ B^T & 0 \end{array}\right].$$
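In code, assembling $A$ from $B$ is a one-liner, e.g. with numpy's block matrices (the toy $B$ below is made up):

```python
import numpy as np

# Toy biadjacency matrix: |V_a| = 2 "left" vertices, |V_b| = 3 "right" vertices.
B = np.array([[1, 0, 1],
              [0, 1, 1]])

A = np.block([[np.zeros((2, 2), dtype=int), B],
              [B.T, np.zeros((3, 3), dtype=int)]])
```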

A classic paper is the one by Freeman, "Finding Social Groups: A Meta-Analysis of the Southern Women Data (2003)", who compared and analyzed 21 approaches to community detection.

Block Models

  • A block model approach has been proposed by Doreian, Batagelj, and Ferligoj in their paper "Generalized Blockmodeling of Two-mode Network Data (2004)". Their approach is based on modeling blocks via specific attributes (all-ones, all-zeros, row-regular, etc.) and then inferring the position of these blocks in $B$. They do so by proposing a sequential heuristic in which they minimize the difference between the ideal block model and the actual adjacency matrix (more details are here). Hence, they require not only to specify the number of communities, but also the type of blocks used in the model. Different choices yield different solutions, as they illustrate for the Southern Women dataset. Their approach can also be used for weighted graphs.
Stochastic block models assume that the number of edges between two nodes is drawn from some probability model, and that the probabilities are different for edges within blocks and between blocks. Usually, the block probabilities are estimated, and the blocks are inferred by maximizing the (log-)likelihood of the graph given the block model.
  • The paper "Efficiently inferring community structure in bipartite networks (2014)" from Larremore, Clauset, and Jacobs proposes a block model in which the number of edges are assumed to be drawn from a Poisson distribution, admitting different Poisson parameters for every block and every pair of blocks. They introduced a second model which admits modelling bipartite graphs with broad degree distributions. The authors suggested a sequential algorithm with a heuristic to escape local minima. The algorithm is implemented for MATLAB and R. The algorithm seems to work also for weighted graphs, but requires the number of blocks (i.e., communities) as input.

Spectral/Modularity-based Methods

Modularity is a measure of how well a graph can be split into modules. It is based on a null model (typically one in which the edges are randomized such that the vertex degrees remain constant) and the comparison of a given partition of the network to a partition of this null model. Modularity is usually maximized by spectral methods. Although these methods seem to work also for weighted graphs, the derivation of the null model is only justified for unweighted graphs (i.e., for $B$ consisting of 0s and 1s).
  • Barber proposed a modularity measure for bipartite networks in "Modularity and Community Detection in Bipartite Networks (2007)". He proposed optimizing this measure recursively: Fixing the row partition, the column partition can be obtained easily, and vice-versa. He hence alternately optimizes these two partitions, calling his algorithm BRIM. Barber also suggested a bisection approach to determine the number of communities. A fundamental drawback is that his method requires the same number of clusters for both rows and columns (see eq. (21) in the paper). The cost function has been used in a MATLAB implementation, which is described in this paper.
  • A modularity measure was also proposed by Guimera, Sales-Pardo, and Nunes Amaral in "Module Identification in Bipartite and Directed Networks (2007)". They optimize their cost function via simulated annealing, but show that their results do not differ from modularity-based methods applied to $B^TB$. 
  • Dormann and Strauss proposed a simulated-annealing Monte Carlo method for maximizing modularity in "A method for detecting modules in quantitative bipartite networks (2014)". They measure modularity using Barber's definition. Their algorithm, QuanBiMo, is based on fitting a tree to a graph, a method proposed here. An R package is available.

Projection Methods

Projection methods usually work on $B^TB$ (weighted projection) or on $\min\{1,B^TB\}$ (unweighted projection, with the minimum taken entrywise) rather than on $A$. In that sense, they work on less information than is available in the original model, but admit using community detection methods for unipartite graphs. If $B$ is binary, $B^TB$ and $BB^T$ contain all "relevant" information about $A$ (thus, $B$), hence a dual-projection approach could potentially work as well as an approach targeted at $A$. This, at least, is argued by Everett and Borgatti in "The dual-projection approach for two-mode networks (2013)".
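A small sketch of the two projections (toy biadjacency matrix; removing the diagonal self-loops at the end is a common convention, not something prescribed here):

```python
import numpy as np

# Toy biadjacency matrix (3 words x 3 documents).
B = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])

weighted = B.T @ B                        # weighted projection onto the document side
unweighted = (weighted > 0).astype(int)   # entries capped at 1
np.fill_diagonal(weighted, 0)             # drop self-loops
np.fill_diagonal(unweighted, 0)
```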

Other Techniques

Friday, September 15, 2017

Invariant Distribution of a Higher-Order Markov Chain (Pt. 2)

Some time ago I was looking into the uniqueness of the invariant distribution of a second- (or higher-)order Markov chain, and I was running into problems. After a few weeks of work, a manuscript was ready in which I found some useful sufficient conditions for this invariant distribution to be unique. The manuscript is available on arXiv and was published in Statistics and Probability Letters. You can access the full text of the manuscript for free until October 1st, 2017 (afterwards, the arXiv version remains available, of course). I'm happy to get feedback, so please don't hold back :)!

Tuesday, April 11, 2017

An Inequality between Conditional Entropy and Error Probability

Suppose that $X$ is a random variable that can assume $N$ different values, and suppose that $Y$ is another random variable that is linked to $X$ in an arbitrary way. We want to estimate $X$ from observing $Y$, i.e., we define a (possibly random) function $\hat{X}(Y)$. (We may assume that $X$ is a signal we want to estimate and that $Y$ is a noisy or distorted observation of $X$.) We define the detection probability as $P_d:=\mathbb{P}(X=\hat{X}(Y))$, where the probability is taken w.r.t. the joint distribution $p_{X,Y}$ of $X$ and $Y$. A famous inequality relating $P_d$ to the conditional entropy of $X$ given $Y$ is Fano's inequality:
$$
 H(X|Y) \le h_2(P_d) + (1-P_d)\log (N-1)
$$
where $h_2(p):=-p\log p-(1-p)\log(1-p)$ is the binary entropy function. (Since $h_2$ is symmetric about $1/2$, this is the usual statement of Fano's inequality in terms of the error probability $P_e=1-P_d$.) Fano's inequality depends on the alphabet size $N$ of $X$. In what follows we derive a different inequality that is independent of $N$.

To this end, suppose that we use the maximum a posteriori (MAP) estimate of $X$ given $Y$, i.e.,
$$
  \hat{x}(y) = \arg\max_x p_{X,Y}(x,y) = \arg\max_x p_{X|Y}(x|y).
$$
Given that $Y=y$, we can thus define
$$
P_d(y) = \sum_{x} p_{X|Y}(x|y) \mathbb{I}(x=\hat{x}(y)) = \max_x p_{X|Y}(x|y)
$$
from which $P_d=\sum_y p_Y(y) P_d(y)$ follows. Comparing the right-hand side with Rényi's entropy of order infinity, we observe that
$$
 P_d(y) = 2^{-H_\infty(X|Y=y)}.
$$
Rényi entropy is non-increasing in its order; since Shannon entropy is the Rényi entropy of order 1, we have $H_\infty(X|Y=y)\le H(X|Y=y)$. Hence, $P_d(y)\ge 2^{-H(X|Y=y)}$. The function $2^{-x}$ is convex in $x$. Hence, Jensen's inequality yields
$$
 P_d = \sum_y p_Y(y) P_d(y) \ge \sum_y p_Y(y) 2^{-H(X|Y=y)} \ge 2^{- \sum_y p_Y(y) H(X|Y=y)} = 2^{-H(X|Y)}.
$$
Thus, the MAP detection probability is bounded from below by a decreasing function of the conditional entropy. The bound does not depend (explicitly) on the alphabet size of $X$.
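Both Fano's inequality and the bound just derived are easy to check numerically. The following sketch (my own illustration, not part of the derivation) draws random joint distributions, computes the MAP detection probability and $H(X|Y)$ in bits, and verifies the two inequalities:

```python
import numpy as np

def map_stats(p):
    """MAP detection probability P_d and H(X|Y) in bits for a joint pmf p[x, y]."""
    p_y = p.sum(axis=0)                  # marginal distribution of Y
    P_d = p.max(axis=0).sum()            # sum_y p(y) * max_x p(x|y)
    p_x_given_y = p / p_y                # conditional pmfs; columns sum to 1
    H = -(p * np.log2(p_x_given_y)).sum()
    return P_d, H

rng = np.random.default_rng(0)
N = 4                                    # alphabet size of X
for _ in range(1000):
    p = rng.random((N, 3)) + 1e-9        # strictly positive entries
    p /= p.sum()                         # random joint distribution p_{X,Y}
    P_d, H = map_stats(p)
    h2 = -(P_d * np.log2(P_d) + (1 - P_d) * np.log2(1 - P_d))
    assert P_d >= 2.0 ** (-H) - 1e-12                     # the bound derived above
    assert H <= h2 + (1 - P_d) * np.log2(N - 1) + 1e-12   # Fano's inequality
```

For the uniform joint distribution on a $2\times 2$ alphabet, the bound holds with equality: $P_d = 0.5 = 2^{-H(X|Y)}$.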

Monday, January 9, 2017

New Features for WhatsApp & Co?

This was originally intended as a submission to the Science Park Graz idea contest (hence it was written in German) - but then I thought that the idea is not innovative enough, even though I would love to see those features in WhatsApp or Hangouts. Who knows, maybe Google is reading this ;).


Two Use Cases Where WhatsApp, Facebook & Co. Fail

Use Case 1: Sharing a Weekend Trip with Friends

You are spending the weekend in Munich and want to send your friends back home a few photos of Marienplatz, the Allianz Arena, and the Olympiaturm. You could share the photos on Facebook, Instagram, or Google+. Those of your friends who rarely use these social networks might miss the photos because their news feeds have been flooded with other updates in the meantime. You could send your photos to your friends via WhatsApp - perhaps a suitable group even exists already. But on the one hand, your friends are bothered with multiple notifications, and on the other hand, they may feel obliged to react to your photos with comments. The evening at the Hofbräuhaus quickly turns into a messaging marathon.

Use Case 2: Sending Christmas Greetings and a Year in Review to Acquaintances

You want to wish distant acquaintances (former colleagues, distant family members, etc.) happy holidays and let them know what happened in your life over the past year. What you have to say exceeds the scope of the average WhatsApp message, and Facebook and Google+ are too public for your taste - moreover, there is a high risk that your acquaintances never even receive the message, for the reasons given in the previous use case.

Detailed Description

To cover these use cases, the app offers several features that go beyond the specifications of currently available messaging apps. For clarity, the novel features (such as message types, group roles, and group settings) are marked in bold.

Presentation

The presentation resembles WhatsApp or the subscriptions view of YouTube: the start screen lists the groups you are a member of and shows, next to each group name, the number of new messages in that group. Tapping a group name takes you to the group chat, which again resembles WhatsApp. One essential difference is two new message types (stories and eReader-style text messages, see below), which are shown in the group chat only in compact form as a headline. In addition, every message has a footer showing the number of likes and comments (see below). The app also has a web interface, which is particularly helpful for composing longer messages.

Group Roles

Besides the group administrator and the group member, which also exist in WhatsApp and other messaging groups, there are also readers, i.e., a purely passive group role.

  • Group administrator: Sets the group settings; invites members to the group and assigns group roles to them
  • Group member: Shares messages with other group members; comments on and likes messages of other group members
  • Reader: Passive participation; may read, comment on, and like messages (where permitted), but cannot share new messages

Message Types

Besides speech-bubble-style text messages, the app naturally also allows sharing photos, videos, documents, and internet links. A special feature are eReader-style text messages. This message type is designed for long texts and makes reading comfortable by turning pages instead of scrolling. There are also stories, i.e., messages that are written over a longer period of time and enriched with photos or videos. A group member writes and sends a story like an ordinary message, but it does not reach the other group participants immediately. Only once the story has been marked as finished do all accumulated messages appear in the group chat.
There are three ways to react to a message: first, with a new message -- just like in WhatsApp. Second and third, messages can be commented on and liked -- just like in social networks such as Facebook or Google+.

Group Settings

The group settings are defined by the group administrator. Among other things, they govern:

  • Which message types may be used in the group?
  • Are there minimum sizes for stories and eReader-style text messages?
  • Which message types trigger push notifications (unless a group member has disabled push notifications for this group)?
  • How may members react to messages? Are comments allowed? Is liking possible?

Individual Settings

Furthermore, there are settings that every user can configure individually. These include, among other things, disabling push notifications and, of course, the font size of eReader-style text messages. In addition, every user can decide who may see comments on a message: all members of a group, or only the author of the message. The order of the group list is also customizable: groups can be sorted by the number or recency of unread messages, by the user's role in each group, or by a user-defined priority. Users can further decide how they want to be added to groups: added without confirmation, added with confirmation via message, or added through personal contact (e.g., via NFC transfer or by photographing a QR code on the group administrator's screen). These settings can be configured separately for each group role the user is to be assigned and for selected inviting persons. For example, I allow trusted contacts to add me as a reader to any group in which they are the group administrator; as a group administrator, I only want to be added when I meet other administrators of the same group in person.

Back to the Use Cases

Use Case 1: Sharing a Weekend Trip with Friends

As group administrator, you create a group with your friends as group members. You allow only the message type stories and forbid comments - liking is allowed. Push notifications are enabled.
Over the course of the weekend, you send several photos to this group. On Sunday evening, your friends receive a notification that you have shared a story. They can view and like the photos, but the group settings forbid comments. Nobody is obliged to reply to you -- anyone who wants to will surely find a way (e.g., a direct message or an SMS).

Use Case 2: Sending Christmas Greetings and a Year in Review to Acquaintances

As group administrator, you create a group with your acquaintances as readers. You disable push notifications but allow comments.
You compose your message in the web interface and send it to the group as an eReader-style text message. The next time your readers open the app, they see a red dot next to the group name on the main screen. When they open the message, they do not see a scrollable message but an eReader-style view: touching the screen turns the page. Since you enabled the option, your acquaintances can comment on your message. They can decide for themselves whether all readers or only you can see a comment.