Graph Neural Networks

Hierarchical Representation Learning in Graph Neural Networks with Node Decimation Pooling.
Filippo Maria Bianchi, Daniele Grattarola, Lorenzo Livi, Cesare Alippi.
Preprint (2019).
We propose Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarsened versions of a graph by leveraging its topology only. During training, the GNN learns new representations for the vertices and fits them to a pyramid of coarsened graphs, which is computed in a preprocessing step. As theoretical contributions, we first demonstrate the equivalence between the MAXCUT partition and the node decimation procedure on which NDP is based. Then, we propose a procedure to sparsify the coarsened graphs in order to reduce the computational complexity of the GNN; we also demonstrate that many edges can be dropped without significantly altering the spectra of the coarsened graphs.
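The MAXCUT connection can be illustrated with a minimal NumPy sketch: split the nodes by the sign of the Laplacian eigenvector associated with the largest eigenvalue, keep one side, and reconnect the survivors. The toy graph is ours, and the simple two-hop reconnection below stands in for the Kron reduction used in the actual method.

```python
import numpy as np

# Toy adjacency: two triangles joined by one bridge edge (6 nodes).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

# Laplacian L = D - A; splitting nodes by the sign of the eigenvector
# of the largest eigenvalue approximates a MAXCUT partition.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)
v_max = eigvecs[:, -1]                # eigenvector of the largest eigenvalue

keep = np.where(v_max >= 0)[0]        # one side of the cut survives...
drop = np.where(v_max < 0)[0]         # ...the other side is decimated

# Coarsened graph on the kept nodes (two-hop reconnection, a crude
# stand-in for Kron reduction).
A2 = ((A + A @ A)[np.ix_(keep, keep)] > 0).astype(float)
np.fill_diagonal(A2, 0.0)
```

Repeating the decimate-and-reduce step yields the pyramid of coarsened graphs mentioned above.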

Distance-Preserving Graph Embeddings from Random Neural Features.
Daniele Zambon, Cesare Alippi, Lorenzo Livi.
Preprint (2019).
We present Graph Random Neural Features (GRNF), a novel embedding method from graph-structured data to real vectors, based on a family of graph neural networks. The embedding naturally deals with graph isomorphism and preserves, in probability, the metric structure of the graph domain. Besides being an explicit embedding method, it also allows one to efficiently and effectively approximate graph metric distances (as well as complete kernel functions); a criterion to select the embedding dimension, trading off approximation accuracy against computational cost, is also provided.
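The core idea can be sketched in a few lines of NumPy: a randomly-weighted (untrained) message-passing network maps each graph to a fixed-dimensional vector, and Euclidean distances between embeddings then serve as a proxy for graph distances. The architecture and shapes below are illustrative assumptions, not the paper's exact family of networks.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64                                  # embedding dimension
W1 = rng.normal(size=(1, M))            # random, untrained weights
W2 = rng.normal(size=(M, M)) / np.sqrt(M)

def grnf_embed(A):
    """Map a graph (adjacency A, constant node attributes) to R^M
    with a fixed randomly-weighted two-layer message-passing net."""
    n = A.shape[0]
    X = np.ones((n, 1))                       # trivial node attributes
    H = np.tanh((A + np.eye(n)) @ X @ W1)     # propagate + nonlinearity
    H = np.tanh((A + np.eye(n)) @ H @ W2)
    return H.mean(axis=0)                     # permutation-invariant readout

path = np.diag(np.ones(4), 1); path += path.T   # 5-node path graph
clique = np.ones((5, 5)) - np.eye(5)            # 5-node complete graph

# Isomorphic graphs map to the same vector...
perm = rng.permutation(5)
z_path = grnf_embed(path)
z_perm = grnf_embed(path[np.ix_(perm, perm)])
# ...while structurally different graphs land far apart.
d_far = np.linalg.norm(z_path - grnf_embed(clique))
```

The mean readout makes the map invariant to node relabeling, which is how the embedding "naturally deals with graph isomorphism".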

MinCUT pooling in Graph Neural Networks.
Filippo Maria Bianchi, Daniele Grattarola, Cesare Alippi.
Preprint (2019).
We propose a pooling operation for GNNs that leverages a differentiable unsupervised loss based on the minCUT optimization objective. For each node, our method learns a soft cluster assignment vector that depends on the node features, the target inference task (e.g., graph classification), and, thanks to the minCUT objective, also on the graph connectivity.
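As we recall them, the unsupervised objective combines a normalized-cut term with an orthogonality term that rules out degenerate assignments; the NumPy sketch below (toy graph and cluster assignments are ours) shows why both are needed.

```python
import numpy as np

def mincut_losses(A, S):
    """The two unsupervised terms of the minCUT pooling objective:
    a cut term (minimized, bounded below by -1) and an orthogonality
    term that penalizes degenerate cluster assignments."""
    D = np.diag(A.sum(axis=1))
    cut = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)
    K = S.shape[1]
    SS = S.T @ S
    ortho = np.linalg.norm(SS / np.linalg.norm(SS) - np.eye(K) / np.sqrt(K))
    return cut, ortho

# Two disconnected triangles and the "right" 2-cluster assignment.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
S_good = np.repeat(np.eye(2), 3, axis=0)   # nodes 0-2 -> cluster 0, 3-5 -> 1
S_bad = np.tile([1.0, 0.0], (6, 1))        # degenerate: everything in cluster 0

cut_good, ortho_good = mincut_losses(A, S_good)
cut_bad, ortho_bad = mincut_losses(A, S_bad)
```

Both assignments reach the optimal cut value of -1 here, but only the degenerate one pays a large orthogonality penalty: the second term is what forces balanced, informative clusters.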

Graph Neural Networks with Convolutional ARMA Filters.
Filippo Maria Bianchi, Daniele Grattarola, Lorenzo Livi, Cesare Alippi.
Preprint (2019).
We propose a novel graph convolutional layer based on autoregressive moving average (ARMA) filters that, compared to polynomial filters, provide a more flexible response thanks to a rich transfer function that accounts for the concept of state. We implement the ARMA filter with a recursive and distributed formulation, obtaining a convolutional layer that is efficient to train, is localized in the node space, and can be applied to graphs with different topologies.
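The recursive formulation can be sketched as a fixed-point iteration in NumPy; a single ARMA stack with random (untrained) weights and hypothetical shapes is shown, whereas the actual layer trains the weights and averages several parallel stacks.

```python
import numpy as np

rng = np.random.default_rng(1)

def arma_stack(A, X, T=4, hidden=8):
    """One recursive ARMA stack: iterating a propagation step whose
    fixed point implements a rational (ARMA) filter response rather
    than a polynomial one. Weights are random for illustration."""
    n, f = X.shape
    W = rng.normal(size=(hidden, hidden)) * 0.1
    V = rng.normal(size=(f, hidden)) * 0.1
    d = A.sum(axis=1)
    M = A / np.sqrt(np.outer(d, d))     # symmetrically normalized adjacency
    Xbar = X @ V                         # initial state
    for _ in range(T):                   # recursive, distributed update
        Xbar = np.tanh(M @ Xbar @ W + X @ V)
    return Xbar

A = np.ones((5, 5)) - np.eye(5)          # toy graph: 5-node clique
X = rng.normal(size=(5, 3))              # node features
out = arma_stack(A, X)
```

Each iteration only exchanges information between neighbors, which is what keeps the layer localized in the node space.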

Learning in nonstationary environments

Change-Point Methods on a Sequence of Graphs.
Daniele Zambon, Cesare Alippi, Lorenzo Livi.
IEEE Transactions on Signal Processing (2019).
Given a finite sequence of graphs, we propose a methodology to identify possible changes in stationarity in the stochastic process that generated such graphs. We consider a general family of attributed graphs for which both the topology (vertices and edges) and the associated attributes are allowed to change over time without violating the stationarity hypothesis. Novel Change-Point Methods (CPMs) are proposed that map graphs onto vectors, apply a suitable statistical test in the vector space, and detect changes, if any, according to a user-defined confidence level; an estimate for the change point is provided as well. We ground our methods on theoretical results that show how the inference in the numerical vector space relates to the one in the graph domain, and vice versa.
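The map-then-test pipeline can be sketched as follows; the degree-sequence embedding and the standardized mean-shift scan below are simple stand-ins for the embeddings and calibrated tests developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def embed(A):
    """Toy graph-to-vector map: the sorted degree sequence."""
    return np.sort(A.sum(axis=1))

def random_graph(n, p):
    U = rng.random((n, n)) < p
    A = np.triu(U, 1).astype(float)
    return A + A.T

# Sequence of graphs whose edge density changes at t = 30.
graphs = [random_graph(12, 0.2) for _ in range(30)] + \
         [random_graph(12, 0.6) for _ in range(30)]
Y = np.array([embed(A) for A in graphs])

def cpm(Y):
    """Scan candidate split points and return the one with the largest
    standardized mean shift between the two sides."""
    T, stats = len(Y), []
    for t in range(5, T - 5):
        a, b = Y[:t], Y[t:]
        diff = a.mean(axis=0) - b.mean(axis=0)
        pooled = a.var(axis=0) / len(a) + b.var(axis=0) / len(b) + 1e-12
        stats.append(np.sum(diff ** 2 / pooled))
    return int(np.argmax(stats)) + 5

t_hat = cpm(Y)
```

Once the graphs live in a vector space, any conventional multivariate change-detection procedure can play the role of `cpm`.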

Deep Learning for Time Series Forecasting: The Electric Load Case.
Alberto Gasparin, Slobodan Lukovic, Cesare Alippi.
Preprint (2019).
We review the most recent trends in electric load forecasting and experimentally evaluate them on two real-world datasets, contrasting deep learning architectures on short-term forecasting (one-day-ahead prediction). Specifically, we focus on feed-forward and recurrent neural networks, sequence-to-sequence models, and temporal convolutional neural networks, along with architectural variants.

Autoregressive Models for Sequences of Graphs.
Daniele Zambon, Daniele Grattarola, Lorenzo Livi, Cesare Alippi.
International Joint Conference on Neural Networks (2019).
This paper proposes an autoregressive (AR) model for sequences of graphs, which generalizes traditional AR models. A first novelty consists in formalizing the AR model for a very general family of graphs, characterized by a variable topology, and attributes associated with nodes and edges. A graph neural network is also proposed to learn the AR function associated with the graph-generating process, and subsequently predict the next graph in a sequence.

Change Detection in Graph Streams by Learning Graph Embeddings on Constant-Curvature Manifolds.
Daniele Grattarola, Daniele Zambon, Lorenzo Livi, Cesare Alippi.
IEEE Transactions on Neural Networks and Learning Systems (2019).
We focus on the problem of detecting changes in stationarity in a stream of attributed graphs. To this end, we introduce a novel change detection framework based on neural networks and constant-curvature manifolds (CCMs) that takes into account the non-Euclidean nature of graphs. Our contribution is twofold. First, via a novel approach based on adversarial learning, we compute graph embeddings by training an autoencoder to represent graphs on CCMs. Second, we introduce two novel change detection tests operating on CCMs.

Anomaly and Change Detection in Graph Streams through Constant-Curvature Manifold Embeddings.
Daniele Zambon, Lorenzo Livi, Cesare Alippi.
IEEE International Joint Conference on Neural Networks (2018).
We investigate how embedding graphs on constant-curvature manifolds (hyperspherical and hyperbolic manifolds) impacts the ability to detect changes in sequences of attributed graphs. The proposed methodology consists in embedding graphs into a geometric space and performing change detection there by means of conventional methods for numerical streams.

Concept Drift and Anomaly Detection in Graph Streams.
Daniele Zambon, Cesare Alippi, Lorenzo Livi.
IEEE Transactions on Neural Networks and Learning Systems (2018).
We consider stochastic processes generating graphs and propose a methodology for detecting changes in stationarity of such processes. The methodology acts by embedding every graph of the stream into a vector domain, where a conventional multivariate change detection procedure can be easily applied. We ground the soundness of our proposal by proving several theoretical results.

Detecting Changes in Sequences of Attributed Graphs.
Daniele Zambon, Lorenzo Livi, Cesare Alippi.
IEEE Symposium Series on Computational Intelligence (2017).
We consider a methodology for detecting changes in sequences of graphs. Changes are recognized by embedding each graph into a vector space, where conventional change detection procedures exist and can be easily applied. We introduce the methodology and focus on expanding experimental evaluations on controlled yet relevant examples involving geometric graphs and Markov chains.

Dynamics of RNNs

Echo State Networks with Self-Normalizing Activations on the Hyper-Sphere.
Pietro Verzelli, Cesare Alippi, Lorenzo Livi.
Scientific Reports (2019).
We propose a model of echo state networks that eliminates the critical dependence on hyperparameters, resulting in networks that provably cannot enter a chaotic regime and, at the same time, exhibit nonlinear behavior in phase space, characterized by a large memory of past inputs comparable to that of linear networks. Our contribution is supported by experiments corroborating our theoretical findings, showing that the proposed model displays dynamics that are rich enough to approximate many common nonlinear systems used for benchmarking.
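The self-normalization idea can be sketched as a standard ESN update followed by a projection onto the unit hypersphere; sizes and the input signal below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
W = rng.normal(size=(N, N)) / np.sqrt(N)   # recurrent weights
w_in = rng.normal(size=N)                  # input weights

def step(x, u):
    """ESN update followed by projection onto the unit hypersphere:
    the self-normalization that bounds the state and keeps the
    dynamics away from chaos regardless of the scaling of W."""
    x_new = np.tanh(W @ x + w_in * u)
    return x_new / np.linalg.norm(x_new)

x = rng.normal(size=N)
x /= np.linalg.norm(x)                     # start on the sphere
for t in range(50):
    x = step(x, np.sin(0.1 * t))           # drive with a toy input
```

Because the state always lies on the sphere, no hyperparameter tuning of the spectral radius is needed to keep the trajectory bounded.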

A Characterization of the Edge of Criticality in Binary Echo State Networks.
Pietro Verzelli, Lorenzo Livi, Cesare Alippi.
IEEE International Workshop on Machine Learning for Signal Processing (2018).
We propose binary echo state networks (ESNs), which are architecturally equivalent to standard ESNs but consider binary activation functions and binary recurrent weights. For these networks, we derive a closed-form expression for the edge of criticality (EoC) in the autonomous case and perform simulations in order to assess their behavior in the case of noisy neurons and in the presence of a signal. We propose a theoretical explanation for the fact that the variance of the input plays a major role in characterizing the EoC.
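A binary ESN is easy to simulate; the toy below (not the paper's closed-form EoC analysis) flips one neuron and tracks how the disagreement between two trajectories evolves, a crude numerical probe of the ordered versus chaotic regimes. Sparsity level and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
W = rng.choice([-1.0, 1.0], size=(N, N))   # binary recurrent weights
W *= rng.random((N, N)) < 0.05             # sparse connectivity

def step(x, u, sigma=1.0):
    """Binary ESN update: sign activations over binary recurrent
    weights, driven by a scalar input of standard deviation sigma."""
    return np.sign(W @ x + sigma * u * np.ones(N))

# Perturbation analysis: identical networks, one flipped neuron.
x_a = rng.choice([-1.0, 1.0], size=N)
x_b = x_a.copy()
x_b[0] *= -1
for t in range(20):
    u = rng.normal()
    x_a, x_b = step(x_a, u), step(x_b, u)
hamming = np.mean(x_a != x_b)              # fraction of disagreeing neurons
```

Sweeping `sigma` and watching whether `hamming` dies out or spreads gives a numerical picture of why the input variance matters for the EoC.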

Others

Adversarial Autoencoders with Constant-Curvature Latent Manifolds.
Daniele Grattarola, Lorenzo Livi, Cesare Alippi.
Applied Soft Computing (2019).
We introduce the constant-curvature manifold adversarial autoencoder (CCM-AAE), a probabilistic generative model trained to represent a data distribution on a constant-curvature Riemannian manifold (CCM). Our method works by matching the aggregated posterior of the CCM-AAE with a probability distribution defined on a CCM, so that the encoder implicitly learns to represent data on the CCM in order to fool the discriminator network. The geometric constraint is also explicitly imposed by jointly training the CCM-AAE to maximize the membership degree of the embeddings to the CCM.