NEWS

The hyperbolic brain

2021/06/01  |  academic

Recent advances in network science include the discovery that complex networks have a hidden geometry and that this geometry is hyperbolic. Studying complex networks through the lens of their effective hyperbolic geometry has led to valuable insights into the organization of a variety of complex systems, ranging from the Internet to the metabolism of E. coli and humans. This methodology can also be used to infer high-quality maps of connectomes, in which brain regions are given coordinates in hyperbolic space such that the closer two regions are, the more likely they are to be connected. Although Euclidean space is typically assumed to be the natural geometry of the brain, distances in hyperbolic space offer a more accurate interpretation of the structure of connectomes, suggesting a new perspective for mapping the organization of the brain’s neuroanatomical regions.
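
To make the notion of hyperbolic closeness concrete, the sketch below (our own illustration, not code from the papers) computes the commonly used large-separation approximation to the hyperbolic distance between two nodes of the disk, together with a Fermi–Dirac-like connection probability of the kind used in hidden-metric-space models. The names `R` (disk radius) and `T` (temperature) are assumed model parameters.

```python
import math

def hyperbolic_distance(r1, theta1, r2, theta2):
    """Approximate hyperbolic distance between two points of the hyperbolic
    disk, x ~ r1 + r2 + 2*ln(dtheta/2); a good approximation when the
    angular separation dtheta is not vanishingly small."""
    dtheta = math.pi - abs(math.pi - abs(theta1 - theta2))  # separation along the circle
    return r1 + r2 + 2.0 * math.log(dtheta / 2.0)

def connection_probability(x, R, T):
    """Fermi-Dirac-like connection probability: pairs closer than the
    disk radius R are likely to be connected; T tunes clustering."""
    return 1.0 / (1.0 + math.exp((x - R) / (2.0 * T)))
```

With these two functions, nodes that are angularly close (similar) or radially central (popular) end up at small hyperbolic distance and hence connect with high probability, which is the intuition behind the connectome maps described above.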

In humans, this approach has been able to explain the multiscale organization of connectomes in healthy subjects at five anatomical resolutions. Zoomed-out connectivity maps remain self-similar, and our geometric network model, in which distances are hyperbolic rather than Euclidean, predicts the observations through a renormalization protocol. Our results show that the same principles organize brain connectivity at different scales and lead to efficient decentralized communication.

If you want to see more:

Geometric renormalization unravels self-similarity of the multiscale human connectome

PNAS 117, 20244-20253 (2020)

Navigable maps of structural brain networks across species

PLoS Computational Biology 16 e1007584 (2020)


2019/12/05  |  academic

Mercator is a new embedding tool to create maps of networks in the hyperbolic plane. Download it now from GitHub or access our NETS2MAPS platform!

Mercator is a reliable embedding method to map real complex networks into their hyperbolic latent geometry. The method assumes that the structure of networks is well described by the popularity×similarity S1/H2 static geometric network model, which can accommodate arbitrary degree distributions and reproduces many pivotal properties of real networks, including self-similarity patterns. The algorithm mixes machine learning and maximum likelihood approaches to infer the coordinates of the nodes in the underlying hyperbolic disk with the best match between the observed network topology and the geometric model. In its fast mode, Mercator uses a model-adjusted machine learning technique that performs dimensional reduction to produce a fast and accurate map, whose quality already outperforms that of other embedding algorithms in the literature. In the refined mode, the fast-mode embedding is taken as the initial condition of a maximum likelihood estimation, which significantly improves the quality of the final embedding. Apart from its accuracy as an embedding tool, Mercator has the clear advantage of systematically inferring not only node orderings, or angular positions, but also the hidden degrees and global model parameters, and it can embed networks with arbitrary degree distributions. Overall, our results suggest that mixing machine learning and maximum likelihood techniques in a model-dependent framework can boost the meaningful mapping of complex networks.
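
As a minimal sketch of the model family Mercator assumes, the S1 model connects two nodes with probability p = 1/(1 + χ^β), where χ = d/(μ κ κ'), d is the arc distance between their angular positions on a circle of radius N/2π, κ and κ' are hidden degrees, β controls clustering, and μ fixes the mean degree. The code below is our own illustration under those standard definitions; variable names are not Mercator's.

```python
import math

def s1_connection_probability(theta_i, theta_j, kappa_i, kappa_j, N, beta, mu):
    """Connection probability of the S1 geometric model: nodes that are
    angularly close (similar) and/or have large hidden degrees (popular)
    connect with high probability."""
    R = N / (2.0 * math.pi)                          # radius of the similarity circle
    dtheta = math.pi - abs(math.pi - abs(theta_i - theta_j))
    d = R * dtheta                                   # arc-length (similarity) distance
    if d == 0.0:
        return 1.0                                   # coincident nodes always connect
    chi = d / (mu * kappa_i * kappa_j)
    return 1.0 / (1.0 + chi ** beta)
```

An embedding tool like Mercator inverts this logic: given an observed network, it searches for the coordinates and hidden degrees that make the observed links most probable under this model.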


Renormalizing complex networks

2021/05/25  |  academic

Scaling up real networks by geometric branching growth

PNAS 118 (21), e2018994118 (2021)

Branching processes underpin the complex evolution of many real systems. However, network models typically describe network growth as a sequential addition of nodes. Here, we measured the evolution of real networks (journal citations and international trade) over a 100-year period and found that they grow in a self-similar way that preserves their structural features over time. This observation can be explained by a geometric branching growth model that generates a multiscale unfolding of the network by combining branching growth with a hidden metric space approach. Our model enables multiple practical applications, including the detection of the optimal network size for maximal response to an external influence and a finite-size scaling analysis of critical behavior.

Multiscale unfolding of real networks by geometric renormalization

Nature Physics doi:10.1038/s41567-018-0072-5 (2018)

Symmetries in physical theories denote invariance under some transformation, such as self-similarity under a change of scale. The renormalization group provides a powerful framework to study these symmetries, leading to a better understanding of the universal properties of phase transitions. However, the small-world property of complex networks complicates application of the renormalization group by introducing correlations between coexisting scales. Here, we provide a framework for the investigation of complex networks at different resolutions. The approach is based on geometric representations, which have been shown to sustain network navigability and to reveal the mechanisms that govern network structure and evolution. We define a geometric renormalization group for networks by embedding them into an underlying hidden metric space. We find that real scale-free networks show geometric scaling under this renormalization group transformation. We unfold the networks into a self-similar multilayer shell that distinguishes the coexisting scales and their interactions. This in turn offers a basis for exploring critical phenomena and universality in complex networks. It also affords immediate practical applications, including high-fidelity smaller-scale replicas of large networks and a multiscale navigation protocol in hyperbolic space, which outperforms navigation on single layers.
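
The coarse-graining step of such a geometric renormalization can be sketched as follows: nodes, sorted by their angular (similarity) coordinate, are grouped into consecutive blocks of r members, and two supernodes are linked whenever any pair of their constituents was linked. This is an illustrative reimplementation of that idea, not the authors' code.

```python
def renormalize(angles, edges, r=2):
    """One geometric renormalization step (sketch): group the nodes into
    consecutive blocks of r along the similarity circle and link two
    supernodes if any of their members were linked in the original graph.
    Returns the number of supernodes and the coarse-grained edge list."""
    order = sorted(range(len(angles)), key=lambda i: angles[i])
    block = {node: idx // r for idx, node in enumerate(order)}   # node -> supernode
    super_edges = {(min(block[u], block[v]), max(block[u], block[v]))
                   for u, v in edges if block[u] != block[v]}
    n_blocks = (len(angles) + r - 1) // r
    return n_blocks, sorted(super_edges)
```

Iterating this transformation produces the multilayer shell of progressively zoomed-out replicas described in the abstract; for a self-similar network, each layer has statistically the same structure as the original.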

Postdoctoral Position in Mapping Big Data Systems - CLOSED

2018/06/21  |  academic

Postdoctoral Position in Mapping Big Data Systems  

Applications are invited for a Postdoctoral fellowship at the University of Barcelona to work with Prof. M. Ángeles Serrano and Prof. Marián Boguñá at the Department of Condensed Matter Physics and UB Institute of Complex Systems (UBICS). The position is funded by a "Fundación BBVA" grant.

This is a postdoctoral position with a salary of €29,700 per year (before taxes) for one year, renewable until June 2020. There will be additional support for conference travel.


The discovery and understanding of the hidden geometry of complex networks have become fundamental problems within Network Science, giving rise to the field of Network Geometry. The research will focus on combining machine learning and statistical techniques for the efficient embedding of very large networks in hidden metric spaces to produce meaningful maps. We will create an online platform to host an atlas of network maps, where scientists and other interested people can produce their own.

Essential Skills  

  • A PhD in Physics, Computer Science, Computer/Electronic Engineering, or other relevant discipline is expected.
  • Excellent software development skills
  • Expertise in complex networks
  • Excellent communication skills, verbal and written (English) 

Informal queries about this position should be sent to  

Application process  

Interested applicants are requested to submit a curriculum vitae including relevant publications and the names and contact details of two referees. Additionally, a one-page cover letter explaining your interest in this specific position may be included.

Submissions should be sent by email with subject “MBDS Application” to
Interviews will be carried out as soon as suitable candidates are identified.
The successful candidate is expected to start in September 2018, or earlier if possible.

New paper in New Journal of Physics

2015/06/16  |  academic

Escaping the avalanche collapse in self-similar multiplexes

We deduce and discuss the implications of self-similarity for the robustness to failure of multiplexes, depending on interlayer degree correlations. First, we define self-similarity of multiplexes and illustrate the concept in practice using the configuration model ensemble. Circumscribing robustness to the survival of the mutually percolated state, we find a new explanation based on self-similarity both for the observed fragility of interconnected systems of networks and for their robustness to failure when interlayer degree correlations are present. Extending the self-similarity arguments, we show that interlayer degree correlations can completely change the global connectivity properties of self-similar multiplexes, so that they can even recover a zero percolation threshold and a continuous transition in the thermodynamic limit, thus qualitatively exhibiting the ordinary percolation properties of noninteracting networks. We confirm these results with numerical simulations.
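
For a two-layer multiplex, the mutually percolated state mentioned above is captured by the largest mutually connected component, which can be computed by iteratively restricting each layer to the giant component of the other until a fixed point is reached. The sketch below is a plain-Python illustration under that standard definition; function names are ours.

```python
def largest_mcc(nodes, edges_a, edges_b):
    """Largest mutually connected component of a two-layer multiplex
    (sketch): repeatedly keep only the giant component of each layer,
    restricted to the surviving nodes, until the node set stops changing."""
    def giant(active, edges):
        # Build the adjacency of the subgraph induced by the active nodes.
        adj = {n: [] for n in active}
        for u, v in edges:
            if u in adj and v in adj:
                adj[u].append(v)
                adj[v].append(u)
        # Breadth-first search for the largest connected component.
        best, seen = set(), set()
        for start in adj:
            if start in seen:
                continue
            comp, stack = {start}, [start]
            while stack:
                n = stack.pop()
                for m in adj[n]:
                    if m not in comp:
                        comp.add(m)
                        stack.append(m)
            seen |= comp
            if len(comp) > len(best):
                best = comp
        return best

    active = set(nodes)
    while True:
        new = giant(giant(active, edges_a), edges_b)
        if new == active:
            return active
        active = new
```

The fragility of interdependent networks shows up here directly: a node survives only if it remains connected through both layers simultaneously, so damage in one layer can cascade through the other.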