
The authors have declared that no competing interests exist.

The flow of information reaching us via online media platforms is optimized not by the information content or relevance but by popularity and proximity to the target. This is typically done to maximise platform usage. As a side effect, it introduces an algorithmic bias that is believed to enhance fragmentation and polarization of the societal debate. To study this phenomenon, we modify the well-known continuous opinion dynamics model of bounded confidence in order to account for the algorithmic bias, and investigate its consequences. In the simplest version of the original model the pairs of discussion participants are chosen at random and their opinions get closer to each other if they are within a fixed tolerance level. We modify the selection rule of the discussion partners: there is an enhanced probability to choose individuals whose opinions are already close to each other, thus mimicking the behavior of online media which suggest interaction with similar peers. As a result we observe: (a) an increased tendency towards opinion fragmentation, which emerges also in conditions where the original model would predict consensus; (b) increased polarization of opinions; and (c) a dramatic slowing down of the convergence to the asymptotic state, which makes the system highly unstable. Fragmentation and polarization are further augmented by a fragmented initial population.

Political polarization and opinion fragmentation are generally observed, worsening trends in modern western societies [

A considerable and rapidly increasing part of the population does not use traditional media (printed press, radio, TV or even online journals) for obtaining news [

Another important factor pointing in the same direction is related to the “share” function, which is largely responsible for the fast spreading of news and thus enhancing popularity. This spreading takes place on the social network, where links are formed mostly as a consequence of homophily, i.e., exchange of information takes place between people with similar views. The variety of interactions of a person (family, school, work, hobby, etc.) may contribute to diversification of information sources [

The link between opinion fragmentation and polarization, and algorithmic bias from online platforms has not been proven to date, with conflicting conclusions from different studies [

Network effects are obviously important in the spreading of news and opinions; however, in the present study we ignore the network structure of the system and focus exclusively on the consequences of the algorithmic bias in selecting the content presented to the users. This corresponds to a mean field approach, which is widely used as a first approximation to spreading problems [

The task is therefore to model the evolution of the distribution of individual attitudes in society, provided there is a bias in selecting the partners whose opinions are confronted. Recent years have seen the introduction of several models of opinion dynamics [

The paper is organized as follows. In the next section we introduce the model in detail. Section Results contains the results of the simulations. We close the paper with a discussion and an account of further research.

The original bounded confidence model [ considers a population of N individuals, where each individual i holds a continuous opinion x_{i} ∈ [0, 1]. This opinion can be considered the degree to which an individual agrees or not with a certain position. Individuals are connected by a complete social network, and interact pairwise at discrete time steps. The interacting pair (i, j) is chosen at random, and the opinions x_{i} and x_{j} may change, depending on a so called bounded confidence parameter ε.

If we define the distance between two opinions x_{i} and x_{j} as d_{ij} = |x_{i} − x_{j}|, then information is exchanged between the two individuals only if d_{ij} ≤ ε. In that case the two opinions move symmetrically towards each other: x_{i} ← x_{i} + μ(x_{j} − x_{i}) and x_{j} ← x_{j} + μ(x_{i} − x_{j}), where μ ∈ (0, 0.5] is the convergence parameter.

In the following we consider only the case of μ = 0.5, for which the two interacting opinions converge to their common average.

To introduce algorithmic bias in the interaction between individuals, we modify the procedure by which the pair (i, j) is selected. The first individual i is chosen uniformly at random, while the second is chosen with a probability that decreases with the opinion distance d_{ij}:

p_{i}(j) = d_{ij}^{−γ} / Σ_{k ≠ i} d_{ik}^{−γ}

In this way, the probability to select j is larger when d_{ij} is smaller, i.e. alike individuals interact more. The parameter γ ≥ 0 controls the strength of the algorithmic bias: γ = 0 recovers the original model with uniformly random partner selection, while larger values of γ make interactions between like-minded individuals increasingly likely.

In the following we will analyse the new model numerically, through simulation of the opinion formation process. To avoid undefined operations in the selection probability when d_{ik} = 0 (identical opinions x_{i}, x_{k}), we introduce a lower cutoff d_{ε}. So, if d_{ik} < d_{ε} then d_{ik} is replaced with d_{ε} in the computation of p_{i}(k); we use d_{ε} = 0.0001 in the following. Our simulations are designed to stop when the population converges to a stable cluster configuration. To achieve this, we analyse the population at every iteration, i.e. every N pairwise interactions.
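As a concrete illustration, the biased pair selection and bounded-confidence update can be sketched as follows. This is a minimal sketch, not the authors' code: function and variable names are ours, and a real implementation would test for convergence of the cluster configuration rather than run a fixed number of steps.

```python
import numpy as np

def simulate(N=500, eps=0.3, gamma=1.0, mu=0.5, d_eps=1e-4,
             n_steps=100_000, seed=0):
    """Sketch of the biased bounded-confidence dynamics (names are ours)."""
    rng = np.random.default_rng(seed)
    x = rng.random(N)                            # uniform initial opinions in [0, 1]
    for _ in range(n_steps):
        i = rng.integers(N)                      # first partner: uniform at random
        d = np.maximum(np.abs(x - x[i]), d_eps)  # regularize zero distances with d_eps
        w = d ** (-gamma)                        # algorithmic bias: closer -> likelier
        w[i] = 0.0                               # exclude self-interaction
        j = rng.choice(N, p=w / w.sum())         # second partner: biased selection
        if abs(x[i] - x[j]) <= eps:              # bounded confidence check
            shift = mu * (x[j] - x[i])
            x[i], x[j] = x[i] + shift, x[j] - shift  # symmetric update
    return x
```

With gamma = 0 the selection weights are uniform and the original model is recovered.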

In order to understand how the introduction of the algorithmic bias affects model behaviour, we study the model under multiple criteria for various combinations of the parameters ε and γ.

The behaviour of the original bounded confidence model is determined by the parameter ε: large values lead to consensus, while smaller values produce an increasing number of opinion clusters.

In the following we analyse fragmentation by studying the number of clusters obtained for our extended model, starting from a uniform initial distribution of opinions, for a population of size N. Since simulations may produce, alongside a few large clusters, several clusters of negligible size, we quantify fragmentation through the effective number of clusters C = (Σ_{i} c_{i})² / Σ_{i} c_{i}², where c_{i} is the size of cluster i; this measure discounts very small clusters.

For

The plot shows that our simulations perfectly reproduce the behaviour of the original model (

We also study polarization of the population due to algorithmic bias, showing in

Values are averaged over 100 runs.

While the asymptotic number of opinion clusters is important, the time needed to reach these clusters is equally so. In a real setting, available time is finite, so if consensus forms only after a very long period, it may never actually emerge in the real population. Thus, we measure the time needed for convergence (to either one or more opinion clusters) in our extended model, computed as the total number of pairwise interactions required to obtain a stable configuration, divided by the population size N.

Total number of interactions required for convergence normalized by the number of individuals, averaged over 100 runs.

Another observation emerging from

One may argue, however, that measuring the time as the total number of interactions may inflate the figures. Each interaction can have three outcomes: (1) nothing happens because a pair of individuals with identical opinions (x_{i} = x_{j}) was selected; (2) nothing happens because of bounded confidence (d_{ij} > ε); (3) the two opinions change.
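The three outcomes can be told apart with a trivial classifier (a sketch; the labels are ours):

```python
def classify_interaction(xi, xj, eps):
    """Sort one pairwise interaction into the three outcomes above."""
    if xi == xj:
        return "identical"      # outcome (1): nothing can change
    if abs(xi - xj) > eps:
        return "out-of-bound"   # outcome (2): bounded confidence blocks it
    return "active"             # outcome (3): both opinions move
```

Counting only "active" interactions gives a stricter measure of convergence time than the raw interaction count.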

a power law (x^{σ}), where σ is an effective exponent extracted from the data.

Normalized total, non-null difference and active number of interactions required for convergence for ^{3.4}) is shown as a visual aid only.

This observed slowdown can be explained by considering the time evolution of the sum of pairwise distances between individuals in the population, D = Σ_{i<j} d_{ij}, which acts as a termination function. At each interaction within the bounded confidence limit, D decreases by an amount that grows with d_{ij}. A larger algorithmic bias means on average a smaller d_{ij} at each interaction, hence reaching the minimum of D requires a larger number of iterations.

To prove that D cannot increase, consider an interaction between i and j with d_{ij} ≤ ε and μ = 0.5, so that both opinions move to their midpoint. Three kinds of terms in D are affected: (1) the distance d_{ij} itself, (2) the distances between i and any third individual k, (3) the distances between j and the same k. For (1), d_{ij} becomes 0, hence it decreases. For (2) and (3), note that after the interaction d_{ik}(t+1) = d_{jk}(t+1) = |(x_{i}(t) + x_{j}(t))/2 − x_{k}(t)|, so that d_{ik}(t+1) + d_{jk}(t+1) = 2 |(x_{i}(t) + x_{j}(t))/2 − x_{k}(t)| ≤ |x_{i}(t) − x_{k}(t)| + |x_{j}(t) − x_{k}(t)| = d_{ik}(t) + d_{jk}(t), by the triangle inequality. Hence the joint contribution of (2) and (3) does not increase either, and D is non-increasing over time.
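The triangle-inequality argument can be checked numerically; the snippet below is a sanity check of ours (not a proof), with arbitrary parameter values:

```python
import numpy as np

def total_pairwise_distance(x):
    """D = sum over pairs i < j of |x_i - x_j|."""
    x = np.asarray(x)
    return np.abs(x[:, None] - x[None, :]).sum() / 2.0

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.random(20)
    i, j = rng.choice(20, size=2, replace=False)
    if abs(x[i] - x[j]) <= 0.3:              # within the confidence bound
        before = total_pairwise_distance(x)
        x[i] = x[j] = 0.5 * (x[i] + x[j])    # mu = 0.5: move to the midpoint
        assert total_pairwise_distance(x) <= before + 1e-12
```

Note that the averaging step never increases D for any pair, the confidence bound only decides whether the step happens at all.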

To support the points above, we show in

The first row corresponds to the case where

A third analysis that we performed aimed at understanding whether the size of the population plays a role in the effect of the algorithmic bias. Again, this is important for realistic scenarios, since opinion formation may happen both at small and at large scale. Hence we look at the transition between consensus and two opinion clusters for variable population sizes, both for the original and for the extended model.

In the original model, the transition between

Effective number of clusters obtained for various

To analyse the effect of the population size when

Effective number of clusters obtained for various

Mean opinion distance obtained for various

Previous results were obtained assuming uniformly random initial opinions in the population. In reality, however, opinion formation may start from slightly fragmented initial conditions. To simulate this, we artificially introduced a symmetric gap around the opinion value 0.5, representing a population where opinions are already forming. The width of the gap was varied in order to understand the effect of different initial fragmentation levels, both for the original bounded confidence model (
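Under our reading of this construction (uniform opinions on [0, 1] excluding a symmetric interval centred on 0.5), the gapped initial condition can be generated as follows; the function name and rng handling are ours:

```python
import numpy as np

def initial_opinions_with_gap(N, gap, rng=None):
    """Uniform opinions on [0, 0.5 - gap/2] U [0.5 + gap/2, 1]."""
    if rng is None:
        rng = np.random.default_rng()
    half = 0.5 - gap / 2.0                 # length of each admissible side
    u = rng.random(N) * 2.0 * half         # total admissible length is 2*half
    return np.where(u < half, u, u + gap)  # shift the upper part past the gap
```

Setting gap = 0 recovers the uniformly random initial condition used above.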

Effect of the initial condition on the effective number of clusters (averages over 100 runs).

Similar behaviour can be observed when analysing the average opinion distance, displayed in

Effect of the initial condition on the average opinion distance (averages over 100 runs).

In terms of time for convergence,

Effect of the initial condition on the convergence time measured in number of active interactions (averages over 100 runs).

Hence, again, the two effects appear to work together against reaching consensus, either by favoring the appearance of additional clusters or by slowing down consensus when this could, in principle, emerge.

A model of algorithmic bias in the framework of bounded confidence was presented, and its behavior analyzed. Algorithmic bias is a mechanism that encourages interaction among like-minded individuals, similar to patterns observed in real social network data. We found that, for this model, algorithmic bias hinders consensus and favors opinion fragmentation and polarization through different mechanisms. On one hand, consensus is hindered by a very strong slowdown of convergence, so that even when one cluster is asymptotically obtained, the time to reach it is so long that in practice consensus will never appear. Additionally, we observed fragmentation of the population as the bias grows stronger, with the number of clusters obtained increasing compared to the original model. At the same time, the average opinion distance also grew in the population, indicating emergence of polarization. A fragmented initial condition also enhances the fragmentation and polarization, augmenting the effect of the algorithmic bias. Additionally, we observed that small populations may be less resilient to fragmentation and polarization, due to finite size effects.

The results presented here are based on the mean field bounded confidence model, and may be influenced by this choice. A first assumption is that bounded confidence exists, i.e. individuals with very distant opinions do not exchange information and hence do not influence each other. However, our conclusion that algorithmic bias hinders consensus still stands even when bounded confidence is removed from the model (i.e. ε = 1).

Third, it would be interesting to see how taking into account a more realistic social network structure among individuals, instead of a complete graph where anybody may interact with anybody else, would impact the opinion formation process, possibly exacerbating the effects observed in this study. The Deffuant model is known to be affected by other topologies such as scale free networks when the networks do not contain a large number of edges [

A fourth assumption of our model is in the dynamics of the interaction. Negative interactions can also be important in the dynamics [

A concept related to algorithmic bias is homophily, i.e. the tendency of people to build friendships with similar others, which is visible in the way the social network is built. This is one factor that could facilitate interactions with like-minded individuals, similar to our model. A recent study shows that homophily enhances consensus in the Deffuant model [

Although there is evidence that many types of social interactions are subject to algorithmic bias, the debate continues on whether or not this generates opinion polarization in the long term. Our numerical results support the view that it does, which we plan to analyse in more detail in the future by applying our model to real data from social network processes. Recent work on how to counteract opinion polarization on social networks has also appeared [

We thank Guillaume Deffuant for useful discussions on finite size effects in the bounded confidence model. We also thank the IT Center of the University of Pisa (Centro Interdipartimentale di Servizi e Ricerca) for providing access to computing resources for simulations.