March 10, 2025
This paper introduces a simulator designed for opinion dynamics researchers to model competing influences within social networks in the presence of LLM-based agents. By integrating established opinion dynamics principles with state-of-the-art LLMs, this tool enables the study of influence propagation and counter-misinformation strategies. The simulator is particularly valuable for researchers in social science, psychology, and operations research, allowing them to analyse societal phenomena without requiring extensive coding expertise. Additionally, the simulator will be openly available on GitHub, ensuring accessibility and adaptability for those who wish to extend its capabilities for their own research.
Large Language Models (LLMs) are becoming ubiquitous, often shaping discourse in ways we barely notice. But what happens when the entire public opinion space is influenced by, or even outsourced to, AI-driven agents [1]? While LLMs have been extensively studied in isolation, their behaviour within dynamic social networks, interacting alongside humans, remains an open and critical research frontier. Understanding how these AI-enabled agents shape influence, polarisation, and consensus in evolving networks is key to anticipating the societal impacts of this technological shift [2], [3], [4], [5].
Understanding how people adjust their opinions under social influence is the foundation of opinion dynamics research [6], [7], with wide-ranging implications in fields such as public health initiatives, conflict resolution, and misinformation mitigation. Opinions spread and evolve within social networks, often driven by factors such as peer influence [8], media exposure [9], and group dynamics [10]. Accurate models of these processes are considered critical not only for forecasting trends such as opinion polarisation [11] or consensus formation, but also for designing targeted interventions to counteract harmful effects, such as the spread of misinformation or societal divides [12]. Agent-based models (ABMs) simulate interactions among individual agents (a proxy for humans) to explore the emergent properties of opinion propagation. They provide powerful frameworks for investigating complex scenarios [13], [14] and for testing strategies to mitigate negative outcomes and foster constructive social influence, for example by incorporating explicit assumptions about cognitive processes into opinion updating.
Understanding how LLMs behave in multi-agent social interactions is crucial for advancing AI applications [15], [16]. LLMs in autonomous systems offer opportunities to revolutionise decision-making by simulating fairness, reciprocity, and competition in social contexts [17]. Their behaviour could influence resource allocation, conflict resolution, and interaction strategies. Unlike traditional agent-based models with predefined rules, LLMs can exhibit more flexible, human-like behaviours, enhancing realism in simulations for policy evaluation. These capabilities make them valuable for designing AI systems that better mimic human social dynamics [18], improving both their practical application and the insights they provide into complex, real-world decision-making processes.
This paper introduces a simulator to model influence and counter-influence in a wargame setting. Wargames, originally developed for military strategy, have evolved into powerful tools for decision-making across various domains. Today, they are used to model business strategies, assess cybersecurity threats, and simulate geopolitical conflicts. Governments and corporations employ wargames to anticipate economic shifts, supply chain disruptions, and the impact of emerging technologies. In healthcare, they help model pandemic responses, testing different policy interventions before real-world implementation. AI-driven wargames further enhance scenario analysis, enabling rapid adaptation to complex environments. By fostering strategic thinking and resilience, modern wargaming serves as a critical tool for navigating uncertainty in an increasingly interconnected world.
The simulator can facilitate studies of how artificial intelligence, specifically LLMs, can emulate human-like opinion dynamics and influence propagation in a social network. Traditional approaches to modelling opinion dynamics often rely on simplified rules that may not capture some of the communicative strategies and adaptive behaviours seen in human interactions. The specific problem tackled by this work is the challenge of understanding the interplay between misinformation and counter-misinformation in shaping public opinion. By introducing adversarial LLM-based agents, for instance one agent spreading misinformation and another countering it, the simulator provides a more realistic framework for analysing how LLMs compete for dominance while attempting to shift the opinion of the population [19], [20], [21], [22], [23].
The scenario has been strategically developed to reflect the asymmetric nature of the contested information environment, emphasising the vulnerabilities faced by the Blue team. This framework mirrors adversarial dynamics often modelled in serious games or wargames, particularly in cybersecurity. While the Red Team and Blue Team construct is common in cybersecurity practices (as detailed in NIST's Glossary [24]), this scenario extends the concept to the broader geopolitical information landscape within a fictitious nation-state [25].
The system comprises two LLM-based agents with opposing objectives: the Red Agent, responsible for disseminating misinformation, and the Blue Agent, tasked with counteracting misinformation and restoring trust. These agents operate within a directed network of neutral agents, termed Green Nodes, which represent individuals within a population. The simulator allows users to upload their own graphs or use the functionality provided in the simulator to generate a network. Users can choose the LLMs for both the Blue and Red agents. Currently, the simulator supports various versions of OpenAI's GPT [26] as well as other open-source models from HuggingFace. The simulator also allows users to upload a new model.
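As an illustration, the sketch below shows how a user-supplied or generated directed graph and the two agents' backing models might be specified. The configuration keys and parameter names here are hypothetical and do not reflect the simulator's actual API.

```python
# Illustrative configuration only: names are assumptions, not the simulator's API.
import networkx as nx

# Either load a user-supplied directed graph ...
# G = nx.read_edgelist("my_network.edgelist", create_using=nx.DiGraph)
# ... or generate one with a built-in generator (a scale-free proxy here).
G = nx.scale_free_graph(n=100, seed=42)

# Hypothetical agent configuration: any supported OpenAI GPT version or a
# HuggingFace model identifier could be supplied for either agent.
config = {
    "red_agent_model": "gpt-4o",                                # OpenAI model (assumed)
    "blue_agent_model": "mistralai/Mistral-7B-Instruct-v0.2",   # HuggingFace model (assumed)
    "network": G,
}
```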
Green nodes exhibit predispositions toward either agent, influenced by prior interactions and the content of incoming messages. Each Green Node's behaviour is defined by core parameters adapted from the Deffuant model [13], [14], including susceptibility to influence, confirmation bias, and mechanisms for updating beliefs. These parameters ensure that the modelled population exhibits realistic characteristics, such as resistance to extreme viewpoints and gradual alignment shifts. Each agent in the population is represented by a scalar value (or vector) denoting its opinion on a specific topic (Figure 2). Opinions lie within a bounded range, such as \([0,1]\). If the difference between two agents' opinions is below a certain threshold (the confidence bound, \(\epsilon\)), the agents influence each other and adjust their opinions closer together. The adjustment is controlled by a convergence parameter (\(\mu\)), which dictates how far the agents move toward each other's opinions.
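For concreteness, a minimal sketch of the pairwise bounded-confidence update underlying the Deffuant model is given below; the function name and default parameter values are illustrative rather than taken from the simulator.

```python
def deffuant_update(x_i, x_j, epsilon=0.3, mu=0.5):
    """Pairwise bounded-confidence update (Deffuant model).

    x_i and x_j are scalar opinions in [0, 1]; epsilon is the confidence
    bound and mu the convergence parameter (0 < mu <= 0.5).
    """
    if abs(x_i - x_j) < epsilon:
        # Opinions are close enough to interact: each moves toward the other.
        x_i_new = x_i + mu * (x_j - x_i)
        x_j_new = x_j + mu * (x_i - x_j)
        return x_i_new, x_j_new
    return x_i, x_j  # too far apart: no change

# Example: two moderately separated opinions both move toward 0.5.
print(deffuant_update(0.40, 0.60))
```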
The simulation proceeds in discrete time steps, during which the Red and Blue agents alternately broadcast messages to the Green Nodes (also viewable by the other LLM agent). Connected Green nodes also interact with one another (Figure 1). The key operational components are listed below; a sketch of the resulting round loop follows the list.
Message Generation Each agent generates a message based on its LLM’s output, informed by the current state of the network and its strategic objective. For example, the Red Agent prioritises persuasive misinformation, while the Blue Agent constructs factual rebuttals optimised for resource efficiency.
Message Potency (Influence Factor) Messages are assigned a potency score that quantifies their influence. The LLMs determine the potency of each message that they generate. The influence factor determines the extent to which the Green Nodes adjust their alignment toward the broadcasting agent. While the Red Agent has access to unlimited resources, high-potency messages incur penalties, particularly when directed at strongly blue-aligned nodes, mimicking real-world scepticism toward overt misinformation [27]. In contrast, the Blue Agent operates under constrained resources, with each message incurring a cost proportional to its potency. This constraint requires strategic resource management, as overly powerful debunking messages risk rapid depletion of available energy.
Node Update Mechanism Upon receiving a message, the Green Nodes adjust their alignment based on their predisposition, the potency of the message, and the influence of the connected neighbours. Updates occur iteratively, capturing both direct and network-mediated effects of influence propagation.
Termination Criteria The simulation concludes when an agent achieves a majority alignment within the Green Node population, indicating a decisive shift in opinion. Alternatively, the simulation terminates after a fixed number of rounds if neither agent achieves dominance, representing a stalemate.
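Putting these components together, the round loop might be structured as in the sketch below. All helper names (generate_message, apply_scepticism_penalty, apply_message, update_neighbours, majority_alignment) are hypothetical placeholders for the mechanisms described above, not the simulator's actual functions.

```python
def run_simulation(red_agent, blue_agent, network_state, blue_energy, max_rounds=50):
    """One possible structure for the simulation loop; helper names are hypothetical."""
    for _ in range(max_rounds):
        for agent in (red_agent, blue_agent):        # agents broadcast alternately
            message, potency = agent.generate_message(network_state)

            if agent is blue_agent:
                if blue_energy < potency:            # Blue cannot afford this message
                    continue
                blue_energy -= potency               # cost proportional to potency
            else:
                # Red is unconstrained, but overt high-potency messages are
                # penalised on strongly blue-aligned nodes (scepticism).
                potency = apply_scepticism_penalty(potency, network_state)

            apply_message(network_state, message, potency, sender=agent)
            update_neighbours(network_state)         # network-mediated influence

        winner = majority_alignment(network_state)
        if winner is not None:                       # decisive opinion shift
            return winner
    return None                                      # stalemate after max_rounds
```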
The simulation can be evaluated using the following metrics; a sketch of how some of them can be computed from the output follows the list. At the end of the simulation, a .csv file is generated for further analysis, and the exchanged messages and network states are also captured.
Network Alignment Distribution: The final proportion of Green Nodes aligned with each agent. This refers to the polarisation in the network. A sample output graph is shown in Figure 3.
Resource Efficiency: The Blue Agent’s energy expenditure relative to alignment gains.
Node Resilience: The resistance of nodes with strong predispositions to opposing influences.
Temporal Evolution: The rate of alignment change over successive rounds.
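As an example of post-hoc analysis, the sketch below computes three of these metrics from the generated .csv file. The column names ("round", "alignment", "blue_energy") are assumptions made for illustration, not the file's actual schema.

```python
# Post-hoc analysis sketch; column names are assumed, not the actual schema.
import pandas as pd

df = pd.read_csv("simulation_output.csv")
final_round = df[df["round"] == df["round"].max()]

# Network alignment distribution: share of Green nodes aligned with each side.
alignment_share = final_round["alignment"].apply(
    lambda a: "blue" if a > 0.5 else "red"
).value_counts(normalize=True)

# Temporal evolution: mean alignment per round, i.e. the rate of opinion shift.
evolution = df.groupby("round")["alignment"].mean()

# Resource efficiency: Blue's energy spent per unit of alignment gained.
energy_spent = df["blue_energy"].iloc[0] - df["blue_energy"].iloc[-1]
alignment_gain = evolution.iloc[-1] - evolution.iloc[0]
efficiency = energy_spent / alignment_gain if alignment_gain else float("nan")
```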
The simulator presented in this paper provides a flexible approach to studying opinion dynamics, combining the generative capabilities of LLMs with structured agent-based modelling principles. By incorporating realistic constraints, such as resource limitations and susceptibility penalties, it offers insights into the dynamics of influence competition and the effectiveness of counter-misinformation strategies. Furthermore, this work highlights the dual potential of LLMs as both tools for studying opinion propagation and as models for emulating human-like decision-making in complex social systems. In future work, we are improving the prompting strategies and providing more control to the end user.
We have avoided sharing detailed prompts in the code to prevent misuse. We commit to promoting responsible AI development.
This research was supported by the Collaborative Research Grant awarded to Mehwish Nasim by DSC/JTSI Western Australia in 2023. The authors acknowledge the support of the following students in implementing this software: Rhianna Hepburn, JJ Jun, Olivia Morrison, Devarsh Patel and Edwin Tang.