

Agent-Based Modeling

Originally published May 14, 2002

Introduction

In agent-based modeling (ABM), a system is modeled as a collection of autonomous decision-making entities called agents. Each agent individually assesses its situation and makes decisions on the basis of a set of rules. Agents may execute various behaviors appropriate for the system they represent—for example, producing, consuming, or selling. Repetitive competitive interactions between agents are a feature of agent-based modeling, which relies on the power of computers to explore dynamics out of the reach of pure mathematical methods. At the simplest level, an agent-based model consists of a system of agents and the relationships between them. Even a simple agent-based model can exhibit complex behavior patterns and provide valuable information about the dynamics of the real-world system that it emulates. In addition, agents may be capable of evolving, allowing unanticipated behaviors to emerge. Sophisticated ABM sometimes incorporates neural networks, evolutionary algorithms, or other learning techniques to allow realistic learning and adaptation.
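To make the structure concrete, here is a minimal, hypothetical sketch in Python of the agent/rule/loop pattern just described. The agent class, the produce/buy/sell rule, and the price-feedback mechanism are invented for illustration; they do not come from the article or from any particular ABM toolkit.

```python
import random

class Agent:
    """An autonomous decision-making unit with a simple if-then rule set."""

    def __init__(self, name, inventory=5):
        self.name = name
        self.inventory = inventory

    def step(self, price):
        # Each step, the agent assesses its situation (inventory, price) and acts:
        # sell when well stocked and prices are high, buy when poorly stocked and
        # prices are low, otherwise produce sporadically.
        if self.inventory > 8 and price > 1.0:
            self.inventory -= 1
            return "sell"
        if self.inventory < 3 and price < 1.0:
            self.inventory += 1
            return "buy"
        self.inventory += random.choice([0, 1])
        return "produce"

def run(n_agents=100, n_steps=50):
    agents = [Agent(f"agent-{i}") for i in range(n_agents)]
    price = 1.0
    for _ in range(n_steps):
        actions = [agent.step(price) for agent in agents]
        # The aggregate outcome of individual decisions feeds back into the
        # environment every agent sees at the next step.
        excess_supply = actions.count("sell") - actions.count("buy")
        price *= 1.0 - 0.001 * excess_supply
    return price

if __name__ == "__main__":
    print(f"price after simulation: {run():.3f}")
```

Even this toy already has the two ingredients emphasized above: agents with individual rules, and system-level quantities (here, the price) that emerge from, and feed back into, their interactions.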

ABM is a mindset more than a technology. The ABM mindset consists of describing a system from the perspective of its constituent units. A number of researchers think that the alternative to ABM is traditional differential equation modeling; this is wrong, as a set of differential equations, each describing the dynamics of one of the system's constituent units, is an agent-based model. A synonym of ABM would be microscopic modeling, and an alternative would be macroscopic modeling. As the ABM mindset is starting to enjoy significant popularity, it is a good time to redefine why it is useful and when ABM should be used. These are the questions this article addresses, first by reviewing and classifying the benefits of ABM and then by providing a variety of examples in which the benefits will be clearly described. What the reader will be able to take home is a clear view of when and how to use ABM. One of the reasons underlying ABM's popularity is its ease of implementation: indeed, once one has heard about ABM, it is easy to program an agent-based model. Because the technique is easy to use, one may wrongly think the concepts are easy to master. But although ABM is technically simple, it is also conceptually deep. This unusual combination often leads to improper use of ABM.

Benefits of Agent-Based Modeling. The benefits of ABM over other modeling techniques can be captured in three statements: (1) ABM captures emergent phenomena; (2) ABM provides a natural description of a system; and (3) ABM is flexible. It is clear, however, that the ability of ABM to deal with emergent phenomena is what drives the other benefits.

ABM Captures Emergent Phenomena. Emergent phenomena result from the interactions of individual entities. By definition, they cannot be reduced to the system's parts: the whole is more than the sum of its parts because of the interactions between the parts. An emergent phenomenon can have properties that are decoupled from the properties of the parts. For example, a traffic jam, which results from the behavior of and interactions between individual vehicle drivers, may be moving in the direction opposite that of the cars that cause it. This characteristic of emergent phenomena makes them difficult to understand and predict: emergent phenomena can be counterintuitive. Numerous examples of counterintuitive emergent phenomena will be described in the following sections. ABM is, by its very nature, the canonical approach to modeling emergent phenomena: in ABM, one models and simulates the behavior of the system's constituent units (the agents) and their interactions, capturing emergence from the bottom up when the simulation is run.

One may want to use ABM when there is potential for emergent phenomena, i.e., when:

  • Individual behavior is nonlinear and can be characterized by thresholds, if-then rules, or nonlinear coupling. Describing discontinuity in individual behavior is difficult with differential equations. (A minimal threshold-rule sketch follows this list.)
  • Individual behavior exhibits memory, path dependence, hysteresis, non-Markovian behavior, or temporal correlations, including learning and adaptation.
  • Agent interactions are heterogeneous and can generate network effects. Aggregate flow equations usually assume global homogeneous mixing, but the topology of the interaction network can lead to significant deviations from predicted aggregate behavior.
  • Averages will not work. Aggregate differential equations tend to smooth out fluctuations; ABM does not. This is important because, under certain conditions, fluctuations can be amplified: the system is linearly stable but unstable to larger perturbations.
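As a toy illustration of the first and last points, here is a hedged sketch in the style of a Granovetter threshold model; the population size, seed count, and threshold values are invented. Each agent adopts a behavior once the fraction of adopters reaches its personal threshold. A "representative agent" with the average threshold predicts that nothing happens beyond the initial seeds, whereas a heterogeneous population with the same average threshold typically tips much further, by an amount that varies with the particular random draw of agents.

```python
import random

def final_adoption(thresholds, n_seeds=20):
    """Each agent adopts once the overall fraction of adopters reaches its threshold."""
    n = len(thresholds)
    adopted = [i < n_seeds for i in range(n)]  # a small group of unconditional seeds
    while True:
        fraction = sum(adopted) / n
        newly = [i for i, a in enumerate(adopted)
                 if not a and thresholds[i] <= fraction]
        if not newly:
            return fraction
        for i in newly:
            adopted[i] = True

if __name__ == "__main__":
    rng = random.Random(0)
    n = 1000
    # Representative-agent view: everyone shares the average threshold (0.5),
    # so adoption never moves beyond the 2% of seeded agents.
    print("homogeneous thresholds :", final_adoption([0.5] * n))
    # Heterogeneous agents with the same average threshold: low-threshold agents
    # relay the seeds' influence, and adoption typically spreads much further,
    # by an amount that depends on the particular random draw.
    print("heterogeneous thresholds:", final_adoption([rng.random() for _ in range(n)]))
```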

Interestingly, because ABM generates emergent phenomena from the bottom up, it raises the issue of what constitutes an explanation of such a phenomenon. The broader agenda of the ABM community is to advocate a new way of approaching social phenomena, not from a traditional modeling perspective but from the perspective of redefining the scientific process entirely. According to Epstein and Axtell, “[ABM] may change the way we think about explanation in the social sciences. What constitutes an explanation of an observed social phenomenon? Perhaps one day people will interpret the question, ‘Can you explain it?' as asking ‘Can you grow it?'.”

ABM Provides a Natural Description of a System. In many cases, ABM is most natural for describing and simulating a system composed of “behavioral” entities. Whether one is attempting to describe a traffic jam, the stock market, voters, or how an organization works, ABM makes the model seem closer to reality. For example, it is more natural to describe how shoppers move in a supermarket than to come up with the equations that govern the dynamics of the density of shoppers. Because the density equations result from the behavior of shoppers, the ABM approach will also enable the user to study aggregate properties. ABM also makes it possible to realize the full potential of the data a company may have about its customers: panel data and customer surveys provide information about what real people actually do. Knowing the actual shopping basket of a customer makes it possible to create a virtual agent with that shopping basket rather than a density of people with a synthetic shopping basket computed from averaging over shopping data.

The difference between business processes and activities provides another example of how much more natural ABM is. A business process is an abstraction, sometimes useful, which is often difficult for people inside an organization to relate to. ABM looks at the organization from the viewpoint not of business processes but of activities, that is, what people inside the organization actually do.

The two descriptions must, of course, be mutually consistent. The business process description actually provides the modeler with a useful consistency check. However, when it comes to populating, validating, and calibrating the model, people inside the organization have an easier time answering questions about their own activities: they can relate to the model because it describes their activities.

One may want to use ABM when describing the system from the perspective of its constituent units' activities is more natural, i.e., when:

  • The behavior of individuals cannot be clearly defined through aggregate transition rates.
  • Individual behavior is complex. Everything can be done with equations, in principle, but the complexity of differential equations increases exponentially as the complexity of behavior increases. Describing complex individual behavior with equations becomes intractable.
  • Activities are a more natural way of describing the system than processes.
  • Validation and calibration of the model through expert judgment is crucial. ABM is often the most appropriate way of describing what is actually happening in the real world, and the experts can easily “connect” to the model and have a feeling of “ownership.”
  • Stochasticity applies to the agents' behavior. With ABM, sources of randomness are applied to the right places as opposed to a noise term added more or less arbitrarily to an aggregate equation.

ABM Is Flexible. The flexibility of ABM can be observed along multiple dimensions. For example, it is easy to add more agents to an agent-based model. ABM also provides a natural framework for tuning the complexity of the agents: behavior, degree of rationality, ability to learn and evolve, and rules of interactions. Another dimension of flexibility is the ability to change levels of description and aggregation: one can easily play with aggregate agents, subgroups of agents, and single agents, with different levels of description coexisting in a given model. One may want to use ABM when the appropriate level of description or complexity is not known ahead of time and finding it requires some tinkering.

Areas of Application. Examples of emergent phenomena abound in the social, political, and economic sciences. It is increasingly accepted that some such phenomena are difficult to predict and even counterintuitive. In a business context, situations of interest where emergent phenomena may arise can be classified into four areas: (1) Flows: evacuation, traffic, and customer flow management; (2) Markets: stock markets, shopbots and software agents, and strategic simulation; (3) Organizations: operational risk and organizational design; and (4) Diffusion: diffusion of innovation and adoption dynamics.

Flow Management

Theme Park. An obvious flow management application of ABM is the simulation of customer behavior in a theme park. The collective patterns generated by thousands of customers can be extremely complex as customers interact: for example, how long one waits at an attraction in a theme park depends on other people's choices. A major theme park resort company was thinking about how to improve adaptability in labor scheduling, but knew that this depended on knowing more about the optimal balance of capacity and demand. Axtell and Epstein developed ResortScape, an agent-based model of the park that provides an integrated picture of the environment and all of the interacting elements that come into play in such a resort. The model provides a fast way for managers to identify, adjust, and watch the impact of any number of management levers such as:

  • When or whether to turn off a particular ride;
  • How to distribute rides per capita throughout the park space;
  • What the tolerance level for wait times should be; and
  • When to extend operating hours.

In the simulation, agents represent a realistic and changeable mix of both supply (attractions, shops, food concessions) and demand (visitors with different preferences) elements of a day at the park. Leveraging existing resources and data, such as customer surveys, segmentation studies, queue timers, people counters, attendance estimates, and capacity figures, the model generates information about guest flow. Users can design and run an infinite number of scenarios to study the dynamics of the park space, test the effectiveness of various management decisions, and track visitor satisfaction throughout the day.

ABM is particularly useful in this context, because the mapping between the agents' preferences and behaviors on the one hand, and the park's performance (in terms of average waiting times, number of attractions visited, total distance walked, etc.) on the other is too complex to be dealt with by using mathematical techniques and purely statistical analysis of the data. Why is the mapping too complex? Because, for example, the time a given customer has to wait at a given attraction depends on what other customers are doing, how they respond to different park conditions, what their wish list is, etc. The flow of customers in the park and the money they spend are “emergent” properties of interactions among and between customers and the spatial layout of the park. Therefore, simulating the park's operations with a given layout seems to be the only solution. ABM is the most natural and easiest way of describing the system, because the actors of this system are customers (and attractions) with a behavior of their own. For example, waiting times at a theme park attraction result from the interactions of many behavioral units: the customers. Finally, the data available to the modeler are naturally structured for ABM: the available data are a description of the desires and behaviors of a number of customers.
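A drastically simplified, hypothetical sketch of this coupling between individual choices and waiting times might look like the following; the ride names, capacities, and choice rule are invented and are in no way a description of the ResortScape model. Each visitor repeatedly joins the shortest queue among the attractions left on their wish list, and the waits they experience emerge from everyone else doing the same.

```python
def simulate_park(n_visitors=500, n_steps=300):
    """Toy park: visitors join the shortest queue among attractions still on
    their wish list; waiting times emerge from everyone's interacting choices."""
    rides = {"coaster": 8, "flume": 6, "carousel": 12}  # guests served per step (assumed)
    queues = {r: [] for r in rides}
    wishlists = {v: set(rides) for v in range(n_visitors)}
    waits = {v: 0 for v in range(n_visitors)}
    free = set(range(n_visitors))

    for _ in range(n_steps):
        # Free visitors pick the shortest queue among rides still on their list.
        for v in sorted(free):
            remaining = list(wishlists[v])
            if not remaining:
                continue
            choice = min(remaining, key=lambda r: len(queues[r]))
            queues[choice].append(v)
            free.discard(v)
        # Rides serve guests; everyone still queuing accrues waiting time.
        for r, capacity in rides.items():
            served, queues[r] = queues[r][:capacity], queues[r][capacity:]
            for v in served:
                wishlists[v].discard(r)
                free.add(v)
            for v in queues[r]:
                waits[v] += 1
    return sum(waits.values()) / n_visitors

if __name__ == "__main__":
    print(f"average total wait per visitor: {simulate_park():.1f} steps")
```

Changing a ride's capacity or a segment of visitors' preferences and rerunning the simulation is exactly the kind of lever-pulling experiment described above.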

Supermarket. Along the same lines, Bilge, Venables, and Casti have developed an agent-based model of a supermarket (http://www.simworld.co.uk). SIMSTORE is a model of a real British supermarket, the Sainsbury's store at South Ruislip in West London. The agents in SIMSTORE are software shoppers armed with shopping lists. They make their way around the silicon store, picking goods off the shelves according to rules such as the nearest-neighbor principle: “Wherever you are now, go to the location of the nearest item on your shopping list.” Using these rules, SIMSTORE generates the paths taken by customers, from which it can calculate customer densities at each location.
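The quoted nearest-neighbor rule is simple enough to sketch directly; in the hypothetical snippet below the item coordinates and shopping list are invented, and distances are straight-line rather than aisle-constrained, so it illustrates only the rule itself, not SIMSTORE.

```python
import math

# Hypothetical item locations on a store floor plan (x, y, in meters).
ITEM_LOCATIONS = {
    "bread": (2, 10), "milk": (18, 3), "cheese": (17, 5),
    "apples": (4, 2), "coffee": (10, 12), "pasta": (9, 4),
}

def nearest_neighbour_path(shopping_list, start=(0, 0)):
    """Greedy rule: from wherever you are, go to the nearest item still on the list."""
    path, position, remaining = [start], start, set(shopping_list)
    while remaining:
        nxt = min(remaining,
                  key=lambda item: math.dist(position, ITEM_LOCATIONS[item]))
        position = ITEM_LOCATIONS[nxt]
        path.append(position)
        remaining.remove(nxt)
    return path

def path_length(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

if __name__ == "__main__":
    route = nearest_neighbour_path(["milk", "bread", "coffee", "apples"])
    print("route:", route)
    print(f"length: {path_length(route):.1f} m")
```

Accumulating, over many simulated shoppers, how often each location appears in such paths is one way to estimate the customer densities mentioned above.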

It is also possible to link all points visited by, say, at least 30 percent of customers to form a most popular path. An optimization algorithm can then change where in the supermarket different goods are stacked and so minimize, or maximize, the length of the average shopping path. Shoppers, of course, do not want to waste time, so they want the shortest path. But the store manager would like to have them pass by almost every shelf to encourage impulse buying. So there is a dynamic tension between the minimal and maximal shopping paths. This model was originally aimed at helping Sainsbury's to redesign its stores to generate greater customer throughput, reduce inventories, and shorten the time that products are on the shelves.

Department Store. Macy's is a department store chain using ABM. In 1997, Macy's East approached PricewaterhouseCoopers with the following question: “How do we know when we have the right number of salespeople on the selling floor?” According to industry veterans, the retail business is a business of averages, where analysis is done on a spreadsheet. It is a business that deals with sales volume per hour as the determining factor in its allocation of salespeople, and the number of salespeople placed on the selling floor is based on the velocity in sales predicted for a specific day. And yet real behavior is the result of interactions between individuals, not averages. With ABM, Macy's had the opportunity to use visualization to review data in a way that becomes informational and leads to solutions. Spreadsheet data averages can be used to estimate distributions of individual behavior, so the individual agents in the simulation are consistent with the available real-world data. But because the agents represent individuals, the actual flow of their behavior can be much more realistic and informative. So instead of making estimates from the top down, Macy's can observe how volume really occurs from the bottom up. The virtual store can be modified in terms of layout (shelves, cash register positions, gates, etc.) and number of employees per department to see how these changes influence the affective state of a large number of agents. One can then explore the space of levers to maximize the number of happy customers in the most cost-effective way. Results from the model include the observation of “microbursts” of demand, where customers may be doing “project shopping” (e.g., buying an outfit and then accessorizing it), and the importance of proximity to items (physical placement as well as brand relatedness) in driving impulse buying.
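One hedged reading of "using spreadsheet averages to estimate distributions of individual behavior" is sketched below; the hourly arrival figures, mean basket value, and coefficient of variation are invented placeholders for whatever panel and survey data the retailer actually holds.

```python
import math
import random

# Hypothetical spreadsheet inputs: average arrivals per hour and average basket value.
HOURLY_AVG_ARRIVALS = {10: 40, 11: 65, 12: 120, 13: 110, 14: 70}
AVG_BASKET_VALUE = 55.0   # dollars (assumed)
BASKET_CV = 0.6           # assumed coefficient of variation of individual baskets

def spawn_shoppers(rng=None):
    """Turn aggregate averages into a heterogeneous population of individual shoppers."""
    rng = rng or random.Random(42)
    # Lognormal parameters chosen so that individual draws reproduce the spreadsheet mean.
    sigma = math.sqrt(math.log(1 + BASKET_CV ** 2))
    mu = math.log(AVG_BASKET_VALUE) - sigma ** 2 / 2
    shoppers = []
    for hour, avg_count in HOURLY_AVG_ARRIVALS.items():
        for _ in range(avg_count):
            shoppers.append({"arrival_hour": hour,
                             "basket_value": rng.lognormvariate(mu, sigma)})
    return shoppers

if __name__ == "__main__":
    population = spawn_shoppers()
    realized_mean = sum(s["basket_value"] for s in population) / len(population)
    print(len(population), "agents, mean basket value ≈", round(realized_mean, 2))
```

The individual draws are constructed to be consistent with the aggregate averages, but each simulated shopper is distinct, which is what lets the bottom-up flow of behavior differ from what the averages alone would suggest.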

Markets

Stock Market. The dynamics of the stock market results from the behavior of many interacting agents, leading to emergent phenomena that are best understood by using a bottom-up approach—ABM. There has been an upsurge of interest in agent-based models of markets in the last few years, stimulated by the pioneering work of Arthur and colleagues. One commercial application has been developed by Bios Group for the National Association of Securities Dealers Automated Quotation (NASDAQ) Stock Market (http://www.cbi.cgey.com/journal/issue4/features/future/future.pdf). In 1997, the NASDAQ Stock Market was about to implement a sequence of apparently small changes: reductions in tick size, from 1/8th to 1/16th and so on down to pennies. NASDAQ considers changes in trading policies very carefully: NASDAQ stands to lose a great deal if a new rule provokes a negative network-wide response from investors, market makers, and issuers. In the past, NASDAQ executives have analyzed the financial marketplace through economic studies, financial models, and feedback from market participants. The Market Quality Committee establishes regulations largely as a result of input from economists, lawyers, lobbyists, and policy makers.

To evaluate the impact of tick-size reduction, NASDAQ has been using an agent-based model that simulates the impact of regulatory changes on the financial market under various conditions. The model allows regulators to test and predict the effects of different strategies, observe the behavior of agents in response to changes, and monitor developments, providing advance warning of unintended consequences of newly implemented regulations faster than real time and without risking early tests in the real marketplace. In the agent-based NASDAQ model, market maker and investor agents (institutional investors, pension funds, day traders, and casual investors) buy and sell shares by using various strategies. The agents' access to price and volume information approximates that in the real-world market, and their behaviors range from very simple to complicated learning strategies. Neural networks, reinforcement learning, and other artificial intelligence techniques were used to generate strategies for agents. This creative element is important because NASDAQ regulators are especially interested in strategies that have not yet been discovered by players in the real market, again to approach their goal of designing a regulatory structure with as few loopholes as possible, to prevent abuses by devious players.
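The NASDAQ model itself is far richer than anything that fits here, but a deliberately crude, hypothetical sketch can show where tick size enters an agent's quoting rule: each market maker forms a noisy estimate of value, quotes a bid and an ask snapped to the tick grid, and the inside spread is read off the best quotes. Every number below is invented, and nothing about this toy should be taken as reproducing the spread result discussed next; it only illustrates the kind of agent rule the paragraph describes.

```python
import random

def simulate_market(tick, n_steps=10_000, n_makers=5, rng=None):
    """Toy dealer market: makers quote around noisy value estimates, rounded to the tick."""
    rng = rng or random.Random(7)
    value = 100.0
    spreads = []
    for _ in range(n_steps):
        value += rng.gauss(0, 0.05)                 # fundamental value drifts
        quotes = []
        for _ in range(n_makers):
            estimate = value + rng.gauss(0, 0.10)   # each maker's noisy estimate
            half = max(tick, 0.08)                  # desired half-spread, floored at one tick
            bid = tick * round((estimate - half) / tick)
            ask = tick * round((estimate + half) / tick)
            quotes.append((bid, ask))
        best_bid = max(q[0] for q in quotes)
        best_ask = min(q[1] for q in quotes)
        spreads.append(max(best_ask - best_bid, 0.0))
    return sum(spreads) / len(spreads)

if __name__ == "__main__":
    for tick in (1 / 8, 1 / 16, 0.01):
        print(f"tick = {tick:.4f}: average inside spread ≈ {simulate_market(tick):.4f}")
```

In the actual NASDAQ study, the investor and market-maker strategies were far more elaborate (including learned strategies), and it is their interaction that produced the results described below.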

The model produced some unexpected results. Specifically, the simulation suggests that a reduction in the market's tick size can reduce the market's ability to perform price discovery, leading to an increase in the bid–ask spread. A spread increase in response to tick-size reduction is counterintuitive because tick size is the lower bound on the spread. Initially, it was believed that decimalization would be conducive to tighter spreads, easing the discrepancy between bid and asking prices, and that it would be highly efficient and effective overall. Among market professionals, the perceived wisdom is that providing greater granularity of price denomination is good for investors because it promotes competition among buyers and sellers who can negotiate in more precise terms, and thus it drives the market's spread down, which results in better prices for investors. This wisdom is difficult to test empirically: the complexity of market behavior makes isolating cause and effect highly problematic. Without a computer simulation, rule makers are stuck with an intuitive argument, and one that is poor in detail, judging market interaction by only one measure: competition (and hence price). Other dimensions of the problem go unaddressed: if better prices are available, do only small investors benefit, or will large ones benefit too? Will smaller tick sizes make the market more jittery and volatile?

A spreadsheet model or even system dynamics (a popular business-modeling technique that uses sets of differential equations) would not have been able to generate the same deep insights as ABM, because the behavior of the market emerges out of the interactions of the players, who in turn may change their behavior in response to changes in the market. The interactions between investors, market makers, and the operating rules of the NASDAQ Stock Market make the entire system's dynamics quite hard to understand. Predicting how it would change under a new set of operating regulations cannot be based on intuition or on classical modeling techniques, because they are not suited to describe the complexities of the behavior of the stock market agents. For example, the mapping between tick size and spread can be understood only by taking into account details of the investors' and market makers' behavior to model the process of price discovery.

Organizations

One promising area of application for ABM is organizational simulation. It is clearly possible to model the emergent collective behavior of an organization or of a part of an organization in a certain context or at a certain level of description. At the very least, the process of designing the simulation produces valuable qualitative insights. But, in certain cases, one is also able to generate semiquantitative insights.  

Financial Institutions. Operational risk arises from the potential that inadequate information systems, operational problems, breaches in internal controls, fraud, or unforeseen catastrophes will result in unexpected losses. According to the Basle Committee on Banking Supervision, operational risk involves breakdowns in internal controls and corporate governance that can lead to financial losses through error, fraud, or failure to perform in a timely manner, or cause the interests of the bank to be compromised in some other way, for example, by its dealers, lending officers, or other staff exceeding their authority or conducting business in an unethical or risky manner. It is increasingly viewed as the most important risk that banks face. Examples of large operational losses include Daiwa, Sumitomo, Barings, Salomon, Kidder Peabody, Orange County, Jardine Fleming, and, more recently, NatWest Markets, the Common Fund, and Yamaichi.

Although most banks have developed efficient and sometimes sophisticated ways of dealing with market risk and, to a large extent, credit risk, they are still in the early stages of developing operational risk measurement and monitoring. Unlike market and credit risk, operational risk factors are largely internal to the organization, and a clear mathematical or statistical link between individual risk factors and the size and frequency of operational losses does not exist. Experience with large losses is infrequent, and many banks lack a time series of historical data on their own operational losses and their causes. Uncertainty about which factors are important arises from the absence of a direct relationship between the risk factors usually identified (measured through internal audit ratings and internal control self-assessments based on such indicators as volume, turnover, error rates, and income volatility) and the size and frequency of loss events. This contrasts with market risk, where changes in prices have an easily computed impact on the value of the bank's trading portfolio, and with credit risk, where changes in the borrower's credit quality are often associated with changes in the interest rate spread of the borrower's obligations over a risk-free rate.

Given all of these characteristics, operational risk is obviously difficult to quantify. Operational historical data are so scarce that it is not possible to allocate capital reliably and efficiently, nor to obtain good VAR (value-at-risk) and RAROC (risk-adjusted return on capital) estimates. Capital allocation is important because it gives managers an incentive to keep operational risk under control. Yet there is increasing pressure on financial institutions to quantify operational risk in a way that convinces both investors (efficient allocation of capital) and regulatory entities (risk under “control”). More precisely, a financial institution must be able to quantify operational risk within a reliable framework to be able to keep risk under control, optimize economic capital allocation, and determine its insurance needs.

Given the characteristics of operational risk, bottom-up enterprise-wide simulation looks like a promising approach (to low-frequency, high-impact operational risk). What is needed is a framework that allows for nonlinear effects due to interactions among subunits and cascading events, and that can operate with scarce data. Hence the idea of simulating operations from the bottom up to generate a large artificial data set that includes large events. The artificially generated data can then be used to apply classical capital allocation techniques.

Bios and Cap Gemini Ernst & Young have applied ABM techniques to measuring and managing operational risk at Société Générale Asset Management (SGAM). A simulation model of the business unit's activities was designed, starting with business process modeling and workflow identification. Using the business process model and the workflows, the bank's “agents” were then identified, and their activities were modeled, as well as their interactions with other agents and the risk factors that could impact their activities. To keep the tool tractable, the activities had to be modeled in enough detail to capture the “physics” of the bank, but not in too much detail. The risk factors were connected to the bank's profit and loss through potentially complex pathways in the organization, for example, from a client's order to the detection of a trading error in the back office. Then the bank's environment was modeled—the markets, customers, regulators, etc.

By running the model, it is possible to generate artificial earnings distributions, which are used to estimate potential losses and their likelihood. For example, the bank can compute its “earnings-at-risk,” that is, the minimum earnings that could be observed in one year with a 95% level of confidence. The benefit to the bank: its allocation of economic capital is backed by a simulation of how the organization operates rather than based on some strange combination of industry-wide historical data and accounting magic. If the model is good, regulators accept it more easily, and the bank does not have to put aside 10 times the amount of economic capital it really needs. For an asset management business, economic capital is a fraction of assets under management; reducing that fraction by just 0.01% means millions of dollars. Measuring is just the first step, though. An added benefit of simulation is that one can identify where losses come from and test mitigation procedures.
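To make the "earnings-at-risk" calculation concrete, here is a heavily simplified, hypothetical Monte Carlo sketch; the agent counts, error probabilities, loss-size distributions, and escalation factor are invented and bear no relation to the SGAM model, but the final step, reading the 95% figure off the simulated earnings distribution, follows the definition given above.

```python
import random

def simulate_year(rng, n_traders=40, n_back_office=25):
    """One artificial year: each agent's activity occasionally triggers a loss event."""
    losses = 0.0
    for _ in range(n_traders):
        # Rare trading errors with heavy-tailed (lognormal) loss sizes, in dollars.
        if rng.random() < 0.02:
            losses += rng.lognormvariate(13, 1.5)
    for _ in range(n_back_office):
        # Settlement/processing errors: more frequent, usually smaller,
        # but occasionally escalated when controls fail (a crude cascade effect).
        if rng.random() < 0.10:
            loss = rng.lognormvariate(10, 1.0)
            if rng.random() < 0.05:
                loss *= 20
            losses += loss
    return losses

def earnings_at_risk(n_years=20_000, gross_earnings=50e6, confidence=0.95):
    rng = random.Random(0)
    earnings = sorted(gross_earnings - simulate_year(rng) for _ in range(n_years))
    # The 95% earnings-at-risk is the earnings level exceeded in 95% of simulated years.
    return earnings[int((1 - confidence) * n_years)]

if __name__ == "__main__":
    print(f"95% earnings-at-risk ≈ ${earnings_at_risk():,.0f}")
```

The same artificial loss data set can then feed classical capital allocation techniques, and rerunning it with a proposed control in place is one way to test mitigation procedures before committing to them.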

When deciding to model a bank by using ABM, one is not making an arbitrary modeling decision. One is modeling the bank in a way that is natural to the practitioners, because one is modeling the activities of the bank by looking at what every actor does. If one is modeling the bank's processes instead, it is more difficult for people to understand the model because one person's activities span many processes. That has important consequences when it comes to populating, validating, and calibrating the model. If people “connect” to the simulation model, in the sense that they recognize and understand what the model is doing, they can improve it, more easily quantify what needs to be quantified, etc. Because they have a deep understanding of the risk drivers related to their own activity, it is easier to incorporate the relevant risk drivers into the model. Once they have their activities and the corresponding risk drivers in the model, they can suggest control and mitigation procedures and test them by using the simulation tool. In other words, ABM is not only a simulation tool; it is a naturally structured repository for self-assessment and ideas for redesigning the organization.

ABM is perfect not just for operational risk in financial institutions but for modeling risk in general. Modeling risk in an organization using ABM is the right approach because risk is most often a property of the actors in the organization: risk events impact people's activities, not processes. For example, it is more natural to say that someone in accounting made a mistake (sent the wrong invoice to a customer) than to say the receivables process was impacted by an error event in the invoicing subprocess. ABM will revolutionize business risk advisory services because it constitutes a paradigm shift from spreadsheet-based and process-oriented models. Populating, validating, and calibrating an agent-based model of risk is an order of magnitude easier, and makes much more sense, than doing so for other types of models. The agent-based model also makes the formulation of mitigation strategies easier. Within 3–6 years, ABM should be used routinely in audit.

What the Société Générale Asset Management example has hinted at is the idea of using ABM to design better organizations. Indeed, once one has a reliable model of an organization, it is possible to play with it, change some of the organizational parameters, and measure how the performance of the organization varies in response to these changes. Performance measurements can range from how fast information propagates in the organization to how good the organization is at collectively performing its task—inventing new products, selling, or managing receivables.

For additional information, or to view the entire article, please visit: http://www.pubmedcentral.gov/articlerender.fcgi?tool=pmcentrez&artid=128598.

Copyright © 2002, The National Academy of Sciences

Proc Natl Acad Sci U S A. 2002 May 14; 99(Suppl. 3): 7280–7287.

doi: 10.1073/pnas.082080899.

Agent-Based Modeling: Methods and Techniques for Simulating Human Systems

  • Eric Bonabeau
    Eric is one of the world's leading experts in complex systems and distributed adaptive problem solving. He has a Ph.D. in Theoretical Physics from Paris-Sud University in France, and he spent several years as a Research Fellow at the Santa Fe Institute. He is also an alumnus of the two premier French universities: Ecole Polytechnique and Ecole Nationale Supérieure des Télécommunications. Eric can be reached at eric@icosystem.com.
 
