
CSS Seminar: Learning in Linear Public Goods Games

Date:  Friday, September 27, 2013

Time:  3:00pm

Location:  Center for Social Complexity Suite 373-381, Research Building

Presenter:  Chenna Reddy Cotla, CSS PhD Candidate

Title:  Learning in Linear Public Goods Games: A Comparative Analysis

The talk will be followed by a Q&A session and light refreshments.

Abstract:  This paper examines learning in repeated linear public goods games. Experimental data from previously published papers are used to test several learning models in terms of how accurately they describe individuals' round-by-round choices. In total, 18 datasets are considered, each differing from the others in at least one of the following aspects: marginal per capita return, group size, matching protocol, number of rounds, and the endowment that determines the number of stage-game strategies. Both the ex post descriptive power of the learning models and their ex ante predictive power are examined. Descriptive power is assessed by comparing mean quadratic scores computed for each dataset using parameters estimated from all datasets; predictive power is evaluated by comparing mean quadratic scores computed for each dataset using parameters estimated from the other datasets. The following models of individual-level adaptive behavior are considered: reinforcement learning, normalized reinforcement learning, stochastic fictitious play, normalized stochastic fictitious play, experience weighted attraction learning (EWA), self-tuning EWA, individual evolutionary learning, and impulse matching learning. In addition to these prominent learning models, the paper introduces a new one: experience weighted attraction learning with inertia and experimentation (EWAIE). The main result is that EWAIE outperforms the other learning models in modeling individuals' round-by-round choices in repeated linear public goods games. Furthermore, while all of the learning models outperform a random choice benchmark, only EWA and EWAIE outperform the empirical choice frequencies in predicting behavior, which indicates that they adjust their individual-level predictions more accurately over time.
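For readers unfamiliar with the scoring rule mentioned in the abstract, the sketch below illustrates one common form of the quadratic score for a single round-by-round prediction: a learning model assigns probabilities to the subject's possible contribution levels, and the score rewards placing probability on the choice actually made. The exact normalization and averaging used in the paper may differ, and the probability vector here is purely illustrative.

    import numpy as np

    def quadratic_score(predicted_probs, chosen_index):
        """Quadratic (Brier-style) score for one round-by-round prediction.

        predicted_probs: probabilities over the stage-game strategies
                         (e.g., contribution levels 0..endowment).
        chosen_index:    index of the strategy the subject actually chose.
        Under this normalization a perfect point prediction scores 1 and the
        worst possible prediction scores -1.
        """
        p = np.asarray(predicted_probs, dtype=float)
        outcome = np.zeros_like(p)
        outcome[chosen_index] = 1.0
        return 1.0 - np.sum((p - outcome) ** 2)

    # Illustrative example: with an endowment of 4 tokens (five strategies),
    # a subject contributes 2 tokens and a model assigned these probabilities.
    probs = [0.10, 0.15, 0.40, 0.25, 0.10]
    print(quadratic_score(probs, chosen_index=2))  # 0.535

Averaging such scores over subjects and rounds within a dataset gives the mean quadratic score used to compare models; a uniform random benchmark over K strategies scores 1/K per observation under this form of the rule.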