Pranjal Rawat
Traditional methods for computing equilibria in auctions become computationally intractable as auction complexity increases, particularly in multi-item and dynamic auctions. This paper introduces a self-play-based reinforcement learning approach that employs advanced algorithms such as Proximal Policy Optimization and Neural Fictitious Self-Play to approximate Bayes-Nash equilibria. The framework accommodates continuous action spaces, high-dimensional information states, and delayed payoffs. Through self-play, these algorithms learn robust and near-optimal bidding strategies in auctions with known equilibria, including settings with symmetric and asymmetric valuations, private and interdependent values, and multiple rounds of bidding.
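To make the self-play idea concrete, the sketch below is a minimal illustration, not the paper's implementation: it replaces PPO/NFSP with a plain REINFORCE-style policy gradient and a shared linear bidding policy in a symmetric first-price sealed-bid auction with uniform private values, where the known symmetric Bayes-Nash equilibrium bid is ((n-1)/n)·v. All names, hyperparameters, and the policy class are illustrative assumptions.

```python
import numpy as np

# Toy self-play learner for a symmetric first-price sealed-bid auction.
# All bidders share one policy: bid ~ Normal(theta * value, sigma^2).
# A REINFORCE update on the shared parameter approximates the symmetric
# self-play dynamics; theta should drift toward the BNE shading (n-1)/n.

rng = np.random.default_rng(0)
n_bidders = 4
theta = 0.5      # initial bid-shading factor (illustrative)
sigma = 0.05     # exploration noise of the Gaussian bidding policy
lr = 0.01        # learning rate
batch = 512      # auctions simulated per update

for step in range(3000):
    values = rng.uniform(size=(batch, n_bidders))           # private values ~ U[0,1]
    noise = rng.normal(scale=sigma, size=(batch, n_bidders))
    bids = theta * values + noise                            # shared self-play policy
    winners = bids.argmax(axis=1)                            # highest bid wins
    rows = np.arange(batch)
    payoffs = np.zeros_like(bids)
    payoffs[rows, winners] = values[rows, winners] - bids[rows, winners]
    # Gradient of log N(bid; theta*value, sigma^2) with respect to theta,
    # paired with each bidder's own payoff (independent-learner gradient).
    grad_logp = (bids - theta * values) / sigma**2 * values
    baseline = payoffs.mean()
    theta += lr * np.mean((payoffs - baseline) * grad_logp)

print(f"learned shading factor: {theta:.3f}")
print(f"symmetric BNE shading factor: {(n_bidders - 1) / n_bidders:.3f}")
```

Because each bidder's payoff is paired only with the gradient of its own bid's log-probability, the update approximates each agent's individual policy gradient under self-play rather than joint payoff maximization, which is what lets the shared parameter settle near the equilibrium shading factor rather than a collusive one.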