Hi! PARIS AI SYMPOSIUM

We consider the problem of fairly allocating a set of indivisible goods to equally entitled agents, with no monetary transfers. A share function maps a pair consisting of an agent's valuation and the number of agents to a non-negative value, with the interpretation that if an allocation fails to give the agent a bundle of value at least equal to her share, this serves as evidence that the allocation is not fair towards the agent. We embark on a systematic study of feasible shares – a share is feasible if we can always give every agent her share. We introduce the notion of a self-maximizing share to capture the incentive for truthful valuation reporting. We prove the non-existence of an “ultimate feasible share”: one that dominates every other feasible share that is self-maximizing. We then present several feasible shares that are self-maximizing, polynomial-time computable, and approximately dominate all other shares.

 

Joint work with Uri Feige.
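
To fix ideas, one way to read the definitions above in symbols (my notation, not necessarily the paper's): a share $s$ maps an agent's valuation $v$ over the set $M$ of goods and the number of agents $n$ to a value $s(v, n)$, and

$$s \text{ is feasible} \;\iff\; \text{for every profile } (v_1, \dots, v_n) \text{ there is a partition } (A_1, \dots, A_n) \text{ of } M \text{ with } v_i(A_i) \ge s(v_i, n) \text{ for all } i.$$

Under this reading, the proportional share $s(v, n) = v(M)/n$ is not feasible for indivisible goods (with a single good that both of two agents value positively, one of them necessarily receives value $0$), and a share $s$ dominates a share $s'$ if $s(v, n) \ge s'(v, n)$ for every valuation $v$ and every $n$.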

We propose a comprehensive model of digital commerce in which data and heterogeneity are defining features. A digital platform matches consumers and advertisers online. Each consumer has heterogeneous preferences for each advertiser's brand, and the advertisers can tailor their product lines to the preferences of the consumer. Each consumer can access each seller's products online or offline. The digital platform can improve the quality of the match through its data collection, and monetizes its data by selling digital advertising space in (generalized) second-price auctions.

 

We derive the equilibrium surplus sharing between consumers, advertisers, and the digital platform. We evaluate how different data-governance rules affect the creation and distribution of the surplus. We contrast the unrestricted use of data with contextual and cohort-restricted uses of data. We show that privacy-enhancing data-governance rules, such as those corresponding to federated learning, can increase the competition among the advertisers and lead to welfare gains for the digital platform and for the consumers.

 

(joint with Alessandro Bonatti (MIT)).
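
Since monetization in this model runs through (generalized) second-price auctions, the following is a minimal, generic sketch of GSP allocation and pricing. It illustrates the auction format named in the abstract, not the paper's model; the bidder names, bids, and click-through rates are hypothetical.

```python
# Generic generalized second-price (GSP) sketch: bidders are ranked by bid and
# the winner of slot k pays the (k+1)-th highest bid per click.

def gsp(bids, slot_ctrs):
    """bids: {bidder: bid per click}. slot_ctrs: click-through rates of the ad
    slots, best slot first. Returns ({bidder: (slot, price per click)},
    expected platform revenue per impression)."""
    ranking = sorted(bids, key=bids.get, reverse=True)
    outcome, revenue = {}, 0.0
    for slot, bidder in enumerate(ranking[: len(slot_ctrs)]):
        # Pay the bid of the next-ranked bidder (0 if there is none).
        next_bid = bids[ranking[slot + 1]] if slot + 1 < len(ranking) else 0.0
        outcome[bidder] = (slot, next_bid)
        revenue += slot_ctrs[slot] * next_bid   # prices are charged per click
    return outcome, revenue

if __name__ == "__main__":
    bids = {"brand_A": 4.0, "brand_B": 2.5, "brand_C": 1.0}  # hypothetical bids
    print(gsp(bids, slot_ctrs=[0.3, 0.1]))
    # ({'brand_A': (0, 2.5), 'brand_B': (1, 1.0)}, 0.85)
```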

We consider two fundamental questions in data-driven decision making: 1) how should a decision-maker construct a mapping from historical data to decisions? 2) how much data is needed to operate “effectively”? We discuss various central applications in pricing and capacity decisions, together with different associated data structures. We present recent results that make it possible to quantify (robustly) achievable performance across data sizes, small and big. These results yield a fundamental practical insight into the economics of data sizes: in many applications, a little data can go a long way in optimizing decisions.

 

The talk will draw on results from a series of papers:

- Allouah, Bahamou, and Besbes, Pricing with Samples (OR, 2022). Available at SSRN: https://ssrn.com/abstract=3334650 

- Allouah, Bahamou, and Besbes, Optimal Pricing with a Single Point (EC’21). Available at SSRN: https://ssrn.com/abstract=3801056 

- Besbes and Mouchtaki, How Big Should Your Data Really Be? Data-Driven Newsvendor: Learning One Sample at a Time. Available at SSRN: https://ssrn.com/abstract=3878155 
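
As a rough illustration of the kind of data-to-decisions mapping discussed above, here is a minimal sketch of the standard sample-average-approximation (empirical-quantile) newsvendor rule. It is a generic textbook decision rule, not necessarily the exact policies analyzed in the papers; the demand samples and cost parameters are hypothetical.

```python
import math

def saa_newsvendor_order(demand_samples, underage_cost, overage_cost):
    """Sample-average-approximation newsvendor: order the empirical quantile of
    demand at the critical ratio b / (b + h), where b is the per-unit underage
    cost and h the per-unit overage cost."""
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    samples = sorted(demand_samples)
    # Smallest order covering at least a critical_ratio fraction of the samples.
    k = math.ceil(critical_ratio * len(samples))
    return samples[max(k - 1, 0)]

if __name__ == "__main__":
    demand = [12, 7, 15, 9, 11, 14, 8, 10]   # hypothetical past demand
    print(saa_newsvendor_order(demand, underage_cost=3.0, overage_cost=1.0))
    # 12  (the empirical 0.75-quantile of demand)
```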

Consider an urn filled with balls, each labeled with one of several possible collective decisions. Now, draw two balls from the urn, let a random voter pick the one she prefers as the collective decision, relabel the losing ball with the collective decision, put both balls back into the urn, and repeat. In order to prevent the permanent disappearance of some types of balls, once in a while, a randomly drawn ball is relabeled with a random collective decision. We prove that the empirical distribution of collective decisions converges towards a maximal lottery, a celebrated probabilistic voting rule proposed by Peter C. Fishburn (Rev. Econ. Stud., 51(4), 1984). In fact, the probability that the collective decision in round n is made according to a maximal lottery increases exponentially in n. The proposed procedure is more flexible than traditional voting rules and bears strong similarities to natural processes studied in biology, physics, and chemistry as well as to algorithms proposed in machine learning. Joint work with Florian Brandl (Bonn). https://pub.dss.in.tum.de/brandt-research/dynamic_ml.pdf
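
A minimal simulation sketch of the urn process just described; the urn size, mutation rate, preference profile, and number of rounds below are hypothetical choices, not parameters from the paper.

```python
import random
from collections import Counter

def urn_process(alternatives, voters, urn_size=100, mutation_rate=0.01,
                rounds=20000, seed=0):
    """Simulate the urn dynamics: draw two balls, let a random voter pick the
    one she prefers as the round's collective decision, relabel the losing ball
    with that decision, and occasionally relabel a random ball with a random
    alternative so that no label disappears forever."""
    rng = random.Random(seed)
    urn = [rng.choice(alternatives) for _ in range(urn_size)]
    decisions = Counter()
    for _ in range(rounds):
        i, j = rng.sample(range(urn_size), 2)
        ranking = rng.choice(voters)          # a voter = ranking, best first
        if ranking.index(urn[j]) < ranking.index(urn[i]):
            i, j = j, i                       # now ball i carries the winner
        decisions[urn[i]] += 1
        urn[j] = urn[i]                       # relabel the losing ball
        if rng.random() < mutation_rate:
            urn[rng.randrange(urn_size)] = rng.choice(alternatives)
    return decisions

if __name__ == "__main__":
    # Hypothetical Condorcet cycle a > b > c > a; the maximal lottery is then
    # uniform, so the empirical decision frequencies should be roughly equal.
    voters = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
    print(urn_process(["a", "b", "c"], voters))
```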

Digital Democracy (aka e-democracy or interactive democracy) aims to enhance democratic decision-making processes by utilizing digital technology. A common goal of these approaches is to make collective decision-making more engaging, inclusive, and responsive to participants' opinions. For example, online decision-making platforms often provide much more flexibility and interaction possibilities than traditional democratic systems. The successful design of digital democracy systems is without doubt a multidisciplinary research challenge. I argue that tools and techniques from computational social choice should be employed to aid the design of online decision-making platforms and other digital democracy systems.

We study the design of profit-maximizing mechanisms in environments with interdependent values. A single unit of a good is for sale. There is a known joint distribution of the bidders’ values for the good. Two programs are considered: (i) Max (over mechanisms) min (over information structures and equilibria) profit; (ii) Min (over information structures) max (over mechanisms and equilibria) profit. We show that it is without loss to restrict attention to solutions of (i) and (ii) in which actions and signals belong to the same linearly ordered space, equilibrium actions are equal to signals, and the only binding equilibrium constraints are those associated with local deviations. Under such restrictions, the non-linear programs (i) and (ii) become linear programs that are dual to one another in an approximate sense. In particular, the restricted programs have the same optimal value, which we term the profit guarantee. These results simplify the task of computing and characterizing informationally robust optimal auctions and worst-case information structures with general value distributions. The framework can be generalized to include additional feasibility constraints, multiple goods, and ambiguous value distributions. http://benjaminbrooks.net/downloads/brooksdu_structure.pdf
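
In symbols, writing $\Pi(M, I, \sigma)$ for the seller's expected profit under mechanism $M$, information structure $I$, and equilibrium $\sigma \in E(M, I)$ (my notation, paraphrasing the abstract), the two programs are

$$\text{(i)}\ \ \max_{M}\ \min_{I,\ \sigma \in E(M, I)} \Pi(M, I, \sigma), \qquad \text{(ii)}\ \ \min_{I}\ \max_{M,\ \sigma \in E(M, I)} \Pi(M, I, \sigma),$$

and the common optimal value of their restricted (linear-programming) versions is the profit guarantee.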

The explosive growth of online markets has created complex ecosystems of algorithmic agents. To optimize their revenue, agents need to understand how the market works, and to do so they often resort to strategies that learn from past observations. In this talk, we describe some recent results characterizing the strengths and limitations of sequential decision-making approaches applied to various problems arising in digital markets. These include dynamic pricing, bilateral trading, supply chain management, and adaptive taxation. The analysis sheds light on how the learning rates depend on the interplay between the form of the revenue function and the feedback provided during the learning process.

This paper studies the interplay between worker supply and firm demand, and their effect on sorting and wages in the labor market. I build a model of one-to-many matching with multidimensional types in which several workers are employed by a single firm. Matching is dictated by worker preferences, their relative productivity in the firm, and substitution patterns with other workers. Using tools from the optimal transport literature, I solve the model and structurally estimate it on Portuguese matched employer-employee data. The Portuguese labor market is characterized by an increase in the relative supply of high school graduates, an increasingly unbalanced distribution of high school graduates versus non-graduates across industries, and a decreasing high school wage premium between 1987 and 2017. Counterfactual exercises suggest that both changes in worker preferences and the increasing relative productivity of high school graduates over non-graduates act as a mitigating force on the decreasing high school wage premium, but do not fully compensate for high school graduates’ rise in relative supply.

A cornerstone in the modern political organization of societies is the existence of a deliberative assembly, reflecting the needs of different population segments. As modern societies become more complex, representation according to dimensions beyond political affiliation and geography is demanded; examples include gender balance and ethnicity. As this dimensionality increases, the task becomes more challenging and requires more sophisticated mathematical tools. In this paper, we initiate the study of multidimensional apportionments and show that, in three and more dimensions, their existence is not guaranteed. However, our main result states that it is possible to elect a house nearly respecting proportionality of representation along several dimensions simultaneously. We finally illustrate the potential of our approach with recent election data.

https://www.pnas.org/doi/abs/10.1073/pnas.2109305119

In order to analyze employer-employee matching patterns over time, we build a novel framework called *dynamic inverse optimal transport*. We focus in particular on the stationary case and investigate computation and estimation issues. A preliminary application to Swedish engineers is given. Based on joint work with Pauline Corblet and Jeremy Fox.

Many modern machine learning paradigms require large amounts of data and computation power that are rarely seen in one place or owned by one agent. In recent years, methods such as federated learning have been embraced as an approach for bringing about collaboration across learning agents. In practice, the success of these methods relies upon our ability to pool together the efforts of large numbers of individual learning agents, data set owners, and curators. In this talk, I will discuss how recruiting, serving, and retaining these agents requires us to address agents’ needs, limitations, and responsibilities. In particular, I will discuss two major questions in this field. First, how can we design collaborative learning mechanisms that benefit agents with heterogeneous learning objectives? Second, how can we ensure that the burden of data collection and learning is shared equitably between agents?

Algorithmic predictions steer markets, drive consumption, shape communities, and alter life trajectories. The theory and practice of machine learning, however, has long neglected the often invisible causal forces of prediction. A recent conceptual framework, called performative prediction, draws attention to the fundamental difference between learning from a population and steering a population through predictions. After covering some emerging insights on performative prediction, the lecture turns to an application of performativity to the question of power in digital economies. Traditional economic concepts struggle with identifying anti-competitive patterns in digital platforms not least due to the difficulty of defining the market. I will introduce the notion of performative power that sidesteps the complexity of market definition and directly measures how much a firm can benefit from steering consumer behavior. I end on a discussion of the normative implications of high performative power, its connections to measures of market power in economics, and its relationship to ongoing antitrust debates.

Statistical decisions are often given meaning in the context of other decisions, particularly when there are scarce resources to be shared. Managing such sharing is one of the classical goals of microeconomics, and it is given new relevance in the modern setting of large, human-focused datasets, and in data-analytic contexts such as classifiers and recommendation systems. I'll discuss several recent projects that aim to explore the interface between machine learning and microeconomics, including leader/follower dynamics in strategic classification, the robust learning of optimal auctions, a Lyapunov theory for matching markets with transfers, and the use of contract theory as a way to design mechanisms for statistical inference.

In this talk, we explore two novel algorithms for multitask online learning, phrased as a cooperative and asynchronous multiagent game. First, we study MT-OMD, a multitask generalization of Online Mirror Descent (OMD) which operates by sharing updates between tasks. We prove that the regret of MT-OMD is of order $\sqrt{1 + \sigma^2(N - 1)}\sqrt{T}$, where $\sigma^2$ is the task variance according to the geometry induced by the regularizer, $N$ is the number of tasks, and $T$ is the time horizon. Whenever tasks are similar, that is, $\sigma^2 \le 1$, this method improves upon the $\sqrt{NT}$ bound obtained by running independent OMDs on each task. We then introduce and analyze AdaTask, a multitask online learning algorithm that adapts to the unknown task structure. When the $N$ tasks are stochastically activated, we show that the regret of AdaTask is better, by a factor that can be as large as $\sqrt{N}$, than the regret achieved by running $N$ independent algorithms, one for each task. AdaTask can be seen as a comparator-adaptive version of Follow-the-Regularized-Leader with a Mahalanobis norm potential. Through a variational formulation of this potential, our analysis reveals how AdaTask jointly learns the predictors and the task interaction matrix. Experiments supporting our theoretical findings are presented.
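
Unpacking the comparison between the two MT-OMD bounds stated above:

$$\sqrt{1 + \sigma^2 (N - 1)}\,\sqrt{T} \;\le\; \sqrt{N T} \quad\Longleftrightarrow\quad 1 + \sigma^2 (N - 1) \le N \quad\Longleftrightarrow\quad \sigma^2 \le 1,$$

so whenever tasks are similar in the sense $\sigma^2 \le 1$ the multitask bound is no worse than running $N$ independent OMDs, and for nearly identical tasks ($\sigma^2 \approx 0$) the regret scales like $\sqrt{T}$ rather than $\sqrt{NT}$.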

An analyst is tasked with producing a statistical study. The analyst is not monitored and is able to manipulate the study. He can receive payments contingent on his report and on trusted data collected from an independent source, modeled as a statistical experiment. We describe the information that can be elicited with appropriately shaped incentives, and apply our framework to a variety of common statistical models. We then compare experiments based on the information they enable us to elicit. This order is connected to, but different from, the Blackwell order. Data preferred for estimation are also preferred for elicitation, but not conversely. Our results shed light on how using data as an incentive generator in payment schemes differs from using data for learning. https://ai.stanford.edu/~nlambert/elicitability.pdf

Economists often estimate models using data from a particular setting, e.g. estimating risk preferences in a specific subject pool. Whether a model's predictions extrapolate well across settings depends on whether the model has learned generalizable structure. We provide a tractable formulation for this "out-of-domain" prediction problem, and define the transfer error of a model to be its performance on data from a new domain. We derive finite-sample forecast intervals that are guaranteed to cover realized transfer errors with a user-selected probability when domains are drawn iid from some population, and apply our approach to compare the transferability of economic models and black box algorithms for predicting certainty equivalents. While the black box algorithms outperform the economic models in traditional "within-domain" tests (i.e., when estimated and tested on data from the same domain), in this application models motivated by economic theory transfer more reliably than black-box machine learning methods do.
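
As a simple illustration of the coverage idea described above, here is a sketch of a one-sided, distribution-free forecast interval for transfer errors built from order statistics of per-domain errors. This is a generic conformal-style construction under the iid-domains assumption, not necessarily the paper's estimator; the error values are hypothetical.

```python
import math

def transfer_error_forecast_interval(domain_errors, alpha=0.1):
    """Given the transfer errors of a fixed model measured on K held-out
    domains (assumed drawn iid from a population of domains), return a
    one-sided interval [0, q] that covers the error on a new domain with
    probability at least 1 - alpha, by a standard order-statistic argument."""
    errors = sorted(domain_errors)          # assumes non-negative errors
    k = len(errors)
    rank = math.ceil((1 - alpha) * (k + 1))
    if rank > k:
        return (0.0, float("inf"))          # too few domains for this level
    return (0.0, errors[rank - 1])

if __name__ == "__main__":
    # Hypothetical per-domain transfer errors of some estimated model.
    errors = [0.12, 0.08, 0.21, 0.15, 0.10, 0.30, 0.11, 0.09, 0.18, 0.14]
    print(transfer_error_forecast_interval(errors, alpha=0.2))
    # (0.0, 0.21)
```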

We consider the problem of controlling a partially observed dynamic process on a graph by a limited number of interventions. This problem naturally arises in contexts such as scheduling virus tests to curb an epidemic; targeted marketing in order to promote a product; and manually inspecting posts to detect fake news spreading on social networks. We formulate this setup as a sequential decision problem over a temporal graph process. In the face of an exponential state space, a combinatorial action space, and partial observability, we design a novel tractable scheme to control dynamical processes on temporal graphs. We successfully apply our approach to two popular problems that fall into our framework: prioritizing which nodes should be tested in order to curb the spread of an epidemic, and influence maximization on a graph.

We analyze a scenario in which software agents implemented as regret-minimizing algorithms engage in a repeated auction on behalf of their users. We study first price and second price auctions, as well as their generalized versions (e.g., those used for ad auctions). Using both theoretical analysis and simulations, we show that, surprisingly, in second price auctions the players have incentives to misreport their true valuations to their own learning agents, while in the first price auction it is a dominant strategy for all players to truthfully report their valuations to their agents. See paper on arXiv: https://arxiv.org/abs/2110.11855, as well as a related companion paper: https://arxiv.org/abs/2112.07640.
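
A minimal sketch of the setup above: each user reports a (possibly non-truthful) value to its own learning agent, and the agents bid in a repeated second-price auction using Hedge (multiplicative weights) over a discretized bid grid. This is a generic no-regret simulation, not the papers' exact model; the values, grid, learning rate, and tie-breaking rule are hypothetical choices.

```python
import math
import random

def hedge_second_price(reported_values, bid_grid, rounds=5000, eta=0.2, seed=0):
    """Two learning agents repeatedly bid in a second-price auction on behalf
    of their users, each running Hedge over a fixed bid grid and scoring bids
    against the value its own user reported to it. Returns the average bids."""
    rng = random.Random(seed)
    weights = [[1.0 / len(bid_grid)] * len(bid_grid) for _ in reported_values]

    def sample(w):
        r, acc = rng.random(), 0.0
        for k, wk in enumerate(w):
            acc += wk
            if r <= acc:
                return k
        return len(w) - 1

    avg_bid = [0.0, 0.0]
    for _ in range(rounds):
        bids = [bid_grid[sample(w)] for w in weights]
        for i in (0, 1):
            other = bids[1 - i]
            # Full-information Hedge update: counterfactual utility of each bid.
            for k, b in enumerate(bid_grid):
                wins = b > other or (b == other and i == 0)  # ties to agent 0
                util = (reported_values[i] - other) if wins else 0.0
                weights[i][k] *= math.exp(eta * util)
            total = sum(weights[i])
            weights[i] = [w / total for w in weights[i]]
            avg_bid[i] += bids[i] / rounds
    return avg_bid

if __name__ == "__main__":
    grid = [k / 20 for k in range(21)]
    # Hypothetical reported values; rerunning with a shaded report (e.g. 0.6
    # instead of 0.8) shows how the agents' long-run bids change.
    print(hedge_second_price([1.0, 0.8], grid))
```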

A rapidly growing literature on lying in behavioral economics and psychology shows that individuals often do not lie even when lying maximizes their utility. In this work, we attempt to incorporate these findings into the theory of mechanism design. We consider players who have a preference for truth-telling and will lie only if their benefit from lying is sufficiently larger than the loss it imposes on the others. To accommodate such players, we introduce α-moral mechanisms, in which the gain of a player from misreporting his true value, compared to truth-telling, is at most an α fraction of the loss that the others incur due to the misreporting. We develop a theory of moral mechanisms in the canonical setting of single-item auctions. We identify similarities and disparities to the standard theory of truthful mechanisms. In particular, we show that the auctioneer can take advantage of the players' preference for truth-telling when the values of the players are correlated, even when there are only two players. In contrast, we show that truthful mechanisms are optimal even among moral ones when the values of the players are independently drawn from certain identical distributions (e.g., the uniform and exponential distributions). A by-product of the latter proof is an alternative proof of Myerson’s optimal truthful mechanism characterization in the settings that we consider.

Joint work with Shahar Dobzinski.
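
One way to write the α-moral constraint described above in symbols (my notation; the paper's formal definition may differ in details): for every player $i$, true value $v_i$, misreport $v_i'$, and reports $v_{-i}$ of the others,

$$u_i\big(v_i';\, v_i, v_{-i}\big) - u_i\big(v_i;\, v_i, v_{-i}\big) \;\le\; \alpha \cdot \Big( W_{-i}\big(v_i, v_{-i}\big) - W_{-i}\big(v_i', v_{-i}\big) \Big),$$

where $u_i(\cdot;\, v_i, v_{-i})$ is player $i$'s utility from a report given the true values and $W_{-i}$ is the total utility of everyone else. Setting $\alpha = 0$ recovers the standard truthfulness constraint, while larger $\alpha$ (players with a stronger preference for truth-telling) allows a richer class of mechanisms.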

Classic voting theory assumes that a set of voters and a set of alternatives are given in the form of explicit lists. This assumption is no longer valid when voters have to choose from an exponential number of outcomes, implicitly defined by a combinatorial structure. This is, for example, the case in public housing markets, where the agents' preferences over houses are lifted to preferences over all possible assignments of houses to agents. A crucial question is whether a winning alternative under some optimality criterion can be computed in time polynomial in the number of agents. In the first part of the talk, we apply the optimality criterion of popularity (also known as the Condorcet principle) to single-winner elections on matchings and branchings and present a general technique for designing algorithms that find popular alternatives. In the second part of the talk, we turn to multiwinner elections and the criterion of proportional representation. We show how to select a set of matchings that satisfies the well-studied property of core-stability, and thereby guarantees to respect majority as well as minority opinions of the electorate.

Contract design is a celebrated area of microeconomics that has received considerably less algorithmic attention so far than mechanism design. While mechanisms govern the allocation of scarce resources, contracts play an important role in economics and society as they govern the allocation of effort. How can algorithmic and AI techniques help optimize this allocation? Such techniques seem increasingly relevant as contracts emerge in large-scale, complex, data-driven environments like online labor markets. And how can contract design inform the design of algorithms (in particular learning ones) that interact with strategic agents? In this talk I describe some recent results motivated by these questions, with a focus on weakening knowledge assumptions about the agent we’re trying to elicit effort from.

In a private private information structure, agents’ signals contain no information about the signals of their peers. We study how informative such structures can be, and characterize those that are on the Pareto frontier, in the sense that it is impossible to give more information to any agent without violating privacy. In our main application, we show how to optimally disclose information about an unknown state under the constraint of not revealing anything about a correlated variable that contains sensitive information.

Over the last two decades we have developed a good understanding of how to quantify the impact of strategic user behavior on outcomes in many games (including traffic routing and online auctions), and have shown that the resulting bounds extend to repeated games assuming players use a form of no-regret learning to adapt to the environment. In this talk, we will discuss the known results as well as their limitations. In both packet routing and online auctions, the state of the game evolves depending on outcomes in previous rounds, which makes the known extension inapplicable. In the context of a simple routing game, we find that the no-regret condition for learning is strong enough to guarantee stability assuming a constant-factor increase in server capacity, but examples also suggest that no-regret learning may not be a good enough learning strategy in such environments.

 

A key tension in the study of experimentation revolves around the exploration of new possibilities and the exploitation of prior discoveries. Starting from Robbins (1952), a large literature in economics and statistics has wed the two: Agents experiment by selecting potentially risky options and observing their resulting payoffs. This framework has been used in many applications, ranging from pricing decisions to labor market search. Nonetheless, in many applications, agents' exploration and exploitation need not be intertwined. An investor may study stocks she is not invested in, an employee may explore alternative jobs while working, etc. The current paper focuses on the consequences of disentangling exploration from exploitation in the classical Poisson bandit problem. We fully characterize the solution when exploration and exploitation are disentangled, both for "good news" and "bad news" settings. We illustrate the stark differences the optimal exploration policy exhibits compared to the standard setting. In particular, we show that agents optimally utilize the option to explore projects different from the ones they exploit. The separation of exploration from exploitation guarantees asymptotic efficiency, and the payoff benefits of disentanglement are particularly pronounced for intermediate parameters. Optimal exploration and exploitation exhibit a lot of persistence when disentangled. (joint with Alessandro Lizzeri and Eran Shmaya)
