So the second new award that was just created is a Lifetime Achievement Award for the economics and computation community. The award recognizes outstanding individuals who have made significant and long-lasting contributions to the field of economics and computation, influencing and setting the direction for decades of research. This is the inaugural prize. The committee consisted of a few colleagues and myself, with me chairing it, and we found that the two people who really started this field would have to somehow share the award, so we argued that there should be two recipients: Noam Nisan and Christos Papadimitriou. We have a single citation for the two of them: for groundbreaking work that inspired a vast community and led to the creation of the field of economics and computation. I hope you all agree that their early work in, I don't know, '99 is really what created this whole community here, so we owe them a lot of thanks. And I don't want to cheat Noam out of any more of the 15 minutes he has to talk to us, so thank you.

Thank you, Eva. I am really honored to get this award, I'm doubly honored to share it with Christos, and I'm triply honored to be part of this really amazing scientific community. Sometimes we are just part of it and don't really appreciate how amazingly nice and good this scientific community is, so I'm really proud to be part of it.

I have about 15 minutes to talk about my whole research, and I think the occasion calls for me to talk about something that I worked on a long time ago and that hopefully is still relevant. The best such thing is an open problem that's still open, an open problem from then that's still open, and that's what I want to talk about.

Around 25 years ago we started worrying about where our input comes from, because we couldn't just trust it the way algorithms used to. We had this idea, not a new idea, maybe new in computer science, definitely not in economics, that it comes from rational agents, and if it comes from rational agents, let's just motivate them to tell us the truth, and then we can continue with our algorithmic work of actually processing the data. That basically married the field of mechanism design from economics with algorithms, and that was algorithmic mechanism design. So now we have mechanisms, where we take algorithms and add to them a little bit of incentives in order to hopefully induce rational agents to give us the truth.

Once you have this new concept of a mechanism, a subtype of algorithm if you wish, the natural question is: are efficient mechanisms, efficient now in the computer-science sense that we can compute them easily, as good as efficient algorithms? What we would really want to do is take an arbitrary algorithm that does something good, add to it a proper incentive, and say voilà, everything works. And that almost is the case: VCG payments basically let you do that. If you want to maximize the sum of the players' utilities from the output, then VCG payments do exactly that for you.
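As a concrete illustration (my own sketch, not part of the talk), here is a minimal VCG mechanism in Python with Clarke pivot payments; it assumes we are simply handed every player's full valuation over the possible outcomes, and the names `vcg`, `outcomes`, and `valuations` are just illustrative:

```python
def vcg(outcomes, valuations):
    """Minimal VCG sketch: pick the welfare-maximizing outcome and charge
    each player the externality it imposes on the others (Clarke pivot).

    outcomes   -- iterable of possible outcomes
    valuations -- list of dicts, valuations[i][o] = player i's value for outcome o
    """
    outcomes = list(outcomes)
    n = len(valuations)

    def welfare(outcome, players):
        return sum(valuations[i][outcome] for i in players)

    everyone = range(n)
    chosen = max(outcomes, key=lambda o: welfare(o, everyone))

    payments = []
    for i in everyone:
        others = [j for j in everyone if j != i]
        # best welfare the others could get if player i were absent
        best_without_i = max(welfare(o, others) for o in outcomes)
        # welfare the others actually get under the chosen outcome
        payments.append(best_without_i - welfare(chosen, others))

    return chosen, payments


# Toy example: one item, two bidders; outcome "A" gives it to bidder 0, "B" to bidder 1.
vals = [{"A": 10, "B": 0}, {"A": 0, "B": 7}]
print(vcg(["A", "B"], vals))   # ("A", [7, 0]) -- second-price behaviour
```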
Okay, so is that everything you want to do? It's not a trivial question. There could be several reasons why you wouldn't want to do that, but I just want to say it's not a trivial issue, because we're in a model with money, with transferable utility, so you need some kind of argument for why you don't just want to maximize the sum of utilities. There are basically two types of arguments for why you would want something else, something that VCG doesn't immediately solve for you.

The first one is that sometimes you just don't want to. For example, in scheduling you sometimes worry about the last finishing job, for system-level reasons, and then you don't really care about the sum of utilities but about the worst utility, if you wish, the worst running time. For this kind of question there was an open problem that Amir and I put out in our first paper on the subject. It basically said: efficient algorithms can do a pretty good job of solving this problem, at least a two-approximation (and, as was said, approximation is the new everything), so this is good; can mechanisms do the same? We can't just use VCG here, and we didn't know, and we conjectured that they couldn't. After only 25 years there is finally a theorem, by Christodoulou, Koutsoupias, and Kovács, really just from this year, showing that indeed mechanisms are much worse than algorithms: they can't do anything like this, so VCG does not let you solve these kinds of problems. That is one type of gap between efficient mechanisms and efficient algorithms, and it was just resolved.

What I want to talk about in this talk is the second reason why you may want a mechanism that doesn't optimize the sum of utilities, and that reason is that you really do want to maximize the sum of utilities, but you can't. As we know in computer science, you usually can only approximate things, and the main problem is that if you just have an approximation algorithm, rather than an exact optimization algorithm, for the sum of utilities, which is a common and reasonable goal, then you can't just put VCG payments on top of it. Generally speaking it does not work; you do not get an incentive-compatible mechanism. So that's the second kind of problem, and probably the most famous questions of that flavor are those of combinatorial auctions.

This is the point of view that we had about 20 years ago. We have m items, each player has a valuation giving a value to every subset of the items, and we just want to maximize the sum of utilities. But we can't really, because it's going to be a computationally hard problem. I'm going to focus in this talk on three variants of the problem. The first is the completely general variant: you can have n arbitrary monotone valuations. In the second variant the valuations are restricted to be submodular, which is a nice intermediate class. The third variant is the symmetric case, sometimes called the multi-unit auction, where everything is exponentially smaller: we really want to be efficient in the size of the description, which is logarithmic in the number of items rather than linear in it.
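In symbols (my notation, not a slide from the talk), the underlying optimization problem and the two restricted classes are:

```latex
% Welfare maximization with m items and n players: partition the items into
% bundles S_1, ..., S_n (one per player) so as to
\[
  \max_{(S_1,\dots,S_n)\ \text{a partition of the } m \text{ items}}
  \; \sum_{i=1}^{n} v_i(S_i).
\]
% General variant: each v_i is an arbitrary monotone valuation,
% i.e. v_i(S) <= v_i(T) whenever S is a subset of T.
% Submodular variant: each v_i additionally has decreasing marginal values,
\[
  v_i(S \cup \{j\}) - v_i(S) \;\ge\; v_i(T \cup \{j\}) - v_i(T)
  \qquad \text{for all } S \subseteq T,\ j \notin T.
\]
% Multi-unit variant: all m items are identical, so v_i depends only on the
% quantity received, and the natural input size is O(log m) bits per number.
```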
For each of these three cases there is a gap between what we know algorithmically and what we know in terms of incentive-compatible mechanisms. For general valuations, algorithms can give roughly a square-root-of-m approximation, but we don't know any efficient mechanism that does better than almost linear in m. For submodular valuations we can get a constant approximation, where the exact constant depends on some details of the model, and we don't know any mechanism better than about square root of m. For the multi-unit case we can do an FPTAS, but we don't know any mechanism better than a two-approximation. For each of these examples you can think of other classes, and there are other classes, and typically the same kind of phenomenon happens: there is a gap between what you can do algorithmically and what mechanisms can do. The question is whether there really is a gap.

One thing I want to get out of the way before I continue: it's not completely clear how you provide the valuations to the algorithm, and there are various approaches. You can define some kind of bidding language, a succinct way to describe a complex valuation, or you can use a black-box model, where a valuation is represented by a black box that can answer queries for you. Of course, with these two approaches you may get somewhat different answers depending on your bidding language or on the allowed queries. Generally the story is the same for any general enough bidding language or general enough family of queries, and to be precise we can talk about a completely general family of queries, which is captured by a communication-complexity model: any question about a valuation, you can just ask and get the answer. Everything I say from here on is more or less the same for each of these models, as long as they are general enough, but if you want to fix one particular model, just consider the communication model.

Okay, so this is the question: what do we know? By the way, I should say I don't have actual references here, and almost everything I'm going to describe about what we know is not what I did; other people did it, probably mostly Shahar Dobzinski, but many others as well.

First, I want to get something out of the way. Here are two easy cases. They're not easy in the sense of being trivial; each of the lines on this slide has a sequence of papers behind it. They are easy in the sense that we know that, at least for these three classes, mechanisms are essentially as good as algorithms. If you only have a single parameter, that is, single-parameter mechanism design where each valuation is just one number, then you can basically use Myerson and not just VCG, and you can essentially do everything as well as algorithms can. Again, not trivial, lots of work, but it is true. If you allow yourself randomization, then again you can essentially do anything that algorithms can do, with randomized mechanisms. Of course, the (log log m) cubed there is probably not the right answer, but for our purposes I'm thinking of it as a constant, so it's good enough. So there are a few open problems left there, but basically we know that if we allow ourselves randomization, or if we have a single parameter, the situation is pretty good.

So what can we do deterministically in the general case? To a large extent we have one general strategy, one tool, and this is what are called VCG-based mechanisms. There is a clear identification of the situations where the VCG payment rule does extend to approximations, and that is exactly when the algorithm completely maximizes over some sub-range of the outputs. Essentially all the bounds I showed before, of what we know how to do with mechanisms, came from these VCG-based mechanisms. But the problem is that VCG-based mechanisms are not very strong, and one can actually prove lower bounds on how good an approximation they can give you.
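To illustrate the VCG-based idea (again my own sketch, under the assumption that we can afford to enumerate a small fixed range of allocations; `maximal_in_range` and the toy valuations are hypothetical): fix the range before seeing any bids, optimize exactly over it, and charge VCG payments computed within that range, which keeps truth-telling a dominant strategy.

```python
def maximal_in_range(allocation_range, valuations):
    """VCG-based ("maximal-in-range") mechanism sketch.

    allocation_range -- a FIXED list of allowed allocations, chosen before seeing bids;
                        each allocation is a tuple of bundles, one per player.
    valuations       -- list of functions, valuations[i](bundle) = player i's value.

    Because we optimize exactly over the fixed range, VCG payments computed
    within the range make truth-telling a dominant strategy.
    """
    n = len(valuations)

    def welfare(alloc, players):
        return sum(valuations[i](alloc[i]) for i in players)

    everyone = range(n)
    chosen = max(allocation_range, key=lambda a: welfare(a, everyone))

    payments = []
    for i in everyone:
        others = [j for j in everyone if j != i]
        best_without_i = max(welfare(a, others) for a in allocation_range)
        payments.append(best_without_i - welfare(chosen, others))

    return chosen, payments


# Toy multi-unit example with m identical items and 2 players: the range only allows
# "all items to one player" or "split in half", a coarse range we can search quickly.
m = 10
rng = [(m, 0), (0, m), (m // 2, m // 2)]            # allocation = quantity per player
vals = [lambda q: 3 * q, lambda q: 25 if q >= m else 2 * q]
print(maximal_in_range(rng, vals))                  # ((10, 0), [25, 0])
```

The approximation quality is then entirely a question of how well the fixed range covers the true optimum, which is exactly where the lower bounds discussed next bite.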
These are the basic lower bounds, and as you can see, they say that you are far away from what can be done algorithmically. But maybe there are other techniques that we don't know.

So that's basically where we are. Let me now switch slides and put forward a conjecture, because the previous conjecture was solved so quickly, and for this problem, which I find even more interesting, there was no clear conjecture; maybe it was not solved just because of that. So let me state an exact conjecture, and the conjecture is that you can't do anything better than VCG-based mechanisms, at least for these classes. It's not something I want to claim completely generically; I think it's pretty generic, but not completely. For these three classes of combinatorial auctions, though, I conjecture that the best you can do is what you can do with VCG-based mechanisms, which is significantly worse than what algorithms can do.

That calls for a lower bound, and if I try to characterize the lower-bound techniques in the literature, there are maybe three families. The first one goes à la Roberts. Roberts, an economist from the 1970s, showed that if your class of valuations is rich enough, you really can't do anything except weighted versions of VCG. Our combinatorial auction classes are not generic enough in that sense, but sometimes you can still do something, and some results are known for extensions of combinatorial auctions, for example to combinatorial public projects, or for some restricted classes of mechanisms. So that's one set of techniques that again doesn't settle our question, but you can get some results from it.

There are also what are sometimes called direct techniques, where you actually analyze what the mechanism can do. This usually works only for very restricted types of queries, basically value queries, and sometimes you get results there. Probably the most spectacular result along these lines is that if you only have value queries, then in the submodular case you can't do anything: there is a real gap. That's a strong result of Dobzinski.

Finally, there is a new framework that Dobzinski put forward that tries to understand the menus that are somehow implied by communication, and there is hope that you can get lower bounds from there. There is one example where you actually get a provable constant gap, for some subclass of two-player XOS valuations, so there is something you can get out of it. And of course the most promising lower-bound technique is the fourth one, the one we don't know yet.
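For reference, the "weighted versions of VCG" in the first family above are the affine maximizers of Roberts' theorem; in my notation (the standard statement, not a slide from the talk):

```latex
% Roberts (1979): with at least three possible outcomes and an unrestricted
% (rich enough) valuation domain, every social choice function that is
% implementable in dominant strategies must be an affine maximizer:
\[
  f(v_1,\dots,v_n) \;\in\; \arg\max_{a \in A}
  \Big( \sum_{i=1}^{n} w_i\, v_i(a) + C_a \Big)
\]
% for some fixed nonnegative player weights w_1, ..., w_n (not all zero)
% and fixed outcome constants C_a.
```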
So that's my advertisement for an open problem that's been open for almost 25 years, and I hope it gets solved in less than 25 more years. Thank you.

Thank you, Noam, that was a really great talk. We have time for one question while they're switching mics. Questions? Oh, good; maybe I'll ask you one: what would be your favorite class of problems here, other than auctions?

Oh wow. I'm not sure, I'm surprised by this: I have my first favorite and now you're asking for my second favorite. I don't know, I can't really answer. I think the auctions world is so wide, there are so many variants there, and it captures lots of allocation problems, which is really what we do. Okay, a question from the audience.

Sorry, I'm having some trouble with the headset. How should we think about how a principal might choose a mechanism if there are significant gaps between modeling assumptions, like requiring deterministic mechanisms versus allowing randomized mechanisms?

Okay, let me try to answer this philosophically. The problems that I referred to here are really the problems the way we looked at them 20 years ago, when we had a very simple model and wanted to understand that simple model, even though we knew that reality was much more varied. I think an important part of what this community has done in the last 20 years is exactly to move to more realistic, more varied, richer models, to study them, and sometimes to solve, in a richer setting, a lot of the problems that we had with the basic models, which I think is great. Looking at it from the algorithmic mechanism design point of view, that is really the main development this community has seen in the last 20 years, and it's great. But this talk wanted to go back to the very basic model and just try to understand it scientifically. So I understood your question as some kind of criticism, and I agree with it.

Okay, thank you, Noam, and let's let Christos have his turn. [Applause]