Hello everybody, welcome to the closing ceremony and panel for the 2024 Qiskit Global Summer School: The Path to Utility. We had over 6,000 of you join us this year, which is a record-breaking number, so congrats to everybody who participated; you did an amazing job. Some of you have already completed all four labs, which is amazing; I know they were challenging this year. Some of you are still working on them, which is totally fine, you still have plenty of time. But for those of you who have already completed all the labs, an extra big congratulations, because some of these solutions are super creative and they look awesome.

Like I said, we have had over 6,000 participants this year, which is a record for us. This is the fifth year that we've done this, and the number of countries represented and the number of languages we've had have been increasing every single year, and that's just amazing to see. We have had lots of university and high school students, but also a good amount of industry professionals, so it's great to see such a wide mix of people in the audience. And well over 70% of you joined us on Discord as well; I've seen some really interesting discussions there, and I've tried to participate as much as I could in the Discord this year too, but that's an all-time high for us as well.

I don't want to take too much time away from the panel, but I just wanted to say a big thank you to our entire team: all of the lecturers, all the people who worked on the labs, and John Watrous as well, who has been helping me a lot with the content and the curation of the summer school. There are also a lot of people behind the scenes that you may or may not have seen, such as some of our events coordinators like Alia, and Serena, who does all of our software on the back end, and of course all of the people helping out in the Discord, in the chat, in the Crowdcast, actually
running the event, and the videographers as well. A big thank you to all of those people.

We hope that you're going to walk away from the summer school this year knowing a whole bunch of things; I put just four of the big ones here. We hope that you will understand what utility-scale quantum computing is, and which application areas are probably going to be best suited for this era of quantum computing. We hope you'll know how to transpile and run circuits using the primitives at utility scale. And lastly, now that you have become familiar with state-of-the-art mitigation techniques, you will know how to apply them to these large-scale quantum circuits.

Thank you all for the community that you have built during this summer school. Alia did a really fun job on the slide of putting up some of the memes that you guys have been working really hard on; I've had a real kick reading some of these in the Discord. The Discord is still going to be open for a while, so there's always more time to add memes.

And just some closing reminders before we get into the panel: the deadline for all lab submissions is Wednesday, July 31st, at 11:59 p.m.
Eastern Time. The Discord server is going to be closing that Friday, August 2nd, and the deadline for certificate and quantum badge requests is the following Friday, August 9th, so feel free to screenshot this to remind yourself of these important dates. It's really important that you fill out the event survey. Please, please, please fill out the event survey; it really helps us understand what we did well and what we can improve upon for next year. If you want to continue to be part of the community, and continue to get free, awesome educational content from IBM, we need you to fill out the survey.

And with that, I think we're ready to jump into the panel. I'm going to be moderating, and these are our valued and esteemed panelists: Zlatko, Pedro, Jen, Abhinav, and Derek, who all have slightly different jobs and functions at IBM Quantum. So without any further ado, I'll have all of our panelists briefly introduce themselves and tell you a little bit about the work that they do. I think I'm going to start with Zlatko. Zlatko, you want to introduce yourself and we'll get going?

All right, thank you, Olivia, and congratulations to the 6,000 of you across, what, 109 countries, I think Olivia said. That is incredible, that is really incredible. I'm Zlatko Minev. You didn't see me in your lectures this year, but I've had the pleasure of lecturing up until this year, and now I get to send you off. I'm a quantum wrangler: I basically herd quasiparticles all day long, hoping to unlock a little bit of the mysteries of quantum and to build useful quantum computers for the world.

So how did I get here? I think that's part of the question, right, Olivia? I'll never forget, back in 2007, my first semester in college at UC Berkeley, I was taking a class on electromagnetics from Professor Irfan Siddiqi, and at the end of this class he asked me to join his lab. And I told him, well, what do
you do? And he said, quantum computing. And I said, I've never heard of that before, but it sounds like my two passions. And ever since, I've been in quantum computing, for quite a bit of time now. Then I went off to grad school at Yale, continuing in superconducting qubits with Michel Devoret. I guess the summary of that chapter of my time in quantum was proposing and publishing an experiment that overturned Niels Bohr's view on quantum jumps and refined this debate between Schrödinger and Einstein: we demonstrated that quantum jumps, which you might have heard a bit about as errors here in quantum computers, actually possess a degree of predictability, and can be coherent, continuous, and even deterministic-like.

Then I had the incredible privilege of joining IBM, the leading place in quantum computing: kickstarting an open-source quantum hardware design and software product team, launching the world's first quantum EDA software, which I believe is still the most widely adopted today; and meeting and working closely with so many of the other stellar quantum theorists you see here at IBM. That includes, with Kristan Temme, kickstarting our experiment on probabilistic error cancellation, and you'll hear more about that from Abhinav, who we had the pleasure of publishing that paper with, and which you might have used in Qiskit Runtime. Most recently I've been focusing on what we can do with large-scale quantum computers: starting a quantum team working closely with folks in the community, and now doing this kind of research in our theory and capabilities team, to really help bring utility and useful quantum computing to the world, which is the whole subject for all 6,000 of you today. So I'm really excited to be here, and with that, I'll pass the baton over to Pedro.

Thank you so much for that introduction, Zlatko. Congratulations, everyone, as Zlatko and Olivia have already
said, and thanks for attending this closing ceremony today. My name is Pedro Rivero, and to answer the question of how I made it here: my journey has been a bit of a winding one. I started in aerospace engineering for my bachelor's in Spain, where I'm from, and I transitioned from there to the US, where I picked up physics, and I ended up doing a PhD in quantum computing. Right after COVID, when I was trying to attend some conferences and meet people, I got the opportunity to interview for IBM, and eventually I joined the team, where I've been working for the past two and a half years: first developing software prototypes, mostly related to error mitigation, and then I joined a team that works closely with partners and clients to try to deliver useful quantum computing to their institutions. It's been a privilege, as Zlatko mentioned, to collaborate with many extremely bright minds, some of them here today on the panel, and I really hope that you enjoy this session. With that, I will pass it on to Jen.

All right, hey everyone, I'm Jen, and I'm happy to be here and chat with you all. I'm a technical product manager on the quantum team, and what that means is basically getting to work with some really amazing software engineers and researchers to identify and build out software capabilities that we can get into the hands of users, to help them push the size and complexity of the problems that we can tackle with today's hardware. So, happy to chat with you all. I'll pass it to you, Abhinav.

Yeah, hi everyone. I am Abhinav Kandala, and I lead the quantum capabilities and demonstrations team here. Basically, what that means is we try to build the tools for executing large-scale quantum circuits and exploring beyond-classical quantum computation. In my time at IBM, I've had the privilege of working across many different
areas: aspects of coherence and control of superconducting qubits, qubit characterization, and also deployment. But a large part of my work has been focused on exploring whether we can extract useful information from pre-fault-tolerant quantum computers, and this is in the context of error mitigation, which I guess will be a very strong theme of our discussion today. Over to Derek.

Yeah, thanks, Abhinav, and thanks, Olivia, for having me here today. My name is Derek; I'm a research scientist here. I come from a background in computational physics, quantum chemistry, and quantum optics, and I came to IBM to continue studying similar systems, but using quantum computers. Since then I have worked with basically everybody on this panel on various projects: quantum simulations of molecules, materials, and high-energy physics, developing new error mitigation techniques, and recently exploring experimental demonstrations of dynamic circuits and what kinds of algorithms we can unlock by using this new capability of ours. So with that, thanks for having me; I'm excited to be here.

Okay, awesome. Thanks, everybody, for introducing yourselves. The way this is going to work is: Alia and I have collected and curated some questions in preparation for this panel, so we'll begin with those, but while I'm asking them I'm also going to be moderating the chat on YouTube. So if there are any additional questions, please post them there, and we'll get to as many as possible; hopefully we'll get to yours.

The first question, and I think I'm going to point this to Abhinav first, but maybe Zlatko and others want to chime in as well: is the hardware that we have now, the quantum hardware, good enough? And if not, what are the main things that we need to improve upon to get value in the utility era?

Yeah, that's a great question. If one looks at how hardware has
evolved across many different fields, it's almost never good enough. But that said, this always has to be looked at from the perspective of where things were a few years ago. In 2016 we launched the first five-qubit quantum machine as part of the Quantum Experience, and now we've built machines with over 1,000 qubits, and we are able to do pretty amazing things already with our 100-qubit machines. So I think the kind of progress quantum hardware has made, superconducting quantum hardware in particular, over the last decade has been absolutely incredible; I myself wouldn't have been able to predict this was where things would go. There are already very compelling examples of computations one can run with these systems, and if we compare our large systems to the kind of metrics that we have on smaller subsystems, there's almost an order of magnitude in error rates and coherence that's waiting to be tapped into for the large systems. So I think it's going to get much better, and folks are going to be able to run even more challenging computations very soon.

Abhinav nailed that. I'll maybe just rephrase it the way it hits me recently. Imagine the first smartphone you owned: it was state-of-the-art, it had incredible features, and it was a flip phone. Well, mine was. And today it just looks sluggish, outdated, and not good enough. In quantum computing we have a similar thing: the hardware that met our needs yesterday does not meet them today, and today's hardware will not meet them tomorrow, especially as advancements are made and the capabilities that we can perform on them catch up. So in that sense, hardware will never truly be good enough; that's a moving bar, a kind of lifestyle creep. But that doesn't mean that there aren't key milestones along the way. As Abhinav
has mentioned: first it's good enough to begin to get us to utility, which is roughly where we are now; then quantum advantage; then error-corrected qubits; then 10, then 100, then a thousand error-corrected qubits, and so on. So, as always, we need to lower the error rates, have more efficient error mitigation, reduce crosstalk, do better model-based error handling, and get more smart people like you to help us solve these big challenges.

Awesome. All right, next question, which I think I'm going to point in Pedro's direction first. Mapping problems to quantum circuits seems really hard. How did you learn how to do this, and how can we make it more streamlined?

Thank you for the question; it's actually a really good one. I would say two things. First of all, there's of course a whole theoretical part of how you formulate problems from specific domains into something that is quantum-like, so you first need a really good handle on that part of the theory to understand exactly what you're going to be facing once you do your mapping. Usually this is the way it's been addressed historically, and this is what people have been trying to do. However, more recently, what we have found is that bringing in knowledge of the hardware, both the advantages of the hardware you're going to be using as well as its limitations, gives you an edge in getting the results you're expecting. So in today's utility era, and I think Abhinav could chime in here as well and maybe share some of his insights, I think it's very important to go through this process of understanding what the limitations and advantages of your hardware are, as well as understanding exactly what it is that you want to run, and maybe making some compromises between the two. That's probably the best way to map a particular problem and solve
a particular scientific question using today's quantum computers. That's the best answer I think I can give, but Abhinav can probably chime in.

Yeah, feel free, Abhinav, if you want to add anything, or anyone else.

I'll just top it off quickly to say that it's getting easier, at least in my experience, because now we have patterns, building blocks we can reuse, and more and more tools; the primitives are there. When we were doing our experiments earlier, it was way harder to do much more basic stuff; now most of that is plug-and-play, almost trivial. It's no longer just good enough; it's your flip phone to smartphone again. So I think the future is in use and reuse.

Okay, the next question I'm going to direct to you, Jen. I know this is not exactly your area of expertise, but as a technical product manager I think you might be the best suited to answer this one. How do you envision the integration of quantum computing with classical high-performance computing, and in particular, what potential challenges need to be addressed?

Yeah, that's a good question. Maybe I'll point to a specific example that came out recently, which I think is a good illustration of how we can leverage the best of what classical brings together with the best of what quantum brings: some of the work that came out of the IBM team in collaboration with others from RIKEN on quantum subspace diagonalization. In that case, the idea is: our hardware is noisy, but how can I draw enough decent-quality samples from the QPU, and then throw those at some classical machines to do post-processing, to try to improve the signal that I extracted from there? That can be very heavy on the classical computation side, but it's a really awesome example of
how quantum brings what it can do well, which is the quantum signal underlying the problem we're trying to study, and then massively parallel classical computation on the post-processing side. I think that starts to introduce other kinds of considerations, like: how are we transferring data back and forth in an efficient way between a QPU and CPUs? Do we care about latencies, or maybe we don't? Do these things need to be co-located in order to address those constraints, or not? There's been a lot of ongoing work in identifying what those use cases would look like and really pinpointing what the constraints are. So it's definitely an interesting era to be in.

Great, that's a perfect answer, thank you, Jen. Next question, for Derek. We always get this question, but I always think it's such a good question that I ask it every single time. When we go beyond the limit of things that are able to be simulated classically, with a quantum computer, how do we know the results are correct? What do we do to validate our answers?

Yeah, I guess it kind of depends on what you mean by validate. If you mean, how do you know you have exactly the right answer: that's the point, you don't really know. But how do you do it in any other field when you're trying to learn something new? You have controls, you make comparisons, you take things to certain extreme limits to make sure they all make sense, and we can do the same thing in quantum computing. More precisely, for example, we know that Clifford circuits are easy to simulate, so we can take some circuit that you're trying to simulate in some hard regime, put it in the Clifford limit, do the calculation on the quantum device, and make sure that result makes sense; then, hopefully, when you bring it to the non-Clifford limit, the result continues to make sense. This is how, in the utility experiment, for example, that Abhinav led, this is
basically how they confirmed some of their results in some regimes. Then you can take other methods that are known to fail in different places. For example, with tensor networks, you know that they fail due to entanglement structure, not due to non-Cliffordness, so you can also make sure that your result makes sense in the regimes where it should. And if you find that you're checking all these boxes, and these different orthogonal methods all match your result in those limits, then you have better confidence that your result is correct. But I think ultimately you don't really know, and that is the point.

Yeah, anyone else feel free to chime in, because I know this is a question with a lot of different possible answers.

Yeah, I think Derek really nailed it with what one can do for verifying whether circuits are being executed reliably on a QPU. This is a very common playbook even in the classical simulation space: once you get beyond the scale of what one can do exactly, you essentially try to compare to other simulation methods that exist. Once we get to trying to simulate real molecules and real materials and things like that, of course, experiment could be the true verifying tool. If you're trying to compute binding energies, for instance, of something you're designing on a quantum machine, one could actually go and measure them experimentally and see if things line up; that's certainly another way. Often people look at quantum computing as a potential tool to get these answers before the expensive or time-consuming path of actually having to do the experiment in practice. So I think there are a few different ways, but yeah, it's a very
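Derek's Clifford-limit check can be sketched as a toy model. Everything below is an illustrative assumption (the circuit family, the global-decay noise model, and all the numbers), not the panel's actual experiment: the expectation value of a layer of single-qubit rotations is known exactly at a Clifford angle, so the full device-plus-mitigation pipeline can be validated there before trusting it at a generic, classically hard angle.

```python
import math
import random

def exact_expectation(theta, n_qubits):
    # Exact <Z x ... x Z> for n qubits each prepared by Rx(theta):
    # each qubit contributes cos(theta), so the product is cos(theta)^n.
    return math.cos(theta) ** n_qubits

def noisy_device(theta, n_qubits, gate_error=0.02, shots=20000):
    # Toy "hardware": the true signal is damped by a per-gate decay
    # factor, then estimated from a finite number of shots.
    decay = math.exp(-gate_error * n_qubits)
    p_plus = 0.5 * (1.0 + exact_expectation(theta, n_qubits) * decay)
    hits = sum(random.random() < p_plus for _ in range(shots))
    return 2.0 * hits / shots - 1.0

random.seed(7)
n = 4
decay = math.exp(-0.02 * n)  # assume the decay was calibrated separately

# Clifford point: theta = 0 is trivially simulable and the answer is +1,
# so it serves as a control for the whole pipeline.
clifford_estimate = noisy_device(0.0, n) / decay
assert abs(clifford_estimate - exact_expectation(0.0, n)) < 0.05

# Having validated the pipeline at the easy point, extend trust
# to the hard, generic-angle regime.
hard_estimate = noisy_device(0.7, n) / decay
```

The same pattern is what a real validation looks like: run the circuit family at simulable parameter points, confirm agreement, then interpolate confidence into the regime no classical method can check directly.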
interesting time to be in, where you can run computations on these processors and not know what the right result is.

Fantastic. Okay, this next question is about transpilation, and I don't know who here is the best transpilation expert; I feel like all of you are very good at it. How do we know when transpilation is, again, good enough? What happens if I use Qiskit to transpile something and the fidelities are still bad? Are there any other options? Maybe I'll point this one to Pedro first.

Sure, yeah, thanks for the question, again very thoughtful. I would say the most limiting factor when we try to run quantum circuits is usually the depth, so a good indicator of whether your transpilation has been successful is the two-qubit depth that you see in the final transpiled circuit. There are other things to take into account here. One thing to keep in mind is that when you take the Qiskit transpiler and use it directly on your circuit, the transpiler is very powerful and able to do many things, but it also has very generic tools. So if you want to make sure that the transpilation you're getting for a circuit is the best you can possibly get, a good step is to look at subsections of your circuit and try to transpile them, or check whether each particular subsection has been optimized in the best possible way, as a way of analyzing whether the global transpilation of the entire circuit was good enough. I know that Derek, for instance, has been playing around, in previous work he's done, with transpiling his circuit in chunks and then stitching them together to get better results. So I would say that leveraging these techniques and trying to analyze things, as they call it in computer science, in a peephole way, where you
basically just take a look at parts of your circuit and check whether those are correct, is a good strategy for deciding whether your transpiled circuit is good enough. The other side of the coin is that transpilation is not magical; there's always going to be a limit to how much you can optimize a circuit. If you start with something that is already too deep, you should not expect the transpiler to suddenly work magic and make your circuit tiny so that you get the best possible results. So bear that in mind as well: transpilation is important, of course, but you need to keep in mind from the very beginning, from the design of your circuit, what it is you're trying to get at.

Yeah, maybe I'd add on to that. I think that's a good point, Pedro: transpilation can maybe only get you so far, depending on how you've designed your input problem. So I think there's value in taking a step back and thinking carefully about the original computational problem you have, and how you are defining or constructing it in such a way that transpilation can be most effective. Part of that previous step can involve other, maybe more algorithmic, techniques, where you break down your computational problem before it gets to transpilation. There are usually lots of trade-offs you pay in that case: maybe you end up with lots of circuits, maybe your observable becomes more complicated and you have to make more measurements, maybe you introduce more classical pre-processing, things like that. There are some techniques that we know about, like operator backpropagation and multi-product formulas, that are in a sense more algorithmic considerations before you pass things off to the transpiler. So I think that's
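The two-qubit-depth indicator Pedro describes can be computed with a simple greedy layering pass. The sketch below is a minimal stand-in that works on a plain list of gates, an assumption for illustration; on real circuits, recent versions of Qiskit expose the same idea through `QuantumCircuit.depth` with a filter function selecting two-qubit operations.

```python
from collections import defaultdict

def two_qubit_depth(gates):
    """Two-qubit depth of a gate list via greedy layering.

    `gates` is a sequence of qubit-index tuples, e.g. (0,) for a
    single-qubit gate and (0, 1) for a two-qubit gate. Only layers
    containing a two-qubit gate are counted, since those dominate
    the error budget on today's hardware."""
    depth = defaultdict(int)  # per-qubit count of two-qubit layers so far
    for qubits in gates:
        d = max(depth[q] for q in qubits)
        if len(qubits) == 2:
            d += 1  # this gate opens a new two-qubit layer on its qubits
        for q in qubits:
            depth[q] = d
    return max(depth.values(), default=0)

# A 4-qubit brickwork-style example: parallel two-qubit gates
# interleaved with single-qubit rotations (which are "free" here).
circuit = [(0,), (1,), (2,), (3,),   # single-qubit layer: not counted
           (0, 1), (2, 3),           # two-qubit layer 1 (parallel)
           (1, 2),                   # two-qubit layer 2
           (0, 1), (2, 3)]           # two-qubit layer 3 (parallel)
assert two_qubit_depth(circuit) == 3
```

Comparing this number before and after transpilation, or per chunk of a circuit, is exactly the kind of peephole check the panel is describing.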
another very interesting, active area of research.

And maybe one other thing to add to what Jen said: there are AI transpilers, or approximate transpilers, a bunch of new techniques coming up that might help, depending on the problem, though not all problems. So it's kind of like: success is not final, failure is not fatal, just keep going.

Okay, this is sort of related to that question, but I think it's different enough that it still merits some discussion. I'll pass this one to you, Zlatko. Can you comment on the boundaries that we should be aware of, that we have to work within, when we're in the era of utility, using error mitigation, but don't yet have error correction?

Yeah, that is a hard question, honestly, but it is the question, because almost any time you run anything on a quantum computer you're going to want to push the boundaries, especially if you're doing research, so you're inherently going to find yourself in this tight spot. Certainly the philosophy today, if you look at a lot of the research, is to tailor: tailor to the architecture of the hardware, tailor to the native gates, tailor to the connectivity, tailor to the mitigation methods you can implement, or that we know how to make work, and that will get you pretty far. But you do have to be aware of what noise does and how it acts in your computations: what's possible, how far you can take it, when things fail. Because, a bit like transpilation, experiments in the modern era are a matter of keeping at it until you resolve all the issues along the way, all the little bugs. My philosophy, especially for the experiments and the boundaries, is that we don't really have the answer yet; we will tomorrow, of course, but tomorrow we'll have a new question. It's this perpetual boundary that keeps shifting and evolving. So I kind of think
of it like how marathoners try different shoes on different terrains: for each problem, I'll try a couple of different things and tailor which set of mitigation techniques, or which settings, I use for that particular problem. But the fundamental limits for us are that the mitigation you can do in the utility era depends on how much noise strikes the observable that you measure, and how much noise strikes that observable generally goes in the exponent of a decay factor. So it's e to the power of something, and that something is the volume inside the light cone of that observable, propagated all the way back: it scales, in the exponent, with the number of gates in that sort of butterfly-velocity light cone, and also with the strength of the noise. A lot of our job is to make that noise smaller, reduce it, undo it. I'll pause there and turn it over to the rest of you.

Yeah, maybe I can present a very practical approach to this question, from when we do things on the device and explore applications. Very broadly speaking, and missing a lot of details: if one tailors and suppresses noise in a particular way, error mitigation becomes a problem of dividing a number by another number, the noisy result by some kind of calibrated result. And when you divide a number by another number, to minimize errors you want to make sure that your numerator is right, your denominator is right, and you know them to sufficient precision, so that, given some error, you don't divide a big number by a really small number and get a crazy amount of error. More bluntly: do you have enough signal, and are you resolving that signal precisely enough? If you do those two things, then you can probably get a good result from the device. If you don't have enough signal, or not enough resources to resolve that signal well, then you won't get good results. I think you might be muted.
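Abhinav's division picture and Zlatko's exponential decay combine into a small numeric sketch (all numbers below are assumptions for illustration): mitigation divides the noisy signal by a calibrated decay factor exp(-lambda * V), where V is the gate volume in the observable's light cone, so any fixed shot-level error in the numerator is amplified by exp(+lambda * V).

```python
import math

def mitigated(noisy_value, lam, volume):
    # Global-decay-style mitigation: divide the noisy expectation
    # value by the calibrated decay factor exp(-lam * volume).
    return noisy_value / math.exp(-lam * volume)

true_value = 0.8
lam = 0.01          # effective noise strength per gate (assumed)
shot_error = 0.005  # statistical resolution on the device (assumed)

for volume in (50, 200, 800):
    decay = math.exp(-lam * volume)
    noisy = true_value * decay + shot_error   # damped signal + shot bias
    estimate = mitigated(noisy, lam, volume)
    # The residual error is shot_error / decay: negligible at small
    # volume, catastrophic once the denominator becomes tiny.
    print(volume, round(decay, 4), round(abs(estimate - true_value), 4))
```

The same shot error that is harmless at a light-cone volume of 50 gates completely swamps the answer at 800 gates, which is exactly the "don't divide a big number by a really small number" warning in practical form.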
Thanks, sorry. Anyone else want to chime in here, or next question? I think that was a pretty good summary. Yeah, I agree.

All right, the next question mixes it up a little bit: slightly less technical, but still important, and it's directed at all of you, so whoever wants to chime in first can. How do you learn new stuff in quantum computing that is outside your field of expertise? Do you ask people first, read the theory, try to do an experiment? What is your approach? Zlatko, you are not muted, so I'm just going to make it you.

Oh, god. Well, curiosity is the first thing, so if you're asking questions, great, you're already on the right path; great job asking this question. That's helped me so many times: all this random stuff I learn over the years eventually clicks, comes together, and is useful for something, so keep doing that. In a way, the best thing is if you have somebody around you who happens to know the stuff really well and is good at explaining it, but most of the time that's not the case. If you don't have that, then I personally go on YouTube and look for pedagogic lectures and videos, listening while cooking or something. Once I want to dig a little deeper, I'll actually look at textbooks I can find on the subject, especially from authors I know are quite pedagogic, and then I'll look at tutorials online. But the way that I personally, ultimately find that I understand and have learned something is by making a cheat sheet on that subject, and then trying to share that cheat sheet or explain it to somebody else. I have these massive, like hundred-page LaTeX documents with cheat sheets on different topics in quantum that have accumulated over the years, and that's the way I'm able to keep track of a bunch of different, at first disparate, areas. So hopefully that helps you.
If I may chime in right after Zlatko: I think that's a really good summary, and it lines up with a lot of things that I do as well. Maybe one thing to add is that right now you all are in a very privileged position, where you have access to these huge devices, 100-plus qubits, that IBM puts out, and you can run things on them basically for free. So I would encourage you to also try to run things and experiment; practice makes perfect, and that's sometimes the best way to learn. I remember when I was doing my PhD, I was trying to run something on, I think, 25 qubits, and I could not do it, because we didn't have access to those devices. So thinking that today we can actually do that on 127, 133, or even larger devices is fantastic, and it's a resource that you should not underestimate. So by all means practice and see what you learn from experiment, and who knows, maybe you see something that is remarkable enough, and not well explained by the theory, and you can try to come up with an innovative idea out of that.

Yeah, I think that's a great point, Pedro. I think a lot of times, unless you've been doing it a long time, people think that breakthroughs in science come at a moment that you expect, but normally it's a tiny little wiggle on a graph, and a bunch of people staring at a computer screen going, huh, what is that? That's what science looks like.

Anyone else? Jen, do you want to share, since you're in a slightly different job than the rest of these guys?

Yeah, I guess, not to just say the same things, which I agree with, I think they're great. Maybe some other things: the arXiv is an amazing resource of preprints of papers and ideas that people are coming out with, so keep your eye on that for things that come across as interesting to you. And to your point, Zlatko, I think
curiosity is like probably the biggest thing just if something strikes you like go pull on that thread a little bit more um I think for myself um I think what I've found in my experience is the most interesting way to learn new things is just by talking to people going up to them and asking them like what's an interesting thing you've been thinking about lately um so that's always to me like the most enjoyable uh way to start learning some different stuff I totally agree with that approach okay okay cool um next question is about Cubit connectivity can you comment on IBM's Cubit connectivity and why it is not excuse why it is not all to all I mean that do you maybe want to pick that one up uh sure sure I can I can dig that yeah um that's that's a great question as well so often um they trade-offs basically with with connectivity and and aspects like cross talk um you know of course there's there's a there's a planer on chip layout that that we're we're typically uh wed to with with super conducting cubits so so that that places some some limitations but but why why the heavy hex and why not something a little more connected like the square or you know um uh something like that uh so this this comes back to the evolution of our of our larger devices um uh with with superconducting cubits one has you know a pretty amazing ability to to to leverage microwave design to essentially you know engineer these artificial atoms and engineer their frequencies these Cubit frequencies and so on and and and while this is this is true largely there there are aspects of it which which we can't control you know with with Incredible you know with with the kind of precision one might necessarily want um and and and so as a result of that there can be undesired interactions uh based on the the kind of frequencies we have uh on our uh on our cubits and and and and and one way to to to reduce the possibility of these kinds of uh unwanted interactions Undead interactions is to back off on 
the connectivity a little bit. And previously, in the context of trying to build an error-corrected quantum computer with the surface code, it was also shown by theorists at IBM that even with this kind of topology one could build an error-corrected machine. That's largely where the motivation for building this kind of architecture came from, historically. More recently there have been other codes proposed which require stronger connectivity, and the team is working towards building those things as well.

Okay, great. The next question is a little bit long, so just bear with me. And I apologize for my coughing fit earlier; I know a few people in the chat asked if I was dying. Not dying, I'm okay. "Looking at the utility-grade quantum computing that we have available now, what is it, for you, that primarily embodies the quantum advantage and the value that it brings with it? For example, is it the dimensionality of the Hilbert space that can be reached to simulate a larger chunk of nature, or is it used to more elegantly solve non-physical questions and increase our understanding?" I'm not sure what that last part means exactly, but I think what the question is getting at is: where is the advantage, or the utility, coming from? Maybe I'll point that at Derek. Do you want to take this one?

Sure, I can give it a try, though it's hard to describe precisely; this is a very hard question. Where is the utility coming from? I would say the bare minimum is that you have a Hilbert space that you can't handle on a classical computer, but I wouldn't say that is the whole reason. From there, you have to be very clever about the kinds of systems you're simulating: the kinds of systems that cannot be simulated well with classical methods, if you're going to make this kind of comparison. If we go back to the utility experiment, and of course Abhinav can correct me here, the bare minimum was that it was done on 127 qubits, but what pushed it into the regime where it was hard, or harder, to simulate is that they chose circuits where you could tune the difficulty from Clifford to non-Clifford, so that, given this large Hilbert space, you could make the problem a bit harder. In general there are several things you can do to make problems hard. Increasing dimensionality often makes things hard, which is why that study was done in 2D as opposed to 1D. Going to larger system sizes and evolving for longer times also helps, especially when you compare against tensor-network methods, where the entanglement can saturate the network, and therefore you have exponential growth of error for the tensor network but not necessarily on the quantum device. So there are a lot of knobs you can tune to get to that point, and I wouldn't say there's a single cause for it.

Great. Anyone else want to chime in, or I'll move on to the next one? Okay. The next question: is it appropriate to model depolarizing error using RB data for ECR gates for two-qubit gates, and averaged single-qubit data for single-qubit operations, when modeling noise for a specific QPU? Abhinav, do you want to jump on this one? I see you smiling. This question certainly wins the award for most technical question of the panel.

Yeah. So, trying to use the numbers one gets from randomized benchmarking: this is information that we typically have as part of the calibration data we release with our devices, and it's often a good first indicator for trying to predict or simulate what you expect the device to do. But certainly there are aspects that are not captured by these local subsystem metrics. Noise on the device, in general, is not depolarizing, and there can be aspects of crosstalk that are not captured by just single-qubit or two-qubit randomized benchmarking. So while it's a very powerful and useful first indicator, to even get a sense of whether you can hope to get any signal, in the context of what Derek was saying about determining whether error mitigation can get you further, I think one can take a first shot with that kind of information. If you already seem to struggle with the amount of signal you're seeing in simulation using those randomized benchmarking numbers, it's very likely that the experiment will be even worse. That would be my guess, or hunch, but it's a very good first test, I think.

Yeah, maybe I'll ask the person who posed this question: are you our referee? Because this is exactly the kind of question Derek and I got from our referee recently, to compare the model we used with RB data. Now, it depends how you take the RB data: if you take the RB data one qubit at a time, versus qubits in parallel, versus densely packed qubits in parallel, you get different numbers because of crosstalk. So what's the right number to take? How do you model crosstalk? As Abhinav said, all of this matters. It's a great first crude approximation, but all of these other effects, crosstalk and context dependence, are not taken into account, which is why roses are red, violets are blue, and reviewer two said "this will never do." But that's what reviewer two is known for.

There must be some good reviewer-two memes waiting around the corner. Oh, I'm sure. Okay, the next question says: first of all, thank you so much for a fantastic summer school. It seems like many of these speakers were initially involved in
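As a quick aside on the randomized-benchmarking discussion above: the "first crude approximation" the panelists describe, treating each gate as an independent depolarizing error and multiplying per-gate fidelities, can be sketched in a few lines. The gate names and error rates below are illustrative placeholders, not real calibration data for any IBM device:

```python
# Crude first-pass noise estimate: treat each gate as an independent
# depolarizing channel with its randomized-benchmarking (RB) error rate,
# and multiply the per-gate fidelities to bound the surviving signal.
# As the panel notes, this ignores crosstalk and context dependence,
# so it is an optimistic estimate, not a faithful device model.

def estimated_circuit_fidelity(gate_counts, rb_errors):
    """gate_counts: {gate_name: occurrences in the circuit}
    rb_errors:   {gate_name: RB error per gate} (hypothetical values)."""
    fidelity = 1.0
    for gate, count in gate_counts.items():
        fidelity *= (1.0 - rb_errors[gate]) ** count
    return fidelity

# Hypothetical numbers in the spirit of a 100-plus-qubit device:
rb_errors = {"sx": 2e-4, "ecr": 8e-3}   # single- and two-qubit RB errors
gate_counts = {"sx": 600, "ecr": 300}   # contents of some large circuit

print(estimated_circuit_fidelity(gate_counts, rb_errors))
```

If this estimate is already close to zero, the panel's rule of thumb says the real experiment will likely be worse; if it sits comfortably above your shot-noise floor, error mitigation may still be able to recover the signal.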
academia, and I was wondering what the key things you had to learn or change were when you moved to industry. Do you have any advice for people who want to do the same? Do you want to take this one first, Jen?

Sure, that's a great question. What did I have to learn to move to industry? I would say IBM is an interesting mix of an academic kind of environment plus industry, so it's a little bit like living in both worlds. But it's still very different from academia. One of the most obvious things is that things move at a much faster pace in general, so you need to be comfortable with that change of pace. There are also obvious things like milestones and deliverables in industry that one has to meet, so you learn how to grapple with that and manage your time well. A big piece of it, at least for myself, has been learning how to collaborate with people who do not come from my background: a lot of software engineers, developers, maybe people who are a little more tied to enterprise application domains, and figuring out how to bring that all together to accomplish what you need to get done on a relatively short time scale. I think the only way to really learn that is to just do it, as many times as you can, learn from each of those experiences, and try to do it a little better the next time. This is a question we could talk about for a very long time, but I'll pause there.

From my point of view, the two key distinctions I make between research in academia and research in industry are that industry is usually faster paced, at least in my own experience, and that research in industry is also more focused, if that makes sense. In academia you usually have fewer constraints and more ability to explore, whereas in industry you kind of know what you have to investigate and what the relevant problems are. So I would say faster paced and more focused are the trade-offs, or the key signatures, of research in industry. And also, as Jen was saying, the ability to collaborate with people from different backgrounds, which may not be the case in academia; there, sometimes there are more silos, and people are working on things that are more similar. So, just as Jen mentioned, the best way is to give it a try, go for it, and learn by doing. It's not as daunting as you may think before making the step.

I want to note something very quickly that I think people underestimate about working in an industry research environment: we're all on the same team at the end of the day. During my PhD I was in a theory group, mostly during the pandemic, and most of my papers have one or two authors on them. Here, if I had to guess, I've been on papers with thirty, forty, fifty people from IBM, and I've worked closely with all of them. The range of knowledge you have access to is just incredible, and I've felt very fortunate about that aspect.

Maybe one thing I can add here, and this was quite interesting for me to see when I began to write my first papers at IBM, is that the end goal is not just the paper. We're constantly and actively trying to get the tools we invent or discover into a product that others can use, and I think this is a very strong aspect of this kind of research environment. What does it take to go from a device or method that two people can fine-tune and run, to something an entire community of researchers can use? That's really been very important in having such a large community join us on this journey: from working with research-scale devices, devices that don't work all the time or that have lots of imperfections, to the point where we're now talking about utility, and about accurate computations on these devices at scales beyond what one can do exactly on classical systems. So that's definitely one pretty nice aspect of research in this kind of setting.

Awesome. We're coming up on about ten more minutes here, so we might have time for just a few more questions. This is an interesting one, maybe for Pedro and Jen: this person wants to know if you can comment on any of the really cool work, even if it's just experiments, that you're doing with clients out in the real world. Anything that's not proprietary, obviously; anything that's already published that you can talk about.

Well, this is a difficult one, because obviously the ones I think about right now we cannot discuss, for privacy reasons. So I will mention a piece of work where we collaborated recently with DESY, which is an institution in Germany. They were essentially trying to do classification using QML methods, but running it on 100-plus-qubit devices. This was something that hadn't been done before, at least to the extent I'm aware of, so it was a very thrilling project to see come together, and at the end of the day, even with such large circuits, you could still see that the classification was actually working. So that's just one example of
the things we've been working on. Maybe Jen can take it from here, and if I think of another project I can speak about, I'll jump in after.

Yeah. For myself, in the recent past my work has been much less research-focused and more about wearing the hat of a product manager. Without giving specifics, I'll give you a sense of the kind of work it is. At the end of the day, we're trying to harvest interesting things from research, turn them into software tools, and get them into the hands of our end users, and a lot of those end users are the clients we work with. So a lot of what my team and I do is work really closely with some of our clients on testing out the software components we're building: do they find these things intuitive, do they give them the features or capabilities they would need for their use case, are they relevant to their higher-level applications, things like that. That's the kind of interaction with clients that I'm involved in these days, and you'll start to see more specific examples as we roll things out into open source over the coming months, so I'd say stay tuned on that front.

Yeah, that's fair, definitely stay tuned. Okay, trying to think how many more questions we can answer here; I'm going to try to fit in maybe two more. This is a really quick one, maybe for Derek: during the summer school we saw a lot of the transpiler, and a lot of people, by hand, basically reduced more-than-two-qubit gates into two-qubit and single-qubit gates. Why can't we perform more-than-two-qubit gates? I'm not sure exactly what this is referring to. Wait, sorry: for a circuit that has more than two qubits in it, I think the question is why the gates are reduced to only two-qubit and one-qubit gates. Why can't we perform a three-qubit or four-qubit gate?

Oh, I see. On any quantum device you have a native basis gate set: gates that have been tuned up and that you can just use. On our devices, we happen to have tuned up specific one- and two-qubit gates, so any circuit you create has to be reduced down to those gates to be executed on the hardware. In principle, on these devices, one could envision three- or four-qubit gates; there are proposals out there in the literature. But typically you'll find that the fidelities for these gates aren't as good as if you were to just break them down into single- and two-qubit gates, and so in many cases we just use this limited native gate set.

Great. Okay, with the last five minutes we have left, I'm going to go around and ask all of you the same question. It's probably the question you expect me to ask, though this last question actually comes from the audience, not from me: what do you envision, or how do you envision, the path to utility in the next five to ten years? I'll let you all think about it for a second, but maybe we'll start with Zlatko, just because you're at the top of my screen.

All right, you give me the hard ones. I think Carl Sagan said that in science there are no shortcuts to the truth, so I envision a step-by-step approach. As Jen and maybe Abhinav have also mentioned, or maybe almost everybody said, I think it's a continuous glide into that kind of utility. It's a lot of what we said today: the same lessons you are learning about tailoring to the problem, understanding the noise in the device, understanding why you can apply certain gates and not others, how you use the mitigation we have and how you adapt and modify it to the new problems we face, exploiting dynamic circuits as they become increasingly better, finding shortcuts. So I kind of see a tight integration between the
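Stepping back to Derek's transpilation answer for a moment: his point, that multi-qubit gates get broken down into a native one- and two-qubit set, can be checked numerically. Below is a small NumPy sketch (not tied to any particular device; the helper names `gate1`, `cx`, and `kron3` are just for illustration) verifying that the textbook 15-gate decomposition of the three-qubit Toffoli into H, T, T-dagger, and CNOT reproduces the Toffoli unitary exactly:

```python
import numpy as np
from functools import reduce

# One-qubit building blocks.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
Tdg = T.conj().T
P0 = np.diag([1.0, 0.0]).astype(complex)   # |0><0| projector
P1 = np.diag([0.0, 1.0]).astype(complex)   # |1><1| projector

def kron3(ops):
    return reduce(np.kron, ops)

def gate1(g, q):
    """Embed one-qubit gate g on qubit q of a 3-qubit register (q0 leftmost)."""
    ops = [I2, I2, I2]
    ops[q] = g
    return kron3(ops)

def cx(control, target):
    """CNOT on the 3-qubit register: identity if control is |0>, X if |1>."""
    ops0 = [I2, I2, I2]; ops0[control] = P0
    ops1 = [I2, I2, I2]; ops1[control] = P1; ops1[target] = X
    return kron3(ops0) + kron3(ops1)

# Textbook Toffoli decomposition (controls 0 and 1, target 2), in circuit order.
circuit = [
    gate1(H, 2),
    cx(1, 2), gate1(Tdg, 2),
    cx(0, 2), gate1(T, 2),
    cx(1, 2), gate1(Tdg, 2),
    cx(0, 2), gate1(T, 1), gate1(T, 2), gate1(H, 2),
    cx(0, 1), gate1(T, 0), gate1(Tdg, 1),
    cx(0, 1),
]

# Multiply right-to-left: the first gate in the circuit acts first.
U = reduce(lambda acc, g: g @ acc, circuit, np.eye(8, dtype=complex))

# Toffoli flips qubit 2 only when qubits 0 and 1 are |1> (basis states 6, 7).
ccx = np.eye(8)
ccx[[6, 7]] = ccx[[7, 6]]

print(np.allclose(U, ccx))
```

This is essentially what a transpiler does when the hardware only offers one- and two-qubit operations; on a real device the two-qubit gates would additionally have to respect the coupling map, which is why transpiled depths grow further, and, per Derek's point, the accumulated two-qubit gate error of the decomposition is still usually lower than that of a directly tuned-up three-qubit gate.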
classical world of computation and understanding, exactly as in that earlier question someone asked that Derek commented on, how classical approximate methods scale relative to noisy quantum simulations, because the noise acts in a degrading way, kind of like an approximation, so you can begin to compare the two. And the real question, for advantage and for utility, is getting the hardware error low enough, and the error mitigation efficient enough, that the cost of the quantum computer is lower than that of the classical computer, including the overheads. So there's hopefully going to be a threshold where the quantum computer can begin to outperform the classical computer at a few niche applications, and then a wider and wider funnel of tasks and interesting questions to answer. But you never know, there might be a big breakthrough right around the corner.

That's true, you always have to keep that in the back of your mind. Pedro, how do you envision the future?

It's a fantastic question, and also a very risky question to answer, because I don't think anyone here can actually see into the future. But I would say three things. The first, as Zlatko has already mentioned, is how we make sure we can integrate high-performance computing capabilities into a framework, what we at IBM call quantum-centric supercomputing, to reach scales of computation, assisted by quantum devices, that have not been achieved before. That's point number one for the future. Point number two, I would say, is how we transition smoothly from a regime where we're doing error mitigation into a regime where we start incorporating error correction, eventually reaching fault tolerance. This is a very interesting scientific question to which probably not many people have answers, and I envision it will take a relevant place moving forward. And the third and last one is that I really envision people starting to pick up on error mitigation, developing new techniques, and expanding what we now think is possible in this utility era. So far there have been a few papers in this regime since the first one was published, and I really think that with your help and your talent we can push farther. This is also a very relevant thing: bringing the whole field and the whole community into trying to push for results beyond what we can do today. So those are the three aspects of the future that I think are relevant, and maybe I'll pass it on to Jen, since we're following the introduction order from the beginning.

Yeah, I'll take this one from a software lens, I suppose, since a lot of the people on this call are a little more research-focused. I would say a big challenge we have is how to take all of the capabilities coming out of research that let us push to larger and larger problem sizes, and turn those into tools that an end user can actually use productively. That's really challenging in many ways. A lot of the experiments and problems we're running today require very deep expertise to really get the performance we want, and we have to surface or abstract that up in a way that someone without that expertise can still plug and play with these capabilities. It's a really challenging problem, but we eventually have to get to that point, right? We can't expect everyone to be an Abhinav in the lab, working magic and getting great results. So how do we make this a little more accessible to people? That's where a lot of the work that I'm doing
is focused. But all of that, of course, hinges on having great research and capabilities to feed from, so I'm dependent on you guys.

I like that. A possible title for next year: Quantum Computing for People Who Are Not Abhinav. Speaking of which, Abhinav, do you want to offer any of your thoughts?

Sure, I can comment a little more on the shorter term, perhaps; I think Pedro quite nicely covered the longer-term outlook. We're actively working towards building the kind of tool Jen was just talking about: something that users can use to run about 5,000 gates on our systems and still obtain an accurate computation. This is the absolute near-term goal, and I think it will happen very soon. With tools like that, what we'll see are many more demonstrations of utility-scale explorations on devices, and all of these will work towards building our trust in these quantum processors. We'll have increasing comparisons of these computations with classical methods, and there will be a vigorous back and forth, I think, between classical simulation and results from quantum computers. I'm quite optimistic that this back and forth will help us find the interesting circuits and problems which will take us from utility to advantage, so that's another aspect I expect to play out, and which is already playing out, in the near future. And I think others on this call mentioned this already, but certainly I'm very optimistic about what quantum will be able to do with HPC in parallel. There's an opportunity there to really extend the kind of circuit volumes one can simulate. Ultimately, noise places an eventual roadblock on the kind of circuits we can run on quantum devices, even with things like error mitigation, and if you can add the support of an HPC to simulate some more depth in the circuit, things like this can really push on what's feasible to compute with these kinds of joint systems. So those are a few of the near-term things I expect will play out over the next couple of years.

Awesome. And last but not least, Derek, you get the final word here.

I see. The hard part about going last is that everything profound has already been said, so instead I'll say something specific, in the spirit of one of my favorite podcasts, where they bring on economists to predict where interest rates will go, and then six months later they bring them back, and they're either really embarrassed or really proud of their prediction. So I'm going to do the same thing for quantum simulations, which is what I work on and why I came to IBM. Today we're at the scale where we can do unitary dynamics for Jt, Hamiltonian time units, of about 1 to 10, mostly only spin models, with 1 to 10 percent error. I think in five years we'll be able to go from unitary dynamics to any kind of dynamic circuit with about the same error: any kind of open, steady-state, or driven system. I think we'll be able to go to longer times, at least 10x if not 100x longer, and that gets very competitive with classical methods. And I think we'll go far beyond these simple hard-core boson systems: to molecules, which are actually already understood quite well, and maybe to even more complicated systems like you have in high-energy physics. So in five years I think we'll really see an explosion across all these fronts, all while hopefully maintaining that 1 to 10 percent accuracy, or even better, in these simulations. So let's see what it looks like in five years, and whether I'm embarrassed when this is replayed to me.

That's awesome. Okay, we're actually over time by a few minutes, so I'm going to wrap it up here. Thank you again to all of our panelists: thank you Zlatko, Pedro, Abhinav, Derek, Jen. We really appreciate your time, and thank you for answering all these questions. And thank you to everyone in the chat, in the Discord, and to any of you watching this later from a different time zone: your participation this year was really awesome. You did a great job following along with the lectures and participating in the Q&A, and it's so inspiring to see you contribute every single year. A lot of people have asked what they should do next: keep up with the IBM Quantum learning platform, where we publish tons of new material all the time. The Qiskit YouTube channel is another great source of information; I've been working with both Derek and Zlatko on it, and we have tons of new videos coming out every single month. Beyond that, there are tons of other good resources that we can continually update people on in the Slack, so join the Slack. I think I've exhausted basically all of the resources I can think of off the top of my head, but again, thank you for participating, stay curious, and we'll see you next time.