Ep 03: Let's Talk Ethics in AI with Sarita Bahl ft. Amritendu Mukherjee
Published: Sep 03, 2024
Duration: 00:14:50
Category: Entertainment
[Music]

Sarita: Namaste, good morning.

Amritendu: Morning. How are you?

Sarita: Good, good. Bangalore has been spared the Mumbai rains.

Amritendu: It's so true. It has started raining here too, but not that much.

Sarita: I was just going through your profile and reading about your general interests, and I'm enthralled by the fact that your company, NeoPixel, was amongst the first to use AI generative models.

Amritendu: Yes, that's true. We started working on deep generative models in 2021. At that time we worked primarily with autoencoders and generative adversarial networks (GANs), and later, of course, we shifted towards a diffusion-based stack. Back then it was difficult to convince our investors of what we were building: synthesizing human beings, modifying their attributes, and putting apparel onto them. At that time it was difficult to convince investors what we were building; now it is difficult to convince them how we differentiate.

Sarita: With your hands-on experience in AI generative models, how did you take care of ethical issues?

Amritendu: Before I get into that, let's first understand broadly what we mean by ethics in AI. There's a great book by Cathy O'Neil called Weapons of Math Destruction. It basically lays out the risks of leaving everything to the algorithm with no human involved in the process. One of the great examples is the feedback loop. Let's say you watch a particular YouTube video with a certain kind of content. The way the algorithm works, the more you watch, the more it gets tuned to recommend the same sort of videos based on your history. So if I watch certain content, the probability that I get recommendations of similar content is significantly higher; that is called a feedback loop. In the same way, and I was just reading about this, take tech meetups. Suppose fewer women attend a tech meetup. If a recommendation engine uses gender as a predictor variable, what will happen? It will start giving women fewer recommendations to attend tech meetups, so they attend even less, and you get a spiral effect: the less they attend, the less data there is about them, and the more the engine ends up excluding the entire group.

And then of course there is bias. Bias can broadly be historical bias, which already exists in society; representation bias, where the data set you create does not represent the population properly; measurement bias, in the way you measure; aggregation bias, in the way you aggregate the data; and finally evaluation and deployment bias. Last of all there is disinformation, which is also very important, because this is the information with which you are
training the algorithm. At the end of the day, any ML algorithm learns from data. If the data is not correct, if there is disinformation in it, there is not much the algorithm can do; you have to ensure data quality and that there is no disinformation.

Now, coming to NeoPixel: we have tried to take care of this from the very beginning. While creating our data sets, we try to take care of the right representations, and we take proper consent from everyone whose data we use: we clearly explain that it will be used for training, explain the purpose, get their consent, and then create data sets that meet our criteria and have a reasonably symmetric representation. That's one side. However, on the bias side, because what we built is a generative model, much comes down to what the end user wants. If they want a model of a particular race, a particular ethnicity, or with particular attributes, that is their choice. For the algorithm that runs to generate it, we try to make sure there is no bias, but at the end of the day it is the user's choice to generate the persona according to their preference. We also keep people involved in the entire process, so that if there are errors in the AI's output, we rectify them through human intervention in the edit process before sending anything to the customer.

Sarita: It's fascinating to hear that biases span a spectrum, not just gender but disinformation and others. Yes, you are taking steps, but what if one
loses control? What could be the implications of these ethical biases not being taken care of?

Amritendu: This is very true, and it is true for any technology. There is a powerful book by Edwin Black, IBM and the Holocaust. It talks about IBM's contribution to helping the Third Reich at the time of the census, how they categorized Jews and others, and how that helped facilitate the Holocaust. With any technology this risk is there, and the responsibility lies with the entire tech community building it; there is no denying that. It is always extremely important to be aware of where it can go wrong. The Markkula Center for Applied Ethics at Santa Clara University defines ethics as well-founded standards of right and wrong that prescribe what humans ought to do. It can go wrong, there is no denying that fact, and that's why it is always important to have checks and balances.

Sarita: I'm getting a little scared now. At one level there's excitement at the opportunities AI brings, especially if you see it as a friend rather than someone who takes away your job. But on the other hand, what I'm understanding from you is that things could go badly wrong if you don't take care of this part.

Amritendu: Yes, absolutely. Think of it like nuclear technology: it can be a weapon of mass destruction, and it can also be a huge source of power.

Sarita: So how do you see your participation in the FICCI committee as something that feeds into your own passion for ethical AI?

Amritendu: I think the FICCI committee is a great thing, because all of this is very new and, as I keep saying, it is a responsibility for all of us. I expect the committee to work closely with other bodies like NASSCOM; NASSCOM is doing a fantastic job, and just yesterday I was in another discussion with NASSCOM on
responsible AI. They're doing a fantastic job of working with other bodies, and of course with the government, to come up with a framework, mostly prescriptive and sometimes directive as well, so that these things are taken care of. I'm talking about the ethical framework that each organization should follow. However, there's a fine line: we also need to make sure it does not compromise innovation. What I'm trying to say is that too much regulation also dampens innovation, so it's a fine balance. Perhaps large organizations should be held more strictly, while those still in the building phase get certain flexibilities, especially at the time of training, when the algorithm is learning. That's another important part.

Professor Andrew Ng once made a point I completely subscribe to: humans also learn from a lot of data. We go to websites, we read so many books, and when I write, all the writers who have influenced me come through; that style shows, whoever my favorite writers are, from Charles Dickens to Arundhati Roy. The important part is that I am not restricted: I have my own way of writing even while being influenced by all of them. The same applies to AI. If we restrict it from learning, the learning becomes constrained; it will not reach its potential, and that hampers the level of intelligence it can achieve. Inference, however, is different, and it works the same way for us: we learn all the words, but when we write, when we infer, we put all
the guards in check: what is right, what is wrong, that it should not harm anyone, all the protections around what I intend to say. In the same way, when I infer from a model, those guards matter much more on the inference side than on the learning side, because on the training side the learning could be dampened. But again, my broad expectation of the FICCI committee is that it will work with other bodies, come up with a framework that could be both prescriptive and directive, and then work with the government to implement it.

Sarita: My last question: what would you tell Gen Z, who are so much into AI? What do they really need to be careful about, and what should they do when things go horribly wrong?

Amritendu: My only advice is to think, just as with any other tool you use. It's a tool; it can help us to a large extent. These are very early days, and it will go a long way, there is no denying that, and the early days are so exciting that you can imagine how much it could do. But like any other tool, if we use it without thinking, it can go wrong, horribly wrong. So whatever you are using, just think, and treat it as an assistance, not a replacement. These are the two things that are very, very important. The way we see it, it's always people plus AI, not only AI, and I come from that school of thought. To give you an example: if I ask an AI tool to write a letter to someone and I send it without reviewing it, that's on me. It can assist me in writing, absolutely fine, but at the same time you have to review it, modify it, bring in your own style
or whatever it takes to make sure it truly represents your message before you send it. I do expect this to become much more hyper-personalized in the near future, but that check, "is this what I want to say?", should always be there. There are two sides to this. In a free society, everything should be available, and I completely subscribe to that. But at the same time, as they used to say in the Linux community, with great power comes great responsibility. So the responsibility is also on us.

Sarita: So if I may summarize what you're telling Gen Z: be very careful, be responsible in how you use AI, always look at it as an assistance and not a replacement, and apply your own intelligence as well.

Amritendu: That's exactly what we are saying; it does not replace everything.

Sarita: Thank you so much for your time, AMU, and all the best to you.

Amritendu: Thank you, thank you so much. Good day. Bye.

[Music]