AI can effect Rawls' Veil of Ignorance

Using artificial intelligence to devise solutions to moral and social problems, since it may better reduce biases and adopt the original position that Rawls suggests.
Comments

Found your YouTube content by pure chance. I saw some videos and found them incredibly interesting. I'm an 18-year-old living in Argentina, figuring out what I can do for the rest of my life. I find a lot of topics really interesting and I don't know what to focus on. I just want to pass through my life doing something meaningful that makes the world a better place for all. But thinking of all the terrible things that can happen in the future is really discouraging, and sometimes I see it as better to be a nihilistic child, just enjoying the pleasures of our civilization and not giving a fuck about anything else. But I decided to at least try to see if I can give my life a certain level of meaning, and if things go really wrong I can be the nihilistic, selfish boy. I see that you are a person who is very interested in the future and has investigated a lot. So, I would like to ask you some questions. What do you think could be the three most meaningful paths someone could take to have a life that is worth its suffering? I know that is a very generic question, but I would like to see what you think. And my second question is: which are the best books you've read that you would recommend to an 18-year-old trying to figure out what to do with his life? I would really appreciate it if you took the time to respond. You have a new sub, good luck in your life!

ndndndnnduwjqams

Do you recommend Life 3.0 for thinking about the impact of AI on the future? I read some reviews that said it has many unfounded opinions; do you think that's the case?

ndndndnnduwjqams

What if the AI discovers that people from certain areas tend to be worse at a job? For example, what if it discovers that black people from a certain neighborhood are less efficient? Then, whenever the AI recognizes that applicants are from that neighborhood, it automatically discards them. Wouldn't that be racist? I'm genuinely asking, I'm not sure.

ndndndnnduwjqams

Yes, but most machine-learning data is flawed by having been labelled by humans. Hence most AIs actually develop the same stereotypes.

MS-ilht
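
The last two comments make a point that can be shown with a toy simulation (everything here is hypothetical: the group names, neighborhoods, and numbers are invented for illustration). Even if a model never sees the protected attribute, a correlated proxy feature such as neighborhood lets the biased human labels reassert themselves:

```python
import random
from collections import Counter

random.seed(0)

# Toy setup: `group` is a protected attribute the model never sees;
# `neighborhood` is a feature that correlates with it. The human
# labellers are biased: equally skilled group-B applicants are
# rejected more often than group-A applicants.
def make_applicant():
    group = random.choice(["A", "B"])
    other = "B" if group == "A" else "A"
    # 90% of each group lives in the same-named neighborhood.
    neighborhood = group if random.random() < 0.9 else other
    skill = random.random()
    threshold = 0.4 if group == "A" else 0.7   # biased human labelling
    hired = skill > threshold
    return neighborhood, hired

train = [make_applicant() for _ in range(10_000)]

# "Fair" model: the protected attribute was dropped from the data,
# so it can only learn the majority label per neighborhood.
votes = {}
for hood, hired in train:
    votes.setdefault(hood, Counter())[hired] += 1
model = {hood: counts.most_common(1)[0][0] for hood, counts in votes.items()}

# The bias survives through the proxy: the model hires everyone from
# neighborhood A and rejects everyone from neighborhood B.
print(model)
```

Dropping the protected attribute is not enough: the labels themselves carry the bias, and any feature correlated with group membership lets the model reconstruct it.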