Nina da Hora: “Technology reinforces the problem of structural racism in Brazil” | América Futura


The scientist Nina da Hora (Rio de Janeiro, 27 years old) is one of the most active young voices in the Brazilian movement seeking to increase the participation of black women in technology and innovation. Born on the outskirts of Rio and raised by five women teachers, her mother, aunts, and grandmother among them, she says that at age 6 she discovered she would dedicate herself to computing, fascinated by the possibilities for play and creation offered by her first computer. Along the way, however, she observed that very few black people had access to the thriving Brazilian technological universe, and black women even less. According to studies cited by the Preta Lab initiative, black women represent 28% of the country’s total population, but only 3% of those enrolled in computer engineering programs and 11% of those working at technology companies.

To begin changing this reality, Nina da Hora, a computer scientist, researcher, professor, and anti-racist hacker, proposes democratizing access to technology and making its workings transparent through accessible language, like the language she uses with her grandmother to explain how algorithms work. “We have to take some time to reflect on artificial intelligence and what it can generate,” she tells América Futura after participating in KHIPU, the Latin American meeting on artificial intelligence held in Montevideo at the beginning of March. She defends the idea of a plural science, open to society, and in particular advocates banning facial recognition technology, as cities like San Francisco have done, because she considers it inefficient and a reinforcement of the structural racism that persists in Brazil.

Q. What does an anti-racist hacker do?

A. An anti-racist hacker is someone who uses their cybersecurity or programming skills to combat racism and promote equality: for example, exposing racist individuals, removing discriminatory content online, or protecting marginalized communities from cyberattacks. In my case, when I started studying computing, I found myself in a universe that marginalized people like me: a woman, black, from the outskirts of Rio de Janeiro. So I set out to find ways to bring this group closer to technology. I am a hacker in order to break those social patterns and mitigate the damage racism does to society.

Q. How have you gone about that task?

A. I have created some initiatives, such as the Ogunhê podcast, in which I present the history of black scientists and their contributions to the world. I also started a research institute with a team made up entirely of Indigenous and black people. With them, we are opening ways to make technologies accessible to marginalized communities in Brazil.

Q. What have you explained to your grandmother about algorithms or artificial intelligence?

A. I taught her what an algorithm is using a cake recipe as an example. The two are similar because both are sets of instructions that, if followed correctly, produce a specific result. In computing and mathematics, an algorithm is used to solve a problem or perform a specific task. Artificial intelligence is a developing field that studies the possibilities of creating machines that use algorithms to perform repetitive tasks that can help society.
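The recipe analogy maps directly onto code: an algorithm is an ordered list of instructions applied to inputs to produce a predictable result. A minimal sketch of that idea (the ingredients and steps here are invented for illustration and do not come from the interview):

```python
def bake_cake(ingredients):
    """Apply the recipe's instructions in order: the same inputs plus
    the same steps always produce the same result, like any algorithm."""
    steps = ["mix", "pour into pan", "bake", "let cool"]
    result = list(ingredients)   # start from the inputs
    for step in steps:           # follow each instruction in order
        result.append(step)
    return result

print(bake_cake(["flour", "eggs", "sugar"]))
```

The point of the analogy is determinism: anyone following the same instructions on the same inputs gets the same cake.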

Q. Generally speaking, do you think artificial intelligence is helping us or making us lazier and more uncritical?

A. We need more critical thinking. We don’t reflect on the tools we use; we perform repetitive tasks, like machines. That is why we have to talk with children and young people, make technology-related concepts more accessible, and take time to reflect on artificial intelligence and what it can generate. But opening science to society takes time.

Q. What areas of artificial intelligence are most problematic?

A. Computer vision is one of them, because it is invasive and offers no privacy to those who use it. For example, when I unlock my phone with a photo of my face, it invades my privacy. The risk is that we don’t know where that image goes, where it will be stored, or what can be done with the reconstruction of that face. In Brazil, many black people have been mistakenly detained because, in image banks, people with dark skin are labeled as dangerous or more likely to commit a crime. The Security Observatory Network monitored facial recognition technology in five states in 2019 and found that it exacerbates the incarceration of black people, as well as being inefficient.

Nina da Hora at the Latin American Artificial Intelligence Meeting, in Montevideo. Courtesy

Q. But this racial bias in the machines is not magic; it comes, in any case, from those who program them.

A. Brazil imports technologies and uses them in our society, which has a problem of structural racism. Technology reinforces that when a person is detained through facial recognition, via cameras in public spaces. Those cameras run an algorithm that recognizes faces and searches a photo bank for who that person might be. We do not have access to that database; there is no transparency. We do not know the stages of its development, and we do not understand its associations. The Tire Meu Rosto da Sua Mira movement (Get My Face Out of Your Sight), in which I participate, is trying to ban the use of this tool in Brazil.
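The matching step she describes, a camera image compared against a photo bank, is typically a nearest-neighbor search over face embeddings. A minimal sketch under that assumption (the identities, vectors, and threshold below are invented; a real system would use a learned embedding model, not hand-written vectors):

```python
import math

def euclidean(a, b):
    """Distance between two face-embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_face(query, photo_bank, threshold=0.3):
    """Return the closest identity in the photo bank, or None when no
    stored embedding is within the distance threshold."""
    best_id, best_dist = None, float("inf")
    for identity, embedding in photo_bank.items():
        dist = euclidean(query, embedding)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist <= threshold else None

bank = {"person_a": [0.1, 0.9], "person_b": [0.8, 0.2]}
print(match_face([0.12, 0.88], bank))  # close to person_a
print(match_face([0.5, 0.5], bank))    # too far from everyone: None
```

The threshold and the contents of the photo bank determine who gets flagged, which is exactly the transparency problem raised in the interview: neither is visible to the public.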

Q. Do you rule out the possibility that it can be improved?

A. From my point of view, facial recognition technology has no chance of improving. It is extremely dangerous, and as a society we do not have the maturity needed to adopt a technology like that without first discussing racism, violence against women, or violence against the LGBTI community. We are trying to combat these problems, and that technology only reinforces them.

Q. You talk about decolonizing technology to improve the use of artificial intelligence. What does that imply?

A. The first step is to listen to and observe the territory where we live, from the perspective of Brazil rather than of Silicon Valley in the US. I have looked for technology references in Mexico, Chile, Uruguay, and Argentina, which are closer to our culture and social movements. Learning languages other than English, for example, is one way of putting that decolonization into practice. If I only learn English, I will think of reference figures in English and do my research in that language, which already steers me toward agreement rather than dissent. There is a lot of power concentrated in technology; a few dominate many countries. Decentralizing that power would mean having more digital sovereignty and creating our own technologies instead of importing them. But today we don’t have a strategy for organizing and governing our own data.

Q. According to the UN, of the 15 most important digital platforms, 11 are from the US and four from China.

A. Of those 15 big companies, there are five, among them Amazon, Google, Apple, and Microsoft, that share among themselves what we talk about, how we exchange ideas, and how we do research. My proposal is to develop more open programs that are transparent about how they were made; in other words, to make science more accessible in order to reduce concentration and control. Of course, these companies do not want that, and they develop an increasingly aggressive surveillance capitalism, in which people do not matter; what matters is the data.

Q. However, those companies raise the flag of diversity. Don’t you see it that way?

A. They seek to adapt to what we demand; for example, that there be more black people in the technology sector. But the average profile of those who research and develop these technologies is that of a white man, a middle- or upper-class researcher, who speaks several languages and does not know how to listen.

Q. What possibilities does Brazil have to develop its own technology?

A. Various representatives of civil society and research centers are in dialogue with this government (of Lula da Silva), which is more democratic, to develop an internet governance strategy in Brazil. We have excellent researchers and professionals organizing a digital sovereignty strategy so that our data stays in the country, with investment from the State rather than the private sector.

Q. That sounds complex in these times, when digital borders are so diffuse.

A. If we start little by little, in stages, it is possible. And everything we do today, someone will continue.
