The technologies that keep making smartphones smarter have had a significant impact on how we organize, mobilize, and communicate. If you've ever led, attended, or even considered taking part in a protest, smart devices like your phone or watch may have helped you find the information you needed; it's also possible you've been told to leave them at home to avoid being tracked. Smartphones have made knowledge and educational resources easier to access through online tools and learning, especially in places where in-person learning is not feasible or convenient. The ability to exercise certain rights and freedoms, such as freedom of speech, freedom of expression, and the right to protest, has grown significantly reliant on mobile technology and the internet.
However, technologies like facial recognition and geolocation, which enable you to use your mobile device and some of its applications, are not confined to those devices: they can also be used by systems like traffic and security cameras and by public and commercial entities looking for data. This was evident in Hong Kong, where police reportedly used information gleaned from social media and surveillance cameras to identify people who had participated in protests. The expanding use and capabilities of artificial intelligence (AI) have created a new need for research into how these technologies affect civil rights, civic space, and everything in between.
At the Center for Responsible AI at New York University, Dr. Mona Sloane is a senior researcher studying the convergence of design, technology, and society, with a focus on AI design and policy. Sloane says that while AI systems were designed to make everyday decision-making much easier and quicker, the data used to develop most of them is flawed. "Entities that build and deploy AI often have a vested interest in avoiding sharing the assumptions that underlie a model, as well as the data it was based on and the code that encodes it," Sloane told Global Citizen. "To function even reasonably well, AI systems often require enormous volumes of data. The extractive procedures used to obtain that data can violate privacy. And because data is historically based, it will always reflect historical injustices and disparities. Using it as the foundation for deciding what should occur in the future therefore strengthens existing injustices."
AI affects us not only through the data it can gather, but also by influencing what we believe to be true. Algorithms can infer a person's preferences, including political preferences, and those inferences can then be used to shape the kind of political messaging that person encounters on their social media feeds. One significant data-gathering incident in this respect involved the consulting firm Cambridge Analytica, which, according to the New York Times, worked for Donald Trump's 2016 US presidential campaign and obtained sensitive information from the Facebook profiles of more than 50 million users.
According to Jamie Susskind, author of Future Politics: Living Together in a World Transformed by Tech, the more that is known about people, the more susceptible they are to manipulation, and they are also more inclined to alter their behavior once they become aware that they are being monitored. "Digital is politically charged. We need to view these technologies as citizens rather than as consumers or capitalists. Future generations will increasingly be under the authority of those who own and operate the most powerful digital systems," Susskind said in a Forbes interview.
By presenting people with media that support their existing political views, algorithms also allow different people to inhabit distinct versions of reality. Mobile technology has undoubtedly helped open up civic space in some situations, even as it poses additional difficulties, particularly around the tracking and surveillance of activists and protesters. Academics like Sloane study AI because the technology can be employed in ways that restrict dissent or lead to the profiling of particular populations.
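To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python of how engagement-based ranking combined with preference inference can narrow a feed over time. Everything in it (the names, the scoring rule, the numbers) is an illustrative assumption, not any platform's actual recommendation code:

```python
# Toy illustration only: how engagement-based ranking can reinforce a
# user's inferred political lean into a "filter bubble". All names and
# values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    lean: float  # -1.0 (one political pole) to +1.0 (the other)

def rank_feed(posts: list[Post], user_lean: float) -> list[Post]:
    """Rank by predicted engagement: content closest to the user's
    inferred lean is shown first."""
    return sorted(posts, key=lambda p: abs(p.lean - user_lean))

def update_lean(user_lean: float, clicked: Post, rate: float = 0.3) -> float:
    """Each click nudges the inferred lean toward the clicked content,
    so tomorrow's feed is even more one-sided."""
    return user_lean + rate * (clicked.lean - user_lean)

posts = [Post("Story A", -0.8), Post("Story B", -0.2),
         Post("Story C", 0.3), Post("Story D", 0.9)]

lean = 0.2  # inferred from past behavior
for day in range(3):
    feed = rank_feed(posts, lean)
    top = feed[0]  # assume the user clicks the top item
    lean = update_lean(lean, top)
    print(f"day {day}: top story = {top.title!r}, inferred lean = {lean:.2f}")
```

In this toy run the same story wins the top slot every day while the inferred lean drifts toward it, which is the self-reinforcing mechanism behind the distinct "realities" described above.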
"There is no magic solution that will instantly make AI more fair; we must approach this issue from multiple directions," she said. "Researchers must engage in more transdisciplinary work. Engineers who want to better grasp how social structures and technologies are intertwined should consult social scientists. The general public and the affected communities must be included (and paid) in the AI design process. We need guardrails that encourage innovation rather than stifle it."
Stark concurs, adding that there is still much to be done on how AI technologies engage with actual people in the real world, starting with questions like: "How do we know what our assumptions are? What methods do we employ to draw conclusions about people? What sort of inference is involved here? When we draw such conclusions, what stories are we telling?" Social scientists have been worried about these issues for a while, he said. "And [AI] as a set of tools, in my opinion, really brings that issue to light."