Tag: digital rights

Suspending X: Brazil’s Ongoing Struggle to Govern Big Tech

We live in a scary world where someone with Elon Musk’s reach and influence can call a Brazilian Supreme Court judge an “evil dictator” and threaten him with imprisonment, all with apparent impunity, so it’s sometimes easy to miss what’s behind the news and the inflammatory tweets.

You might hear a lot about the suspension of X (formerly Twitter) in Brazil as a violation of free speech, which is the framing Musk prefers, arguing that the actions taken by Brazilian authorities are politically motivated attacks against his companies. But the real reason X has been suspended is that it refused to comply with directives to name a legal representative in Brazil and to remove certain accounts accused of spreading disinformation and inciting unrest.

What’s most striking about Musk’s tone is his apparent disbelief at Brazil’s audacity to challenge and potentially block his platform. It raises the question: why should Majority World countries be expected to accept Big Tech platforms uncritically, as though these platforms are the sole harbingers of development and free speech?

Now, the irony isn’t completely lost on me that the reported heir of an emerald mining family is pretending not to understand why it’s bad for companies to extract value while completely disregarding the negative impact of their business activities. In fact, this isn’t even the first such case involving one of Musk’s companies in Brazil.

As Lua Cruz argues brilliantly in the article “Starlink in the Amazon: Reflections on Humbleness,” Starlink’s introduction to Brazil carries the same complexities, illustrating how Big Tech technosolutionism and colonial legacies intertwine. Cruz expected to form a wholly negative impression of Starlink based on the media coverage, but visiting the affected communities and seeing Starlink’s effects on the ground made the complexity of the situation readily apparent.

While the widely reported negative impacts, the disruption of the social fabric and the environmental effects of such technologies, do take a toll and are acknowledged to some extent by the communities, the people of the Amazon have also been able to use the technology to their advantage.

Cruz observes that Starlink has brought internet access to Amazon communities previously isolated from digital infrastructure, facilitating access to essential services, improving communication, and enabling territorial monitoring. Moreover, Cruz highlights that communication networks can empower communities by supporting civic rights, such as the right to organize, express opinions, and engage in public decision-making.

“Communities have shown resilience and adaptability in the face of such changes, often finding ways to integrate new technologies in ways that support their needs and goals. However, this resilience should not be taken as a justification for disregarding the potential harms.”

While these benefits are significant, they do not erase the ethical concerns surrounding the deployment of such technologies without full engagement with the communities involved. It’s also important to understand how we got here in the first place. The very fact that Starlink has been able to position itself in this tech savior role can be attributed to years of neglect by the state and its deference to the private sector and international companies.

In contrast with the X case, this is an example where the state has failed in its duty, in particular the duty to provide people with meaningful access to the internet. Instead, it left that role to Starlink and to the major corporations exploiting the Amazon, which finance the antennas. The danger of letting these technosolutionist approaches fill the void left by the state is that they often fail to engage meaningfully with affected communities and overlook the complex socio-political dynamics at play in favour of simplistic tech savior narratives.

Technosolutionism is often defined as the idea that any problem can be simply solved with technology, but it’s actually more complex than that, especially when it intersects with colonialism and imperialism. You can tell an approach is technosolutionist when it treats Indigenous communities as passive recipients of “technological aid”, rather than recognizing them as active agents with their own voices, needs, and complexities.

This disenfranchisement of Indigenous voices can often lead to disastrous consequences when they’re not involved in the governance of the technologies deployed for their supposed benefit. After all, the same communication networks that enable participation and access are the ones that can potentially bring disinformation in, as evidenced by the X case.

But when the “tech savior” fails to deliver on its lofty promises, it is never the technology’s fault. Cruz brings up the example of how the rather nuanced coverage of Starlink in Brazil by the New York Times was picked up and reduced to racist caricatures by other media outlets, including Brazilian ones, while the critique of Starlink was downplayed or ignored in those derivative reports.

Musk’s refusal to comply with Brazil’s judicial system is yet another textbook example of this technological imperialism, cloaked in the guise of defending free speech. After all, his disregard for the socio-political impact of his companies is evident; after acquiring Twitter, his first moves included dismantling the teams focused on public policy, human rights, accessibility (!) and content moderation.

At the end of the day, X should face the consequences of its business activities in Brazil. Brazil, alongside other Majority World countries, must assert its right and duty to regulate Big Tech, ensuring these companies respect local public policy and human rights. Ideally, all communities should have both agency and sovereignty over the technologies that affect their lives, and tech companies should engage with them as such. Please read Lua Cruz’s full article on The Green Web Foundation website.

What’s Elections got to EU with IT

It’s EU Parliament elections time, and I thought it would be a good chance to give a short recap of recent significant EU digital regulations, for those wondering how the elections can impact our digital lives. If you’re deep into digital policy, this probably isn’t for you. I’m also not trying to convince anyone to vote one way or another (or not to vote either).

From regulating AI technology to data privacy and cybersecurity, the EU decides on rules and regulations that don’t only affect those living within its borders, but also far beyond. This particularly applies to digital issues and the open source movement, which transcend borders. If you’ve ever had to deal with an annoying cookie banner, you’ve felt the EU’s effect. So what has the EU been up to recently?

Digital Security and Privacy

The EU has taken some massive steps in regulating the security of digital products. You might have heard of the Cyber Resilience Act (CRA), which requires that products with digital elements maintain high security standards. There are lots of positive things the CRA brings, such as mandating that products be “secure by design” and ensuring that when you buy a digital product, it receives updates throughout its lifetime.

We are yet to see how the CRA will be implemented, but I think that if it’s elaborated and enforced the right way, it will enhance trust in open-source software by setting a high baseline of security across the board. If the definitions and requirements remain opaque, it could also introduce undue burdens and friction, particularly for open source software projects that don’t have the resources to ensure compliance. There are also wider ecosystem concerns.

The CRA, along with some General Data Protection Regulation (GDPR) updates and the newer Network and Information Security Directive (NIS2), places significant obligations on people who develop and deploy software. Also worth mentioning is the updated Product Liability Directive, which holds manufacturers accountable for damages caused by defective digital products.

If this is the first time you’re hearing about all these regulations and you’re a bit confused and worried, I don’t blame you. There is a lot to catch up on, some of it positive, a lot of it in need of improvement. But all in all, I think it’s generally positive that the union is taking security seriously and putting in the work to ensure people stay safe in the digital world, and we’ll likely see the standards set here improve the security of software used in Europe and beyond.

Digital Services Act (DSA) and Digital Markets Act (DMA)

From enhancing user rights and creating a safer digital environment, to dismantling online monopolies and big platforms, the Digital Services Act (DSA) and Digital Markets Act (DMA) were introduced this year by the EU to provide a framework for improving user safety, ensuring fair competition, and fostering creativity online.

The DSA improves user safety and platform accountability by regulating how platforms handle illegal content and by requiring transparency in online advertising and content moderation. The DMA, on the other hand, focuses on promoting fair competition by targeting the major digital platforms it calls “gatekeepers,” setting obligations to prevent anti-competitive practices and to promote interoperability, fair access to data, and non-discriminatory practices.

Artificial Intelligence Regulation: A Skeptical Eye

I had to mention the AI Act, since it was recently passed. It’s designed to ensure the safety, transparency, and ethical use of AI systems and the protection of fundamental rights, classifying systems based on risk levels and imposing stringent requirements on high-risk applications. Nobody on either side of the debate is happy with it, as far as I can tell. As an AI luddite, my criticism is that it doesn’t go far enough to address the environmental impact of machine learning and of training large models, particularly as we live in a climate emergency.

Chat Control Legislation: Privacy at Risk

One of the most worrying developments at the moment is the chat control provisions under the Regulation to Prevent and Combat Child Sexual Abuse (CSAR). Recent proposals include requirements for users to consent to the scanning of their media content as a condition for using certain messaging features. If users refuse, they would be restricted from sharing images and videos.

Obviously I don’t have to tell you what a privacy nightmare that is. It fundamentally undermines the integrity of secure messaging services and effectively turns user devices into surveillance tools. Furthermore, experts have questioned the effectiveness of this scanning in combatting CSA material, since the controls can be evaded or alternative platforms can be used to share it. Even Signal’s president Meredith Whittaker has stated that the private messaging app would rather leave the EU market than implement these requirements.

Fingers Crossed for the Elections

In conclusion, we’ve seen how the EU is shaping our daily lives and the global digital ecosystem beyond just cookie banners. Regulations like the Cyber Resilience Act, Digital Services Act, and Digital Markets Act are already affecting how we make decisions and interact with software and hardware, and will bring improvements in digital security, competition, and enjoyment of rights for years to come.

Proposals like chat control, however, demonstrate how it can also negatively impact us. I’ll be watching as the elections unfold, and I urge you all to stay informed and follow these developments. We’ve seen from the CRA process how positive engagement by subject matter experts can sometimes help steer the ship away from unseen icebergs.