The power and the people behind our technology
11 February 2021 at 8:22 am
A new report uncovers the fragile state of our digital rights following the crises of 2020
While digital technology's role in keeping society functioning and connected is more important than ever before, advocates say community groups can play a part in ensuring the rights and privacy of vulnerable people are not exploited.
When COVID-19 hit, people across the globe shut their doors and opened up their laptops to stay safe and connected.
While this meant workplaces were able to remain open, and people could still see the faces of their friends and family, a new report from Digital Rights Watch (DRW) explores the public’s dependence on the web and the ways in which governments and corporations used the crisis to expand their powers of surveillance and encroach on the boundaries of digital privacy.
In the foreword of the report, the chair of DRW, Lizzie O’Shea, described the state of digital rights as “more fragile than ever”, with the crisis offering an opportunity for those in positions of authority to wield their power over others.
“We have seen the expansion of technologies like facial recognition and surveillance drones, without any meaningful public consultation or discussion,” O’Shea said.
“Technology at work is increasingly used as a way to keep tabs on workers, a tendency that has accelerated as we transitioned to working from home. But equally, it is clearer now that power needs to be held accountable in a functional democracy, including in digital settings.”
Vulnerable people at particular risk
O’Shea told Pro Bono News that the concept of digital privacy was often discussed as something that only affected journalists, whistleblowers, or people evading the law, but that there was a class element to the issue that was regularly overlooked.
“Information about people who are vulnerable and poor is packaged up and exploited by different kinds of companies in the data mining industry,” she said.
“I think it’s unacceptable that we are allowing that to happen just because it’s out of sight.”
The report pointed to the introduction of facial recognition technology for those accessing government services such as Centrelink and Medicare, as well as to the robo-debt scandal. O’Shea said these changes flew under the radar because of the type of people accessing these services.
“There’s lots of evidence out there that poor people are more likely to be processed and have decisions made about them by machines than people who are wealthier,” she said.
“That makes the poorer parts of society more vulnerable to mistakes in how these decision-making tools are designed, or [to] intentional design problems that we might consider problematic if applied to anybody else, but because there’s a rhetoric around welfare that people aren’t actually entitled to it, this is considered acceptable.”
Changing the people at the centre of technology
She said that it was important to remember that technology wasn’t just something that was “discovered”, but was built by people.
“And those people need to be held accountable so that it works in the way that we want it to,” she said.
O’Shea added that stronger collaboration and diversity of organisations working on the issue would also make a big difference.
“There are lots of opportunities for cross-pollination between digital rights organisations and people who’ve worked with communities, particularly people who access welfare services, to ensure that the digital aspects of these services are able to be discussed openly,” she said.
“That way, we can make sure that we don’t wait until mistakes happen to catch these problems.”
And for vulnerable people at the centre of this, she said it meant living a life that was their own.
“You get to decide what your life looks like online, and [it shouldn’t be] pre-determined by a bunch of companies that have already made decisions about what you might like based on data gathered from you,” she said.