Tariq Rashid discusses the need for the left to embrace the future of digital technology.
Too often the left is caught out by digital technology disrupting how we work and live, leaving us struggling to catch up and design regulation to soften its worst effects on society.
Each innovation seems to empower already powerful corporations, create new victims of automated injustice, deepen the isolation of workers increasingly managed by faceless apps driven by mutant algorithms, and nudge our world ever closer to a totalitarian surveillance panopticon in which our behaviour is gamified by Orwellian social credit scores.
The advancement of technology is not only inevitable, it is accelerating. Today is the slowest it will be for the next few decades.
In the face of this relentless barrage, it is not surprising that some on the left look back to an imagined simpler, slower era, with jobs for life, in industries that didn't change, and where the only threat was nothing more complex than a factory boss.
And this is why the left is perceived as being less technologically savvy than the right, unwilling to understand it, and therefore unable to govern it. This accusation has some merit - where are the thought leaders on the left speaking knowledgeably about AI ethics and algorithmic accountability, the automation of work and universal basic income, the fallacy of anti-encryption laws, a vision for data protection post-Brexit, or even a modern digital bill of rights?
But we can’t fight the future. It will happen - with or without us.
We need to embrace the future, and embrace it with confidence.
To do this, we can’t react to each new technology - each new Facebook, each new military robot, each new coronavirus app, each new Cambridge Analytica - and in a panic try to cobble together regulation to make it safe, fair and ethical. Not only is the pace of change too fast for us to keep up; we’ll also end up applying regulation inconsistently, and in the regulatory gaps and cracks new monsters will emerge.
The only workable approach is to go back to the timeless values and principles of the left, and apply them to the 21st century, itself now moving beyond the Internet Age to the Age of Automation and AI.
Not only will this exercise allow us to disentangle each new technology and see it more quickly for what it is; designing new regulation, if it is even needed, will also be easier when it is grounded in values we agree have held for centuries.
And new regulation won’t be necessary for each new disruption if those values and principles are carefully encoded into a Digital Bill of Rights - human rights for the digital age - that applies broadly and defines the framework for how technology should and shouldn’t be used in a healthy society. That clarity not only guides the development of new technologies, it also makes clear what is and isn’t an acceptable business model.
For example, from a fundamental principle of human dignity, it isn’t a leap to assert that nobody should suffer the day-to-day insecurity of the precarious income offered by Uber-style tech platforms - a business model which stacks all of the risk, and none of the benefits, onto its so-called employees.
As another example, we might clarify that any life-changing decision an organisation makes about you must legally be a human decision. This doesn’t preclude the use of technology to aid that decision, but it ensures that accountability and responsibility are neither displaced nor dispersed. In contrast, today we have organisations making illiterate use of poorly designed machine learning algorithms to decide whether or not you get a job, with no explanation, no recourse, no transparency and no accountability. What is this, if it isn’t algorithmic tyranny?
We can be bolder. If we assert that all personal data is inalienably and irrevocably ours, there can be no dark market of personal data being passed around and sold to anyone for whatever good or malign purpose. We can expand on this principle with rights to know where our data is and what it is being used for, and to revoke our consent at any time. Yes, this would kill a significant part of the data surveillance economy - but it would create space for more ethical business models to emerge. All that from a basic principle about personal data being part of who we are.
We can be ambitious. If we assert that the economy is there to serve society, and not the other way around, we can justify taxing the hugely successful areas of the tech economy to invest in training and education for those left behind in industries disrupted by it.
Going through this exercise will force us to revisit what our core values and principles are, and the bumpy process of interpreting them for the digital age will only clarify and strengthen them. It will thrust forward those on the left who understand technology and can communicate its benefits and risks in plain English, grounded in our timeless values and principles - and they will be the leaders of the future.
Tariq Rashid is passionate about a sustainable and just technological future, which is why he teaches children and tech-shy adults to code, organises grassroots tech and digital art communities, and runs a business focused on safe, fair and ethical AI.