Making the Digital Welfare State Work for People

So, Philip, the subject of your lecture this evening is "Making the Digital Welfare State Work for People." In your report on the United Kingdom as Special Rapporteur on extreme poverty and human rights, you had a number of criticisms of what you call the disappearance of the welfare state behind the digital welfare state. Can you just say a few words about those criticisms?

Well, the UK has had a very deliberate strategy to transform, quote, "governance" throughout the country in digital terms, and the big effort, the big push, has actually come in the welfare area. So the "digital-by-default" nature of universal credit is actually a misnomer, in the sense that it's really closer to digital-by-compulsion.
So, if people want to get universal credit, want to get access to their benefits, they have to
go online, they have to be able to operate a computer, they have to have access to the
internet, and they then have to continue to engage with the Department for Work and Pensions on a very regular basis electronically.

But what sort of problems does that really pose in terms of access? How many people, and what kinds of people, are affected by that change?

Well, the Department is relatively dismissive, or over-optimistic, saying, "Look, most people can do this, it's not a problem. For those few who can't, we'll take care of it." That's not the picture that you get if you look at the Lloyds survey of consumer access to the internet, or the Ofcom surveys of the extent to which people have difficulty accessing electronic services. What you find is that there are significant percentages. Overall, that might only be ten percent, but there are significantly higher percentages among people with disabilities, and even in the 16–24 youth group there's a lot of difficulty with access to the internet. So, in fact, there are quite large segments of the community who are potentially either excluded or at least prejudiced by having to be online on a very regular basis for benefit purposes.

So, one issue is access to the necessary infrastructure,
but do your concerns about so-called digital welfare or the use of technology such as artificial
intelligence go wider than that?

I think there's no doubt that these new technologies have immense potential; I'm not opposed to them at all. I think they should be used to the full. But what we see in the digital area is that they've been overwhelmingly used
in a negative way. Governments and others see them as an opportunity to cut back on
welfare budgets, not to make them smarter, not to make them better targeted, but rather
to produce large savings to pursue other agendas, such as the fraud prevention agenda, which
looks neutral on its face, but in fact is driven by an ideology that tries to discredit
welfare. It’s driven also by a privatisation agenda, because so much of this work is outsourced,
it goes to companies, or it's actually privatised. That takes government further out of the picture than it was before. I think the basic concern I have is with the way
in which the digital welfare state is being implemented.

So, you focus in your UK report on initiatives here, but is it your sense that this is a trend that goes wider than the UK, that is worldwide?

At New York University Law School, we've set
up a project called, “The Digital Welfare State and Human Rights,” and it’s driven precisely
by the realisation that in a very wide range of states, these issues are becoming more
and more prominent. Whether it’s Aadhaar in India, whether it’s the new biometric recognition
system in Kenya, whether it's a number of developments in Australia, whether it's the SyRI system in the Netherlands, there are many different manifestations, but it's the
same general set of issues. There’s a lot of experimentation, particularly with the
poor, because they’re the most vulnerable, they’re the ones who have to provide data
to the government. These systems are ever more intrusive and demanding, they provide
an opportunity for government to really control in ways that we wouldn’t contemplate for the
rest of the population. So, I think there are a lot of major concerns with the way in
which these digital welfare states are evolving.

One of the observations you make in your report on the UK, and which you have made elsewhere, is that the present discussion about the ethics of AI is inadequate to help us really grapple with the consequences of this kind of technological development. Why is human rights law more fit for purpose?

I think ethics is completely open-ended, I'm
sorry to say that, but it’s basically in the eye of the beholder. What we have are the
major tech companies as the key players in the development of various ethical codes adopted
in different contexts. These ethical codes will almost always acknowledge human rights,
not always, but usually, upfront somewhere, human rights are important. That’s the end
of human rights. From then on, it’s a pretty open-ended list of various criteria of fairness
and transparency and openness and whatever. But these have no solid meaning, no clear definition; they're certainly not matters of law. And of course, the tech companies will say, "Well, human rights are out of date. They're not flexible enough. We need to be able to move faster, to adapt more quickly, and so on. Human rights are yesterday's benchmark."
I think that is basically leaving the tech companies, as they want, with a carte blanche
where they determine what are the main principles that are going to govern their own activities.
It means we move away from the state, it means we move away from any agreed conception of
what the basic values are. It’s true that human rights language is not written in a
way that can very readily be applied to a lot of the very sophisticated settings in the
tech area, so we just have to adapt them and make them fit for purpose. But they remain the bedrock foundation that should be used rather than simply sidestepped, which is what's happening now.

So, one last question: in your ongoing project
on the digital welfare state, do you anticipate developing interpretations or refinements
of human rights law that might well be able to be used in the design of some of these
new digital systems?

I think there's a challenge in getting the
right mix between the universal and the local. In other words, I would certainly like to
see the universal human rights regime made more appropriate and like it to become more
conscious, more sophisticated in relation to these matters, but it’s also true that
a lot of this is local. The system that's going to work in India, the protections that are needed there, the system of accountability and so on, is going to look quite different from that in the United Kingdom or Australia.

Okay, thank you very much.

My pleasure.
