MC433 - week 10
These are my notes from November 30 for MC433 at the London School of Economics for the 2017-2018 school year. I took this module as part of the one-year Inequalities and Social Science MSc programme.
The usual disclaimer: all notes are my personal impressions and do not necessarily reflect the view of the lecturer.
Automated Technologies and Autonomy
Readings
Smart Technologies and the End(s) of Law by Mireille Hildebrandt (chapter 2)
- defines “smartness” in terms of agency
- deterministic agency (which she defines somewhat idiosyncratically, in order to contrast it with "machine learning", which she considers non-deterministic?)
- agents that involve machine learning
- multi-agent systems
- complete agents, which can control their material world (don’t really exist yet)
The Definitive Guide to Do Data Science for Good on the DataLook blog
Extremely naive imo but at least it’s a start
Doing good in the cognitive era on the IBM website
literally just corporate propaganda
Lecture
- on Iris Marion Young: injustice isn’t just Habermasian oppression/rule, but occurs in everyday life, through the outcomes of institutional rules
- for every oppressed group there is its dialectical opposite (the privileged group)
- 5 faces: systematic violence / marginalisation / exploitation / cultural imperialism / powerlessness (diminish agency of individuals in intersectional ways)
- Hildebrandt defines automated tech as a mindless, distributed form of technology
- something is changing in this new automated era
- technology is an artefact of design, not of legislation; it's not enacted
- technology can prevent its own overruling and prevent disobedience (e.g., having to accept TOS in order to use a site or product)
- no court in which to dispute problems of this kind of regulation
- agent: something that can autonomously adapt to changes in environment over time
- even if deterministic, can still adapt in unpredictable/unforeseen ways
- produces emergent behaviour
- decisions produced based on inferred patterns from analysis of data, not cognitive reasoning the way humans would make decisions
- can find patterns in data that humans might not find (or agree with)
- we can use these systems to extend our own cognitive resources, but in another sense, they use our cognitive resources to extend their own capabilities
- translating human agency into a distributed digital form -> amplifies inequalities
- we’ve cordoned ourselves off from a universe of other possibilities (path dependency, just like with politics)
- we end up limiting our own agency
- by abdicating our responsibility to algorithms/intelligent systems, society can maintain or even deepen oppression while claiming a neutral shield
- how to fight back against algorithm-mediated discrimination in the intelligent era?
- strengthening consumer privacy laws
- upholding anti-discrimination laws (e.g., fair housing, fair credit)
- Hildebrandt’s point is that law is not adequate for intervening in these processes due to the opacity of their design
- on the data science for good thing
- based on the ethics of care
- representation issues when designing the tech
- this is essentially forced to be a self-regulating system since the law can’t keep up
- should we, as consumers and producers of data, have a say in how this industry behaves?
- where is the accountability in this self-regulatory model?
- (because imo there is no real accountability unless there is someone specific to guillotine; any individual can just hide behind the data/algo)
Seminar
- if self-regulation and legal regulation don’t work, what alternative is there?
- grassroots movement to raise awareness?
- question is: what level of awareness is needed? do we all need to understand how algorithms work?
- or is it just that the people in positions of power need to understand the ethical implications
- (they need to understand the materialist implications too imo)