“Move fast and break things” is an idea that started with Facebook, yet it has become a philosophy for many tech companies, and is in some ways demanded by a market that runs on the whims of venture capitalists who see two options: get in early, or fail. Moving fast, by itself, does not seem to be an ethical concern, except that moving fast seems to necessarily involve incidental breakage from a lack of planning or anticipation of future consequences. It’s also worth questioning which things tech breaks.
In my experience, rather than breaking down barriers, structures of discrimination, or capitalism, it breaks down rules and regulations, it breaks down communities, and it breaks down its workers. But that isn’t the end of it. While the sense that moving fast and breaking things is straightforwardly bad tells us to avoid it, that sense isn’t very interesting philosophically, and most importantly, a feeling that something is straightforwardly bad doesn’t tell us why that is the case.
Well, last week I had the pleasure of attending the Waterloo Symposium on Technology and Society, hosted by CIGI, and an excellent panel discussion gave me the aha moment I needed to answer the question of why moving fast and breaking things is an especially dangerous mindset in tech.
The Importance of Decisions
One thing that was emphasized by the speaker and panel was that computers, even those powered by artificial intelligence, do not on their own make decisions. They produce outputs. Those outputs are then used, either by other processes or by human actors, to make decisions. But even when outputs are used by other processes, this is still not a decision the computer is making; it is a decision that a programmer or team of programmers made when writing the program.
Even with things like autonomous vehicles, the vehicle may appear to make a decision, but that is an illusion. The decision or decision tree will have been pre-programmed in some sense (even if the programming is simply “behave as you learn the driver behaves”).
Obviously, offloading decisions into a central location (the programmer) allows us to do very powerful things, and this capability has streamlined and powered our world for the last 20+ years. However, there are two concerns we should be attuned to when we recognize this power:
- That decisions, when they are only made once, and when one decision essentially counts for hundreds, thousands, or millions of decisions, become much more important.
- That when decision making power is unregulated and centralized in the same place that economic power is centralized, this is a threat to our democracy, social institutions, and freedoms, especially when that decision making power is leveraged into even more economic power. (As a sidenote, this is especially dangerous when decisions are opaque to outside sources, when the black box cannot be penetrated, when there is no algorithmic accountability.)
Now, that is not to say that all decisions should be in the hands of all persons. Quite the opposite. We understand that we should have representatives in government who understand government, and we vote for those representatives because they are experts. We trust scientists to advise those representatives on scientific matters because they are experts, and so on.
However, as some tech companies have pointed out when asked to build ethics into their systems, programmers cannot be expected to be ethical experts, and yet they are building things and making decisions that have ethical ramifications. Furthermore, employees may themselves be exploited and pressured by investors to act in ways contrary to their ethical interests, or simply put on a timeline in which ethics cannot be properly considered. The result is breaking things that shouldn’t be broken, and that will remain broken, because the breakage has been systematized through computing.
In essence: if we are going to concentrate anything in one spot, whether money, power, or decisions, we had better be damn sure that that spot will be the most responsible and the most judicious with those things. We want that spot to be diverse, to be inclusive of minorities, and to include disparate perspectives and considerations, so that those making decisions are better able to anticipate future problems affecting a variety of stakeholders. Tech already has a problem with this, and if companies are under pressure to move fast at all costs, which they often are, I have a hard time seeing how responsible or judicious decision making can happen in that environment.
So what is a tech company to do if it is unethical to do what the market demands?
Well, the answer is that we have to change the market. We have to make the disincentives for concentrating power, and for making mistakes with it, so high that the potential reward for becoming a successful tech company is not enough to offset them. We have to enact government power to ensure diversity, algorithmic accountability, and oversight, so that we understand how decisions are being made, who is making them, and to what end they are being made. If the answers to those questions are “rich white men” and “to finish faster,” well, as the ladies in those men’s lives might tell them: that’s just not okay.
Finally, as consumers, we can start to consciously think about what kinds of decisions we are offloading to computers and their programmers, and whether those are decisions we would be better off making ourselves. We can try to support responsible tech, and we can elect representatives who care about regulating businesses that would try to take power away from them and, by extension, their constituents (i.e., us).