AI in Reflection

There is so much to parse in this Times column inspired by a paper examining alleged political leanings of large language models.

First, the myth of a “center” is imposed on the machine as it is on journalism. That is an impossibility, especially when extremists weigh down the equation and, by gravity, pull the “center” downhill toward them.

Second, in its raw state the model reflects the collected corpus of digital content from those who had the power to publish. Thus it will reflect that worldview; it is a reflection of that power. Imposing left/right/center on that says little about the machine, much about that imposition.

But, third, as the Stochastic Parrots paper preaches, the models are too huge to audit, so it is impossible to judge the effect of the choices made in training them. That is the problem with size-matters, macho model-making.

Fourth, when, as the article says, models are “fine-tuned,” some of that effort comes in reaction to fears of political and media pressure on the model-makers, to compensate for what is lacking in the material used to train the model. Thus, the tuning says more about that pressure and those fears than it does about the technology per se. See: the fuss over Gemini’s images. So when a model puts a Black person at the US Constitutional Convention, its makers are trying to account for society’s biases; when it is called “woke” by the right, that reveals their further biases. The issues are all human.

All this is why I argue that we must study technology in the context of humanity, examining not the software as if it had a worldview but instead understanding the conflicting worldviews imposed on it and what that reveals not about the machine but about us.

The tl;dr of all this is that just as there is no mythical center in politics, there is no neutrality in technology (AI or social media) and there is no objectivity in journalism. Each is an attempt to impose a given view as a norm.

In my upcoming book (coming not soon enough), The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic, I quote Terrence Sejnowski on his theory of the Reverse Turing Test in the context of Kevin Roose’s affair with ChatGPT. The episode says more about the reporter than the machine he reported on. That snippet:


In my book, I argue we should understand the internet not as a technology but as a human network and enterprise. Similarly, we should not try to analyze the biases in AI so much as we should endeavor to understand the human biases imposed upon it in design or reaction.

AI as it exists could be understood as a reflection of society: in what it summarizes from the collection of human text and images it is trained on, in how our reactions to it expose our fears or dreams, in how we choose to command it.
