On my shelves, I have many books, some read but most unread. Among these are collections of stories that I’m slowly working, or reworking, my way through. (And my, but some folks wrote a lot of stories: Sturgeon’s collected stories run to 12 volumes, and others, like Vonnegut’s, are large enough to count as weight lifting.)
One I like picking up is the collected stories of Fredric Brown. Writing from the 1940s to the 1960s, Brown had some lovely ideas, albeit wrapped in stories that are problematically of their time, and one idea I was reminded of today comes from his 1951 story ‘The Weapon’. If you haven’t yet read it, there will be spoilers.
Dr James Graham is working on an ultimate weapon, a fact evidently publicly known. He is visited by a stranger named Niemand, who asks, ‘Is humanity ready for an ultimate weapon?’ Graham doesn’t listen, and so Niemand provides a demonstration of the dangers of giving a weapon to someone not capable of wielding it.
I’m sure that some amount has been written about possible interpretations of the story, but I will note that Ivy Mike, the first fusion bomb, was tested on 1 November 1952, and so this story sits in that complicated time in the early days of what became the nuclear arms race.
But if we take a step back and focus not on weapons but rather ideas, are we in the same situation? Are we, the collective we of humanity, developing ideas that we may not be capable of handling well in the end?
The first possible answer goes back to what might have been Brown’s initial inspiration: the nuclear bomb. If I correctly remember things I read a long time ago, there is an upper limit to the size of a fission bomb, but there is no theoretical upper bound to the size of a fusion bomb, only the practical limits of engineering and imagination. And so this idea has to count.
But there are others. There is an argument that the corporation is an idea we’re not entirely capable of handling well. One aspect of this I’ve encountered on more than one occasion is the view that corporations are the first instances of artificial intelligence: something ultimately non-human (though made up of human agents) that has agency and desires.
The version of this filling the news at the moment is the chatbot. Again, I think the argument that we’re not prepared for the consequences of this creation is a strong one, even taking into account that this isn’t an artificial intelligence as such, but rather a reflection of who we are. After all, the chatbots are trained on what’s been written, and so in some strong sense they’re a distillation of what we collectively have expressed over the years, decades, centuries.
This is a slightly different conversation from the one about doomsday devices (another topic I feel is worthy of consideration, if only to understand which paths not to walk), but it is still a conversation we need to have, lest we sleepwalk into a world significantly less comfortable than our current one, with all its faults and complications.
So we have one very short story with a moral: are we ready for the things we make with our hands? But regardless of the answer, the big question remains. What next? What happens when we have built a weapon, in however general a sense? This ties into other thoughts about the toolmaker koan, which says, loosely, that it’s easier to build a tool than to use it well, and perhaps developing our understanding here is the next great quest.