I noticed a brief article in The Guardian with the captivating headline “Can Google be taught poetry?”.
By feeding poems to the robots, the researchers want to “teach the database the metaphors” that humans associate with pictures, “and see what happens,” explains Corey Pressman from Neologic Labs, who are behind the project, along with Webvisions and Arizona State University….
The hope is that, with a big enough dataset, “we’ll be delighted to see we can teach the robots metaphors, that computers can be more like us, rather than the other way around,” says Pressman. “I’d like them to meet us more halfway.”
That sounds utopian, magnificent, a turn away from harsh, narrow-minded informaticism toward grand humane concerns. And yet it reminded me of a recent article in the New Yorker, “Why Jihadists Write Poetry”:
Analysts have generally ignored these texts, as if poetry were a colorful but ultimately distracting by-product of jihad. But this is a mistake. It is impossible to understand jihadism—its objectives, its appeal for new recruits, and its durability—without examining its culture. This culture finds expression in a number of forms, including anthems and documentary videos, but poetry is its heart. And, unlike the videos of beheadings and burnings, which are made primarily for foreign consumption, poetry provides a window onto the movement talking to itself. It is in verse that militants most clearly articulate the fantasy life of jihad.
Whatever the motives of Neologic Labs — and I’m guessing they have a pitch to investors that doesn’t rely upon the self-actualisation of smartphones, nor on the profits to be turned from improving the quality of poetry — can we doubt that sooner or later this technology is going to be applied to improving the quality of government surveillance, escaping the literal to follow human prey down into the warrens of metaphor and allusion? It will start with terrorists, but that’s not where it will stop.
Imagine, just to begin with, China equipping its internet with a cybernetic real-time censor that can’t be fooled by symbolic language or by references to obscure rock lyrics, which the software will know better than any fan. Protest movements will be extinguished before people are even aware they were part of one.
Cool post. I think that fear has to apply to any new technology though, right? So then all we can do is try to work as hard on our politics as we do on our tech. Some people think that eventually tech will get so advanced that we’ll be able to use it to overcome our moral and political deficits. What do you think about that?
Since it will inevitably be much easier to build technology that turns a profit by solving some puzzle than to build a device that solves the puzzle and also resolves humanity’s moral conundrums, I’m not going to hold my breath waiting for the machines to save us.