I love watching talks and seminars online. It is in so many ways superior to watching them in person. You can pause the talk to discuss it with your friends out loud, or to look something up online. You can skip the boring introduction. You can stop watching a talk if it’s lame, try another one, and keep trying until you find a good one. Maybe best of all, there are vastly more talks available online than at even a large and diverse institution. The one plausible weakness is the lack of interactivity – you can’t ask questions. But it turns out that the Q&A portion of most public talks (and even departmental colloquia) kind of sucks. You can mitigate this weakness by watching the talk with other thoughtful, intelligent people, and talking with them about it during and after.
Rene, Michelle, and I sat down last night and watched this excellent debate between Drew Endy (Stanford/MIT) and Jim Thomas, put on by The Long Now Foundation. The formal presentation/debate portion is an hour long and is followed by another hour of discussion. Endy favors an open-source-style model for synthetic biology, with the technology available to essentially anyone. Thomas thinks it should be controlled and kept out of the hands of potentially dangerous actors: the military, the corporate oligarchy, etc. Their positions are of course more subtle and well thought out than that, but you can only fit so much into a nutshell.
What they both kind of danced around, but seemed to understand, was the question of how much power we actually have to make decisions about the technology at all. Ever since reading The Making of the Atomic Bomb, I’ve been aware that in history, especially the history of technology, there are sometimes fewer choices available than we might like to think. I believe we had the ability to choose not to use the atomic bomb at the end of WWII, but I don’t think we had much choice in whether the bomb was eventually developed. Once it became clear that it could be built, the incentives for someone, somewhere, to build it were irresistibly large.
I think we are at a similar point now with synthetic biology. Irrespective of what we “decide”, it will be developed. We potentially have some control over who develops it, in what context, and how fast. Governments will certainly develop it, especially in wartime, if they see a potential advantage in doing so (and I suspect someone somewhere would, regardless of whether they’re correct). Isolated crazies (disgruntled and antisocial biotech graduate students?) will also develop it. These categories of actors don’t care what the consensus decision (the law) is, and given the rate at which the barrier to entry into the field is dropping, an ever-larger number of such actors potentially exists.
At the other end of the spectrum are things like iGEM: teenagers competing openly in an academic setting. We can shut them down fairly easily. We can functionally prohibit, or censor, open collaboration and R&D by making it illegal, or simply by denying such work public funds.
In between are the corporations. They tend to try to do things they think will be profitable. To some extent, the profitability of synthetic biology will be a function of intellectual property protections. We can certainly deny the industry those protections, and thus impair the field’s potential profitability for the time being. However, it’s unclear that synthetic biology would only be profitable in the presence of strong IP. Even without bio-patents, one might very well be able to build a successful business on engineered bugs producing hydrocarbon fuels from cellulose. Maybe more interestingly, I think it’s actually unclear whether granting bio-patents would enhance or degrade the incentives for industrial development. There’s some evidence that excessive ownership rights actually inhibit innovation in sectors where innovation is a function of combining many small pre-existing parts into a new, greater whole. The overhead of coordinating dozens, or hundreds, or thousands of rightsholders – each of whom knows they can hold out for a better deal, because the machine won’t work without their tiny cog – effectively kills collaborative creation. So, ironically, if our goal is to retard commercialization, it’s at least possible that the best choice is to grant strong IP protections to synthetic biological creations.

We could also potentially keep corporations (and MIT students) out of the game entirely by criminalizing synthetic biology altogether, but could we really do that absolutely everywhere, in the face of the enormous perceived benefits of the technology? Sixty years on, India, Iran, Pakistan, N. Korea, Israel, S. Africa, Germany, Japan, Canada, and Australia all sit somewhere along a spectrum of nuclear power and capability, official or unofficial, legal or not – and developing a nuclear fuel cycle is a massive industrial undertaking. Re-assembling the 1918 H1N1 flu is not, and at least potentially, neither is developing a bug that can bioremediate plastics, or disassemble pesticides and pharmaceuticals in our domestic water supplies, or scavenge and immobilize plutonium or heavy metals in soils, or sequester carbon dioxide from the atmosphere in biologically precipitated carbonate minerals, or manufacture “synthetic” materials which are biodegradable, etc. I think building, and then enforcing, a global legal consensus that all synthetic biology is wrong would be very, very difficult.
So which is more dangerous: a relatively small number of outlaw actors, or a huge number of young Craig Venters who don’t really know what they’re doing, in addition to the outlaws? With a broad array of expertise and tools at large, we would be more agile, better able to respond to problems or malicious actions; but at the same time, there would be more unintended consequences, because the techniques would be so widespread. I can sympathize with both positions. Ultimately, though, the truth is we just don’t know.