Barriers to open science: From big business to Watson and Crick

To compete or collaborate

Science can only advance when discoveries are shared, but scientists often have a disincentive to disclose their research. So says a group of researchers from the Georgia Institute of Technology in a recent article. Do academic scientists share information with their colleagues? Not necessarily. In fact, scientists often make complex, calculated decisions when asked to share data:

“Consider the behaviour of two scientists deciding whether to share materials (such as a cell line) or data with each other. If one scientist shares her materials, she increases the likelihood that the other scientist will solve the problem before she does. On the other hand, sharing has the potential benefit that the other scientist may share his materials in the future. This has clear elements of the well-known prisoners’ dilemma; both scientists would be better off if they shared, but neither does so unless the situation is expected to arise again and often. When the prospect of sharing occurs repeatedly, the scientists weigh the current gain from refusing to share against the expected loss from the lack of access to materials or data in the future.”

Nature magazine puts it more succinctly: “Sharing data is good. But sharing your own data? That can get complicated.”
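The calculation the researchers describe is the classic iterated prisoner's dilemma. A minimal sketch in Python makes the logic concrete; the payoff numbers and strategy names here are illustrative assumptions of mine, not figures from the article:

```python
# Illustrative payoffs (my assumptions): mutual sharing beats mutual
# withholding, but withholding against a sharer pays best in one round.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("share", "share"): 3,
    ("share", "withhold"): 0,
    ("withhold", "share"): 5,
    ("withhold", "withhold"): 1,
}

def play(rounds, my_strategy, their_strategy):
    """Total payoff for 'me' over repeated encounters with a colleague."""
    my_total = 0
    my_last, their_last = "share", "share"  # both start cooperatively
    for _ in range(rounds):
        my_move = my_strategy(their_last)
        their_move = their_strategy(my_last)
        my_total += PAYOFF[(my_move, their_move)]
        my_last, their_last = my_move, their_move
    return my_total

tit_for_tat = lambda opp_last: opp_last        # mirror the colleague's last move
always_withhold = lambda opp_last: "withhold"  # never share

print(play(10, tit_for_tat, tit_for_tat))      # sustained mutual sharing: 30
print(play(10, always_withhold, tit_for_tat))  # one windfall, then stalemate: 14
```

In a single encounter, withholding dominates (5 beats 3), but against a colleague who reciprocates, consistent sharing earns more over repeated encounters: exactly the "expected loss from the lack of access to materials or data in the future" that the researchers describe.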


By far the biggest fear seems to be scooping: someone else beats you to the big discovery, using some of your shared information. In 1953, geneticist James Watson and biophysicist Francis Crick were racing against chemist Linus Pauling and X-ray crystallographers Rosalind Franklin and Maurice Wilkins to discover the structure of DNA. According to Nature's annotated version of the famous Watson and Crick paper, Linus Pauling was the first to discover the helical structure of some proteins, and he published his proposal of a triple helix model for DNA in early 1953. Though the model was flawed, it gave Watson and Crick the details they needed to beat Pauling to the solution:

“Pauling's proposed three-stranded helix had the bases facing out. While the model was wrong, Watson and Crick were sure Pauling would soon learn his error. They estimated that he was six weeks away from the right answer. Electrified by the urgency—and by the prospect of beating a science superstar—Watson and Crick spent four weeks obsessing about DNA in endless conversations and bouts of model-building to arrive at the correct structure.

In 1952, [the head of the King’s laboratory] denied Pauling's request to view their X-ray photos of DNA—crucial evidence that inspired Watson's vision of the double helix. Pauling had to settle for inferior older photographs.”

We've all heard of Watson and Crick, but who among us knows of Pauling's work? The fear of being scooped is no small paranoia, and a big barrier to information sharing. And for scientists in the business world, sharing with competitors can mean losing the advantage of being first-to-market with a new product.


Many scientists have experienced the misappropriation of their ideas, and it remains a stinging experience. So the idea of freely presenting to the scientific community the research they've spent hours preparing—perhaps even prior to publication—is daunting. Can a scientist trust his colleagues not to pocket those ideas and present them in a self-serving manner?

Even Watson and Crick were guilty of as much. Nature details the slight to Rosalind Franklin:

“Here, Watson and Crick say that they "were not aware of the details" of the work of King’s College scientist Rosalind Franklin—a statement that marks what many consider an inexcusable failure to give Franklin proper credit.

According to Lynne Elkin, a science historian at California State University, Hayward, it’s true that Watson and Crick were not aware of all the details of Franklin’s work, but they were aware of enough of the details to discover the structure of DNA. Yet this paper does not ever formally acknowledge her, instead concealing her significant role by saying they "were not aware" of her work.”

Franklin had been working on the DNA question, using X-ray crystallography to determine the structure. Just a few months before Watson and Crick's landmark paper, Franklin came tantalizingly close to discovering the correct model, but she did not recognize it in the photographs she took. Frustrated with the attitude of her college toward women, Franklin was preparing to leave King's College when fellow scientist Maurice Wilkins, a close friend of Crick, showed Watson a critical photograph.

In his 1968 book, The Double Helix, Watson describes how he traveled to King's College to persuade Franklin and Wilkins to work with himself and Crick before Pauling discovered the error in his model. Franklin refused, but as Wilkins and Watson left, Wilkins decided to show Watson Franklin's photograph 51. “The instant I saw the picture,” Watson wrote, “my mouth fell open and my pulse began to race.” The photograph's cross-shaped pattern of spots clearly indicated a helical shape.

Meanwhile, Watson and Crick also obtained the bulk of Franklin's (unpublished) research, including precise DNA measurements, from a colleague who had copies of it. Franklin's work gave Crick the necessary information to determine that DNA has two chains running in opposite directions.

Despite these significant contributions, Franklin was merely listed in the acknowledgements section without any details about her work. (Check out the whole controversy, including links to more information, in the annotated Nature paper.)

So it's easy to see how someone's research can be “borrowed” without appropriate recognition. But if anything, the Franklin-Watson story illustrates how a culture that avoids systematic sharing of knowledge actually encourages this sort of poaching. When the whole world has access to your data and research, it's far more likely that an unscrupulous or careless researcher will be caught failing to acknowledge the significance of existing shared information to their own work. And having twice obtained Franklin's work through questionable means, Watson and Crick had difficulty giving her suitable credit.


Though not as professionally damaging as being poached or scooped, there is perhaps nothing more frustrating to a researcher than seeing their work misused or misinterpreted. Whether the data is wrongly used to advance a political position or simply misinterpreted out of ignorance, the experience makes that researcher and others hesitant to share in the future.

Open science must be partnered with a strong accountability system, perhaps more formal than the “Internet-as-public-record” variety used to establish reputation in open source software development communities.

Extraordinary benefits

But if the risks to practicing open source science are great, the benefits surpass them. Harvard professor Karim R. Lakhani has said that innovation happens “at the intersection of disciplines,” and the solution to one scientific field's perplexing problem can often be found by a scientist with a different discipline's perspective.

To a great degree, we have seen this with InnoCentive challenges, where organizations broadcast their tough problems and reward the submitter of the best solution—rarely a scientist from the same discipline—with hefty amounts of prize money. But InnoCentive does not release the winning solution for input, which Lakhani believes necessarily reduces the quality of the final implementation.

In addition, the Georgia Tech researchers have found that competition, especially where the commercial and intellectual value of prizes is significant, actually stifles the practice of open science. The same goes for patents and consulting opportunities. But when increased government funding relaxes the competition in a field, scientists are far more willing to share their research with each other.

Certainly prodding from funding agencies, journals, and scientific societies would help, Nature author Bryn Nelson reports. In the disciplines that do share large amounts of data, like atmospheric science or genomics, significant groundwork has been done by these organizations and by scientists in the field. Beyond professional expectations for sharing, centralized data repositories must be created and maintained, and standards for entering data established, which is more difficult for some disciplines than others.

But then, no one said open science would be easy. (OK, maybe someone did. But he probably wasn't a scientist.) I'd love to read about your experiences with data sharing in science. Log in and tell your story in the Comments section below.

Rebecca Fernandez is a Principal Program Manager at Red Hat, leading projects to help the company scale its open culture. She's an Open Organization Ambassador, contributed to The Open Organization book, and maintains the Open Decision Framework. She is interested in the intersection of open source principles and practices, and how they can transform organizations for the better.


The two main obstacles to Open Science (which by the way, should be simply called "Real Science"), are:

1) The current scientific publishing system
2) The tenure track system

Regarding (1): The publishing system has become a goal instead of a means. Scientists now work for the publishing system and see papers as the actual final product of their research.

The scientific publishing system should be dismantled. In its current form it is a remnant of the industrial-revolution approach to disseminating information (see Yochai Benkler). It must be replaced by a system that takes advantage of modern electronic communications, and that enforces the verification of reproducibility.

The current peer-review system must be abolished as well, since it currently does nothing to verify reproducibility; it is mostly a collection of experts providing subjective opinions, much as the natural sciences were in Aristotelian times, long before Galileo introduced the scientific method. (See Karl Popper, "The Logic of Scientific Discovery.")

Regarding (2): The tenure track system pins researchers down to "producing" papers as the only measurable outcome of scientific research, and links that metric directly to their career development. This leads to the ironic situation in which researchers are more interested in publishing papers than in getting problems solved, and therefore appropriating their data takes precedence over sharing it.

Agencies that provide public funds for research must require researchers to make their data, papers and source code available as soon as acquired / developed and under licenses that allow others to make unrestricted use of these materials. Some US Federal agencies already have these requirements (for example NIH), but do not enforce them enough for researchers to take them seriously.

Can you elaborate? I've never heard criticism of this aspect, in fact, the opposite. For example, I often hear doctors refer to studies in peer-reviewed journals as the industry standard, and they tend to be critical of, say, chiropractic studies that are not published there.

I think you're right about tenure. I have seen firsthand how it can shape allowable research and curb topics that are out of fashion. Although hopefully once tenure is received, the incentives are greater to research for discovery rather than publication.

Although a bit late...
Let me elaborate on this:

<strong>The Original Idea:</strong>
The scientific method requires that your hypothesis must be tested in experiments; and for those experiments to be taken seriously, they must be reproducible by others.


This is how Galileo helped us get rid of three thousand years of misconceptions introduced by Aristotle.

The sin of the Greek thinkers:

They mostly "think"...

and because they lived in a society based on slavery, they despised the manual and mechanical work that would be needed to run physical experiments.

Aristotelian thinkers built mental constructs that have logical consistency. For example, the orbits of planets "must" be circular, because the circle has a large collection of elegant geometrical (and philosophical) features. No need to measure the orbits; that would have been "slave" work, undignified for such lofty philosophers. It took until Kepler to measure the orbits and realize that the ellipse was a better description of planetary trajectories.

The Aristotelian philosophers had the arrogance to assume that their "expert judgment" was sufficient to determine the truth, and that experimental verification was an "unnecessary distraction".

Galileo, a more pragmatic man, demonstrated that those little experiments actually made a big difference, and that they showed Aristotle got most of physics wrong (forces, accelerations, gravity, optics, weather...). Aristotle got many things wrong in the natural sciences and biology as well; for example, he believed that vision worked by emitting rays from our eyes, and he even got wrong the number of teeth in women (clearly, he never bothered to look and count).

Galileo created the modern scientific method by making clear that philosophical preconceptions are worth something only <strong>if experiments back them up</strong>.

<strong>Fast forward to today.</strong>
"The scientific method requires that your hypothesis must be tested in experiments; and for those experiments to be taken seriously, they must be reproducible by <strong>others</strong>."

Due to the specialization of science, we tend to assume that the "others" must be the peers of those who posted the original experiment. For example, if this is an experiment in quantum physics, then other quantum physicists are probably the best qualified to repeat the experiment correctly. That's where the "peers" come in.

So journals and conferences today will send your paper to your expert "peers" for review.

In a Galilean world, your peers would have the <strong>decency</strong> to follow the instructions in your paper in order to <strong>repeat</strong> your experiments, and they would report back describing whether it worked for them or not.

In today's "peer review", your peers <strong>skip</strong> that little detail of "repeating the experiment", and instead they supplant it with their "expert judgment" that tells them:

<em>"Yes, this is consistent with what I know, or what I have seen... particularly with MY previous work".</em>

In that treacherous switch, your peers have sent us back <strong>three thousand years</strong> to the pre-scientific age.

The "peer-review" process today <strong>DOES NOT</strong> enforce the verification of reproducibility. Therefore, it only provides the <strong>appearance</strong> of science, instead of delivering the real deal.

Our current expert reviewers tend to be <strong>heads of laboratories</strong> and/or highly visible members of a scientific field, who despise the experimental work required to verify the claims of a paper. They consider that manual work to be "graduate student work" and therefore undignified for an "expert in the field" such as themselves, just as the Greek philosophers despised experimental work as something only slaves should do.

Our peer-reviewers today think:

<em>"I'm an expert, no need to run that experiment again, I can already tell what the outcome will be"</em>

(I'm literally quoting someone here, whose name must remain anonymous.)

Our current peer-review process takes us back to the arrogance of Lord Kelvin, former <strong>president</strong> of the Royal Society, who in 1902 declared that:

<em>"No balloon and no aeroplane will ever be practically successful."</em>

<strong>One</strong> year later, the Wright brothers, who owned a bicycle repair shop (they were not high-ranking scientists at any university), flew an airplane on the sandy banks of North Carolina.

We should have learned by now that an "expert opinion" is worth <strong>nothing</strong> in science.

<strong>Only the experiment can be trusted</strong>, and only when it has been <strong>repeated</strong> by others.

Instead of referring to "peer-review" publications we must start demanding "reproduced" publications.

Literally meaning: publications whose experiments have been reproduced by others.

This is why the current peer-review system must be dismantled. Its hypocrisy must be exposed, and it must be replaced immediately by a system based on <strong>experimental reproducibility</strong>.

It can be done!

For example, for six years now we have hosted an Open Access, freely available journal for medical image analysis:

This is a <strong>"real scientific"</strong> journal for Galilean practitioners. The kind of people who only believe in something after they have repeated the experiment.

When submitting to this journal, you are required to provide:

<ul>
<li>your source code (under an open source license),</li>
<li>your data (under an open data license),</li>
<li>and the parameters needed to repeat the experiment.</li>
</ul>

We must re-educate an entire generation of science workers to restore the practice of the <strong>real scientific method</strong>.

First commenter Luis has absolutely nailed it in one! The two key problems are exactly those he cites; a culture of closed-source publication which runs contrary to the open model of knowledge-sharing upon which the academy is allegedly based; and the citation-counting tenure track system which prevails in both the US and overseas and rewards quantity over quality (to a great extent) and publication in "mainstream" (read: yesterday) journals over newer open journals.

Some scientific disciplines have broken free of the publishing-house stranglehold; the vast majority of knowledge-sharing in physics takes place via the open arXiv system. ArXiv, for all its faults, should be broadly replicated across all major disciplines, and pay-for-knowledge "mainstream" journals should be permanently sidelined. They no longer add value in the digital age.

KISS--Keep it simple, stupid. While not topping the life lessons I wish to impart upon my son, it is an adage that humanity has faithfully followed. The *whole* truth is normally complicated, convoluted, and conflicting. Most people prefer just a smidgen of truth. Nothing outright fallacious, mind you--just don't bore us with the nuances of truth or reality.

The Nobel Prizes are a prime example. If I'm the first one to tell you this, I'm sorry, but the A-Team will never win the Nobel Peace Prize. Well, they could if Face were to meet an untimely death, because only three individuals may win a given prize. This means that a team of four that cures AIDS is disqualified from the award. While I do not know the basis for this, I do think it expresses our society's yearning to keep it simple by requiring fewer names to be printed in a history textbook.

The idea to leave a roll of microfilm on the moon with the names of all the engineers, scientists, draftsmen, machinists, etc., who directly worked on the Apollo program was scrapped. Instead we left a plaque with the names of the three astronauts and of the sitting President, who had lost to the President who threw down the gauntlet that spurred America to the moon.

How do we reconcile the policy of the Nobel Prize committee with the current information age? How do we present the *whole* truth to a populace that feeds on soundbites? Truth is reality, and reality is truth. A policy of transparency and openness is one where everyone truly benefits, and not in a superficial Benthamian way.

The human race may have a long tradition of glorifying simplicity to unlearn, but the benefits are tangible and great. Otherwise 22nd-century textbooks may label the 'information age' a misnomer.

---We've all heard of Watson and Crick, but who among us knows of Pauling's work?---

Are you really unfamiliar with the work of the great Linus Pauling? He is, after all, one of the very few ever to win multiple Nobel Prizes, and one of only two people to win a Nobel Prize in more than one field (the other being Marie Curie; you've heard of her, right?). Pauling is a giant in the field of chemistry, and one of the most influential scientists of the 20th century.

Your story of the publication of the double helix also does not jibe with the historical record. Conveniently, a new set of letters recently emerged shedding further light on the cooperation between the two groups and how it was decided how the papers would be written (and simultaneously published in the same issue of Nature).

I hadn't heard of Linus Pauling prior to reading up on Watson & Crick. I asked a few other lay people and none had heard about Pauling or Franklin, so I assumed, perhaps wrongly, that their names were unknown to most other individuals outside of the field. Certainly they didn't make any of the textbooks I had in high school or college, which were non-major science courses, and likely dumbed down. (Madame Curie and Watson & Crick of course did.)

I appreciate the link to the Nature article; unfortunately I don't have access to more than the first paragraph or so. I did see a recent article that mentioned their agreement to publish several papers simultaneously, but that did not seem to outweigh the minimal credit given to Franklin in the landmark discovery paper.

If I have made an error in history, mea culpa, and I would appreciate an indication of where you see a problem. My timeline comes from the sources cited in the article, plus another book specifically about Rosalind Franklin. I tried to draw from the other two because they seemed more balanced.


I'm a little surprised--Pauling is one of the few scientists who transcended science and became fairly well known in popular culture. If you're a biologist or a chemist, his contributions are both obvious and enormous. But for the non-scientist, he was very famous for his nuclear-disarmament work, so much so that he won the Nobel Peace Prize in 1962. There was a period where the US State Department refused to let him leave the country. He was also fairly well known for his somewhat crackpot ideas late in his life about the ability of massive doses of vitamins to promote health and cure cancer. He was even on a stamp in 2008. The <a href="">Wikipedia entry on Pauling</a> does a decent job of noting his tremendous body of work.

Sorry for the Nature link; I hadn't realized their news articles were behind the paywall. Here's <a href="">the Guardian's coverage</a>; it's not great (the author doesn't seem to realize that Nobel Prizes can't be awarded posthumously), but at least you get some idea of the content of the treasure trove of correspondence that's been found. It's a fascinating account (as told in "The Double Helix") of an era where scientists were asked not to work on a particular problem because another scientist had already claimed that area for research. Say what you will about the competitive nature of today's world, it has certainly resulted in a more productive environment than one where only one group was allowed to address a subject. That's really why Watson and Crick were seen as such "rogues" at the time: they were working on a question that someone else had claimed.

As far as the publication of the papers, from the Nature article:

<blockquote>"Watson and Crick announced their double-helical model in one of a group of three papers in Nature on 25 April. The other two — from Wilkins and from Franklin — presented supporting X-ray diffraction data from the King's group...Wilkins had already declined Watson and Crick's offer of co-authorship when he had visited Cambridge to view the new model on 13 March."</blockquote>

So rather than Franklin's work being unacknowledged, the group she worked for (run by Maurice Wilkins) was offered coauthorship on the paper and they declined. They chose instead to write their own accompanying papers, published together with Watson and Crick's. The wording and content of the set of papers published was much debated and agreed on by all involved. Franklin reportedly never held a grudge and was friendly with both Watson and Crick until her death.

While Franklin was certainly treated unfairly due to the sexism of both the time and the culture of science, I do feel she's been given the appropriate amount of credit in the story of the structure of DNA. The lesson that I take from the situation is that collecting the data is not what's important. Understanding the data is what's important. That's the difference between a technician and a scientist. Franklin was unable to make the intellectual leap that Watson and Crick made. They deserve the credit for doing so. She deserves mention for collecting a piece of data that led to their insight, but it's the insight that really matters.

And that's what people often misunderstand when they talk about open science, and getting scooped. Most open science supporters note that if your data is publicly available and timestamped, you will be given credit for collecting it. Which is true, but collecting the data is not what's important. When you release your data before you've had a chance to understand it, you run the risk of someone else understanding it first, and they'll reap the majority of the credit and reward that comes from your hard work.

I do agree that in an ideal world, open science is the way to go. But I am unsure how it can work in a world where funding and jobs are limited, and where science is done by human beings. If credit for accomplishments is how one gets a job, or tenure, or further funding, then it's in one's best interests to fully exploit one's own work before giving it away to others. And scientists are humans--they have egos and more importantly, they need to feed their families and pay their employees. Being cautious about prematurely releasing the fruits of their labors makes it much easier to do these things. It likely does slow the pace of discovery, but it also makes science a viable career.

<p>Thanks for filling in some gaps. I found when I was reading up on the W&amp;C et al. stories, it was sometimes hard to separate attitudes of the time (sexism and other cultural differences) with ethics. It seems ultimately Franklin was satisfied with how she was acknowledged, and that is definitely significant.</p><p>I can see not wanting to release data immediately, for sure. In other areas, like social science, that would easily lead to crackpots and politicians misusing your work and getting the wrong headlines in the media. (Why is it that the corrections never come with the same fanfare?)</p><p>I am generally for openness, but I also have seen in other fields that it can require new ways of generating revenue, and sometimes it could never be profitable.</p><p>But from the other side, as the mother of a 4 year old with a rare and life-threatening form of food allergy (FPIES), I have been able to see tangible benefits of information-sharing. There is something to be said for researchers who are willing to talk in detail about their findings. I was blown away by how helpful one pair of researchers were when my son's allergist called them with questions after reviewing a paper that I sent.</p><p>Ultimately because of these researchers and their willingness to share their findings beyond what was published, my son will be able to have several experimental blood and skin tests done ahead of time to give us an indication of his chances of passing an in-hospital food challenge before actually going through one. (Because of the severity and irreversibility of his type of reaction, a failed challenge is no small misery.)</p><p>Sometimes even just improving accessibility and informal collaboration is a big leap toward openness.</p><p>In any case, thanks for the thoughtful reply. I owe Mr. Pauling some reading time, it seems.</p>

Creative Commons LicenseThis work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License.