Bacteriality

Exploring chronic disease

Category: featured articles

A couple of weeks ago, I had the pleasure of giving a presentation to a tri-chapter meeting of the Medical Library Association. The topic was why some patients with chronic disease are disaffected and how online social networks have met some of their needs. I tried to offer a balanced perspective – both the good and the bad of online social networks.

The live presentation was filmed, but the room was a bit on the dark side. So, despite a laudable filming job by my colleague, Judy, I decided to put up a slideshow with voiceovers.

  • Comments Off on Why patients with chronic disease are disaffected and how online social networks meet their needs
  • Filed under: featured articles
  • Sun-blocking culture among the Chinese

Not every culture reveres the sun the way Americans do. On our recent trip to Chengdu, China, with a stopover in Hong Kong, we saw hundreds of people, women especially, shielding themselves from the sun on a daily basis.

    We’re not sure if these people are supplementing with vitamin D (there is certainly no vitamin D added to the food chain!) but they’re certainly not getting a lot of sun.

    The Vitamin D Council insists that people must expose themselves to sunlight and eat vitamin D-fortified products, yet these people are going about their daily lives without any apparent ill effect.

  • Comments Off on Sun-blocking culture among the Chinese
  • Filed under: featured articles, personal, vitamin D
  • Men who have excessive faith in their theories or ideas are not only ill prepared for making discoveries; they also make very poor observations. Of necessity, they observe with a preconceived idea, and when they devise an experiment, they can see, in its results, only a confirmation of their theory. In this way they distort observation and often neglect very important facts because they do not further their aim….

    Claude Bernard, An Introduction to the Study of Experimental Medicine

    This article discusses our experience at the one-day Institute of Medicine workshop on vitamin D and calcium. Both of us had an opportunity to make comments before the committee. Here are Paul’s comments and slides and here are Amy’s comments and slides. Note that our 2009 paper in Autoimmunity Reviews[1] discusses some of the science we allude to in further detail.

    On the cab ride to the IOM committee meeting on whether to change the dietary reference intake (DRI) of vitamin D, Amy practiced her speech.

The cabbie had been silent for the whole ride, but finally broke that silence. “So, let me ask you a question,” he said. “Do you take vitamin D?”

    “Actually, no, we don’t,” Amy said. Amy explained briefly how our data suggests that the form derived from supplementation is immunosuppressive, meaning that while it may temporarily improve signs and symptoms of disease, we have found it may do so at the cost of long-term health.

    We asked him if he took vitamin D. He said yes and explained that a few years back, he had a partially blocked artery. It scared him, so he searched the internet and found that high doses of vitamin D were being recommended for cardiovascular disease. He wasn’t clear about the evidence, but in his words, “I had to do something.”

Which brings us to this point in time. At least in the United States, rates of chronic disease are rising. One recent study predicted that if current trends continue, all Americans will be obese by 2040.[2] Other studies have shown that chronic disease is rising faster than can be explained by an aging or growing population. One recent estimate says that by 2030, 171 million Americans will have a chronic disease. We have to do something, right?

    A committee to evaluate the DRI of vitamin D is convened

The Institute of Medicine (IOM) is a non-profit organization first chartered in 1970. In 2008, the IOM appointed a committee of experts charged with reevaluating the DRI of calcium and vitamin D in light of recent research. The committee is expected to produce a report with its recommendations, scheduled for public release in May 2010.

An IOM committee with the same purpose last met in 1997 and set the current standard of 400 IU of vitamin D per day for adults. But none of the members of the previous committee, despite the hundreds of MEDLINE citations collectively to their names, sit on the current committee. Perhaps this suggests that the IOM was trying to exclude from the committee the scientists who most vocally tout vitamin D’s benefits.

A great deal has happened since 1997. We learned that hormone replacement therapy (HRT) can cause disease (a lesson that came at the cost of thousands of premature deaths), even though early observational studies had erroneously suggested the opposite.[3] Also, evidence-based medicine has come of age.[4]

For anyone not from a Western country (or from this planet), it’s hard to convey just how enthusiastic the support for vitamin D supplementation is – at least in the popular media. A quick search of Google News for “vitamin D” led us to conclude that the few articles alluding to vitamin D’s risks are vastly outnumbered by stories repeating the same unchallenged claims about its perceived benefits.

    As part of their deliberation process, the IOM committee commissioned a report by the Tufts Evidence-based Practice Center. For this report, the Tufts group used a pre-existing set of criteria to identify only those studies meeting a certain standard of validity. Those studies that made the cut were independently analyzed.

    According to the report’s abstract: “The majority of the findings concerning vitamin D, calcium, or a combination of both nutrients on the different health outcomes were inconsistent.” For a variety of diseases, the report repeatedly finds few or no controlled studies showing an association between vitamin D intake and disease.

Interestingly, Dr. Bouillon, the sole speaker at the meeting from Europe (Belgium), acknowledged that he was confident the European Union would not raise its vitamin D intake recommendations based on the research to date.

    The complete list of presentations including audio and slides is available on the IOM website.

    Dr. Barry Kramer sounds an early note of caution

    Arguably the most illuminating speech of the day came before lunch. Dr. Barry Kramer, MD, MPH, works in the Office of Disease Prevention, a division of the NIH. His speech was somewhat dryly titled, “Weighing Scientific Evidence” (PDF of slides) but might just as well have been titled, “Hey, wait a second.”

Invoking the work of Leon Gordis, PhD, Dr. Kramer discussed the “Levels of Decision Making” and how the requisite amount of evidence for a non-conservative (our word) medical decision increases with the number of people it would affect. In other words, the standard of evidence needed to make decisions for one’s family, or even for a group of patients, is lower than the standard needed to make decisions on behalf of an entire nation and possibly the world.

    The evidence-based pyramid. Higher levels of the pyramid have higher levels of validity. Note that while Dr. Kramer’s evidence-based pyramid contains a section for ideas and opinions, many evidence-based pyramids make no such provision.

Dr. Kramer argued that some levels of evidence are not sufficient – at least not to make decisions on behalf of millions. The evidence must meet a minimum standard of validity: randomized controlled trials (RCTs), if not double-blind, placebo-controlled RCTs. According to Dr. Kramer, the cases of high-dose paclitaxel, encainide/flecainide, torcetrapib, and, of course, HRT have shown that confounding variables have a way of compromising researchers’ most certain conclusions.

A good example of a confounding variable is smoking in the apparent relationship between alcohol and lung cancer. Alcohol consumption is strongly correlated with lung cancer, but only because people who drink are also more likely to smoke. Another commonly cited example: Volvos may be involved in fewer accidents, but that’s probably because the people who choose to drive them are generally older and more safety-conscious.

    Dr. Kramer said in the case of observational studies with a relative risk of less than two, he could “spit them [confounding variables] out at the rate of one a second.” His slide lists a few obvious confounders for vitamin D studies: health consciousness, health insurance, and access to care.

Dr. Kramer also made what should be an obvious point: surrogate outcomes do not substitute for reductions in mortality or disease. A surrogate outcome is a variable used as a stand-in for a “true outcome” because it is easier, quicker, or cheaper to measure. The most common surrogate in vitamin D studies is serum 25-D, although bone mineral density, polyps, and PTH levels are also used. But Dr. Kramer said that none of these surrogate outcomes, in his words, “measure up.”

    At the end of the speech Dr. Kramer showed the audience a classic Far Side cartoon, explaining, “Especially when you’re dealing with public health issues and millions of people, it pays you not to shoot first, because once you’ve shot, you can’t ask the questions any more, because your credibility is invested in your message. It pays to ask the questions before you shoot.”

    We’re not sure if Dr. Barry Kramer heard our five-minute remarks (we never saw him after lunch), but we were, in essence, presenting a set of explanations for how his note of caution could later prove to be well-justified or even prescient.

    Researchers affiliated with the Vitamin D Council drive the science on vitamin D

    Inarguably the most forceful voices for increasing the DRI of vitamin D come from researchers affiliated with the Vitamin D Council, a California-based organization. At the one-day workshop, a total of seven speakers were affiliated with the Vitamin D Council (only Drs. Hollis and Grant are board members; the remainder are listed as “Vitamin D scientists” on the website), and the balance of other speakers could be fairly characterized as strongly sympathetic to their aims.

    Many of the most influential papers on vitamin D are published by this group. We searched the online database, Web of Knowledge, for papers published since 2005 that mention “vitamin D” in the title or abstract, and then we sorted that list by number of times cited. The top four papers on that list are by researchers with the Vitamin D Council – as are a number more in the top twenty.

These researchers have a habit of wholeheartedly agreeing with one another; throughout the day, we heard, several times, something to the effect of “I agree with my colleague.”

What does a bandwagon look like? If you search GoPubMed.com for the MEDLINE publications on vitamin D since 2005 and click on the statistics tab, you can see how often Vitamin D Council researchers have co-authored each other’s papers. Below is an annotated screenshot (click for full-size PDF) of the professional collaborations in this relatively close-knit and like-minded group. Researchers affiliated with the Vitamin D Council are in red.
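For readers who want to reproduce this kind of co-authorship tally themselves, here is a hypothetical sketch using PubMed’s public E-utilities via Biopython rather than the GoPubMed interface we describe above. The query string, the 500-record cap, and the placeholder contact email are our own illustrative assumptions, not our exact method.

```python
# Hypothetical sketch: tallying co-authorship among recent vitamin D papers using
# PubMed's E-utilities via Biopython. This is not the GoPubMed statistics tab
# described above, just an illustration of the same kind of analysis.
# Requires: pip install biopython
from collections import Counter
from itertools import combinations

from Bio import Entrez, Medline

Entrez.email = "you@example.org"  # NCBI asks for a contact address (placeholder)

# Search MEDLINE for vitamin D papers from 2005 onward (capped at 500 for illustration)
search = Entrez.read(Entrez.esearch(
    db="pubmed",
    term='"vitamin D"[Title/Abstract] AND 2005:2009[dp]',
    retmax=500,
))

# Fetch the MEDLINE records and count how often each pair of authors publishes together
records = Medline.parse(Entrez.efetch(
    db="pubmed", id=",".join(search["IdList"]), rettype="medline", retmode="text"
))

pair_counts = Counter()
for rec in records:
    authors = sorted(set(rec.get("AU", [])))       # "AU" holds the author list
    pair_counts.update(combinations(authors, 2))   # every co-author pair in this paper

for (a, b), n in pair_counts.most_common(15):
    print(f"{a} & {b}: {n} shared papers")
```

Flagging which of the most frequent pairs involve Vitamin D Council affiliates would then roughly reproduce the annotated screenshot shown here.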

Despite a notable lack of data derived from RCTs, the researchers associated with the Vitamin D Council are pushing the IOM committee to raise the DRI of vitamin D enormously – to around 5-6 times the current value. To achieve this goal, the Vitamin D Council markets the form of vitamin D derived from food and supplements to the public as a nutrient. What harm can high levels of a nutrient cause, right?

    Yet although we’re referring to it as vitamin D in this article so that you know what we are talking about, any molecular biologist would confirm that the two main forms of “vitamin” D are actually powerful secosteroids. The active form of vitamin D, 1,25-D, can also function as a hormone. We suspect that people would be less willing to take extremely large amounts of vitamin D if they were actually told, “We’re giving you high doses of a secosteroid that will adjust your hormonal and immune activity in ways not yet fully understood.”

    Yet rather than trying to help the public understand these true properties of “vitamin” D, a number of prominent vitamin D researchers still seem content to refer to it as nothing more than the “sunshine vitamin,” some with impressive consistency.

    Did our human ancestors really have extremely high levels of vitamin D?

    Late in his talk, Dr. Robert Heaney, a researcher affiliated with the Vitamin D Council, said, “We all agree and it is well-established that humans evolved in equatorial East Africa wearing no clothes.” This assumption is repeatedly invoked to justify supplementation with vitamin D at levels that would leave the average American with a 25-D level similar to that of a present-day farmer who works near the equator.

We’re not sure anyone noticed, but in the next talk, Dr. Michael Holick would undercut this very argument. Dr. Holick said that, according to his research, students of African descent need three to five times as much exposure to ultraviolet light as Caucasians to “barely raise their blood levels” of 25-D. In short, their skin is “such a good sunscreen.” If ancient man had darkly pigmented skin (according to a paper by Jablonski et al.,[5] lighter skin pigment only evolved as humans left the tropics), then why would he have produced the copious levels of vitamin D referenced by Dr. Heaney?

What about climate change? The assumption that ancient man evolved in a consistently sunny and hot environment makes no provision for several extended ice ages, which coincided with key periods in hominid evolution.[6]

What about skin cancer? Say that early man did not hunt and gather at dusk like so many other animals – that early humans really did evolve in an unforested environment with no caves, no clothing, and no thick body hair, whiling away their hours sizzling like big pieces of Paleolithic bacon. Why then would just a few burns before the age of 20 dramatically increase[7] the risk of skin cancer? Did humans evolve to get skin cancer?

    To clear up the confusion surrounding this issue, we recently contacted Dr. Peter Bogucki, an archaeologist at Princeton University, who is a leading expert on prehistoric man. We asked him to estimate how much sun prehistoric man actually got.

    Dr. Bogucki responded, and I trust he won’t mind us quoting him, “You raise a very good question, but I don’t know that there’s a good answer. All we have is skeletal remains. There’s no elemental isotope to track sun exposure.” In the absence of such a marker, our understanding of how much vitamin D early man actually synthesized is complicated by several factors including climate variability,[8] migration, and changes in skin pigment.

    Dr. Richard Potts sums up the evidence or lack thereof for inferring how man evolved from specific environmental scenarios:[9]

    The study of human evolution has long sought to explain major adaptations and trends that led to the origin of Homo sapiens. Environmental scenarios have played a pivotal role in this endeavor. They represent statements or, more commonly, assumptions concerning the adaptive context in which key hominin traits emerged. In many cases, however, these scenarios are based on very little if any data about the past settings in which early hominins lived.

    Dr. Richard Potts, Director of The Human Origins Program, Smithsonian Institution

    At this point, it’s probably safe to say that we simply do not know how much sun early man got.

    With this in mind, isn’t it a bit less plausible that, when it comes to the ability of the human body to naturally adjust its vitamin D levels for optimal health, current humans are a complete evolutionary bust and must be given truckloads of pills in order to remain healthy?

    Dr. Michael Holick speaks on sunscreen and vitamin D

Dr. Michael Holick is a professor at Boston University, a medical doctor, and perhaps the world’s leading authority on vitamin D. Since 2005, he has authored or co-authored 59 publications on vitamin D appearing in PubMed (26 more than Dr. William Grant, who is second in that category and a frequent co-author), and he has the distinction of being quoted on vitamin D in nearly every magazine, newspaper, television show and website ever. In his 10-minute statement, Dr. Holick was critical of dermatologists, a group he singled out for advising the public to avoid the direct sun exposure that creates vitamin D in the skin. As it happens, Dr. Holick receives large amounts of funding from the UV Foundation, which is in turn sponsored by the Indoor Tanning Association.

His talk, entitled The D-Lightful Vitamin for Health, was sprinkled with a number of pop culture references, including mentions of Charlie Brown and Don King. And then there was the clip of Darth Vader telling Luke to come to the Dark Side. It has been a while since we have seen the Star Wars trilogy, but we don’t seem to recall Darth Vader’s evil stemming from his unnecessary prudence.

Dr. Holick went on to claim that sunscreen use blocks 99% of vitamin D production in the skin. This claim is a featured part of his argument, because there has to be a reason why what he views as vitamin D deficiency is so widespread. If there’s evidence to back up this statistic, our search of the literature could not find it.

    What we did find were three small studies, one of which Dr. Holick authored himself.

    One of these studies measured the vitamin D3 (a precursor of 25-D) levels of only eight subjects[10] while another performed no intervention but simply measured the 25-D levels of 20 sunscreen users.[11] The third put only 27 subjects into tanning beds rather than into the sun, which could easily introduce bias.[12] All three are by the same lead author, Dr. Lois Y. Matsuoka.

    As it happens, several reviews have refuted the idea that real-world use of sunscreen entirely halts cutaneous production of vitamin D. By real world, we mean people putting sunscreen on themselves for extended periods of time while exposed to the actual sun.

Dr. William L. Scarlett writes in his review, “Several large prospective studies have shown that vitamin D deficiency does not result from regular sunscreen use.”[13]

    A review by Drs. Wolpowitz and Gilchrest states, “There is no evidence that customary sunscreen use causes vitamin D deficiency or insufficiency in otherwise healthy individuals.”[14]

One research team, studying patients with xeroderma pigmentosum, a genetic disorder in which patients are unable to repair damage caused by ultraviolet light, found that vitamin D levels are maintained even when patients practice at least six years of rigorous photoprotection without supplementing with vitamin D beyond their normal dietary intake. Most importantly, the researchers also concluded that the clinical manifestations of vitamin D “deficiency” were absent.

In a 2007 review, Dr. Melanie Palm concludes that real-world users tend not to apply sunscreen consistently or in sufficient amounts.[15] She writes: “Most people’s real-life experience with sunscreen is that despite its application, they still sunburn or tan after casual sun exposure.” Dr. Palm goes on to explain, “SPF [sun protection factor] is a strictly defined and Food and Drug Administration (FDA)-regulated measurement based on applying 2 mg/cm2 of product. Studies have shown that most users apply insufficient amounts of sunscreen to meet this FDA standard, and the true SPF obtained is usually less than 50% of that written on the package.”
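To give a feel for what that FDA testing standard actually implies, here is a rough back-of-the-envelope sketch. The 2 mg/cm2 figure comes from Dr. Palm’s quote above; the roughly 1.8 m2 adult body surface area is our own illustrative assumption, not a figure from her review.

```python
# Rough illustration of the FDA 2 mg/cm^2 sunscreen application standard quoted above.
application_density_mg_per_cm2 = 2.0      # density used when labeled SPF is measured
body_surface_area_cm2 = 1.8 * 10_000      # assumed ~1.8 m^2 adult body, expressed in cm^2

full_body_dose_g = application_density_mg_per_cm2 * body_surface_area_cm2 / 1_000
print(f"Meeting the FDA standard over the whole body takes about {full_body_dose_g:.0f} g per application")
# ~36 g, on the order of a shot glass of sunscreen every time, far more than most people apply
```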

    Dr. Holick also proudly informed the committee of the manner and amount of his vitamin D intake. If you ask us, this is irrelevant. It’s nice that Dr. Holick believes what he says enough to try it on himself, but this kind of data falls to the very bottom of Dr. Kramer’s evidence-based pyramid – the opinion level that should never be used to guide public health decisions.

    In the remainder of his talk, Dr. Holick went on to say that no one living in a latitude north of Atlanta, Georgia can make vitamin D in their skin during the winter months. Based on everything else we have heard, maybe you can understand why we’re a bit dubious of this claim.

    It seems that one of the unspoken rules of publishing a study on vitamin D is that you must cite Michael Holick – geez, even we have done it. But in light of the conflicting data related to Dr. Holick’s claims, we have to wonder why the man has been accorded that authority and why more people don’t second-guess some of his more definitive statements.

    A concession: vitamin D is not for people with granulomatous disease

From our perspective, one positive statement Dr. Holick made was his concession, one that many pro-vitamin D researchers will actually make, that vitamin D is not for everyone, specifically not for people with granulomatous diseases such as Crohn’s or sarcoidosis.

    A granuloma is a ball-like collection of immune cells which forms when the immune system attempts to wall off substances such as bacteria. But it looks like patients with granulomatous diseases are going to have a tough time if Holick and his colleagues succeed in drowning us in vitamin D. Raising the DRI of vitamin D would inevitably mean that vitamin D would be added to another slew of foods.

When Dr. Holick et al. were questioned about the fact that some people have been shown to develop kidney stones after taking extra vitamin D, or that people with granulomatous disease could easily ingest excess levels of vitamin D and become significantly more ill, they seemed unconcerned. In their eyes, if a certain number of people are harmed by taking vitamin D, it should not matter, so long as more people benefit. We find this risk-benefit analysis difficult to stomach, having seen first-hand the suffering associated with granulomatous diseases.

    Dr. Cedric Garland discusses vitamin D and cancer

Another member of the Vitamin D Council, Dr. Cedric Garland, devoted his remarks to vitamin D and cancer. Afterwards, a committee member, Dr. JoAnn Manson, challenged him on his claim that vitamin D is protective against cancer at high levels of intake. She asked him about the randomized controlled study led by the Women’s Health Initiative, which trended in the opposite direction for breast cancer among women who started out with high intakes of vitamin D.[16][17]

Dr. Garland brusquely and repeatedly dismissed the cancer study, saying that the dose of vitamin D administered to subjects, 400 IU – which happens to be the current adult DRI – was “not even a placebo.” In other words, he believes that 400 IU of vitamin D has no biological effect whatsoever. Dr. Manson responded, “I don’t buy it.” Actually, neither do we. To put things in perspective, you’d have to consume 20 eggs or four glasses of vitamin D-fortified milk a day in order to get 400 IU of vitamin D.

Interestingly, when you look at the five most frequently cited papers on vitamin D published in the last five years, the first four are authored by researchers affiliated with the Vitamin D Council. But study #5[18] derives its conclusions from data collected by the Women’s Health Initiative, the same research group whose data Dr. Garland suggested should have no bearing on the IOM Committee’s decision-making. That other vitamin D researchers are perfectly willing to analyze data from the Women’s Health Initiative suggests that, although Garland may seem like an expert speaking on behalf of the entire vitamin D community, not all vitamin D researchers share his views.

    We have taken the liberty of annotating in red several of Dr. Garland’s slides to make points about the presentation of data especially as it pertains to vitamin D.

    Below is Dr. Garland’s slide showing a strong and consistent increase in the rate of breast cancer since 1935, which he used as a general indication for why it is important to significantly increase the amount of vitamin D added to the food supply.

    However, as you can see below, it is very easy to take that same data and “show” the opposite – that vitamin D consumption has led to a dramatic increase in breast cancer.

    Another example: Dr. Garland didn’t mention this publication in his speech, but in a 2008 study, his group found a significant association between “low UVB irradiance and high incidence rates of type 1 childhood diabetes.”[19]

    Data derived in this observational manner could just as readily be used to show something else entirely.

    As you can see in this graphic above, there is a strong apparent association between states that get more sun and teenage pregnancy. But does sun exposure actually cause teen pregnancy? We certainly hope not!

    Obviously, you can try to control for confounding variables, as Dr. Garland did in his ’08 publication, but so too did researchers who repeatedly concluded that hormone replacement therapy was safe. According to Dr. Kramer: “There were literally scores, if not hundreds, of observational studies that showed almost beyond reasonable doubt that hormone replacement therapy would prolong women’s lives, if it were given routinely.”

    In the words of Dr. David Ransohoff (who Dr. Kramer quoted in his talk), observational data are “guilty until proven innocent.”

    When discussing vitamin D, Dr. Garland put up another thought-provoking chart on the effect of vitamin D and calcium on the development of kidney stones (derived from the Women’s Health Initiative).

    Several things about Dr. Garland’s chart are of interest.

• Although the y-axis could easily have gone up to only 10%, it goes all the way up to 55%. This visually minimizes the apparent negative treatment effect of calcium and vitamin D, and barely impresses on the viewer that, if the trend observed in the study is accurate and significant, approximately 1.2 million Americans would develop kidney stones if they continued taking vitamin D and calcium.
• We probably should not be surprised that on this same slide, Dr. Garland opted to display absolute risk rather than relative risk. Absolute risk measures what portion of a population develops a condition in a given time period; relative risk compares that portion in the treated group to the portion in the placebo group. In this case, patients who take calcium and vitamin D have an absolute risk increase of 0.4 percentage points for developing kidney stones, but a relative risk increase of roughly 19%, e.g. (2.5% – 2.1%)/2.1% (a worked version of this arithmetic appears just after this list). So by showing absolute risk, Dr. Garland again downplays the sheer number of people who could be negatively affected by taking extra vitamin D and calcium.
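To make the absolute-versus-relative-risk arithmetic concrete, here is a minimal sketch. The 2.5% and 2.1% event rates and the 1.2 million figure are the ones cited above; the roughly 300 million U.S. population is our own round number for illustration.

```python
# Worked example of absolute vs. relative risk, using the percentages quoted above.
treated_rate = 0.025   # kidney stone rate with calcium + vitamin D
placebo_rate = 0.021   # kidney stone rate in the placebo group

absolute_increase = treated_rate - placebo_rate        # 0.004, i.e. 0.4 percentage points
relative_increase = absolute_increase / placebo_rate   # ~0.19, i.e. a 19% relative increase

us_population = 300_000_000  # assumed round figure, not from the source
extra_cases = absolute_increase * us_population        # ~1.2 million additional cases

print(f"Absolute risk increase: {absolute_increase:.1%}")
print(f"Relative risk increase: {relative_increase:.0%}")
print(f"Extra kidney stone cases if the whole population supplemented: {extra_cases:,.0f}")
```

Framed in relative terms (a 19% increase) or in head counts (over a million additional cases), the same data look considerably less benign than a 0.4% bump on a y-axis that runs to 55%.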

    Dr. Reinhold Vieth speaks about safety

In his slot, Dr. Reinhold Vieth was asked to speak on whether there is a safe upper level of vitamin D intake. As he has stated in at least one paper, his answer was, in effect, that there is no practical upper limit to worry about. In his words, “A prolonged intake of 250 μg (10,000 IU)/d of vitamin D3 is likely to pose no risk of adverse effects in almost all individuals in the general population.”[20]

    Dr. Vieth’s comments echoed those of Dr. Garland, who had earlier concluded, “The benefit/risk ratio for 2,000 IU/day of vitamin D is infinite.”

Obviously, we disagree. We take no comfort in the fact that a person, as demonstrated in case reports, can accidentally take several thousand times the recommended dose of vitamin D and still seem healthy several months later – which is the only kind of data Dr. Vieth provided. Our attention is directed towards long-term outcomes, time windows that correspond to the slow growth of the chronic bacteria and other pathogens that may play a role in causing chronic disease. Also, the full negative effect of immunosuppressants (recall that we have found that 25-D acts as an immunosuppressant) can often be seen only after decades.

    25-D vs. 1,25-D and the long elusive search for biological plausibility

    Most of the talks had us scratching our heads, trying to figure out why, when 1,25-D is the biologically active form of vitamin D and the sole vitamin D metabolite able to activate the Vitamin D Receptor (VDR), almost every speaker focused on research and recommendations pertaining to 25-D levels. For a brief discussion of the different forms of vitamin D see my (Paul’s) speech.

    One of the points both of us tried to make in our own five minute presentations is that the levels of the different forms of vitamin D are jointly regulated by several feedback mechanisms. This means that if one alters the level of one form of vitamin D, levels of the other vitamin D metabolites will almost certainly shift to accommodate the change.

It seems prudent, then, that if a study measures 25-D levels, it should measure 1,25-D levels as well. Without the ability to examine the relationship between the two main vitamin D metabolites, how can a researcher fully understand the spectrum of changes that occur when vitamin D supplementation takes place? Over a decade ago, even the FDA suggested that “1,25-D should be measured in order to support claims of a drug’s osteoporotic activity.” Yet few researchers seem to have heeded this advice. Thus, we would venture to say that studies which omit 1,25-D levels should be regarded as less rigorous than those that test both metabolites.

    At some point in a discussion with the Committee, one of the experts mentioned how 1,25-D is difficult to detect. We hope that doesn’t serve as an excuse for not testing 1,25-D. Since most major laboratories – including Quest Diagnostics – can easily perform the test, we would expect any vitamin D researcher would be able to do so as well. The real reason 1,25-D might be “hard” to test is that the 1,25-D test costs more than the 25-D test. But we’re all trying to do the best possible research… right?

The potential significance of 1,25-D is suggested by a study forthcoming in the Annals of the New York Academy of Sciences. In the study, Dr. Greg Blaney of Vancouver, Canada reported on the 25-D and 1,25-D levels of 100 patients with autoimmune disease.

    While many of the subjects had very low levels of 25-D, even more of the subjects (approximately 85%) had levels of 1,25-D elevated above the normal range. Under these circumstances can those subjects with low levels of 25-D but elevated levels of 1,25-D truly be considered vitamin D deficient? They are certainly not deficient in the sole form of vitamin D that actually activates the VDR to transcribe approximately 913 genes, TLR2, and the antimicrobial peptides vital to the innate immune response.

When Dr. Heaney was asked by a member of the committee to comment on 25-D’s actions, he admitted that he did not know, biologically speaking, how 25-D exerts any of the myriad beneficial effects that he claimed occur when it is elevated. All he could offer was that he knows that 25-D must be present in patients for them to get better.

    Is this what passes for biological plausibility among pro-vitamin D researchers?

    Later that afternoon, one committee member asked Dr. Cedric Garland, “Do you have a mechanism to explain the outcomes you’re reporting?”

Dr. Garland proceeded to offer his analysis of how supplemental vitamin D, in his words, “eradicates” cancer. Garland pointed to a stack of his papers and asked that it be passed out. When members of the committee seemed hesitant to do so, he went on to explain the details of his model anyway. Dr. Garland shared that he had developed a novel model of cancer pathogenesis in which cancer is caused by gaps between cells, gaps which, in simple terms, he believes form as a body becomes vitamin D deficient. This line of inquiry was clearly only in its infancy and had not yet passed muster with cancer researchers. But even if Garland’s model proves to be valid, one would have hoped he would expose it to greater scientific scrutiny before using it as the basis for unequivocal recommendations regarding vitamin D supplementation.

But as Dr. Garland went on to further describe what he believes are vitamin D’s cancer benefits (he was eventually cut off by a member of the committee), he provided a perfect example of why we have trouble following some vitamin D experts. The reason? He used the broad term “vitamin D” when making claims and, by doing so, conflated research that pertains solely to 25-D with research that pertains solely to 1,25-D. For example, Garland said that vitamin D is able to “upregulate tumor suppressor genes.” Most audience members probably thought he was referring to 25-D, since that was the only vitamin D metabolite he ever mentioned. Yet only 1,25-D is able to activate the Vitamin D Receptor to express Tumor Metastasis Suppressor 1 and other related genes.

Similarly, another talk that we believe should have discussed 1,25-D levels, but did not, was Dr. Stephanie Atkinson’s talk on vitamin D in pregnancy. Researchers have known for some time now that 1,25-D is elevated during pregnancy.[21] Placental conversion was demonstrated in vitro in 1979,[22] elevation of 1,25-D in vivo in 1980,[23] and the dysregulated vitamin D metabolism of pregnancy was described in 1981.[24] If 1,25-D becomes elevated during pregnancy, then isn’t it only prudent that studies on vitamin D and pregnancy measure it and its relationship to 25-D?

We find the relationship between 25-D and 1,25-D important, because it was by observing relationships between the two metabolites that our group was able to realize that, in the majority of cases, when a subject’s 25-D level is low, their 1,25-D level is actually high (AIDS is an exception because HIV completely co-opts the VDR).[25] And it was these relationships that led to our alternate hypothesis for the low levels of 25-D observed in patients with chronic diseases such as cancer. We have found that when 1,25-D is high, the vitamin D feedback pathways naturally downregulate levels of 25-D. This means that what is now viewed as “deficiency” could simply be a result of the chronic disease process. Under such circumstances, leading people to create extra 25-D by raising the DRI is not only useless but harmful. We believe that our alternative hypothesis at least deserves consideration by the committee, yet we are worried that, when they are not presented with data on both 25-D and 1,25-D, they will not be able to recognize the pattern that makes our model plausible.

    Vitamin D and the evolving definition of autoimmune/inflammatory disease

We also find it problematic that none of the experts who spoke at the meeting seem to be aware that microbial metabolites have a profound effect on the activity of the Vitamin D Receptor (VDR). The US NIH now estimates that 90% of cells in the human body are bacterial in origin while only a mere 10% of cells in the body are truly human.[26] Thus, many microbiologists now believe that humans are best viewed as superorganisms in which a plethora of bacterial gene products can affect the activity of our own receptors and genetic pathways.[27] Indeed, independent research teams have found that Mycobacterium tuberculosis downregulates VDR activity by approximately 3.3 times.[28] Active Borrelia lowers VDR activity by about a factor of 50, and Epstein-Barr virus by a factor of around 10.[29] HIV completely shuts down VDR activity. It’s quite likely that other pathogens yet to be fully characterized have also evolved ways to decrease VDR activity, because by doing so they slow important components of the innate immune response that might otherwise kill them. That the experts who spoke before the committee have failed to factor this knowledge into their study designs suggests that they cannot fully account for the actions of the various vitamin D metabolites in an in vivo environment.

Furthermore, no vitamin D researcher we are aware of makes provision for research showing that the current view of autoimmune disease – in which the immune system is believed to attack itself – may be running its course.[30][31][32][33] Many microbiologists now believe that at least some, if not all, of the inflammation that drives the autoimmune disease state is caused by the presence of chronic pathogens.

    Inflammation is a clear potential link between infectious agents and chronic diseases.

    Siobhán M. O’Connor[31]

With this in mind, the claim by many vitamin D researchers that vitamin D can help patients with autoimmune disease by slowing an “over-active” adaptive immune response no longer squares with an emerging view in the microbiology/immunology community – that both the adaptive and innate immune systems should be kept active in autoimmune disease in order to allow the body to best target disease-causing microbes.

    The possible presence of pathogens in autoimmune and other inflammatory disease states such as cancer and atherosclerosis makes our group’s findings on vitamin D’s actions more plausible. When the immune system is fighting a microbe, it continually releases inflammatory molecules in an effort to kill the pathogen.[34] If the pathogen dies, endotoxins[35] and cellular debris are generated. This leads to increased symptoms of malaise on the part of a person who harbors such microbes.

It follows that any substance that slows the innate immune response will dampen this battle between man and microbe, causing the patient to feel better. The more the immune response is slowed, the greater the decrease in inflammation and inflammatory markers. But while such measures can make patients appear to be getting better for years, ultimately the bacteria causing their disease are able to spread much more easily and exacerbate the disease state over the long term.

Our molecular and clinical data show that 25-D, like the pathogens we describe above, binds the Vitamin D Receptor and slows its activity.[1] Since the VDR largely controls the innate immune response, increasing 25-D levels could easily produce the pattern of immunosuppression described above. This raises the question: is 25-D a miracle curative substance or simply an excellent palliative?

If we are correct and 25-D slows VDR activity, then it makes sense that, as we have found, patients who are chronically ill benefit from decreasing their vitamin D intake. This is because their VDR activity already appears compromised by the pathogens they harbor. Yet this should not be interpreted to mean we think healthy people can’t consume vitamin D. Rather, our data suggest that healthy people can get the vitamin D they need by eating a well-rounded diet free of fortified foods and by getting the kind of sun exposure a person would get while taking measures to avoid an increased risk of skin cancer.

    Our speeches and the reaction to them

    In our speeches, we raised the possibility that low levels of 25-D are caused by the inflammatory disease process and that taking vitamin D suppresses the immune response.

In total, the two of us spoke for 600 seconds, and we’re not sure we convinced anyone of anything. By all indications, a discussion of molecular mechanisms was outside the committee’s comfort zone. Most would probably say that they are uninterested in software emulations of molecular interactions, no matter how provocative or far-reaching the conclusions they imply. If we had to pin the members of the committee down on it, we think they would say that, when it comes to our clinical trial, we needed better controlled data, such as the kind we intend to generate as part of our West China Hospital collaboration. For this reason, we opted for a more measured tone.

    During Amy’s speech, a Dr. Holick sighting, seated in the front on the left.

    During Paul’s speech, there was some tittering in the audience (not the committee). He saw one prominent researcher, who shall remain nameless, chuckling. For a moment, he thought he had spinach in his teeth or was trailing toilet paper from his shoe, and then he realized that, oh yes, he was telling 50 PhDs and MDs that their conclusions have the potential to be very misguided.

After the day’s business concluded, everyone began to file out. One woman, though, turned to us and said, “What a bunch of rebels!”

    Glad we could liven up the workshop for you, ma’am.

    Although during our speeches, we asked people to come by and ask us about our work, only Dr. Tony Norman did. He did not seem convinced, but did invite us to submit an abstract for a poster presentation at an upcoming vitamin D conference in Belgium.

    Anticipating what is to come

If you ask most Vitamin D Council researchers, they would say that this is the “end game,” and that there is already more than enough evidence to raise the level of vitamin D added to the food supply. During the question and answer sessions, some of these scientists, such as Dr. Garland, were dismissive of evidence to the contrary. It was as if many were saying, “Look – there is no downside here. It is demonstrably impossible that consumption of vitamin D can cause harm. If we don’t have all the requisite evidence, it doesn’t matter. Lives are at stake!” We suspect that even if the committee decides to maintain current vitamin D recommendations, these researchers will find other ways to convince the public to increase its vitamin D intake.

    But despite the media’s stampede to promote the “sunshine vitamin,” the evidence is ambiguous and the issue of biological plausibility – not knowing how 25-D exerts its claimed benefit – is troubling as well. Dr. Kramer said that the root of science is the art of thinking hard about how you could be wrong. Is this something the vitamin D research community is actively doing? Looking through everything that was presented throughout the day, how many confounding variables might Dr. Kramer have identified? How many surrogate outcomes could he point to?

It is difficult to anticipate exactly what decision the IOM Committee will arrive at. From where we sit, however, it is hard to see how the group could raise the dietary reference intake in light of such an equivocal set of conclusions in the Tufts report – in spite of considerable pressure to do so.

    Will an IOM committee ever emerge from this climate of consensus and consider research that would cause them to lower the DRI of vitamin D?

    Here are a few possibilities:

• An evolution in the understanding of disease raises new concerns about the risks of using immunosuppressants. The Human Microbiome Project shows that bacteria are not confined to the surfaces of the body, i.e. skin, mouth, gastrointestinal tract, etc. As chronic disease is increasingly characterized as an infection, or at least as having infectious components, researchers seriously question whether reducing the inflammatory response needed to kill chronic pathogens is in an individual’s long-term interest.
• After continuing to increase vitamin D consumption to historic levels, members of the public and some researchers begin to question the absence of the promised overall drop in rates of disease. In some respects, the decision of the IOM committee is immaterial. All indications are that the vitamin D “experts” are having a great deal of success communicating their message that it’s important to take 4-5 times or more the current DRI of vitamin D. People will take increasing amounts of vitamin D as food manufacturers add increasing amounts to their products. Many of the presenters at the workshop essentially promised double-digit declines in disease. If this does not materialize, there will be questions. If we are right, this could be the hormone replacement therapy (HRT) saga redux, except with potentially broader ramifications.
    • Well-controlled long-term studies show that vitamin D consumption increases incidence and severity of chronic disease. To most people – probably in excess of 95% of people at the workshop – this is not even a possibility, but the history of HRT use proves such unexpected results can emerge, eventually, from well-controlled studies.

    Epilogue: The ride home

    After the meeting adjourned, we were approached by a nattily attired man in his thirties, originally from Barcelona. He offered us a ride home to New York. His Mercedes SUV looked quite appealing, so we skipped the bus and took him up on his offer.

    On the ride home, this fellow – who told us he had a PhD in oncology – told us he agreed with the sentiment of our remarks and expressed disappointment with the lack of rigor of the science presented. The word he used to describe the majority of presentations was “pseudoscience.” He told us that, based on what he saw, vitamin D was harmful and that it was only a matter of time before the hype surrounding vitamin D would fizzle.

    Although we felt validated, we wondered why he had attended the conference in the first place. It turns out that he was an entrepreneur, had just bought the patent for a new formulation of calcium, and wanted the discussion at the IOM workshop to help him decide how much vitamin D to add to his product.

He seemed like an honest and honorable guy until, that is, he let us know that despite his negative view of vitamin D, he intended to add high levels of it to his supplement anyway, so long as the medical community and the public viewed it as beneficial. Later on, he said, he planned to strategically remove it “just before the vitamin D bubble bursts.”

    Well, isn’t that wonderful? Some reassurance about the people behind products aimed at “improving our health.”

In that vein, we couldn’t help remembering the short speeches delivered by members of the Dairy Council as well as by a yeast company, whose sole goal in speaking before the Committee was to urge that, if more vitamin D is added to the food supply, it be added to the foods they market. This would give these interests the ability to claim more health benefits for their foods and, of course, to make more money.

In sum, our adventure in the nation’s capital left us with a bad taste in our mouths. We’d like to wash it away, but we’re worried that by the time we do, there won’t be a drink left that isn’t fortified with vitamin D.

    REFERENCES

1. Albert, P.J., Proal, A.D. & Marshall, T.G. Vitamin D: the alternative hypothesis. Autoimmun Rev 8, 639-644 (2009).
2. Wang, Y. & Beydoun, M.A. The obesity epidemic in the United States – gender, age, socioeconomic, racial/ethnic, and geographic characteristics: a systematic review and meta-regression analysis. Epidemiol Rev 29, 6-28 (2007).
3. Rossouw, J.E. et al. Risks and benefits of estrogen plus progestin in healthy postmenopausal women: principal results from the Women’s Health Initiative randomized controlled trial. JAMA 288, 321-333 (2002).
4. Sackett, D.L. Evidence-based medicine. Semin. Perinatol 21, 3-5 (1997).
5. Jablonski, N.G. & Chaplin, G. The evolution of human skin coloration. J. Hum. Evol 39, 57-106 (2000).
6. Petit, J.R. et al. Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica. Nature 399, 429-436 (1999).
7. Dodd, A.T. et al. Melanocytic nevi and sun exposure in a cohort of Colorado children: anatomic distribution and site-specific sunburn. Cancer Epidemiol. Biomarkers Prev 16, 2136-2143 (2007).
8. Maslin, M.A. & Christensen, B. Tectonics, orbital forcing, global climate change, and human evolution in Africa: introduction to the African paleoclimate special volume. J. Hum. Evol 53, 443-464 (2007).
9. Potts, R. Environmental hypotheses of hominin evolution. Am. J. Phys. Anthropol Suppl 27, 93-136 (1998).
10. Matsuoka, L.Y., Ide, L., Wortsman, J., MacLaughlin, J.A. & Holick, M.F. Sunscreens suppress cutaneous vitamin D3 synthesis. J. Clin. Endocrinol. Metab 64, 1165-1168 (1987).
11. Matsuoka, L.Y., Wortsman, J., Hanifan, N. & Holick, M.F. Chronic sunscreen use decreases circulating concentrations of 25-hydroxyvitamin D. A preliminary study. Arch Dermatol 124, 1802-1804 (1988).
12. Matsuoka, L.Y., Wortsman, J. & Hollis, B.W. Use of topical sunscreen for the evaluation of regional synthesis of vitamin D3. J. Am. Acad. Dermatol 22, 772-775 (1990).
13. Scarlett, W.L. Ultraviolet radiation: sun exposure, tanning beds, and vitamin D levels. What you need to know and how to decrease the risk of skin cancer. J Am Osteopath Assoc 103, 371-375 (2003).
14. Wolpowitz, D. & Gilchrest, B.A. The vitamin D questions: how much do you need and how should you get it? J. Am. Acad. Dermatol 54, 301-317 (2006).
15. Palm, M.D. & O’Donoghue, M.N. Update on photoprotection. Dermatol Ther 20, 360-376 (2007).
16. Chlebowski, R. et al. The Women’s Health Initiative randomized trial of calcium plus vitamin D: effects on breast cancer and arthralgias. J. Clin. Oncol 24 (2006).
17. This comment was based on Figure 3 of Chlebowski et al. on page 1587, strata entitled Baseline total vitamin D (supplements + diet). One can see that those who started out with an intake of 600 IU of vitamin D or higher had a hazard ratio of 1.34 (95% CI, 1.01-1.78), while those who started out low had a risk reduction (the p-value for interaction was significant at 0.003). This is one of the few examples of a randomized trial on vitamin D.
18. Jackson, R.D. et al. Calcium plus vitamin D supplementation and the risk of fractures. N. Engl. J. Med 354, 669-683 (2006).
19. Mohr, S.B., Garland, C.F., Gorham, E.D. & Garland, F.C. The association between ultraviolet B irradiance, vitamin D status and incidence rates of type 1 diabetes in 51 regions worldwide. Diabetologia 51, 1391-1398 (2008).
20. Vieth, R. Vitamin D toxicity, policy, and science. J. Bone Miner. Res 22 Suppl 2, V64-68 (2007).
21. Ardawi, M.S., Nasrat, H.A. & BA’Aqueel, H.S. Calcium-regulating hormones and parathyroid hormone-related peptide in normal human pregnancy and postpartum: a longitudinal study. Eur. J. Endocrinol 137, 402-409 (1997).
22. Tanaka, Y., Halloran, B., Schnoes, H.K. & DeLuca, H.F. In vitro production of 1,25-dihydroxyvitamin D3 by rat placental tissue. Proc. Natl. Acad. Sci. U.S.A 76, 5033-5035 (1979).
23. Steichen, J.J., Tsang, R.C., Gratton, T.L., Hamstra, A. & DeLuca, H.F. Vitamin D homeostasis in the perinatal period: 1,25-dihydroxyvitamin D in maternal, cord, and neonatal blood. N. Engl. J. Med 302, 315-319 (1980).
24. Gray, T.K., Lowe, W. & Lester, G.E. Vitamin D and pregnancy: the maternal-fetal metabolism of vitamin D. Endocr. Rev 2, 264-274 (1981).
25. Haug, C.J. et al. Severe deficiency of 1,25-dihydroxyvitamin D3 in human immunodeficiency virus infection: association with immunological hyperactivity and only minor changes in calcium homeostasis. J. Clin. Endocrinol. Metab 83, 3832-3838 (1998).
26. Turnbaugh, P.J. et al. The human microbiome project. Nature 449, 804-810 (2007).
27. Proal, A.D., Albert, P.J. & Marshall, T. Autoimmune disease in the era of the metagenome. Autoimmun Rev 8, 677-681 (2009).
28. Xu, Y. et al. Chin. Med. J 116, 1070-1073 (2003).
29. Yenamandra, S.P. et al. Expression profile of nuclear receptors upon Epstein-Barr virus induced B cell transformation. Exp. Oncol 31, 92-96 (2009).
30. Kivity, S., Agmon-Levin, N., Blank, M. & Shoenfeld, Y. Infections and autoimmunity – friends or foes? Trends Immunol 30, 409-414 (2009).
31. O’Connor, S.M., Taylor, C.E. & Hughes, J.M. Emerging infectious determinants of chronic diseases. Emerging Infect. Dis 12, 1051-1057 (2006).
32. Relman, D.A. Detection and identification of previously unrecognized microbial pathogens. Emerging Infect. Dis 4, 382-389 (1998).
33. Rook, G.A. & Stanford, J.L. Slow bacterial infections or autoimmunity? Immunol. Today 13, 160-164 (1992).
34. Allie, N. et al. Protective role of membrane tumour necrosis factor in the host’s resistance to mycobacterial infection. Immunology 125, 522-534 (2008).
35. Hurley, J.C. Antibiotic-induced release of endotoxin. A therapeutic paradox. Drug Saf 12, 183-195 (1995).
  • Comments Off on Second-guessing the consensus on vitamin D
  • Filed under: conferences and trainings, featured articles, medical research, vitamin D
  • Travels, papers, and more… an update

If I have seen further it is only by standing on the shoulders of giants… er, a pile of driftwood.

Hello readers!  Suffice it to say I’ve been missing in action for several months.  For much of that time I’ve been traveling.  Some of you may know that I just got back from China, where I gave a speech at the International Congress of Antibodies. That will be the subject of my next post, when the video of my speech is ready. In the meantime, I finally have time to give you an update on what I was up to before I left for Beijing… just to keep things in chronological order.

Several months ago I travelled to Vancouver Island to stay with Paul’s brother and his wife. It was wonderful to be surrounded by nature again! We took hikes through 200-year-old forests and climbed gnarled driftwood on the beach. It was the first time I’ve sat around a bonfire since getting sick. I even got to take a horseback riding lesson and stayed on the horse!

Then, I went to visit my twin sister Sara in Dubrovnik, Croatia, where she and her fiancé Tony live.  Not only did I walk the city walls and streets with vigor (old town Dubrovnik is surrounded by an ancient fortress), but I got to see some serious water polo action between my sister’s fiancé’s club team, JUG Dubrovnik, and the top Serbian club team in the world.

    Ah… fresh forest air

The two teams are intense rivals, and JUG won their biggest game of the year while I was there.  The whole city went nuts, and I got to participate in the celebration… and with the team, no less. Sara and I also celebrated our birthday together in Dubrovnik – it was the first time we had been together for our birthday in about 8 years.

After that excitement I returned to New York, where I worked on writing two papers for the scientific journal Autoimmunity Reviews.  One of them describes how the interaction between human and bacterial genomes drives the autoimmune disease process:

    Autoimmune Disease in the Era of the Metagenome

For the second paper I worked (with Paul, see below) to explain, as simply as possible, how the Marshall model of vitamin D metabolism contrasts with the view of “vitamin” D’s properties currently put forth by much of the mainstream medical community.  Among other topics, we point out that the Marshall model is not only supported by molecular data but also makes more sense in light of research showing that the diseases supposedly “helped” by vitamin D supplementation are actually becoming increasingly widespread:

    Vitamin D: The Alternative Hypothesis

    On my horse before we galloped off into the horizon

Also, I wrote press releases for both articles. These have sparked the interest of at least a few journalists and their publications, including Science Daily.

    I have also been doing my best to keep up with comments posted on this site. As you may have noticed, my trusted colleague, Paul Albert of Weill Cornell Medical College, has been answering some of the questions himself. I enjoy everyone’s comments but since I’m getting increasingly busy I’m sorry if my turnaround time is a bit slow sometimes. Or if I somehow miss your post or email it’s definitely not intentional!

    Walking with Tony to our apartment on an island near Dubrovnik with Marco Polo’s childhood home in front of us

    In the fall, I will start graduate school at University of Illinois at Chicago (UIC) where I’ll be pursuing a degree in microbiology.  The school appeals to me on many levels.  During the interview process the school’s researchers seemed genuinely interested in my work with the MP.  They asked questions, watched my videos, and truly appreciated the personal statement that I had labored over in an attempt to best communicate my passion for exploring the microbial world.  Several researchers are studying biofilm bacteria, which is a topic of great personal interest.

    In particular, one researcher, who graduated from Princeton rather recently, is working on bacterial communication in biofilms.  He’s also very interested in the role of bacteria in chronic disease.  He called me a perfect prospective graduate student, so I hope that I can live up to his impression of me. Obviously grad school will take up much of my time, but I need the expertise that comes with a PhD and I’m excited to better learn how to use some of the latest molecular techniques.  However, I plan to try my hardest to continue to work with ARF on the side.

    Happy birthday Sara and Amy!

     

    In less than a week I’ll be heading to Chicago for Sara’s bachelorette party – a three day bacchanalia with her friends from around the world.  Then I’ll take some time to look for an apartment in Chicago and visit my family in the area. As soon as I get back my parents are visiting me in New York.   Another reason I had to take some time off from Bacteriality is because I worked with Paul to design and write the content for Sara and Tony’s wedding website.

    During my free time I’ll be working on another paper for Autoimmunity Reviews – this one will discuss immunopathology. I also plan to help Paul edit what I think is a groundbreaking project of his – the MP Knowledge Base. Paul has been faithfully gathering both old and new content on essentially every topic related to the MP and is organizing it into easy-to-read articles on a very searchable website. His perseverance amazes me. While the site will take a few more months to complete, I’m going to review the major articles and try to ensure the content is in tip-top shape.

    Dancing to live Croatian music after JUG’s victory
  • Comments Off on Travels, papers, and more… an update
  • Filed under: featured articles, personal
  • Though the first drafts of the human genome were published in 2001, the most promising work in genomics has just begun – and not even in the study of human DNA. Human cells are outnumbered by bacterial cells by a factor of ten to one, and, as the rest of this site alludes to ad nauseam, there is strong reason to believe that bacteria are to blame for many of the chronic diseases from which humans suffer. Genetically speaking, we know relatively little about the bacteria that persist in humans. The field is ripe for advances.

    Colorful representations of sequenced genomes adorn the walls at JCVI.

    You may wonder how a researcher can view and understand a particular bacterial genome. On their own, they cannot. Progress in genetics is a group effort, and it requires partnering with one of the handful of heavyweight institutions in the world that have developed resources allowing for genome interpretation. Several such institutions exist in the US. The NIH has bacterial protein sequencing tools at its disposal. The Broad Institute at MIT as well as the Washington University Genome Sequencing Center have also developed tools that allow for genome sequencing.

    Many would argue though that the Institution most on the bleeding edge when it comes to genome sequencing technology is the J. Craig Venter Institute, formerly known as TIGR. Headed by transformative iconoclast and entrepreneur J. Craig Venter, the Institute is a non-profit research center that was founded in 2006. It has facilities in Rockville, Maryland and La Jolla, California and employs over 400 people, including Nobel laureate Hamilton Smith.

    You can imagine how happy I was to get an email from my former advisor at Georgetown (where I have an undergraduate degree), asking if I wanted to attend a training session in bacterial sequencing technology at the Rockville branch of the J. Craig Venter Institute (JCVI). He was keenly aware of my thirst to gain hands on experience with sequencing technology.

    The training was called the “Prokaryotic Annotation and Analysis Workshop.” (As some may know, a “prokaryote” is a single-celled organism that lacks a nucleus – bacteria being the prime example.) This experience marked my first exposure to sequencing technology, and I had little idea what to expect. Would I be able to follow the procedures used to identify protein sequences? Three days isn’t much time, but I was cautiously optimistic.

    First Impressions

    Last Monday, I boarded a train to Washington DC, took a quick cab over to Georgetown to say hello to some of my old professors, and proceeded to take the Metro up to Rockville. After a solid night’s sleep at the “Sleep Inn,” I took the hotel’s shuttle to the door of the Venter Institute.

    The entrance of JCVI has an aura of science and progress. The walls of the lobby and hallways are covered with neatly framed images of sequenced genomes. The individual proteins in such pictures are illuminated in different colors, evoking modern art. The head of educational outreach programs gave us a tour of the grounds, which concluded at the space right in front of Venter’s office (he was traveling at the time and unfortunately not in his office!). Employees of JCVI refer to the space as “the museum” as several objects deeply rooted in scientific history are on display. A glass case on the left side of the room holds letters exchanged between Watson and Crick. A very early model of a sequencing machine is on the right. A large statue of a tiger seemingly prowls in the middle of the room – one of several tiger statues that used to be at the building’s entrance when the Institute bore its previous name. If the tiger is the unofficial mascot of JCVI, it’s certainly an appropriate one. This is not the place for the ambivalent.

    Do the rules say, “No riding the tiger”? I hope not.

    It’s clear that the staff at JCVI take great pride in their accomplishments and with good reason! Copies of Science and other prestigious scientific journals containing studies published by JCVI or reports of efforts led by Venter are displayed on tables in several locations. The walls of the hallway leading to Venter’s office are covered with framed newspaper and magazine articles featuring Venter – articles in Wired, People Magazine, and the New York Times. Venter has been named one of the top 100 most influential people in the world by Time Magazine for the last two years.

    Before the training began, I had the opportunity to chat with some of the twelve other people in my class. I had already met Dr. Anne Rosenwald, a professor at Georgetown whose research focuses on understanding the genetics of various yeast forms. She also teaches biochemistry. Dr. Rosenwald was attending the session in the hopes of working out a deal with JCVI in which the Institute could provide her with genomes that have been analyzed by computers but are still in need of human annotation. Annotation refers to the process of using clues in a DNA sequence in order to name and identify protein coding regions. If such an exchange of information is possible, it would allow her undergraduate students to map a bacterial genome as their thesis project. I hope the partnership works out because I think that while challenging, using JCVI’s annotation technology would provide any undergraduate with excellent preparation for microbiology and molecular biology graduate programs. I certainly wish I could have learned how to sequence a genome as an undergraduate!

    Let the learning begin

    Two members of our group had travelled to JCVI all the way from South Africa. Researchers at the University of the Free State, they were already using several of JCVI’s programs to sequence and thus better understand the genomes of bacteria isolated from several African caves – bacteria that have never before been classified. I spoke with them about the challenges of mapping completely new genomes. Soon enough, I aspire to study new genomes myself, especially those pertaining to the great mass of unclassified species of bacteria in the human body. I figured their feedback could clue me in to the challenges particular to dealing with unknown organisms.

    The South African duo were pleased with what they had been able to learn thus far about their cave-dwelling species. When it comes to JCVI’s sequencing technology, they were old pros and suggested improvements to the software throughout the class. Why had they come to JCVI? I sensed an eagerness on their part to see the hub of progress in person and personally get to know some of the people working with and developing the technology they are using. At the end of the session they kindly invited me to visit South Africa and spend time in their lab. Who knows, I might take them up on the offer at some point as South Africa is one of my top travel destinations.

    Site of the learning.

    Our classroom provided a comfortable atmosphere in which to learn, with a shiny new laptop for each of us. Having our own laptops meant we each got a chance to navigate through a program as the instructor described its features in a lecture. Snacks, coffee, hot chocolate and tea were available at all times and during the class we would break every hour or so to refill our cups and chat.

    The first day was spent learning about the process by which JCVI’s technology allows unknown proteins to be named and characterized (annotated). Our teacher, Ramana Madupu, is a full-time employee at JCVI who uses the technology discussed in the lecture in the course of her job.

    Let’s discover a new species of bacteria!

    Let’s say you are picking your nose. First of all, shame on you. But, let’s say that in spite of your flagrant disregard for common decency, you nobly want to contribute to human progress by determining what kind of bacteria are in your booger. After conducting several basic experiments on the bacterial DNA in your lab, you decide that a bacterial species may be new and unique, so you decide to contact JCVI.

    JCVI has you send them a sample of the bacteria in question. A non-profit institution, JCVI will run your genome through its sequencing machine at no charge to you. This service is largely automated and it’s becoming cheaper and cheaper. JCVI’s mandate is to sequence as many genomes as possible and freely share that data with researchers. However, JCVI’s offer to freely sequence and interpret your genome comes with an expectation, namely that upon receiving your results, you will review them and manually correct any errors in the automated annotation. At last check, JCVI’s computer annotation programs claim about a 95% accuracy rate.

    As we know from high school biology, DNA consists of four nucleotides: adenine, thymine, cytosine, and guanine. A gene is nothing more than a sequence of those As, Ts, Cs, and Gs, one which codes for a particular protein. Genes are the blueprints for making proteins, and in fact a new gene is often referred to, at JCVI at least, as a “putative protein.” (The word putative is used until one has sufficiently conclusive evidence to remove that label.)

    Sequencing and beginning to interpret a genome

    At the risk of gross oversimplification or misstatement, let me dare to explain the technical process of how a genome is sequenced and interpreted. You start with a genome. The DNA is processed with fluorescent dye. Each of the four bases (nucleotides) is tagged so that it emits a different color. Those colors are read by a machine and interpreted as a sequence of nucleotides. The result is an exceedingly long sequence of As, Ts, Cs, and Gs.

    Now the fun really begins. The goal is to take this enormous sequence and begin to determine which stretches of it code for which proteins, and what biological roles those proteins play. The Prokaryotic Annotation Pipeline, aka “the pipeline,” to the rescue!

    The pipeline is an algorithm-based workflow which automatically predicts, with a good but certainly not perfect degree of reliability, the name, location, and function of each gene. The pipeline does this by comparing your base pair sequences to sequences of previously identified proteins that exist in a variety of databases.

    But how does the pipeline know which segments of your base pair sequence to check against known protein sequences in the hopes of finding a match? There are apparently a lot of fancy statistical algorithms at work here, but one way is to look for particular codons – three-base pieces of genetic code – that mark the start and end of a protein sequence. The base sequences ATG, GTG, and TTG almost always code for the start of a protein, while the sequences TAA, TAG and TGA almost always indicate the end of a protein. Of course there are always exceptions to the rule, which is why every sequence to come through the pipeline should be checked by a human being. That’s the ideal anyway. By identifying these start and stop codons, the pipeline has a pretty good idea of where one protein coding sequence ends and another begins. At this point, each potential protein coding sequence is referred to as an ORF or “open reading frame.”
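
    For readers who like to see things concretely, here is a minimal sketch (in Python, my own toy code rather than anything JCVI uses) of the start/stop-codon logic just described. A real gene finder such as Glimmer also scores codon usage, scans the reverse strand, and relies on statistical models, so treat this purely as an illustration of the principle.

```python
START_CODONS = {"ATG", "GTG", "TTG"}
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=30):
    """Return (start, end, frame) tuples for candidate open reading frames."""
    orfs = []
    for frame in range(3):                        # the three forward reading frames
        start = None
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            if start is None and codon in START_CODONS:
                start = i                         # a putative protein may begin here
            elif start is not None and codon in STOP_CODONS:
                if (i + 3 - start) // 3 >= min_codons:
                    orfs.append((start, i + 3, frame))
                start = None                      # look for the next start codon
    return orfs

# A toy sequence with one obvious ORF repeated in frame 0
sequence = "ATGAAACCCGGGTTTACGACTGCAGCATAA" * 4
print(find_orfs(sequence, min_codons=5))
```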

    The algorithm matching a gene to an existing sequence applies greater weight to matches derived from certain databases. For example, one database frequently used for comparison is Swiss-Prot. Swiss-Prot relies exclusively on manual annotation by humans. At present, humans make fewer annotation errors and are, therefore, more reliable than software. For this reason (and perhaps due to the fact that, at least according to stereotype, the Swiss are highly precise), Swiss-Prot is arguably the gold standard. If a sequence from your bacterial genome matches a Swiss-Prot sequence, the confidence level is high that the match is correct.
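
    To illustrate the weighting idea, here is a tiny hypothetical sketch. The weights and hit records are my own inventions and are not JCVI’s actual scoring scheme; the point is simply that a lower-scoring hit from a more trusted database can outrank a higher-scoring hit from a less trusted one.

```python
DB_WEIGHT = {"Swiss-Prot": 1.0, "TIGRFAM": 0.9, "GenBank": 0.6}   # assumed weights

def best_hit(hits):
    """Pick the match whose (database weight x raw alignment score) is highest."""
    return max(hits, key=lambda h: DB_WEIGHT.get(h["db"], 0.5) * h["score"])

hits = [
    {"db": "GenBank",    "name": "sugar ABC transporter",  "score": 310},
    {"db": "Swiss-Prot", "name": "ribose ABC transporter", "score": 290},
]
print(best_hit(hits)["name"])   # the Swiss-Prot hit wins despite its lower raw score
```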

    During the pipeline comparison process, the software will also run your protein sequences against what are known as “Hidden Markov Models” or HMMs. HMMs are essentially statistical models of the patterns of amino acids in a multiple alignment of proteins which share sequence and functional similarity. Proteins run against HMMs receive a score as to how well they match the model. If the score is high enough you can reasonably expect your protein to have the same function that the HMM represents. For example, if your protein has a high-scoring match to an HMM model for a protein involved in sugar transport, you can be pretty sure that the matched protein from your genome has the same role.
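
    Here is a toy illustration of the scoring idea. Real profile HMMs (such as the TIGRFAM models) also model insertions and deletions and are searched with dedicated software; the numbers below are invented purely to show how a score is summed and compared against a trusted cutoff.

```python
# Each "column" of this toy model assigns a score to the amino acids expected at that
# position; unlisted residues are penalized. All numbers are invented.
MODEL = [
    {"G": 2.0, "A": 1.2},
    {"K": 1.8, "R": 1.5},
    {"T": 2.2, "S": 1.9},
]
TRUSTED_CUTOFF = 4.0   # assumed threshold above which a match is considered reliable

def score(fragment):
    return sum(col.get(aa, -1.0) for col, aa in zip(MODEL, fragment))

print(score("GKT"), score("GKT") >= TRUSTED_CUTOFF)   # 6.0 True  -> likely same role
print(score("PLW"), score("PLW") >= TRUSTED_CUTOFF)   # -3.0 False -> no confident call
```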

    After a particular protein sequence is run against HMM models, the software assigns it a putative name and role, based on how much information it believes it has to support such a label. The process of comparing sections of your base pair sequence to as many existing protein databases as possible is also referred to as BLASTing (after BLAST, the sequence-comparison tool used for the searches). Depending on the level of evidence at hand, the protein is also given a gene symbol, role information, and sometimes numbers that pertain to its classification.

    For example, after one of your proteins is run through the pipeline the pipeline might come up with the following result:
    TIGR00433

    Name: biotin synthase

    Gene symbol: bioB

    TIGR role: 77 biotin synthesis

    Now, what does this mean? Let’s break this down. The fact that the gene has what is called a TIGRFAM ID (TIGR00433) refers to the fact that it had a high scoring match to a protein previously annotated at JCVI. Since JCVI obviously believes their genomes have been well annotated, a TIGRFAM match that exceeds the minimum threshold for reliability is generally regarded as a sign that the computer has made the correct match. The name and protein role associated with the highest HMM and other database matches for your protein are also displayed, along with the symbol for the putative gene. In this example, it appears that it is the software’s best guess that your protein is involved in the synthesis of biotin (also known as vitamin B7).

    JCVI repeats this process for every Open Reading Frame sequence it detects, and the number of sequences often ranges in the thousands.
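
    For the programmers among my readers, here is one way such a per-ORF result might be represented in code. The field names are my own invention, not JCVI’s schema; only the biotin synthase example is drawn from the output shown above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    orf_id: str                       # identifier for the open reading frame
    name: str = "hypothetical protein"
    gene_symbol: Optional[str] = None
    tigrfam_id: Optional[str] = None  # set when there is a high-scoring TIGRFAM hit
    role: Optional[str] = None

bioB = Annotation(
    orf_id="ORF_0042",                # invented ORF identifier
    name="biotin synthase",
    gene_symbol="bioB",
    tigrfam_id="TIGR00433",
    role="77 biotin synthesis",
)
print(bioB)
```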

    How then does one access the proposed annotations generated by the pipeline? After each of your protein sequences has been run through the pipeline, JCVI software condenses them into a digital file that is sent to you. At this point you need to use a web-based program that allows you to manually modify the results. The program is called Manatee (Manual Annotation Tool Etc. Etc.). An open source project, it too was created by JCVI software programmers. A bit intimidating for the uninitiated, Manatee is a powerful and elegant program which allows a person to assign each putative protein the correct name and function.

    Annotating a DNA sequence

    Your goal in using Manatee is to make sure that the protein matches made by the pipeline are grounded in supporting evidence. For example, you can check for “gene model curation,” which provides information necessary to ensure that your genes have the correct coordinates and that your set of predicted genes is complete. Other features allow you to look at the raw base pair sequence of your genome in order to identify rare start and stop codons that the computer may have missed, or graphical views that allow you to note if the software accidentally annotated overlapping genes.

    The labs at JCVI are first-class.

    As the human annotators using Manatee use their good old-fashioned brain power to identify where the JCVI computers may have made mistakes, they alter the names of certain proteins in accordance with such findings. When naming a protein, the goal is always to err on the side of conservatism.

    Let’s say, for example, that based on a strong HMM hit, the computer has decided that one of your proteins is a ribose ABC transporter protein (ribose is a sugar). But after further examining the protein using Manatee’s tools you decide that there really isn’t enough evidence to support the conclusion that the sugar transported by your protein is ribose. You then manually change the protein’s name in Manatee so that it is less definitive by calling it only a “sugar ABC transporter.” Then, after using even more of Manatee’s features, you decide that you can’t put together sufficient evidence that the gene in question really transports any form of sugar. Under such circumstances, you make its name even less specific, calling it simply an “ABC transporter.”
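
    The renaming logic can be sketched roughly as follows. The evidence thresholds are invented for illustration; a real annotator weighs many kinds of evidence inside Manatee rather than a single number.

```python
def conservative_name(evidence_for_ribose, evidence_for_sugar, evidence_for_transport):
    """Downgrade the name until it claims no more than the evidence supports."""
    if evidence_for_ribose >= 0.9:
        return "ribose ABC transporter"     # strong support for the specific substrate
    if evidence_for_sugar >= 0.7:
        return "sugar ABC transporter"      # substrate family supported, not the exact sugar
    if evidence_for_transport >= 0.5:
        return "ABC transporter"            # only the general function is supported
    return "hypothetical protein"           # no reliable evidence at all

print(conservative_name(0.4, 0.8, 0.95))    # -> sugar ABC transporter
print(conservative_name(0.2, 0.3, 0.6))     # -> ABC transporter
```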

    As you can see, Manatee is a tool which enables researchers to better make judgments about the role and function of genes by assigning characteristics to those genes. Often enough, evidence to make these determinations is insufficient and attributes are characterized by how reliable the best evidence is. One expands and contracts the attributed qualities as the evidence warrants. When the evidence is equivocal, you say that a protein is “putative.” This apparently is the nature of genetic research, one which requires scientists to pick up on indeterminacy and do their best to fill in the gaps as they go.

    Every DNA sequence which emerges from the pipeline is putative. Certain sequences remain so because they fall short of the threshold reliability that would allow the software to give them an existing name. Under such circumstances, no name is assigned and no role is attributed. The protein is simply named “hypothetical protein.” If a hypothetical protein from one species matches a hypothetical protein from another, each is given the name “conserved hypothetical protein.” Since the hypothetical protein has been found in two different species, the corresponding sequence clearly exists. But the sequence’s series of base pairs is so different from known sequences that, at this point in time, neither the software nor a human annotator is able to give it a name or role. Ramana (my instructor) commented that as the Human Microbiome Project presses on ahead, she expects to see many more “hypothetical proteins” show up in genomes. In fact, she had just recently finished sequencing about eight Microbiome genomes and was surprised at how few of their DNA sequences matched known proteins. This suggests that the majority of the yet unknown bacteria that inhabit the human body are quite different from the species we have already become familiar with, such as E. coli or Mycobacterium tuberculosis.

    One might ask, “Isn’t the human annotation process open to error and bias?” The answer is yes. It’s up to the human annotators to decide if they can find enough information to support a software derived match and every human has different tendencies when it comes to such decisions. Annotators like Ramana say that after working with a sufficient number of genomes they usually learn to trust their gut feelings and standardize the process by which they make naming decisions. Even the best human annotators would admit that 100% consistency, from one day to the next, between one annotator and the next, is unattainable.

    The Institute had a modern aesthetic.

    So what happens when an annotator has finished going over all the proteins in a particular genome? Genomes in which all protein sequences have been given a name and function are considered “closed” and ultimately made available through GenBank. Any genome with loose ends is considered “open,” with the hope that future researchers will be able to confidently determine the names and roles of the current hypothetical proteins. One way to determine the role of a “hypothetical protein” is to study the protein coding sequence in the laboratory using in vitro techniques such as the creation of gene knockouts. Given this, it should be clear to my readers that sequencing technology does not obviate the need for laboratory research. There remains a lot of work to be done in this field.

    About the Comprehensive Microbial Resource (CMR)

    JCVI enters as many genomes as possible into a database called the Comprehensive Microbial Resource (CMR). CMR is a free, open-source website that allows access to the sequence and annotation of all completed prokaryotic genomes. CMR is a seemingly invaluable resource. Before genomes are entered into the database they are standardized in a manner that makes them much easier to compare. Researchers from over 200 sequencing centers currently put sequenced genomes into a database called GenBank. GenBank contains about 600 complete prokaryotic genomes, with about 10 new genomes released each month. One of the significant problems with GenBank is that the annotation process at each center that submits genomes to GenBank is done so differently that many of the genomes in GenBank have been named using different conventions. Often, they have also been assigned gene symbols and role names that differ depending on where they were sequenced.

    The goal of the CMR is to take the genomes from GenBank and create common data types with the same nomenclature, sequence elements, and annotation methodology. When this has been done, individual genomes can be compared much more easily and accurately. There are currently about 400 organisms in the CMR, but the project’s leaders have ambitiously committed themselves to adding several hundred more genomes to the database in the coming months. One reason the CMR contains fewer genomes than GenBank is that the project is, thus far, unfunded. Apparently, JCVI has been working on the CMR without grant money for the last two years. The program is so well-designed and useful that it’s hard to believe it could go unfunded. I was told that JCVI has just applied for a new grant that might allow the project to be funded, and project leaders should hear back about the decision in a week or two. Fingers crossed!

    Tanja Davidson, one of the main directors of the CMR, was our teacher on day three. In fact, our entire third day was spent learning about the CMR, which, at first glance, offers a daunting but well-organized number of features. CMR allows the researcher to compare multiple genomes using what are called “cross genome analysis pages.” These tools allow two or more genomes to be compared so that the elements they have in common (or the elements that make them different) can easily be analyzed.

    Imagine that doctors report an outbreak of a stomach disease and a bacterial species is isolated from people with the illness. The genome of the disease-causing pathogen is put through Glimmer and the pipeline, annotated by humans, and found to be a form of E. coli. By using CMR tools, researchers can compare the genome of the new E. coli variant to the genomes of other E. coli strains that have not been tied to stomach disease. Most of the genes should be the same among the different forms of E. coli because they belong to the same species. But the genes that differ between the recently isolated strain and those already in the database can be assumed to code for the proteins that endow the new variant with the ability to cause disease. In this case, the CMR comparison tools greatly narrow down what would otherwise be a veritable “needle in a haystack” situation.
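
    Conceptually, the comparison boils down to set arithmetic on the two gene lists, as in the toy sketch below. The gene lists are illustrative only, and CMR’s cross genome analysis pages are of course far richer than this.

```python
benign_ecoli = {"bioB", "lacZ", "recA", "ftsZ"}                     # toy gene list
outbreak_ecoli = {"bioB", "lacZ", "recA", "ftsZ", "stxA", "stxB"}   # hypothetical isolate

shared = outbreak_ecoli & benign_ecoli
candidate_virulence_genes = outbreak_ecoli - benign_ecoli

print(sorted(shared))                     # genes common to both strains
print(sorted(candidate_virulence_genes))  # genes unique to the outbreak strain
```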

    Other nice features of the CMR include the ability to access a “Role Category Graph,” which displays the different roles of all the proteins in a genome in a colorful pie chart. A tool called “Restriction Digest” allows users to digest genes of interest with various restriction enzymes – a procedure that takes a long time to complete in the lab but only minutes using the CMR. A “Pseudo 2-D Gel” allows users to get an idea of what a genome of interest would look like on a two-dimensional protein gel; each dot of the gel represents a single protein whose location can be compared to others. The comparative tools even allow for the creation of a scatter plot in which two genomes are compared on a two-dimensional plane. MUMmer (named for the maximal unique matches it finds) compares genomes at the nucleotide level, allowing scientists to detect even single-nucleotide differences between DNA sequences.
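
    An in silico restriction digest is, at its core, a string search, as the simplified sketch below shows. The EcoRI recognition site is real, but everything else (the single-enzyme table, the pared-down cut handling) is my own simplification for illustration.

```python
ENZYMES = {"EcoRI": "GAATTC"}   # EcoRI's recognition site; it cuts between G and A

def digest(sequence, enzyme):
    """Cut the sequence at every recognition site and return the fragments."""
    site = ENZYMES[enzyme]
    fragments, start = [], 0
    pos = sequence.find(site)
    while pos != -1:
        fragments.append(sequence[start:pos + 1])   # EcoRI cuts after the leading G
        start = pos + 1
        pos = sequence.find(site, pos + 1)
    fragments.append(sequence[start:])
    return fragments

print(digest("AAAGAATTCCCCGAATTCTTT", "EcoRI"))
# -> ['AAAG', 'AATTCCCCG', 'AATTCTTT']
```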

    Ready to learn about human annotation!

    When it comes to the CMR, Tanja and other JCVI employees welcome feedback from scientists other than those at JCVI. In fact, while we were doing some practice CMR tutorials in class, the pair from South Africa and Dr. Rosenwald came across a few minor glitches in the system. Tanja was quick to write them down, and most of them were already fixed by the time we got back from lunch. Rosenwald and others also offered feedback about new features they might like to see in the CMR, and Tanja was again quick to record their suggestions and insights. I could tell she was definitely not just humoring people but actually planning to pass every suggestion along to her development team.

    The reality is that Manatee and the Annotation Engine project are part of the Institute’s open source initiative, the goal of which is to provide high quality software and services to the genomic community. External involvement and feedback are strongly encouraged because such feedback drives the development and continual improvement of the software. In fact, JCVI doesn’t actually have employees dedicated to testing its software, so it depends fully on user feedback. Some of us joked that because we were testing the CMR as part of our class exercises, we should have been paid to attend the training session rather than vice versa.

    Improving the accuracy of the pipeline

    Human annotation is a lengthy and laborious process. One of the foremost goals at JCVI is to perfect the pipeline and the computer annotation process such that human annotation is no longer necessary. One idea currently being tossed around at JCVI when it comes to perfecting the output of the pipeline is a concept referred to as something like “humanitization.” (I’ve searched my notes but can’t find the exact name!) The annotators at JCVI are currently being asked to report exactly how they go about using Manatee in order to annotate a genome. As previously discussed, since there are so many databases to compare and analyze in Manatee, each employee using the program has settled into a pattern of evaluating database information in a certain methodical fashion. The hope is that if some of the best human annotation regimens are recorded and analyzed, they can be translated into logic that software could duplicate.

    If these extra steps do indeed increase the accuracy of the protein matches made by the pipeline, there may no longer be a need for humans to check Manatee’s output. So it’s possible that in the coming years genome annotation may be a completely automated process. At the current moment, the pipeline’s protein matches are accurate about 95% of the time. The stated goal is to get that level of accuracy into the 99–100% range. So, as Tanja commented, the human annotators at JCVI who are currently helping programmers understand how they navigate Manatee may, by doing so, actually be putting themselves out of a job.

    But at least for now, human employees are still an integral part of the annotation system. Four recently hired JCVI employees were attending the teaching session. During a discussion about perfecting the pipeline, our instructor confided that one of our classmates had just been hired with the expectation that he would create the technology to make the pipeline more accurate. What a daunting job! The rest of us regarded him with a certain level of awe over the next two days. Every so often our practice sets would reveal a flaw in pipeline output and the instructor would turn to this particular employee and say something like, “Of course now, you’ll be fixing this problem.” Such comments reflect what seems to be the prevailing attitude at JCVI. Most of their projects are extremely ambitious and half the time I’m not sure if they even know if success is possible when a task is initiated. But the mindset is “No matter how hard this goal seems we will simply have to find a way to get it done!” This type of determined thinking does seem to generate results, as there is little doubt that such an attitude was the driving force behind Venter’s team sequencing the human genome in record time.

    Tanja, our instructor, and Phil, a JCVI employee who was hired to improve the pipeline.

    As implied by the above paragraph, there are a lot of situations at JCVI that end up pitting humans against computers. As Ramana described, it would be ideal if every genome sent to JCVI could be manually annotated from the outset. At least for now, a well-trained human is able to pick up on subtleties of database comparisons that the computer can miss. But such a scenario, at least over the long term, simply isn’t sustainable. As genome mapping grows in popularity over the coming years, humans alone will not be able to keep up with the number of genomes requiring annotation. Although using computers to annotate genomes slightly compromises accuracy, the technology must be used in order to keep up with demand. Ideally every genome is manually checked with Manatee, but there are definitely JCVI/TIGR annotations that are never checked by a human annotator at all.

    Sequencing bacteria and the future of medicine

    In recent years, mapping genomes has grown in popularity. Scientists working on efforts related to the Human Microbiome Project currently want to map the genomes of every single bacterial species capable of inhabiting the human body, and such pathogens may number in the thousands. But large groups of other scientists are set on better understanding the massive number of bacteria that inhabit our oceans. Since little is known about many regions of the ocean, who knows how many microbes these efforts may turn up? Then, like the two scientists in our group, other research teams seek to map the genome of bacteria that live in obscure land locations such as caves, volcanoes, mines etc. So, the JCVI computers and those at other sequencing centers are relentlessly accumulating DNA data.

    Perhaps because they have each personally annotated so many hypothetical proteins about which we currently know nothing, the staff at JCVI are very open to the idea that we are only on the brink, if that, of truly understanding the bacteria capable of making us ill. This correlates with the Marshall Pathogenesis, in which essentially all inflammatory diseases are attributed to infection with chronic intraphagocytic, metagenomic bacteria that, for the most part, have yet to be clearly named and sequenced. One study I often invoke was conducted by Dempsey and team. This Glasgow-based group found that human tissue taken from prosthetic hip joints contained protein sequences corresponding to those of hydrothermal vent bacteria. Most of the time when I discuss the study, other scientists are skeptical of the results. The average response is that they would like to see the results repeated or that the sample was contaminated. Ramana had no such reaction. In her opinion, there can definitely be hydrothermal vent bacteria in the human body, and she’s confident the sample was not tainted. When we discussed the findings, she suggested that the bacteria are probably not killed at high temperatures, which, interestingly, was one of Dr. Marshall’s first inferences when analyzing the data.

    A JCVI sequencing machine

    The organizers made an admirable effort to serve us savory lunches, which we ate in one of JCVI’s cafeterias. All our teachers attended lunch and sat among us, meaning that I was able to easily pepper them with questions. Alex Richter, one of the program heads, was great about answering my questions in detail. Thanks to his anecdotes, I got a much better impression of what microbiology labs will be doing in the coming years and the tools I will likely need to master as a potential microbiology PhD student. Before attending the training I had wondered if I would be able to understand JCVI’s sequencing technology without a background in computer science. But Richter didn’t seem to think that my lack of computer training was an issue, and it’s true that I certainly seemed able to follow the discussions in class. I was encouraged by Richter’s comment that someone good at scientific reasoning (such as, ahem, myself) is also likely to be good at working systematically with computer programs. I’m sure he’s right, but even so I won’t be contributing to the Linux codebase any time soon.

    It felt pretty darn good to be in a place where I personally believe that government funding is going towards research that is really going to have an impact on our ability to better understand chronic disease. As the Marshall Pathogenesis continues to spread, it’s clear that bacteria will eventually receive all the scrutiny they are due. At that point, scientists, doctors and patients alike are going to demand a more thorough understanding of the bacteria implicated in chronic disease, down to the level of the genome.

    It’s great that JCVI is already starting to collect data on never-before-sequenced bacteria. It’s also good that the Institute is striving to perfect bacterial sequencing technology now, so that by the time the Marshall Pathogenesis takes hold, sequencing results will be not only more accurate but also easier to use. Just as our ability to sequence genes has improved exponentially, so, I believe, will our ability to interpret the data. The tools are just getting better and better. As someone who has the inside scoop about the fact that bacteria are headed for the big time, I feel we’re closer than ever to characterizing the genomes of the pathogens that are capable of making us so ill.

  • Comments Off on Three days at the J. Craig Venter Institute
  • Filed under: conferences and trainings, featured articles, microbiome
  • Note: Much of the information included in this piece was derived from two articles published in the May 28th edition of Nature News, a resource published by the scientific journal Nature.

    Even those of us who live under rocks have heard of the Human Genome Project, a massive international scientific research project the aim of which was to understand the genetic makeup of the human species. Its primary goal was to determine the sequence of chemical base pairs which make up DNA and to identify the approximately 25,000 genes of the human genome from both a physical and functional standpoint.

    The goal of the Human Genome Project was to understand the genetic makeup of the human species.

    A working draft of the genome was released in 2000 and a complete one in 2003, with further analyses yet to be completed and published. Meanwhile, a parallel project was conducted by the private company Celera Genomics. Most of the sequencing was performed in universities and research centers in the United States, Great Britain, and several other countries.

    Most researchers would agree that the Human Genome Project was launched in order to answer the long-standing question, “Who am I?” The goal was to identify and sequence every single human gene. By doing so, many researchers were certain they would uncover causes for most of the chronic diseases that plague humankind. At the project’s start, scientists were faced with a multitude of unknown sequences to decipher and understand. Surely such sequences would offer up answers to disease, and specific genes would be found that would correlate with specific illnesses. In a Gattaca-like environment, people would then be informed early in life that they had “the gene” for MS or “the gene” for breast cancer. Scientists would work fervently to identify and change the expression of such disease-causing genes, finally developing enough gene therapies to eradicate human disease. The above scenario has an abiding appeal, largely because the idea that our genes dictate our health is so temptingly simplistic.

    Yet, while striving to answer the question “Who am I?”, those researchers searching for a purely genetic cause of disease have failed to recognize that it can only be answered after the question “Who are we?” has been clarified and understood.

    The question, “Who am I?” can only be answered after the question “Who are we?” has been clarified and understood.

    As Asher Mullard of Nature News describes, ‘we’ refers to the wild profusion of bacteria, fungi and viruses that are able to colonize the human body. Such pathogens, and bacteria in particular, number in the trillions. According to one common estimate, the human gut alone contains at least a kilogram of bacteria.

    The fact is, at the present moment, human beings serve as communities in which prodigious numbers of bacteria can thrive. Since the average human is currently outnumbered by the pathogens they harbor, the genes produced by these bacteria outnumber human genes as well. According to Mullard, “Between them [the pathogens in our bodies], they harbour millions of genes, compared with the paltry 20,000 estimated in the human genome. To say that you are outnumbered is a massive understatement.”

    So by sequencing only human genes, the Human Genome Project has failed to take into account a vast number of bacterial genes that also have the potential to affect the progression of human disease. The fact that many researchers are interpreting genetic data while leaving bacterial genomes and bacteria in general out of the picture is a serious issue.

    This is because many of the chronic bacteria we harbor are intraphagocytic – meaning they have developed the ability to live inside our cells, including the very phagocytes of the immune system that are supposed to destroy them. Such pathogens thrive in the cytoplasm, the fluid that surrounds the nucleus and the cellular machinery responsible for DNA replication and repair.

    Our DNA sequences are replicated on a regular basis. The process of transcription allows for the synthesis of RNA under the direction of DNA. Since both RNA and DNA use the same “language”, information is simply transcribed, or copied, from one molecule to the other. The result is messenger RNA (mRNA) that carries a genetic message from the DNA to the protein-synthesizing machinery of the cell. In translation, messenger RNA (mRNA) is decoded to produce a specific protein according to the rules specified by the genetic code.
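
    A small worked example may help make transcription and translation concrete. The codon assignments below are real entries from the genetic code, but the table is deliberately tiny and the transcription step is simplified (it converts the coding strand directly to mRNA).

```python
CODON_TABLE = {   # a deliberately tiny slice of the real genetic code
    "AUG": "Met", "AAA": "Lys", "GGC": "Gly", "UUU": "Phe", "UAA": "STOP",
}

def transcribe(coding_strand_dna):
    return coding_strand_dna.replace("T", "U")   # simplified: coding strand -> mRNA

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):         # read the message three bases at a time
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

mrna = transcribe("ATGAAAGGCTTTTAA")
print(mrna)              # AUGAAAGGCUUUUAA
print(translate(mrna))   # Met-Lys-Gly-Phe
```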

    Pathogens in the cytoplasm can likely interfere with the processes of transcription and translation.

    Unfortunately, pathogens in the cytoplasm can likely interfere with any number of the many precise steps involved in the transcription and translation processes. Such interference results in genetic mutations, meaning that our DNA is almost certainly altered, over time, by the intracellular pathogens we harbor. The more pathogens a person accumulates, the more his genome is potentially altered.

    It is also quite likely that intracellular pathogens disrupt DNA repair mechanisms. Since environmental factors such as UV light result in as many as 1 million individual molecular lesions per cell per day, the potential of intracellular bacteria to interfere with DNA repair mechanisms also greatly interferes with the integrity of the genome and its normal functioning. If the rate of DNA damage exceeds the capacity of the cell to repair it, the accumulation of errors can overwhelm the cell and result in early senescence, apoptosis or cancer. Problems associated with faulty DNA repair functioning result in premature aging, increased sensitivity to carcinogens, and correspondingly increased cancer risk.

    “Lifelong persistent symbiosis between the human genome and the microbiota [the large community of chronic pathogens that inhabit the human body] must necessarily result in modification of individual genomes,” states biomedical researcher Trevor Marshall. “It must necessarily result in the accumulation of ‘junk’ in the cytosol; it must necessarily cause interactions between DNA repair and DNA transcription activity,” he continues.

    So there is increasing evidence that many of the genetic mutations identified by Human Genome Project researchers are largely induced by bacteria and other pathogens. Rather than serving as markers of particular diseases, such mutations generally mark the presence of those pathogens capable of affecting DNA transcription and translation in the nucleus. This is why, in most cases, the “one gene, one disease” hypothesis has failed to hold water. Instead, geneticists are now stuck examining a perplexing number of different mutations, most of which differ so greatly between individuals that no correlations can be made between their presence and any particular illness. The mutations are nothing but genetic “noise,” induced either by random chance or by the pathogens that such researchers fail to factor into the picture.

    As Marshall describes, researchers sequencing human DNA samples often make the assumption that only one genome (the human genome) is present, when in reality, their tests are also likely picking up on the loose bits of multiple genomes (bacterial genomes). So if a person’s genome is sequenced once and then sequenced again, will the same DNA sequences be obtained? Probably not, because each individual sequencing will randomly pull various pathogenic genes into the human mix. Thus, what are currently viewed as “disease-causing” mutations are essentially statistical anomalies that vary depending on when and how a person’s genome is sequenced.

    Each individual sequencing of a genome will randomly pull various pathogenic genes into the human mix.

    If sequenced genomes are currently just a sum of several random parts, then it’s inevitable that an individual’s genome will change throughout life. Because pathogen-induced mutations, random mutations, and mutations that result from faulty DNA repair accumulate over decades, the genome map of a child will be very different from the genome map produced when the same individual is an adult, with differences increasing as people reach their elderly years. Goodbye to the world of Gattaca. Right now, if a child’s genome is sequenced at birth, his or her genetic sequences predict nothing about the common chronic diseases he or she will encounter, and mutations accumulated later in life largely serve to signal the presence of infection. This means that using people’s genomes to define their identity – as envisioned by futuristic movies in which a person’s genome might serve as their passport and destiny – is not feasible at the moment.

    This understanding marks a yet to be fully realized paradigm shift in the way the genome is interpreted. It’s hard not to feel sympathy for those individuals paying hefty sums of money to have their genomes sequenced by certain companies that now offer such a service. Customers are provided with a map of their genome in which the majority of observed mutations serve only to inform them that they harbor numerous intracellular pathogens. As Mullard describes, “Observers have started to question whether the human genome can deliver on its once-hyped promises to tackle disease. To take just one example, anyone so inclined can now pay genetic-testing companies for a preliminary rundown of the genetic variations associated with his or her risk of developing cancer, obesity and other conditions. But the risks identified are often so low or unclear that people are questioning whether the information will actually prompt the changes in health behaviour, such as losing weight, that could make them valuable.”

    Finally, bacteria enter the picture – the rise of metagenomics and the Human Microbiome Project

    The paradigm shift described above, in which genetic mutations are viewed in a new light, has been largely fueled by a new movement in which scientists are now beginning to use molecular technology to detect and sequence bacteria in lieu of simply trying to grow them in the lab. These tools will allow researchers to bypass the need to culture bacteria, exploring the human microbiome by studying genes en masse, rather than studying the organisms themselves.

    Recent studies that have used powerful molecular tools rather than standard cultivation techniques have left scientists slack-jawed at the number of bacterial DNA sequences that correspond to bacteria yet to be named or sequenced. A great number of sequences also correspond to bacteria never thought to have the capability of living on or within the human body. It has recently become all too clear that only a fraction of the bacteria capable of infecting humans grow in the lab, and that we have been oblivious to the presence of the majority of pathogens capable of entering our bodies. This realization that we harbor myriad unnamed and unidentified microbes comes at a time when the Human Genome Project is failing to capitalize on its promise to identify root causes for human disease.

    Powerful new molecular tools are revealing the number of bacterial DNA sequences that correspond to bacteria yet to be named or sequenced.

    As Mullard admits, “The microbes that swarm in and on the human body have always held a certain fascination for researchers. Since so few of them grow in the lab, it has been difficult to work out exactly who these microbial passengers are and how they interact with one another.” Whereas over the past century, standard laboratory culturing techniques have failed to detect the vast number of pathogens capable of infecting human beings, recent advances in molecular technology that allow for the sequencing of bacterial DNA mean that, at long last, we may be able to successfully identify and sequence the bacteria that cause disease.

    The reality is that the plethora of unknown pathogens that colonize the human body are the previously unrecognized puzzle piece behind chronic inflammatory disease. Enter metagenomics, a relatively new field of research that, thanks to advanced molecular techniques, enables researchers to study organisms not easily cultured in a laboratory as well as organisms in their natural environment.

    This year marks the tenth birthday of metagenomics. It was a decade ago that Jo Handelsman and her colleagues at the University of Wisconsin in Madison successfully cloned and functionally analyzed the collective genomes of previously uncultured soil microorganisms in an attempt to reconstruct and characterize individual community inhabitants. The team coined the word “metagenomics” to explain their techniques and goals. Since the Handelsman work, the scope of metagenomics has expanded greatly. Teams of researchers across the world have made efforts to describe the bacteria in environments as diverse as the human gut, the air over New York, the Sargasso Sea and honeybee colonies.

    “We can look at the metagenomic analysis so much more deeply, at such a better cost,” says Jane Peterson, associate director of the Division of Extramural Research of the National Human Genome Research Institute in Bethesda, Maryland, which recently launched a five-year initiative to explore the human microbiome.

    The five-year initiative is one of several massive projects striving to characterize what is referred to as the human microbiome, the name given to the collection of microorganisms living in and on the human body. The goal of the project is as ambitious as it is exciting – to detect and name every type of bacterial species that is currently capable of inhabiting the human body.

    The project is perhaps the best example of a new, and long overdue, shift in thinking among many medical researchers. At long last, microbiome scientists are vastly more interested in studying and identifying the pathogens that inhabit the human body rather than simply examining human genes.

    Microbiome scientists are vastly more interested in studying and identifying the pathogens that inhabit the human body rather than simply examining human genes.

    Late last year, the US National Institutes of Health (NIH) pledged US$115 million to identify and characterize the human microbiome. Also last year, the European Commission and various research institutes committed €20 million (US$31 million) for similar research. Smaller research teams with similar goals are also pledging lesser sums of money towards research that hopes to contribute to a greater understanding of the microbiome. These teams are situated in countries as diverse as China, Canada, Japan, Singapore and Australia.

    Since the human microbiome is so diverse, it’s not surprising that an array of different research teams are needed to tackle divergent areas of the project. The NIH’s five-year Human Microbiome Project will spend much of its money identifying where certain bacteria in the body are located. They also plan to compile a reference set of genetic sequences that correspond to each bacterial species.

    Although one-quarter of the project’s money has been earmarked to examine the role of the microbiome in health and disease, the Human Microbiome Project will do little to assess the function of microbes during its first year, although it may focus on the topic later. It’s serendipitous that the “health and disease” aspect of the project has been put off, since it’s only a matter of time before the medical community realizes that biomedical researcher Trevor Marshall has already largely elucidated how the intraphagocytic, metagenomic, microbiota of bacteria that cause chronic inflammatory disease are able to survive in the body and evade the immune system. Ideally, the money now dedicated towards examining the role of the human microbiome in disease could be used to pursue research projects related to Marshall’s discoveries.

    Since the vast number of bacteria and other pathogens that cause human disease have yet to even be discovered and documented, the primary goal of the Human Microbiome Project is to build up a research community and generate a sequence resource, akin to that developed during the Human Genome Project, so that questions about bacteria and specific disease-causing mechanisms can be tackled at a later date.

    Under the most ideal of circumstances, the money now dedicated towards examining the role of the human microbiome in disease could be used to pursue research projects related to Marshall’s discoveries.

    This year, researchers will collect samples of feces plus swabs from the vagina, mouth, nose and skin of 250 volunteers. 250 people may seem like a small number of subjects for such a massive project, but when one understands that the DNA of every single one of the trillions of pathogens harbored by each subject will be analyzed, it’s easy to see that such an undertaking is actually a monumental task.

    How do you effectively study such a vast and unknown community? The ultimate goal is to sequence the complete genomes of hundreds of bacterial species and deposit them in a shared database. Most of the research teams involved in the project will be sequencing short, variable stretches of DNA that code for components of bacterial proteins in order to roughly identify which bacteria are present in each person and how many bacterial species volunteers have in common. Once an estimate of diversity has been attained, the researchers plan to mine deeper by using shotgun sequencing – a molecular technique that will allow them to analyze many short pieces of DNA from all over the microbes’ genomes and reveal which genes are present.

    In shotgun sequencing, DNA is broken up randomly into numerous small segments with the goal of creating multiple overlapping sequences.

    In shotgun sequencing, DNA is broken up randomly into numerous small segments. The DNA sequence of each fragment is subsequently identified. The process is then repeated in order to create multiple overlapping sequences of DNA. When enough overlapping sequences have been generated a computer program is able to assemble the ends of the overlapping sequences into a contiguous sequence.
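
    The overlap idea can be illustrated with a toy greedy assembler like the one below. Real assemblers use far more sophisticated, graph-based algorithms and must cope with sequencing errors and repeats; this sketch of mine only shows the principle of merging fragments by their overlapping ends.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(fragments):
    """Greedily merge the pair of fragments with the largest overlap until one remains."""
    frags = list(fragments)
    while len(frags) > 1:
        k, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(frags)
                      for j, b in enumerate(frags) if i != j)
        merged = frags[i] + frags[j][k:]                  # join the two reads into one contig
        frags = [f for idx, f in enumerate(frags) if idx not in (i, j)] + [merged]
    return frags[0]

reads = ["ATGAAACCC", "AACCCGGGT", "GGGTTTACG"]           # overlapping fragments of one sequence
print(assemble(reads))                                    # -> ATGAAACCCGGGTTTACG
```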

    Microbiome researchers will initially use shotgun sequencing data from a few bacterial species that can already be grown, and piece together their whole genomes by putting overlapping fragments in order. The Human Microbiome Project plans to provide 600 “reference genomes.” The European project will do another 100, and other sequencing efforts by the NIH and elsewhere will make additional contributions. The hope is that, between these teams, a sufficiently broad reference database can be assembled. Then researchers will be able to predict the genetic capabilities of many currently unculturable species (many of which are in an L-form and/or biofilm-like state) solely on the basis of similarities with known genes.

    Creating the database will not be a simple task. According to Peer Bork, a biochemist who heads the European project’s computational center at the European Molecular Biology Laboratory in Heidelberg, Germany, even if many reference sequences are created, fitting together DNA fragments in order to identify unknown species, “is pretty hairy from a computational biology analysis point of view. Even with the immense power of supercomputers to process the sequencing data, it will take some clever analysis to compare the millions of sequence reads that span thousands of species between hundreds of ‘healthy’ and unhealthy people.”

    Many different research teams will be working simultaneously in order to build a map of human bacteria.

    Yet, the scientists involved in the project appear intent on capitalizing on their promise to sequence the microbiome. Furthermore, each research team will still be allowed to pursue their own pet projects. “Talented people are doing what they think is the most important research to do, rather than being forced to do what somebody else has decided would be the best,” says Ehrlich.

    As touched on above, one of the main scientists pushing forward the metagenomics movement is Marshall. Although not directly involved in the Microbiome Project at the moment, decades of in silico and clinical research have allowed the biomedical researcher to create a treatment regimen that effectively kills the intraphagocytic, metagenomic bacteria that microbiome researchers will be identifying in greater detail.[1]

    While at first glance it may seem counterintuitive that Marshall’s work has demonstrated how to kill the microbiome before the bacteria that make it up have even been fully sequenced, one must keep in mind that all bacteria possess certain characteristics. Every bacterial species has a 70S ribosome – the protein-making machinery that must be functioning if the pathogen is to survive. Whether or not a species has been named, identified, cultured, or sequenced, if its 70S ribosome is blocked, as it is by the pulsed, low-dose antibiotics championed by Marshall, it will be weakened so greatly that it cannot survive in the presence of an activated immune system.

    Every bacterial species has a 70S ribosome – protein-making machinery that must be functioning if the pathogen is to survive.

    So Marshall’s treatment protocol – dubbed the Marshall Protocol – already exists, and can kill the pathogens that Microbiome researchers will be identifying. As it permeates the mainstream, Marshall’s research (when fully appreciated) will represent a quantum leap forward for microbiome researchers. After all, the microbiome community should be quite pleasantly surprised to find out that the disease-causing bacteria they sequence can already be killed.

    However, at the moment, patients on the Marshall Protocol have little knowledge of exactly what chronic pathogens they are killing. In a sense, such knowledge isn’t of utmost importance, as specific names are not needed to induce recovery. Yet it would certainly be of great interest for researchers working with various aspects of the Marshall Pathogenesis to possess a greater understanding of the bacterial species any one patient is killing, and what species of bacteria can generally be associated with specific symptoms.

Thus, the database that the Microbiome Project intends to provide promises to be uniquely helpful for researchers working on MP-related projects. Such researchers will be able to use the database to get a much clearer idea of exactly which chronic pathogens cause inflammatory disease, which substances created by these pathogens may lead to receptor blockage, and exactly which bacteria are killed by different antibiotic combinations. As more knowledge is built about the specific pathogens that cause inflammatory disease, other drugs besides the MP antibiotics may be developed that also target them effectively, adding to an already powerful arsenal against them.

    Of course, using the Microbiome database to identify the presence and species of bacteria targeted by the Marshall Protocol will require numerous researchers to perform shotgun sequencing of the bacteria in the tissues of patients with many forms of chronic disease. Sequences derived from such patients can be compared with the database in order to match DNA sequences with those of specific bacterial species.

Even better, shotgun sequencing can be used to convince skeptics of the MP’s validity. If periodic shotgun sequencing studies are performed as a patient progresses through the MP, they will undoubtedly reveal that MP patients have high bacterial loads at the onset of treatment and greatly reduced bacterial loads after several years of therapy. Absence of bacteria would, of course, correlate with disease resolution and cure. Such data would provide even the greatest skeptic with the proof necessary to confirm that the MP does indeed reverse inflammatory disease and successfully kill chronic idiopathic bacteria.
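As a rough illustration of the kind of comparison described above, the sketch below matches patient-derived reads against a small reference database and summarizes how many reads are classified as bacterial at each timepoint. The reference sequences, species names, and sample reads are all invented placeholders, not Human Microbiome Project data.

```python
# Toy sketch: classify reads against a reference database and summarize
# bacterial load per timepoint. All sequences and labels are invented.

from collections import Counter

REFERENCE = {
    "Species_A": "ATGGCCTTGACGTTAAGGC",
    "Species_B": "TTACGGATCCGGAATTCCA",
}

def classify(read, reference, k=8):
    """Assign a read to the first species whose genome contains one of its k-mers."""
    for species, genome in reference.items():
        if any(read[i:i + k] in genome for i in range(len(read) - k + 1)):
            return species
    return "unclassified"

def bacterial_profile(reads, reference):
    """Count how many reads fall to each species (or remain unclassified)."""
    return Counter(classify(r, reference) for r in reads)

# Hypothetical samples taken at the start of treatment and two years later.
samples = {
    "baseline": ["ATGGCCTTGA", "TTACGGATCC", "GACGTTAAGG", "CCGGAATTCC"],
    "year_2":   ["ATGGCCTTGA", "GGGGGGGGGG"],
}

for timepoint, reads in samples.items():
    profile = bacterial_profile(reads, REFERENCE)
    classified = sum(v for s, v in profile.items() if s != "unclassified")
    print(timepoint, dict(profile), f"classified reads: {classified}/{len(reads)}")
```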

The few shotgun sequencing studies performed to date have already helped Marshall flesh out certain aspects of the Marshall Pathogenesis. A recent shotgun sequencing study that detected the species of bacteria present on prosthetic hip joints allowed him to determine (using molecular software) that the chronic pathogen Flavobacter creates a lipid capable of dysregulating the Vitamin D Receptor (VDR). The discovery provided proof of concept for the hypothesis that many of the chronic pathogens we harbor increase their survival by creating similar ligands that block the ability of the VDR to activate components of the innate immune response.

The fact that the same study also found hydrothermal vent bacteria (which clearly cannot be killed by boiling) on the joints reinforces just how much we have yet to discover about the pathogens capable of inhabiting our bodies. Other pathogens detected by the research team include species such as Lysobacter, Methylobacterium, and Eubacteria. “None of these species were previously expected to exist in man,” states Marshall. “These are species nobody is looking for; they are not picked up by PCR testing and nobody is culturing them.”

Consider that each species of bacteria detected in the above study has roughly 1,000–4,000 genes. Together, the detected species contribute on the order of 100,000 genes that are active in the body, yet are not even contemplated by the vast majority of mainstream researchers. And those are only the pathogens detected by one molecular analysis.

    A continued focus on gut bacteria

As previously discussed, the European Commission has launched a four-year initiative called Metagenomics of the Human Intestinal Tract (MetaHIT). The project, which contains many different components, overlaps somewhat with the American effort in that the European team is also required to sequence bacterial genomes for a database. American and European results will be deposited in the same database, which will be freely available to anyone interested in sequencing and identifying bacterial DNA. And who isn’t?

But MetaHIT has a different goal than the American Microbiome Project. It will focus on better understanding the bacteria that inhabit the gut and how they contribute to obesity and inflammatory bowel disease. And, according to Mullard, whereas the Human Microbiome Project is initially comparing people’s microbiota at the species level, MetaHIT aims to find differences in microbial genes and the proteins they express without necessarily worrying about which species they came from.

The bacterium Enterococcus faecalis, which lives in the human gut, is just one type of microbe that will be studied as part of NIH’s Human Microbiome Project.

“We don’t care if the name of the bacteria is Enterobacter or Salmonella. We want to know if there is an enzyme producing carbohydrates, an enzyme producing gas or an enzyme degrading proteins,” explains Francisco Guarner, a gastroenterologist at Vall d’Hebron University Hospital in Barcelona, Spain. We want to “examine associations between bacterial genes and human phenotypes,” says Dusko Ehrlich, coordinator of MetaHIT and head of microbial genetics at INRA, the French agricultural research agency in Jouy-en-Josas.

This is similar to the approach currently taken by Marshall, who is more interested in the observable characteristics of the bacteria, including how they respond to different antibiotics and what substances they produce, than in identifying individual species.
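A minimal sketch of that gene-centric bookkeeping might look like the following: per-gene abundances are collapsed into per-function totals, with no species labels involved. The gene IDs, annotations, and counts are hypothetical placeholders, not output from the MetaHIT pipeline.

```python
# Sketch of a gene-centric summary: tally predicted gene functions across a
# sample without regard to species of origin. All identifiers are invented.

from collections import Counter

# (gene id, functional annotation) pairs, as they might come out of an
# annotation pipeline; species names are deliberately absent.
gene_annotations = [
    ("gene_0001", "carbohydrate-active enzyme"),
    ("gene_0002", "gas-producing enzyme"),
    ("gene_0003", "protease"),
    ("gene_0004", "carbohydrate-active enzyme"),
]

# Per-sample gene abundances, e.g. read counts mapped to each catalogue gene.
sample_abundance = {"gene_0001": 120, "gene_0002": 15, "gene_0003": 40, "gene_0004": 80}

def functional_profile(annotations, abundance):
    """Collapse per-gene abundances into per-function abundances."""
    profile = Counter()
    for gene, function in annotations:
        profile[function] += abundance.get(gene, 0)
    return profile

print(functional_profile(gene_annotations, sample_abundance))
# Counter({'carbohydrate-active enzyme': 200, 'protease': 40, 'gas-producing enzyme': 15})
```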

A handful of projects have already tried to characterize the bacteria that cause bowel disease and obesity, including research by Jeff Gordon at Washington University in St. Louis, who found different compositions of bacteria in obese and lean subjects (for details, see my article on obesity). Then there was the 2006 project by Steven Gill and colleagues at the Institute for Genomic Research in Rockville, Maryland, who threw around some then-hefty numbers when they carried out a metagenomic analysis of the microbes in two people’s intestines. After 2,062 polymerase chain reactions and 78 million base pairs, the team provided only the briefest of glimpses into the genetic underpinnings of the human gut’s microbes.

According to Mullard, these first surveys involved too few individuals and sampled too few microbes, usually from only the gut or the mouth, to provide an adequate description of the microbiome. But things have changed in the past few years. A few million foreign genes no longer sound so daunting in the face of advanced genetic-sequencing methods capable of crunching monumental amounts of data. As with the American Microbiome Project, thanks to these cutting-edge technologies, researchers can assess hundreds of millions of base pairs in just a few hours.

A few million foreign genes no longer sound so daunting in the face of advanced genetic-sequencing methods capable of crunching monumental amounts of data.

An in-depth analysis by MetaHIT researchers of exactly which microbes inhabit the gut and what substances they produce will only enhance the knowledge already derived from Marshall’s work, which holds that chronic bacteria in the gut or elsewhere survive largely because they have evolved mechanisms to block the activity of receptors that would otherwise activate pathways capable of inhibiting their survival. A better understanding of the substances gut bacteria create may help Marshall and other scientists identify other ligands that dysregulate the VDR or other receptors.

Combining data from MetaHIT with data derived from Marshall’s work will also provide an opportunity to better understand exactly what the mass of microbes in our gut is up to. Historically, researchers have understood that a great number of bacteria thrive in the gut. However, in the absence of data showing how pathogens in the gut survive, or how gut bacteria contribute to disease, they have only been able to guess at the role of the gut microbial community.

Such researchers have proven to be optimists. Over the past decades, the vast majority of them have concluded that, if bacteria are present in large numbers in the gut, they must be doing something helpful. That, or gut bacteria have been assumed to be commensal or even mutualistic – obtaining nutrients from the host while doing it no harm, or actively helping it. Yet nature shows that such relationships are not necessarily the rule. Sure, you’ll find mites on certain birds or orchids growing on trees. But in the majority of cases where two species interact, one usually takes advantage of the other. Furthermore, considering what we already know about bacteria, they are almost always guilty of exploiting the resources of their host. So it may be wishful thinking to assume that the bacteria in our guts are largely friendly helpers.

This isn’t to say that there aren’t species of gut bacteria that can provide a benefit to their host. Yet increasing evidence suggests that the vast majority of gut bacteria are actually responsible for causing many bowel diseases previously considered to be of “unknown” cause. Given the large number of different inflammatory bowel diseases and the tremendous number of uncharacterized bacteria inhabiting the gut, it seems logical that the two phenomena are connected. Of course, Marshall’s in silico work, as well as data derived from the MP study site, shows that patients who kill large numbers of gut bacteria end up recovering from a number of bowel diseases, providing a good deal of support for the above hypothesis.

This all invokes the rather controversial question: do humans really need gut bacteria? Patients who spend long periods of time on the MP have killed a great deal of their gut bacteria, yet seem to have GI tracts that function properly. Marshall has conceded that “good” gut bacteria could potentially exist, but as of yet, he has simply seen no evidence of a species that offers humans a benefit.

Then again, as Marshall describes, whether or not a certain species of gut bacteria may be considered “helpful” may depend on a person’s circumstances. It’s widely accepted that some gut bacteria help metabolize carbohydrates, causing people to absorb about 15–20% more of the energy from the carbohydrates they ingest. In a country like the United States, where the majority of people are well-fed, or in many cases over-fed, the presence of such bacteria in the gut might be a distinct disadvantage. People who have access to enough food are often seeking to lose weight and, in such cases, bacteria that glean more energy from carbohydrates would contribute to weight gain. The average American would probably be better off without such species in the gut.

But what about people in developing countries who face food shortages and are often limited to eating just the amount of food they need to survive? Under such circumstances, a bacterial species in the gut that gleans more energy from carbohydrates would be a great advantage, allowing people to acquire more energy from a smaller portion of food. And since even developed countries may face food shortages in the future, such bacteria could someday prove beneficial to a much larger population.
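For a rough sense of scale, the back-of-the-envelope arithmetic below applies the 15–20% figure to an illustrative daily carbohydrate intake; the intake value is assumed for the example, not measured.

```python
# Back-of-the-envelope arithmetic for the energy-harvest point above.
# The intake figure is illustrative, not a measurement.

carb_grams_per_day = 250      # example daily carbohydrate intake
kcal_per_gram_carb = 4        # standard Atwater factor for carbohydrate
extra_harvest = 0.15          # low end of the 15-20% figure cited above

baseline_kcal = carb_grams_per_day * kcal_per_gram_carb
extra_kcal = baseline_kcal * extra_harvest
print(f"{baseline_kcal} kcal from carbohydrate; ~{extra_kcal:.0f} extra kcal/day "
      "if gut microbes boost harvest by 15%")
# 1000 kcal from carbohydrate; ~150 extra kcal/day if gut microbes boost harvest by 15%
```

On these assumed numbers, the difference is on the order of 150 extra kilocalories a day, which is a meaningful surplus for a well-fed person and a meaningful bonus for someone facing scarcity.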

    Scanning electron microscope images of B. thetaiotaomicron, a prominent human gut bacterium, and the intestine.

Yet possibilities like the one discussed above still don’t answer the question of whether humans actually need gut bacteria. Bacteria and humans (or our ape-like ancestors) have evolved in tandem for millions of years. Is pathogens’ ability to inhabit our bodies an evolutionary adaptation that serves to benefit both humans and bacteria, or is it possible that the ability of microbes to persist in the human body is largely an evolutionary victory for bacteria won at our expense? As more and more diseases are linked to bacteria previously considered innocuous, the latter is becoming an increasingly plausible possibility.

Compare the human body to planet Earth. Creatures including human beings have evolved to live on our planet, yet does the Earth need the presence of such animals to survive? Most people would agree that the Earth would manage just fine without human beings. Although a handful of humans may strive to enhance certain aspects of our natural surroundings, the vast majority of mankind is depleting the Earth’s resources, leading to massive problems such as climate change, pollution, and an accumulation of synthetic chemicals in our water and food supply. So if we compare the bacteria that inhabit our guts and bodies to the people who inhabit our planet, it’s plausible that both might be better off without their inhabitants.

Of course, some animals may be seen as beneficial to Earth. The earthworm restores the resiliency of soil, and the honey bee carries pollen from flower to flower. Yet even under these potentially beneficial circumstances, one can still question whether the Earth could maintain a state of homeostasis on its own without such help, or quickly evolve different ways to manage without such aid.

Furthermore, there is no question that any bacterium, whether friend or foe, places a burden on the innate immune system. With trillions of bacteria to keep track of, the innate immune system is constantly at work, prioritizing which bacteria to attack and determining where immune system cells should be located. In fact, researchers estimate that 70 percent of the immune system is located in and around the gut. Imagine if the gut bacterial load were reduced to the point where much of this burden was lifted. The innate immune system would be able to divert much more strength toward killing pathogens in other tissues, as well as new pathogens attempting to enter the body. Of course, as of now, such a scenario would only be possible if a person were to remain on low, pulsed antibiotics for a lifetime. Without the help of antibiotics, it seems reasonable to conclude the innate immune system would be over-burdened by the task of keeping the body bacteria-free.

We may be asked to embrace the reality that gut bacteria are not just “friendly” helpers.

Some may argue that probiotics are beneficial bacteria, yet, as described in this article, an alternate hypothesis about how they provide palliation must also be factored into the picture.

All this means that MetaHIT researchers, MP researchers, and scientists studying gut bacteria in the light of new molecular technology may be facing a paradigm shift in the way gut microbes are perceived. Rather than viewing the majority of them as “friends,” we may unfortunately have to face the fact that many of them are enemies, or at least not necessary for our well-being. It remains unclear whether humans would want to be completely bacteria-free if the option existed, but the idea that a person might be in better health without bacteria is nevertheless intriguing. Or perhaps in the future, humans will be able to pick and choose the bacteria that inhabit their guts, in order to harbor certain species that fit their specific needs.

    Competition!

The fact that the Microbiome Project and MetaHIT plan to generate so much new information on bacteria means that collaboration between the two groups, and with the other smaller groups involved in bacterial sequencing, is important. According to Bork, when all the projects are running at speed, reams of data will be generated worldwide. But because different groups are using different techniques to collect samples, extract DNA and annotate data, the data sets could be difficult to compare.

    Enter the as-yet-unlaunched International Human Microbiome Consortium. Scientists from several international projects, including the Human Microbiome Project and MetaHIT, have been meeting since late 2005 to figure out how to collaborate on a range of issues such as the compatibility of data and which bacteria to sequence for the reference database. The group is already setting up infrastructure and “beginning to address the tough questions,” says Weinstock. But according to Nathan Blow of Nature News, it is too early to say how well the Consortium will foster collaboration. Its official launch, scheduled for this past April, was postponed for six months to allow the NIH and the European Commission to overcome bureaucratic difficulties.

    Another issue being addressed by the Consortium is that of intellectual property. As with other genomic projects, members of the Consortium will be expected to release sequence data into the public domain as soon as they are generated. But according to Blow, this doesn’t necessarily preclude disputes over intellectual property if, for instance, a particular bacterial gene proves to be a useful diagnostic marker for a disease. Another unresolved question is whether a laboratory can have one project that abides by the Consortium’s regulations, and another that doesn’t. “There are grey areas, and I feel that until we have a test case, they will have to be watched very carefully,” says Bhagirath Singh of the Canadian Institute of Health Research, who is helping to develop the Canadian Microbiome Initiative.

Intellectual property and patent issues aside, optimism for the collaboration still runs high, and having a database of bacterial sequences that is available to other research teams and perhaps even the public would be a great step forward in a medical movement that many believe needs a pick-me-up. It’s increasingly important for researchers and doctors to start pulling information out of individual laboratories and individual clinic records so that we may compile it. Patterns and associations can be detected much more easily when large groups of data are gathered simultaneously and made accessible to as many people as possible. The computer open-source movement, which has spread to many other fields, has seen incredible success in areas where research and data are openly shared. Access to open-source databases will almost certainly augment the pace of major medical discoveries – a pace that, MP aside, can often seem to be at a standstill.

Participants from microbiome projects around the world have expressed the desire to join the Consortium. Like bulls ready to race down the streets of Pamplona, these research teams will be competing with each other as the effort to sequence the microbiome moves forward. After all, each group wants credit for identifying as many new species of bacteria as possible.

    “The intention is to work together,” says George Weinstock, a geneticist at Washington University in St Louis, who is helping to organize the Human Microbiome Project, “but for the moment it is more about working in parallel until we can understand how to work together”. Apparently some European researchers feel at a disadvantage because MetaHIT’s operating budget is only a quarter the size of the Human Microbiome Project’s. “This is giving a huge advantage to the Americans,” Guarner says. “They are going to be quicker and they have more equipment.”

    Then again, other members of MetaHIT feel that they actually have an edge because money for their project has already been distributed and data collection is under way, whereas the Human Microbiome Project will not announce many of its funding decisions until later this year. “We have an advantage already, we have a show on the road,” says Willem de Vos, a microbiologist at Wageningen University in the Netherlands and a member of MetaHIT.

    Color-enhanced scanning electron micrograph showing Salmonella typhimurium (red) invading cultured human cells

    For example, in Denmark, a team led by Oluf Pedersen at the Steno Diabetes Centre in Copenhagen is collecting fecal samples from 120 obese volunteers and 60 controls to tease out specific microbial genes that might contribute to obesity. A similar-sized study in Spain, led by Guarner, will compare the microbiotas of patients with inflammatory bowel disease with those of genetically matched controls and examine the effect of drugs.

    Others feel that the sharing of data will simply allow the most ingenious teams to get ahead. “If it is an international consortium, it doesn’t matter where the data are generated,” Bork adds. “For example, we can be the pirates here, sitting at the end in Europe, and use American data to make the discoveries.”

As Blow describes, given the number of separate projects, all at such an early stage, it’s almost impossible to make out where the starting line lies or who exactly is edging ahead. But for many of us, the potentially intense competition among microbiome researchers is a welcome change from the increasing number of “consensus conferences” in which researchers with the same opinions fail to consider alternate lines of thinking and generate novel hypotheses. Competition has the potential to speed up output, allowing for a medical community that may stall less and deliver more. Furthermore, when faced with talented competitors, researchers are more likely to consider new hypotheses and break from the norm in order to gain an edge over an opposing team.

Then again, the sheer scale of the microbiome – trillions of bacterial cells representing thousands of species – means that at the current moment there is plenty of work for each research team and multiple ways for every team to excel. With trillions of microbes to sift through, most researchers feel that there is more than enough of the microbiome to go around. “There’s so much to learn, so much we don’t know and so many adventures,” Gordon says. “There’s enough room for everyone.”

    A core microbiome?

How many different bacterial species the microbiome projects will uncover remains anyone’s guess. But according to Blow, many of the researchers involved share one pressing question.

    Is there a core human microbiome?

    “One of the things that is obsessing microbiologists is: ‘What is the size of the core microbiome?,’” says Jeremy Nicholson, a biological chemist who studies microbes and metabolism at Imperial College in London.

By “core microbiome,” researchers like Nicholson are referring to a hypothesized set of bacteria that every person harbors. For example, if some bacteria are shown to have a beneficial effect on human health, then perhaps everybody needs a certain number of these organisms to survive. Then again, if all people harbor certain bacterial species, such pathogens may be seen in a different light. If a “core microbiome” is established, then perhaps the bacteria that comprise it contribute to a process that happens to every human being. That process is aging.

As Marshall and colleagues discussed at the recent “Understanding Aging: Biomedical and Bioengineering Approaches” conference at UCLA, it’s entirely possible that the bacteria we harbor are able to infect our stem cells – cells found in all adult tissues that act as a repair system for the body by replenishing other, more specialized cells. But as people age, stem cells often lose their ability to repair and heal. If bacteria infect the stem cells, it has been hypothesized that they may expedite the rate at which these cells lose their resiliency, thus accelerating the aging process.

    A stem cell derived from the skin.

Remember the above discussion about how certain microbes can allow people to glean 15% more energy from the carbohydrates they consume? While beneficial under some circumstances, Marshall warns that if such a bacterial species can infect nearby stem cells, it will contribute to the aging processes in the gut.

Several studies support the possibility that chronic bacteria can infect stem cells. A team of German researchers recently showed that patients who had suffered a heart attack (an event most likely caused by chronic bacterial forms in the heart and blood vessels) had stem cells that were only about half as effective at repairing heart tissue as stem cells transplanted from healthy 20-year-old males. This supports the view that infected stem cells lack many of the healing properties maintained by their healthy counterparts.

    Dr Emil Wirosko, one of the foremost experts on L-form bacteria, died before he could publish on the subject. But according to his colleagues, Wirosko believed L-form bacteria are able to infect stem cells.

Then there are telomeres – DNA sequences at the ends of chromosomes that are gradually lost as cells replicate. As telomeres shorten, a cell can no longer divide and becomes inactive or dies – meaning that the length of a person’s telomeres plays a role in how quickly they age. The fact that people with heart disease, Alzheimer’s, cancer, and other illnesses have been shown to lose telomere sequences at a faster rate than their healthy counterparts suggests that the bacteria involved in causing such diseases may also have an effect on telomere length.[2] As Marshall describes, if pathogens do directly alter our DNA, then the weakened DNA at the ends of telomeres provides some of the easiest genetic material for them to mutate.

    The weakened DNA at the ends of telomeres might provide some of the easiest genetic material for bacteria to mutate.

    Once again then, the question is posed: What might occur if humans were to become largely bacteria-free? Might they age at a slower rate? The possibility is tantalizing. Data from people on the Marshall Protocol, who are gradually reducing their bacterial loads, will prove to be increasingly insightful in this regard as time wears on.

As previously discussed, the sequencing of the human genome alone does not allow for the Gattaca-like world described earlier, in which humans could be identified and catalogued by their unique DNA sequences. Ironically, the Human Microbiome Project and Marshall’s work might make that world more of a reality. If it turns out that the bacteria we harbor are a source of disease and a burden on the innate immune system, then the population will seek (like those people on the MP) to eliminate at least the majority of them.

If sequencing procedures then no longer detect bacterial genes along with human genes, it may be possible to sequence a more fully human genome. One must still factor in DNA mutated by other environmental factors or by chance, but nevertheless, we would be closer than ever to answering the question “Who am I?”

Will we ever enter a Gattaca-type world?

    Perhaps then, after people have eliminated much of their bacterial load, genetic information will prove to be a more valuable human fingerprint, ethical issues aside.

In the meantime, an optimal environment for bettering the health of humankind will be one in which controversial hypotheses such as those described above are at least put on the table, and new ideas that challenge current paradigms are embraced rather than rejected. Under such conditions, scientists can fully live out Mullard’s advice to “celebrate their quest to map, catalogue, and understand the human microbiome for the inspiring saga it is.”

    REFERENCES

1. Marshall, T. G. (2006d). Molecular mechanisms driving the current epidemic of chronic disease.
2. Cawthon, R.M., Smith, K.R., O’Brien, E., Sivatchenko, A., & Kerber, R.A. (2003). Association between telomere length in blood and mortality in people aged 60 years or older. Lancet, 361(9355), 393-5.
  • Comments Off on The bacteria boom – implications of the Human Microbiome Project
  • Filed under: featured articles, microbiome
  • Understanding Biofilms

    As humans, our environment consistently exposes us to a variety of dangers. Tornadoes, lightning, flooding and hurricanes can all hamper our survival. Not to mention the fact that most of us can encounter swerving cars or ill-intentioned people at any given moment.

    Biofilms form when bacteria adhere to surfaces in aqueous environments and begin to excrete a slimy, glue-like substance that can anchor them to all kinds of material

Thousands of years ago, humans realized that they could better survive a dangerous world if they formed into communities, particularly communities consisting of people with different talents. They realized that a community is far more likely to survive through division of labor – one person makes food, another gathers resources, still another protects the community against invaders. Working together in this manner requires communication and cooperation.

    Inhabitants of a community live in close proximity and create various forms of shelter in order to protect themselves from external threats. We build houses that protect our families and larger buildings that protect the entire community. Grouping together inside places of shelter is a logical way to enhance survival.

    With the above in mind, it should come as no surprise that the pathogens we harbor are seldom found as single entities. Although the pathogens that cause acute infection are generally free-floating bacteria – also referred to as planktonic bacteria – those chronic bacterial forms that stick around for decades long ago evolved ways to join together into communities. Why? Because by doing so, they are better able to combat the cells of our immune system bent upon destroying them.

    It turns out that a vast number of the pathogens we harbor are grouped into communities called biofilms. In an article titled “Bacterial Biofilms: A Common Cause of Persistent Infections,” JW Costerton at the Center for Biofilm Engineering in Montana defines a bacterial biofilm as “a structured community of bacterial cells enclosed in a self-produced polymeric matrix and adherent to an inert or living surface.”[1] In layman’s terms, that means that bacteria can join together on essentially any surface and start to form a protective matrix around their group. The matrix is made of polymers – substances composed of molecules with repeating structural units that are connected by chemical bonds.

    According to the Center for Biofilm Engineering at Montana State University, biofilms form when bacteria adhere to surfaces in aqueous environments and begin to excrete a slimy, glue-like substance that can anchor them to all kinds of material – such as metals, plastics, soil particles, medical implant materials and, most significantly, human or animal tissue. The first bacterial colonists to adhere to a surface initially do so by inducing weak, reversible bonds called van der Waals forces. If the colonists are not immediately separated from the surface, they can anchor themselves more permanently using cell adhesion molecules, proteins on their surfaces that bind other cells in a process called cell adhesion.

    A biofilm in the gut.

    These bacterial pioneers facilitate the arrival of other pathogens by providing more diverse adhesion sites. They also begin to build the matrix that holds the biofilm together. If there are species that are unable to attach to a surface on their own, they are often able to anchor themselves to the matrix or directly to earlier colonists.

    During colonization, things start to get interesting. Multiple studies have shown that during the time a biofilm is being created, the pathogens inside it can communicate with each other thanks to a phenomenon called quorum sensing. Although the mechanisms behind quorum sensing are not fully understood, the phenomenon allows a single-celled bacterium to perceive how many other bacteria are in close proximity. If a bacterium can sense that it is surrounded by a dense population of other pathogens, it is more inclined to join them and contribute to the formation of a biofilm.

    Bacteria that engage in quorum sensing communicate their presence by emitting chemical messages that their fellow infectious agents are able to recognize. When the messages grow strong enough, the bacteria respond en masse, behaving as a group. Quorum sensing can occur within a single bacterial species as well as between diverse species, and can regulate a host of different processes, essentially serving as a simple communication network. A variety of different molecules can be used as signals.

“Disease-causing bacteria talk to each other with a chemical vocabulary,” says Doug Higgins of Princeton University. A graduate student in the lab of Princeton University microbiologist Dr. Bonnie Bassler, Higgins was part of a research effort which shed light on how the bacteria that cause cholera form biofilms and communicate via quorum sensing.[2]

    “Forming a biofilm is one of the crucial steps in cholera’s progression,” states Bassler. “They [bacteria] cover themselves in a sort of goop that’s a shield against antibiotics, allowing them to grow rapidly. When they sense there are enough of them, they try to leave the body.”

    Although cholera bacteria use the intestines as a breeding ground, after enough biofilms have formed, planktonic bacteria inside the biofilm seek to leave the body in order to infect a new host. It didn’t take long for Bassler and team to realize that the bacteria inside cholera biofilms must signal each other in order to communicate that it’s time for the colony to stop reproducing and focus instead on leaving the body.

    “We generically understood that bacteria talk to each other with quorum sensing, but we didn’t know the specific chemical words that cholera uses,” Bassler said.

Then Higgins isolated CAI-1 – a chemical which occurs naturally in cholera bacteria. Another graduate student figured out how to make the molecule in the laboratory. By moderating the level of CAI-1 in contact with cholera bacteria, Higgins was able to chemically control cholera’s behavior in lab tests. His team eventually confirmed that when CAI-1 is absent, cholera bacteria attach in biofilms to their current host. But when the bacteria detect enough of the chemical, they stop making biofilms and releasing toxins, perceiving that it is time to leave the body instead. Thus, CAI-1 may very well be the single molecule that allows the bacteria inside a cholera biofilm to communicate. Although it is likely that the bacteria in a cholera biofilm also communicate with other signals besides CAI-1, the study is a good example of the fact that signaling molecules serve a key role in determining the state of a biofilm.
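The switch-like behavior described here can be captured in a deliberately simple model: each cell secretes a signal, and once the accumulated concentration crosses a threshold the population flips from biofilm-building to dispersal. The rates and threshold below are invented, and the sketch is a caricature of quorum sensing, not the Bassler lab’s actual kinetics.

```python
# Toy quorum-sensing model: cells secrete a signal (labelled "CAI-1" after the
# cholera example); when the signal crosses a threshold, the population
# switches from biofilm mode to dispersal. All numbers are invented.

def simulate(initial_cells, steps, growth_rate=0.3, secretion=1.0,
             decay=0.1, threshold=400.0):
    cells, signal = float(initial_cells), 0.0
    for t in range(steps):
        signal += cells * secretion          # every cell adds signal
        signal *= (1.0 - decay)              # signal degrades / diffuses away
        mode = "dispersal" if signal >= threshold else "biofilm"
        if mode == "biofilm":
            cells *= (1.0 + growth_rate)     # colony keeps growing in the film
        print(f"t={t:2d} cells={cells:8.1f} signal={signal:7.1f} mode={mode}")

simulate(initial_cells=10, steps=12)
# With these parameters the population grows in biofilm mode for about eleven
# steps, then the accumulated signal crosses the threshold and mode flips.
```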

    Sessile cells in a biofilm “talk” to each other via quorum sensing to build microcolonies and to keep water channels open.

Similarly, researchers at the University of Iowa (several of whom are now at the University of Washington) have spent the last decade identifying the molecules that allow the bacterial species P. aeruginosa to form biofilms in the lungs of patients with cystic fibrosis.[3] Although the P. aeruginosa isolated from the lungs of patients with cystic fibrosis looks like a biofilm and acts like a biofilm, up until recently there were no objective tests available to confirm that the bacterial species did indeed form biofilms in the lungs of patients with the disease, nor was there a way to tell what proportion of P. aeruginosa in the lungs was actually in biofilm mode.

“We needed a way to show that the P. aeruginosa in cystic fibrosis lungs was communicating like a biofilm. That could tell us about the P. aeruginosa lifestyle,” said Pradeep Singh, M.D., a lead author on the study who is now at the University of Washington.

    Singh and his colleagues finally discovered that P. aeruginosa uses one of two particular quorum-sensing molecules to initiate the formation of biofilms. In November 1999, his research team screened the entire bacterial genome, identifying 39 genes that are strongly controlled by the quorum-sensing system.

In a 2000 study published in Nature, Singh and colleagues developed a sensitive test which shows that P. aeruginosa from cystic fibrosis lungs produces the telltale quorum-sensing molecules that are the signals for biofilm formation.[3]

It turns out that P. aeruginosa secretes two signaling molecules, one long and one short. Using the new test, the team was able to show that planktonic forms of P. aeruginosa produce more of the long signaling molecule. In contrast, when they tested the P. aeruginosa strains isolated from the lungs of patients with cystic fibrosis (which were in biofilm form), all of the strains produced the signaling molecules in the opposite ratio – more short than long.

Interestingly, when the biofilm strains of P. aeruginosa were separated in broth into individual bacterial forms, they reverted to producing more long signal molecules than short ones. Does this mean that the ratio of the two signaling molecules can indicate whether bacteria remain as planktonic forms or develop into biofilms?

    To find out, the team took the bacteria from the broth and made them grow as a biofilm again. Sure enough, those strains of bacteria in biofilm form produced more short signal molecules than long.

    “The fact that the P. aeruginosa in [the lungs of cystic fibrosis patients] is making the signals in the ratios that we see tells us that there is a biofilm and that most of the P. aeruginosa in the lung is in the biofilm state,” states Greenberg, another member of the research team. He believes that the findings allow for a clear biochemical definition of whether bacteria are in a biofilm. Techniques similar to those used by his group will likely be used to determine the properties of other biofilm signaling molecules.
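In spirit, this biochemical read-out amounts to a simple ratio test, sketched below: if the short-chain signal dominates the long-chain signal, call the population a biofilm. The sample names and measurements are illustrative numbers, not data from the study.

```python
# Rough sketch of a ratio-based lifestyle call from quorum-sensing signal
# measurements. The sample values below are invented for illustration.

def call_lifestyle(short_signal, long_signal):
    """Return 'biofilm' when the short-chain signal dominates, else 'planktonic'."""
    if long_signal <= 0:
        return "biofilm" if short_signal > 0 else "indeterminate"
    ratio = short_signal / long_signal
    return "biofilm" if ratio > 1.0 else "planktonic"

samples = {
    "lab broth culture":      {"short_signal": 2.0, "long_signal": 7.5},
    "cystic fibrosis sputum": {"short_signal": 9.0, "long_signal": 3.0},
}

for name, s in samples.items():
    print(f"{name}: {call_lifestyle(**s)}")
# lab broth culture: planktonic
# cystic fibrosis sputum: biofilm
```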

    Development

Once colonization has begun, the biofilm grows through a combination of cell division and recruitment. The final stage of biofilm formation, known as development, is the stage in which the biofilm is established and changes only in shape and size. This development allows the cells inside the biofilm to become more resistant to antibiotics administered in a standard fashion. In fact, depending on the organism, the type of antimicrobial, and the experimental system, biofilm bacteria can be up to a thousand times more resistant to antimicrobial stress than free-swimming bacteria of the same species.

    Biofilms grow slowly, in diverse locations, and biofilm infections are often slow to produce overt symptoms. However, biofilm bacteria can move in numerous ways that allow them to easily infect new tissues. Biofilms may move collectively, by rippling or rolling across the surface, or by detaching in clumps. Sometimes, in a dispersal strategy referred to as “swarming/seeding”, a biofilm colony differentiates to form an outer “wall” of stationary bacteria, while the inner region of the biofilm “liquefies”, allowing planktonic cells to “swim” out of the biofilm and leave behind a hollow mound.[4]

    Biofilm bacteria can move in numerous ways: Collectively, by rippling or rolling across the surface, or by detaching in clumps. Individually, through a “swarming and seeding” dispersal.

    Research on the molecular and genetic basis of biofilm development has made it clear that when cells switch from planktonic to community mode, they also undergo a shift in behavior that involves alterations in the activity of numerous genes. There is evidence that specific genes must be transcribed during the attachment phase of biofilm development. In many cases, the activation of these genes is required for synthesis of the extracellular matrix that protects the pathogens inside.

    According to Costerton, the genes that allow a biofilm to develop are activated after enough cells attach to a solid surface. “Thus, it appears that attachment itself is what stimulates synthesis of the extracellular matrix in which the sessile bacteria are embedded,” states the molecular biologist. “This notion– that bacteria have a sense of touch that enables detection of a surface and the expression of specific genes– is in itself an exciting area of research…”[1]

Certain characteristics may also facilitate the ability of some bacteria to form biofilms. Scientists at the Department of Microbiology and Molecular Genetics, Harvard Medical School, performed a study in which they created a “mutant” form of the bacterial species P. aeruginosa (PA).[5] The mutants lacked genes that code for hair-like appendages called pili. Interestingly, the mutants were unable to form biofilms. Since the pili of PA are involved in a type of surface-associated motility called twitching, the team hypothesized that this twitching might be required for the aggregation of cells into the microcolonies that subsequently form a stable biofilm.

    Once a biofilm has officially formed, it often contains channels in which nutrients can circulate. Cells in different regions of a biofilm also exhibit different patterns of gene expression. Because biofilms often develop their own metabolism, they are sometimes compared to the tissues of higher organisms, in which closely packed cells work together and create a network in which minerals can flow.

    “There is a perception that single-celled organisms are asocial, but that is misguided,” said Andre Levchenko, assistant professor of biomedical engineering in Johns Hopkins University’s Whiting School of Engineering and an affiliate of the University’s Institute for NanoBioTechnology. “When bacteria are under stress—which is the story of their lives—they team up and form this collective called a biofilm. If you look at naturally occurring biofilms, they have very complicated architecture. They are like cities with channels for nutrients to go in and waste to go out.”[6]

    The biofilm life cycle in three steps: attachment, growth of colonies (development), and periodic detachment of planktonic cells.

Understanding how such cooperation among pathogens evolves and is maintained represents one of evolutionary biology’s thorniest problems. This stems from the reality that, in nature, freeloading cheats inevitably evolve to exploit any cooperative group that doesn’t defend itself, leading to the breakdown of cooperation. So what causes the bacteria in a biofilm to contribute to and share resources rather than steal them? Recently, Dr. Michael Brockhurst of the University of Liverpool and colleagues at the Université Montpellier and the University of Oxford conducted several studies in an effort to understand why the bacteria in a biofilm cooperate and share resources rather than hoard them.[7]

    The team took a closer look at P. fluorescens biofilms, which are formed when individual cells overproduce a polymer that sticks the cells together, allowing the colonization of liquid surfaces. While production of the polymer is metabolically costly to individual cells, the biofilm group benefits from the increased access to oxygen that surface colonization provides. Yet, evolutionarily speaking, such a setup allows possible “cheaters” to enter the biofilm. Such cheats can take advantage of the protective matrix while failing to contribute energy to actually building the matrix. If too many “cheaters” enter a biofilm, it will weaken and eventually break apart.

After several years of study, Brockhurst and team realized that the short-term evolution of diversity within a biofilm is a major factor in how successfully its members cooperate. The team found that once inside a biofilm, P. fluorescens differentiates into various forms, each of which uses different nutrient resources. The fact that these “diverse cooperators” don’t all compete for the same chemicals and nutrients substantially reduces competition for resources within the biofilm.

    When the team manipulated diversity within experimental biofilms, they found that diverse biofilms contained fewer “cheaters” and produced larger groups than non-diverse biofilms.
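The underlying tension can be illustrated with a generic public-goods toy model (not Brockhurst’s actual experimental system): cooperators pay a cost to build the shared matrix, cheaters do not, and everyone’s payoff falls as cheaters spread. All payoff numbers below are invented.

```python
# Toy public-goods model of cooperation in a biofilm: matrix built by
# cooperators benefits the whole group, but only cooperators pay the cost.

def biofilm_round(cooperators, cheaters, benefit_per_unit=3.0, cost=1.0):
    """One generation: return (cooperator fitness, cheater fitness)."""
    group = cooperators + cheaters
    matrix = cooperators * 1.0                 # only cooperators secrete polymer
    share = benefit_per_unit * matrix / group  # benefit split across the group
    return share - cost, share                 # cooperators also pay the cost

for cheater_fraction in (0.0, 0.3, 0.6, 0.9):
    coop = 100 * (1 - cheater_fraction)
    cheat = 100 * cheater_fraction
    cf, chf = biofilm_round(coop, cheat)
    print(f"cheaters {cheater_fraction:.0%}: cooperator fitness {cf:.2f}, "
          f"cheater fitness {chf:.2f}")
```

Run as written, the numbers show the familiar bind: within any mixed group cheaters always do better than cooperators, yet the more cheaters there are, the worse everyone fares, which is why mechanisms such as the diversification Brockhurst’s team observed matter.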

    Levchenko and team used this device to observe bacteria growing in cramped conditions.

Similarly, researchers from Johns Hopkins; Virginia Tech; the University of California, San Diego; and Lund University in Sweden recently released the results of a study which found that once bacteria cooperate and form a biofilm, packing tightly together further enhances their survival.[6]

    The team created a new device in order to observe the behavior of E. coli bacteria forced to grow in the cramped conditions. The device, which allows scientists to use extremely small volumes of cells in solution, contains a series of tiny chambers of various shapes and sizes that keep the bacteria uniformly suspended in a culture medium.

    Not surprisingly, the cramped bacteria in the device began to form a biofilm. The team captured the development of the biofilm on video, and were able to observe the gradual self-organization and eventual construction of bacterial biofilms over a 24-hour period.

    First, Andre Levchenko and Hojung Cho of Johns Hopkins recorded the behavior of single layers of E. coli cells using real-time microscopy. “We were surprised to find that cells growing in chambers of all sorts of shapes gradually organized themselves into highly regular structures,” Levchenko said.

    Dr. Levchenko of Johns Hopkins and Hojung Cho, a biomedical engineering doctoral student

    Further observations using microscopy revealed that the longer the packed cell population resided in the chambers, the more ordered the biofilm structure became. As the cells in the biofilm became more ordered and tightly packed, the biofilm became harder and harder to penetrate.

Levchenko also noted that rod-shaped E. coli that were too short or too long typically did not organize well into the dense, circular main hub of the biofilm. Instead, bacteria of odd shapes, or highly disordered groups of cells, were found on the edges of the biofilm, where they formed sharp corners.

    Nodes of relapsing infection?

    Researchers often note that, once biofilms are established, planktonic bacteria may periodically leave the biofilm on their own. When they do, they can rapidly multiply and disperse.

    According to Costerton, there is a natural pattern of programmed detachment of planktonic cells from biofilms. This means that biofilms can act as what Costerton refers to as “niduses” of acute infection. Because the bacteria in a biofilm are protected by a matrix, the host immune system is less likely to mount a response to their presence.[1]

    But if planktonic bacteria are periodically released from the biofilms, each time single bacterial forms enter the tissues, the immune system suddenly becomes aware of their presence. It may proceed to mount an inflammatory response that leads to heightened symptoms. Thus, the periodic release of planktonic bacteria from some biofilms may be what causes many chronic relapsing infections.

    Planktonic bacteria are periodically released from a biofilm

    As Matthew R. Parsek of Northwestern University describes in a 2003 paper in the Annual Review of Microbiology, any pathogen that survives in a chronic form benefits by keeping the host alive.[8] After all, if a chronic bacterial form simply kills its host, it will no longer have a place to live. So according to Parsek, chronic infection often results in a “disease stalemate” where bacteria of moderate virulence are somewhat contained by the defenses of the host. The infectious agents never actually kill the host, but the host is never able to fully kill the invading pathogens either.

    Parsek believes that the optimal way for bacteria to survive under such circumstances is in a biofilm, stating that “Increasing evidence suggests that the biofilm mode of growth may play a key role in both of these adaptations. Biofilm growth increases the resistance of bacteria to killing and may make organisms less conspicuous to the immune system… ultimately this moderation of virulence may serve the bacteria’s interest by increasing the longevity of the host.”

    The acceptance of biofilms as infectious entities

    Anton van Leeuwenhoek.

Perhaps because many biofilms are sufficiently thick to be visible to the naked eye, these microbial communities were among the first to be studied by early microbiologists. Anton van Leeuwenhoek scraped the plaque biofilm from his teeth and observed what he described as the “animalcules” inside it under his primitive microscope. However, according to Costerton and team at the Center for Biofilm Engineering at Montana State University, it was not until the 1970s that scientists began to appreciate that bacteria in the biofilm mode of existence constitute a major component of the bacterial biomass in most environments. Then, it was not until the 1980s and 1990s that scientists truly began to understand how elaborately organized a bacterial biofilm community can be.[1]

As Robert Kolter, professor of microbiology and molecular genetics at Harvard Medical School and one of the first scientists to study how biofilms develop, states, “At first, however, studying biofilms was a radical departure from previous work.”

Like most microbial geneticists, Kolter had been trained in the tradition dating back to Robert Koch and Louis Pasteur, namely that bacteriology is best conducted by studying pure strains of planktonic bacteria. “While this was a tremendous advance for modern microbiology, it also distracted microbiologists from a more organismic view of bacteria,” Kolter adds. “Certainly we felt that pure, planktonic cultures were the only way to work. Yet in nature bacteria don’t live like that,” he says. “In fact, most of them occur in mixed, surface-dwelling communities.”

    Although research on biofilms has surged over the past few decades, the majority of biofilm research to date has focused on external biofilms, or those that form on various surfaces in our natural environment.

    Over the past years, as scientists developed better tools to analyze external biofilms, they quickly discovered that biofilms can cause a wide range of problems in industrial environments. For example, biofilms can develop on the interiors of pipes, which can lead to clogging and corrosion. Biofilms on floors and counters can make sanitation difficult in food preparation areas.

Since biofilms have the ability to clog pipes, foul watersheds and storage areas, and contaminate food products, large companies with facilities that are negatively impacted by their presence have naturally taken an interest in supporting biofilm research, particularly research that specifies how biofilms can be eliminated.

    This means that many recent advances in biofilm detection have resulted from collaborations between microbial ecologists, environmental engineers, and mathematicians. This research has generated new analytical tools that help scientists identify biofilms.

    Biofilm in a swamp gas reactor.

For example, the Canadian company FAS International Ltd. has just created an endoluminal brush, which will be launched this spring. Physicians can use the brush to obtain samples from the interior of catheters. Samples taken from catheters can be sent to a lab, where researchers determine if biofilms are present in the sample. If biofilms are detected, the catheter is immediately replaced, since the insertion of catheters with biofilms can cause the patient to suffer from numerous infections, some of which are potentially life-threatening.

Scientists now realize that biofilms are not just composed of bacteria. Nearly every species of microorganism – including viruses, fungi, and Archaea – has mechanisms by which it can adhere to surfaces and to other cells. Furthermore, it is now understood that biofilms are extremely diverse. For example, upward of 300 different species of bacteria can inhabit the biofilms that form dental plaque.[9]

    Furthermore, biofilms have been found literally everywhere in nature, to the point where any mainstream microbiologist would acknowledge that their presence is ubiquitous. They can be found on rocks and pebbles at the bottom of most streams or rivers and often form on the surface of stagnant pools of water. In fact, biofilms are important components of food chains in rivers and streams and are grazed upon by the aquatic invertebrates upon which many fish feed. Biofilms even grow in the hot, acidic pools at Yellowstone National Park and on glaciers in Antarctica.

    Biofilm in acidic pools at Yellowstone National Park.

It is also now understood that the biofilm mode of existence has been around for billions of years. For example, filamentous biofilms have been identified in the 3.2-billion-year-old deep-sea hydrothermal rocks of the Pilbara Craton, Australia. According to a 2004 article in Nature Reviews Microbiology, “Biofilm formation appears early in the fossil record (approximately 3.25 billion years ago) and is common throughout a diverse range of organisms in both the Archaea and Bacteria lineages. It is evident that biofilm formation is an ancient and integral component of the prokaryotic life cycle, and is a key factor for survival in diverse environments.”[10]

    Biofilms and disease

    The fact that external biofilms are ubiquitous raises the question – if biofilms can form on essentially every surface in our external environments, can they do the same inside the human body? The answer seems to be yes, and over the past few years, research on internal biofilms has finally started to pick up pace. After all, it’s easy for biofilm researchers to see that the human body, with its wide range of moist surfaces and mucosal tissue, is an excellent place for biofilms to thrive. Not to mention the fact that those bacteria which join a biofilm have a significantly greater chance of evading the battery of immune system cells that more easily attack planktonic forms.

    Many would argue that research on internal biofilms has been largely neglected, despite the fact that bacterial biofilms seem to have great potential for causing human disease.

Common sites of biofilm infection. Once biofilms reach the bloodstream, they can spread to any moist surface of the human body.

Paul Stoodley of the Center for Biofilm Engineering at Montana State University attributes much of the lag in studying biofilms to the difficulties of working with heterogeneous biofilms compared with homogeneous planktonic populations. In a 2004 paper in Nature Reviews, the molecular biologist describes many reasons why biofilms are extremely difficult to culture, such as the fact that the diffusion of liquid through a biofilm and the fluid forces acting on it must be carefully calculated if it is to be cultured correctly. According to Stoodley, the need to master such difficult laboratory techniques has deterred many scientists from attempting to work with biofilms.[10]

    Also, since much of the technology needed to detect internal biofilms was created at the same time as the sequencing of the human genome, interest in biofilm bacteria, and the research grants that would accompany such interest, have been largely diverted to projects with a decidedly genetic focus. However, since genetic research has failed to uncover the cause of any of the common chronic diseases, biofilms are finally – just over the past few years – being studied more intensely, and being given the credit they deserve as serious infectious entities, capable of causing a wide array of chronic illnesses.

    In just a short period of time, researchers studying internal biofilms have already pegged them as the cause of numerous chronic infections and diseases, and the list of illnesses attributed to these bacterial colonies continues to grow rapidly.

    According to a recent public statement from the National Institutes of Health, more than 65% of all microbial infections are caused by biofilms. This number might seem high, but according to Kim Lewis of the Department of Chemical and Biological Engineering at Tufts University, “If one recalls that such common infections as urinary tract infections (caused by E. coli and other pathogens), catheter infections (caused by Staphylococcus aureus and other gram-positive pathogens), child middle-ear infections (caused by Haemophilus influenzae, for example), common dental plaque formation, and gingivitis, all of which are caused by biofilms, are hard to treat or frequently relapsing, this figure appears realistic.”[11]

    Hundreds of microbial species form biofilms that colonize the human mouth, causing tooth decay and gum disease.

    As Lewis mentions, perhaps the most well-studied biofilms are those that make up what is commonly referred to as dental plaque. “Plaque is a biofilm on the surfaces of the teeth,” states Parsek. “This accumulation of microorganisms subject the teeth and gingival tissues to high concentrations of bacterial metabolites which results in dental disease.”[12]

    It has also recently been shown that biofilms are present on the removed tissue of 80% of patients undergoing surgery for chronic sinusitis. According to Parsek, biofilms may also cause osteomyelitis, a disease in which the bones and bone marrow become infected. This is supported by the fact that microscopy studies have shown biofilm formation on infected bone surfaces from humans and experimental animal models. Parsek also implicates biofilms in chronic prostatitis since microscopy studies have also documented biofilms on the surface of the prostatic duct. Microbes that colonize vaginal tissue and tampon fibers can also form into biofilms, causing inflammation and disease such as Toxic Shock Syndrome.

    Biofilms also drive the formation of infection-related kidney stones. These stones cause disease by obstructing urine flow and by producing inflammation and recurrent infection that can lead to kidney failure. Approximately 15%–20% of kidney stones occur in the setting of urinary tract infection. According to Parsek, these stones are produced by the interplay between infecting bacteria and mineral substrates derived from the urine. This interaction results in a complex biofilm composed of bacteria, bacterial exoproducts, and mineralized stone material.

    Microbes that colonize vaginal tissue and tampon fibers can become pathogenic, causing inflammation and disease such as Toxic Shock Syndrome.

    Perhaps the first hint of the role of bacteria in these stones came in 1938 when Hellstrom examined stones passed by his patients and found bacteria embedded deep inside them. Microscopic analysis of stones removed from infected patients has revealed features that characterize biofilm growth. For one thing, bacteria on the surface and inside the stones are organized in microcolonies and surrounded by a matrix composed of crystallized (struvite) minerals.

    Then there’s endocarditis, a disease that involves inflammation of the inner layers of the heart. The primary infectious lesion in endocarditis is a complex biofilm composed of both bacterial and host components that is located on a cardiac valve. This biofilm, known as a vegetation, causes disease by three basic mechanisms. First, the vegetation physically disrupts valve function, causing leakage when the valve is closed and inducing turbulence and diminished flow when the valve is open. Second, the vegetation provides a source for near-continuous infection of the bloodstream that persists even during antibiotic treatment. This causes recurrent fever, chronic systemic inflammation, and other infections. Third, pieces of the infected vegetation can break off and be carried to a terminal point in the circulation where they block the flow of blood (a process known as embolization). The brain, kidney, and extremities are particularly vulnerable to the effects of embolization.

    A variety of pathogenic biofilms are also commonly found on medical devices such as joint prostheses and heart valves. According to Parsek, electron microscopy of the surfaces of medical devices that have been foci of device-related infections shows the presence of large numbers of slime-encased bacteria. Tissues taken from non-device-related chronic infections also show the presence of biofilm bacteria surrounded by an exopolysaccharide matrix. These biofilm infections may be caused by a single species or by a mixture of species of bacteria or fungi.

    According to Dr. Patel of the Mayo Clinic, individuals with prosthetic joints are often oblivious to the fact that their prosthetic joints harbor biofilm infections.[13]

    Cells of Staphylococcus epidermidis causing devastating disease as they grow on the cuff of a mechanical heart valve.

    “When people think of infection, they may think of fever or pus coming out of a wound,” explains Dr. Patel. “However, this is not the case with prosthetic joint infection. Patients will often experience pain, but not other symptoms usually associated with infection. Often what happens is that the bacteria that cause infection on prosthetic joints are the same as bacteria that live harmlessly on our skin. However, on a prosthetic joint they can stick, grow and cause problems over the long term. Many of these bacteria would not infect the joint were it not for the prosthesis.”

    Biofilm formation also appears to be central to leptospirosis, a serious but neglected emerging disease that infects humans through contaminated water. New research published in the May 2008 issue of the journal Microbiology shows for the first time how the bacteria that cause the disease survive in the environment.

    Leptospirosis is a major public health problem in southeast Asia and South America, with over 500,000 severe cases every year. Between 5% and 20% of these cases are fatal. Rats and other mammals carry the disease-causing pathogen Leptospira interrogans in their kidneys. When they urinate, they contaminate surface water with the bacteria, which can survive in the environment for long periods.

    “This led us to see if the bacteria build a protective casing around themselves for protection,” said Professor Mathieu Picardeau from the Institut Pasteur in Paris, France. [14]

    Previously, scientists believed the bacteria were planktonic. But Professor Picardeau and his team have shown that L. interrogans can make biofilms, which could be one of the main factors controlling survival and disease transmission. “90% of the species of Leptospira we tested could form biofilms. It takes L. interrogans an average of 20 days to make a biofilm,” says Picardeau.

    Biofilms have also been implicated in a wide array of veterinary diseases. For example, researchers at the Virginia-Maryland Regional College of Veterinary Medicine at Virginia Tech were just awarded a grant from the United States Department of Agriculture to study the role biofilms play in the development of Bovine Respiratory Disease Complex (BRDC). If biofilms play a role in bovine respiratory disease, it’s likely only a matter of time before they will be established as a cause of human respiratory diseases as well.

    When the immune response is compromised, Pseudomonas aeruginosa is able to colonize the alveoli and form biofilms.

    As mentioned previously, infection by the bacterium Pseudomonas aeruginosa (P. aeruginosa) is the main cause of death among patients with cystic fibrosis. Pseudomonas is able to set up permanent residence in the lungs of patients with cystic fibrosis where, if you ask most mainstream researchers, it is impossible to kill. Eventually, chronic inflammation produced by the immune system in response to Pseudomonas destroys the lung and causes respiratory failure. In the permanent infection phase, P. aeruginosa biofilms are thought to be present in the airway, although much about the infection pathogenesis remains unclear.[15]

    Cystic fibrosis is caused by mutations in the gene encoding a chloride channel protein (CFTR). How the abnormal chloride channel protein leads to biofilm infection remains hotly debated. It is clear, however, that cystic fibrosis patients manifest some kind of host-defense defect localized to the airway surface. Somehow this leads to a debilitating biofilm infection.

    Biofilms have the potential to cause a tremendous array of infections and diseases

    Because internal pathogenic biofilm research comprises such a new field of study, the infections described above almost certainly represent just the tip of the iceberg when it comes to the number of chronic diseases and infections currently caused by biofilms.

    For example, it wasn’t until July of 2006 that researchers realized that the majority of ear infections are caused by biofilm bacteria. These infections, which can be either acute or chronic, are referred to collectively as otitis media (OM). They are the most common illness for which children visit a physician, receive antibiotics, or undergo surgery in the United States.

    There are two subtypes of chronic OM. Recurrent OM (ROM) is diagnosed when children suffer repeated infections over a span of time and during which clinical evidence of the disease resolves between episodes. Chronic OM with effusion is diagnosed when children have persistent fluid in the ears that lasts for months in the absence of any other symptoms except conductive hearing loss.

    It took over ten years for researchers to realize that otitis media is caused by biofilms. Finally, in 2002, Drs. Ehrlich and J. Christopher Post, an Allegheny General Hospital pediatric ear specialist and medical director of the Center for Genomic Sciences, published the first animal evidence of biofilms in the middle ear in the Journal of the American Medical Association, setting the stage for further clinical investigation.

    In a subsequent study, Ehrlich and Post obtained middle ear mucosa – or membrane tissue – biopsies from children undergoing a procedure for otitis. The team gathered uninfected mucosal biopsies from children and adults undergoing cochlear implantation as a control.[16]

    Using advanced confocal laser scanning microscopy, Luanne Hall-Stoodley, Ph.D. and her ASRI colleagues obtained three-dimensional images of the biopsies and evaluated them for biofilm morphology using generic stains and species-specific probes for Haemophilus influenzae, Streptococcus pneumoniae and Moraxella catarrhalis. Effusions, when present, were also evaluated for evidence of pathogen-specific nucleic acid sequences (indicating the presence of live bacteria).

    The study found mucosal biofilms in the middle ears of 46/50 children (92%) with both forms of otitis. Biofilms were not observed in eight control middle ear mucosa specimens obtained from cochlear implant patients.

    Otitis media, or inflammation of the middle ear, is caused by biofilms.

    In fact, all of the children in the study who suffered from chronic otitis media tested positive for biofilms in the middle ear, even those who were asymptomatic, causing Ehrlich to conclude that, “It appears that in many cases recurrent disease stems not from re-infection as was previously thought and which forms the basis for conventional treatment, but from a persistent biofilm.”

    He went on to state that the discovery of biofilms in the setting of chronic otitis media represented “a landmark evolution in the medical community’s understanding about a disease that afflicts millions of children world-wide each year and further endorses the emerging biofilm paradigm of chronic infectious disease.”

    The emerging biofilm paradigm of chronic disease refers to a new movement in which researchers such as Ehrlich are calling for a tremendous shift in the way the medical community views bacterial biofilms. Those scientists who support an emerging biofilm paradigm of chronic disease feel that biofilm research is of utmost importance because of the fact that the infectious entities have the potential to cause so many forms of chronic disease. The Marshall Pathogenesis is an important part of this paradigm shift.

    It was also just last year that researchers realized that biofilms cause most infections associated with contact lens use. In 2006, Bausch & Lomb withdrew its ReNu with MoistureLoc contact lens solution because a high proportion of corneal infections were associated with it. It wasn’t long before researchers at the University Hospitals Case Medical Center found that the infections were caused by biofilms. [17]

    “Once they live in that type of state [a biofilm], the cells become resistant to lens solutions and immune to the body’s own defense system,” said Mahmoud A. Ghannoum, Ph.D, senior investigator of the study. “This study should alert contact lens wearers to the importance of proper care for contact lenses to protect against potentially virulent eye infections,” he said.

    It turns out that the biofilms detected by Ghannoum and team were composed of fungi, particularly a species called Fusarium. His team also discovered that the strain of fungus (with the catchy name, ATCC 36031) used for testing the effectiveness of lens care solutions is a strain that does not produce biofilms as the clinical fungal strains do. ReNu contact solution, therefore, was effective in the laboratory, but failed when faced with strains in real-world situations.

    Fungal biofilm can form in contact lens solution leading to potentially virulent eye infections

    Unfortunately, Ghannoum and team were not able to create a method to target and destroy the fungal biofilms that plague users of ReNu and some other contact lens solutions.

    Then there’s Dr. Randall Wolcott, who just recently discovered and confirmed that the sludge covering diabetic wounds is largely made up of biofilms. Whereas before Wolcott’s work such wounds generally led to amputation, now that they have been correctly linked to biofilms, measures such as those described in this interview can be taken to stop the spread of infection and save the limb. Wolcott has finally been given a grant by the National Institutes of Health to further study chronic biofilms and wound development.

    Dr. Garth James and the Medical Biofilm Laboratory team at Montana State University are also researching wounds and biofilms. Their latest article and an image showing wound biofilm was featured on the cover of the January-February 2008 issue of Wound Repair and Regeneration.[18]

    Biofilm bacteria and chronic inflammatory disease

    In just a few short years, the potential of biofilms to cause debilitating chronic infections has become so clear that there is little doubt that biofilms are part of the pathogenic mix or “pea soup” that causes most or all chronic “autoimmune” and inflammatory diseases.

    In fact, thanks in large part to the research of biomedical researcher Dr. Trevor Marshall, it is now increasingly understood that chronic inflammatory diseases result from infection with a large microbiota of chronic biofilm and L-form bacteria (collectively called the Th1 pathogens).[19][20] The microbiota is thought to be composed of numerous bacterial species, some of which have yet to be discovered. However, most of the pathogens that cause inflammatory disease have one thing in common – they have all developed ways to evade the immune system and persist as chronic forms that the body is unable to eliminate naturally.

    Some L-form bacteria are able to evade the immune system because, long ago, they evolved the ability to reside inside macrophages, the very white blood cells of the immune system that are supposed to kill invading pathogens. Upon formation, L-form bacteria also lose their cell walls, which makes them impervious to components of the immune response that detect invading pathogens by identifying the proteins on their cell walls. The fact that L-form bacteria lack cell walls also means that the beta-lactam antibiotics, which work by targeting the bacterial cell wall, are completely ineffective at killing them.[21]

    Clearly, transforming into the L-form offers any pathogen a survival advantage. But among those pathogens not in an L-form state, joining a biofilm is just as likely to enhance their ability to evade the immune system. Once enough chronic pathogens have grouped together and formed a stable community with a strong protective matrix, they are likely able to reside in any area of the body, causing the host to suffer from chronic symptoms that are both mental and physical in nature.

    Biofilm researchers will also tell you that, not surprisingly, biofilms form with greater ease in an immunocompromised host. Marshall’s research has made it clear that many of the Th1 pathogens are capable of creating substances that bind and inactivate the Vitamin D Receptor – a fundamental receptor of the body that controls the activity of the innate immune system, or the body’s first line of defense against intracellular infection.[22]

    Diagram of the Vitamin D Receptor and capnine.

    Thus, as patients accumulate a greater number of the Th1 pathogens, more and more of the chronic bacterial forms create substances capable of disabling the VDR. This causes a snowball effect, in which the patient becomes increasingly immunocompromised as they acquire a larger bacterial load.

    For one thing, it’s possible that many of the bacteria that survive inside biofilms are capable of creating VDR blocking substances. Thus, the formation of biofilms may contribute to immune dysfunction. Conversely, as patients acquire L-form bacteria and other persistent bacterial forms capable of creating VDR-blocking substances, it becomes exceptionally easy for biofilms to form on any tissue surface of the human body.

    Thus, patients who begin to acquire L-form bacteria almost always fall victim to biofilm infections as well, since it is all too easy for pathogens to group together into a biofilm when the immune system isn’t working up to par.

    To date, there are also no strict criteria that separate L-form bacteria from biofilm bacteria or any other chronic pathogenic forms. This means that L-form bacteria may also form into biofilms, and by doing so enter a mode of survival that makes them truly impervious to the immune system. Some L-form bacteria may not form complete biofilms, yet may still possess the ability to surround themselves in a protective matrix. Under these circumstances one might say they are in a “biofilm-like” state.

    Marshall often refers to the pathogens that cause inflammatory disease as an intraphagocytic, metagenomic microbiota of bacteria, terms which suggest that most chronic bacterial forms possess properties of both L-form and biofilm bacteria. Intraphagocytic refers to the fact that the pathogens can be found inside the cells of the immune system. The term metagenomic indicates that there are a tremendous number of different species of these chronic bacterial forms. Finally, microbiota refers to the fact that these chronic forms persist together as a community, much as biofilm bacteria do.

    For example, when observed under a darkfield microscope, L-form bacteria are often encased in protective biofilm sheaths. If blood containing the pathogens is aged overnight, the bacterial colonies expand until they burst out of the host cell, destroying it in the process. They then extend as huge, long biofilm tubules, which presumably help the pathogens spread to other cells. The tubules also help spread bacterial DNA to neighboring cells.

    Clearly, there is a great need for more research on how different chronic bacterial forms interact. To date, L-form researchers have essentially focused solely on the L-form, while failing to investigate how frequently the wall-less pathogens form into biofilms or join biofilm communities together with walled bacteria. Conversely, most biofilm researchers are intently studying the biofilm mode of growth without considering the presence of L-form bacteria. So it will likely take several years before we are better able to understand the probable overlaps between the lifestyles of L-form and biofilm bacteria.

    Anyone who is skeptical about the fact that biofilms likely form a large percentage of the microbiota that cause inflammatory disease should consider many of the recent studies that have linked established biofilm infections to a higher risk for multiple forms of chronic inflammatory disease. Take, for example, studies that have found a link between periodontal disease and several major inflammatory conditions. A 1989 article published in the British Medical Journal showed a correlation between dental disease and systemic disease (stroke, heart disease, diabetes). After correcting for age, exercise, diet, smoking, weight, blood cholesterol level, alcohol use and health care, people who had periodontal disease had a significantly higher incidence of heart disease, stroke and premature death. More recently, these results were confirmed in studies in the United States, Canada, Great Britain, Sweden, and Germany. The effects are striking. For example, Canadian researchers found that people with periodontal disease had roughly twice the risk of dying from cardiovascular disease.[23]

    Dental plaque as seen under a scanning electron microscope.

    Since we know that periodontal disease is caused by biofilm bacteria, the most logical explanation for the fact that people with dental problems are much more likely to suffer from heart disease and stroke is that the biofilms in their mouths have gradually spread to the moist surfaces of their circulatory systems. Or perhaps if the bacteria in periodontal biofilms create VDR binding substances, their ability to slow innate immune function allows new biofilms (and L-form bacteria as well) to more easily form and infect the heart and blood vessels. Conversely, systemic infection with VDR blocking biofilm bacteria is also likely to weaken immune defenses in the gums and facilitate periodontal disease.

    In fact, it appears that biofilm bacteria in the mouth also facilitate the formation of biofilm and L-form bacteria in the brain. Just last year, a team including Vasant Hirani at University College London released the results of a study which found that elderly people who have lost their teeth are at a more than three-fold greater risk of memory problems and dementia.[24]

    At the moment, Autoimmunity Research Foundation does not have the resources to culture biofilms from patients on the treatment and, even if they did, current methods for culturing internal biofilms remain unreliable. According to Stoodley, “The lack of standard methods for growing, quantifying and testing biofilms in continuous culture results in incalculable variability between laboratory systems. Biofilm microbiology is complex and not well represented by flask cultures. Although homogeneity allows statistical enumeration, the extent to which it reflects the real, less orderly world is questionable.”[10]

    How else do we acquire biofilm bacteria?

    As discussed thus far, biofilms form spontaneously as bacteria inside the human body group together. Yet people can also ingest biofilms by eating contaminated food.

    According to researchers at the University of Guelph in Ontario, Canada, it is increasingly suspected that biofilms play an important role in contamination of meat during processing and packaging. The group warns that greater action must be taken to reduce the presence of food-borne pathogens like Escherichia coli and Listeria monocytogenes and spoilage microorganisms such as the Pseudomonas species (all of which form biofilms) throughout the food processing chain to ensure the safety and shelf-life of the product. Most of these microorganisms are ubiquitous in the environment or brought into processing facilities through healthy animal carriers.

    Hans Blaschek of the University of Illinois has discovered that biofilms form on much of the other food products we consume as well.

    A biofilm on a piece of lettuce

    “If you could see a piece of celery that’s been magnified 10,000 times, you’d know what the scientists fighting foodborne pathogens are up against,” says Blaschek.

    “It’s like looking at a moonscape, full of craters and crevices. And many of the pathogens that cause foodborne illness, such as Shigella, E. coli, and Listeria, make sticky, sugary biofilms that get down in these crevices, stick like glue, and hang on like crazy.”

    According to Blaschek, the problem faced by produce suppliers can be a triple whammy. “If you’re unlucky enough to be dealing with a pathogen–and the pathogen has the additional attribute of being able to form biofilm—and you’re dealing with a food product that’s minimally processed, well, you’re triply unlucky,” the scientist said. “You may be able to scrub the organism off the surface, but the cells in these biofilms are very good at aligning themselves in the subsurface areas of produce.”

    Scott Martin, a University of Illinois food science and human nutrition professor, agrees, stating, “Once the pathogenic organism gets on the product, no amount of washing will remove it. The microbes attach to the surface of produce in a sticky biofilm, and washing just isn’t very effective.”

    Biofilms can even be found in processed water. Just this month, a study was released in which researchers in the Department of Biological Sciences at Virginia Polytechnic Institute isolated M. avium biofilm from the shower head of a woman with M. avium pulmonary disease.[25] A molecular technique called DNA fingerprinting demonstrated that M. avium isolates from the water were the same forms that were causing the woman’s respiratory illness.

    Effectively targeting biofilm infections

    Although the mainstream medical community is rapidly acknowledging the large number of diseases and infections caused by biofilms, most researchers are convinced that biofilms are difficult or impossible to destroy, particularly those cells that form the deeper layers of a thick biofilm. Most papers on biofilms state that they are resistant to antibiotics administered in a standard manner. For example, despite the fact that Ehrlich and team discovered that biofilm bacteria cause otitis media, they are unable to offer an effective solution that would actually allow for the destruction of biofilms in the middle ear. Other teams have also come up short in creating methods to break up the biofilms they implicate as the cause of numerous infections.

    This means patients with biofilm infections are generally told by mainstream doctors that they have an untreatable infection. In some cases, a disease-causing biofilm can be cut out of a patient’s tissues, or efforts are made to drain components of the biofilm out of the body. For example, doctors treating otitis media often perform a myringotomy, a surgical procedure in which the eardrum is opened and small tubes are placed to continuously drain infectious fluid.

    When it comes to administering antibiotics in an effort to target biofilms, mainstream researchers have repeatedly tried the same approach: giving patients high, constant doses. Unfortunately, when administered in this manner, the antibiotic may temporarily weaken the biofilm but is incapable of destroying it, as certain cells inevitably persist and allow the biofilm to regenerate.

    “You can put a patient on [a high dose] antibiotics, and it may seem that the infection has disappeared,” says Levchenko. “But in a few months, it reappears, and it is usually in an antibiotic-resistant form.”

    What the vast majority of researchers working with biofilms fail to realize is that antibiotics are capable of destroying biofilms. The catch is that antibiotics are only effective against biofilms if administered in a very specific manner. Furthermore, only certain antibiotics appear to effectively target biofilms. After decades of research, much of which was derived from molecular modeling data, Marshall was the first to create an antibiotic regimen that appears to effectively target and destroy biofilms. Central to the treatment, which is called the Marshall Protocol, is the fact that biofilms and other Th1 pathogens succumb to specific bacteriostatic antibiotics taken in very low, pulsed doses. It is only when antibiotics are administered in this manner that they appear capable of fully eradicating biofilms.[19][20]

    In a paper entitled “Riddle of Biofilm Resistance,” Kim Lewis discusses the mechanisms by which pulsed, low-dose antibiotics are able to break up biofilms, while antibiotics administered in a standard manner (high, constant doses) cannot. According to Lewis, the use of pulsed, low-dose antibiotics to target biofilm bacteria is supported by observations he and his colleagues have made in the laboratory.[11]

    Some researchers claim that antibiotics cannot penetrate the matrix that surrounds a biofilm. But research by Lewis and other scientists has confirmed that the inability of antibiotics to penetrate the biofilm matrix is much more of an exception than a rule. According to Lewis, “In most cases involving small antimicrobial molecules, the barrier of the polysaccharide matrix should only postpone the death of cells rather than afford useful protection.”

    For example, a recent study that used low concentrations of an antibiotic to kill P. aeruginosa biofilm bacteria found that the majority of biofilm cells were effectively eliminated by antibiotics in a manner that did not differ much from what is observed when the same antibiotic concentrations are administered to single planktonic cells.[26]

    After antibiotics are applied to a biofilm, a number of cells called “persisters” are left behind.

    Thus, since antibiotics can generally penetrate biofilms, some other factor is responsible for the fact that they cannot be killed by standard high dose antibiotic therapy. It turns out that after antibiotics are applied to a biofilm, a number of cells called “persisters” are left behind. Persisters are simply cells that are able to survive the first onslaught of antibiotics, and if left unchecked, gradually allow the biofilm to form again. According to Lewis, persister cells form with particular ease in immunocompromised patients because the immune system is unable to help the antibiotic “mop up” all the biofilm cells it has targeted.

    “This simple observation suggests a new paradigm for explaining, at least in principle, the phenomenon of biofilm resistance to killing by a wide range of antimicrobials,” states Lewis. “The majority of cells in a biofilm are not necessarily more resistant to killing than planktonic cells and die rapidly when treated with [an antibiotic] that can kill slowly growing cells.”

    Thus, a dose of antibiotics – particularly in immunocompromised patients – eradicates most of the biofilm population but leaves a small fraction of surviving persisters behind. Unfortunately, in the same sense that the beta-lactam antibiotics promote the formation of L-form bacteria, persister cells are actually preserved by the presence of an antibiotic that inhibits their growth. Thus, paradoxically, dosing an antibiotic in a constant, high-dose manner (in which the antibiotic is always present) helps persisters persevere.

    But in the case of low, pulsed dosing, where an antibiotic is administered, withdrawn, then administered again, the first application of antibiotic will eradicate the bulk of biofilm cells, leaving persister cells behind. Withdrawal of the antibiotic allows the persister population to start growing. Since administration of the antibiotic is temporarily stopped, the survival of persisters is not enhanced. This causes the persister cells to lose their phenotype (their shape and biochemical properties), meaning that they are unable to switch back into biofilm mode. A second application of the antibiotic should then completely eliminate the persister cells, which are still in planktonic mode.

    Lewis has found that the feasibility of a pulsed, or cyclical, biofilm eradication approach depends on the rate at which persisters lose resistance to killing and regenerate new persisters. It also depends on the ability to manipulate the antibiotic concentration – something that is done quite effectively by patients on the Marshall Protocol who carefully dose their antibiotics at different levels, allowing constant variation in antibiotic concentration. Although Lewis speculates that allowing the concentration of an antibiotic to drop could potentially lead to resistance towards the antibiotic, he is quick to add that if two or more antibiotics are used to target a biofilm at one time, such resistance would not occur. Again, since the Marshall Protocol uses a total of five bacteriostatic antibiotics, usually taken two or three at a time, concerns of resistance are essentially negligible.
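    To make the cycling argument in the preceding paragraphs concrete, here is a deliberately crude toy simulation. It is not Lewis’s model and not the Marshall Protocol’s actual dosing schedule; the two-compartment structure, every rate constant, the 24-hour pulse length, and the starting population are arbitrary illustrative assumptions. Its only purpose is to show the qualitative point made above: constant dosing leaves the dormant persister pool untouched, whereas withdrawing the drug lets persisters resume growing so that the next pulse can reach them.

```python
# Toy model of constant vs. pulsed antibiotic dosing against a biofilm with a
# small persister subpopulation. All numbers are made up for illustration only.
import math

GROWTH_RATE = 0.3          # per hour, regrowth of ordinary (non-persister) cells
KILL_RATE = 1.0            # per hour, antibiotic kill rate for ordinary cells
PERSISTER_FRACTION = 1e-4  # fraction of growing cells lapsing into dormancy each hour
RESUSCITATION_RATE = 0.2   # per hour, persisters waking once the drug is withdrawn
CARRYING_CAPACITY = 1e9    # ceiling on regrowth


def simulate(drug_present, hours, n0=1e8):
    """Hour-by-hour simulation; drug_present(t) says whether the drug is on at hour t."""
    normal = n0 * (1 - PERSISTER_FRACTION)
    persisters = n0 * PERSISTER_FRACTION
    for t in range(hours):
        if drug_present(t):
            # Drug on: ordinary cells are killed; dormant persisters are neither
            # killed nor able to grow.
            normal *= math.exp(-KILL_RATE)
        else:
            # Drug off: persisters resuscitate into ordinary, growing cells...
            waking = persisters * (1 - math.exp(-RESUSCITATION_RATE))
            persisters -= waking
            normal += waking
            # ...ordinary cells regrow toward the carrying capacity...
            normal *= math.exp(GROWTH_RATE * (1 - normal / CARRYING_CAPACITY))
            # ...and a small fraction of the growing cells lapses back into dormancy.
            newly_dormant = normal * PERSISTER_FRACTION
            normal -= newly_dormant
            persisters += newly_dormant
    return normal + persisters


def constant(t):
    return True                    # antibiotic always present


def pulsed(t):
    return (t // 24) % 2 == 0      # 24 h on / 24 h off cycles


for name, schedule in [("constant", constant), ("pulsed", pulsed)]:
    survivors = simulate(schedule, hours=216)   # 216 h ends on an "on" phase for both
    print(f"{name:8s} dosing: ~{survivors:.1e} cells remain after 216 h")
```

    Run as written, the constant schedule should leave the dormant pool (roughly 10,000 cells in this toy) essentially intact, while the pulsed schedule whittles the total population down by several orders of magnitude over the same period.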

    Model of biofilm resistance based on persister survival. An initial treatment of high-dose constant antibiotic kills planktonic cells and the majority of biofilm cells. But persisters remain alive and resurrect the biofilm, causing the infection to relapse

    “It is entirely possible that successful cases of antimicrobial therapy of biofilm infections result from a fortuitous optimal cycling [pulsed dosing] of an antibiotic concentration that eliminated first the bulk of the biofilm and then the progeny of the persisters that began to divide,” states Lewis.

    Lewis’ work has been supported by other research teams. Recently, researchers at the University of Iowa found that subinhibitory (extremely low dose) concentrations of the bacteriostatic antibiotic azithromycin significantly decreased biomass and maximal thickness in both forming and established biofilms.[27] These extremely low concentrations of azithromycin inhibited biofilms in all but the most highly resistant isolates. In contrast, subinhibitory concentrations of gentamicin, which is not a bacteriostatic antibiotic, had no effect on biofilm formation. In fact, biofilms actually became resistant to gentamicin at concentrations far above the minimum inhibitory concentration.

    Researchers at Tulane University recently confirmed yet again that low, pulsed dosing is a superior way of targeting treatment-resistant biofilm bacteria. According to the team, who mathematically modeled the action of antibiotics on bacterial biofilms, “Exposing a biofilm to low concentration doses of an antimicrobial agent for longer time is more effective than short time dosing with high antimicrobial agent concentration.”[28]
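    One way to build intuition for this conclusion, offered purely as a back-of-the-envelope illustration and not as the model the Tulane team actually used: assume the kill rate saturates with drug concentration, a common pharmacodynamic assumption, so that $r(C) = r_{\max}\,C/(C+K)$ for some saturation constant $K$. A course held at concentration $C$ for time $T$ delivers a fixed total exposure $A = C\,T$ and achieves a cumulative kill of

$$\int_0^T r(C)\,dt \;=\; r_{\max}\,\frac{C}{C+K}\,T \;=\; \frac{r_{\max}\,A}{C+K},$$

    which increases as $C$ is lowered and $T$ is stretched out to keep $A$ fixed. Under this assumed saturating kill law, spreading the same total dose over a longer period at a lower concentration removes more cells than a brief high-concentration burst, which matches the qualitative behavior the Tulane model reports.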

    Similarly, a bioengineer-led team at the University of Washington recently created an antibiotic-containing polymer that releases antibiotic slowly onto the surface of hospital devices, such as catheters and prostheses, to reduce the risk of biofilm-related infections.

    “Rather than massively dosing the patient with high levels of released antibiotic, this strategy allows the release of extremely low levels of this very potent antibiotic over long periods of time,” explained Buddy Ratner, PhD, Professor and Director of the Engineered Biomaterials Program at the University of Washington, Seattle. “We calculated the amount released at the surface that would kill 100% of the bacteria entering the surface zone.”

    When challenged by Dr. Leonard A. Mermel from Brown University School of Medicine on the issue that long-term use of pulsed, low-dose antibiotics might allow for increased resistance on the part of the bacteria being treated, Ratner responded, “Dr. Mermel’s concerns are, in fact, why we developed this system for [antibiotic] release. Bacteria that live through antibiotic dosing can go on to produce resistant strains. If 100% of the bacteria approaching the surface are killed, they can’t produce resistant offspring. The classical physician approach, dosing the patient systemically and heavily to rid the patient of persistent bacteria, can lead to those resistant strains. Our approach releases miniscule doses compared to what a physician would use, but releases the antibiotic where it will be optimally effective and least likely to leave antibiotic-resistant survivors.”

    Although taken orally, the MP antibiotics are administered in the same spirit as those released by Ratner’s polymer. Because they too are dosed at optimal times in extremely small amounts, the chance that long-term antibiotic use might foster resistant bacteria is, again, essentially negligible, especially since multiple antibiotics are typically used.

    Key to the ability of the Marshall Protocol to effectively target biofilm bacteria is the fact that the specific pulsed, low-dose bacteriostatic antibiotics used by the treatment are taken in conjunction with a medication called Benicar. Benicar binds and activates the Vitamin D Receptor, displacing bacterial substances and 25-D from the receptor, so that it can once again activate the innate immune system.[29] Benicar is so effective at strengthening the innate immune response that the patient’s own immune system ultimately helps destroy the biofilm weakened by pulsed, low-dose antibiotics.

    Thus, it is not enough for patients on the Marshall Protocol to simply take specific pulsed, low-dose antibiotics. The activity of their innate immune system must also be restored so that the cells of the immune system can actively combat biofilm bacteria, the matrix that surrounds them, and persister cells.


    How do we know that the Marshall Protocol effectively kills biofilm bacteria? Chiefly because those patients who reach the later stages of the treatment no longer report symptoms associated with established biofilm diseases. Patients on the MP who once suffered from chronic ear infections (OM), chronic sinus infections, or periodontal disease find that such infections resolve over the course of treatment. Furthermore, since we now understand that biofilms almost certainly form a large part of the chronic microbiota of pathogens that cause chronic inflammatory and autoimmune diseases, the fact that patients can use the Marshall Protocol to recover from such illnesses again suggests that the treatment must be effectively allowing them to target and destroy biofilms.

    Because all evidence points to the fact that the MP does indeed effectively target biofilm bacteria, it is of utmost importance that people who suffer from any sort of biofilm infection start the treatment. Knowledge of the Marshall Protocol has yet to reach the cystic fibrosis community, but there is great hope that if people with the disease were to start the MP, they could destroy the P. aeruginosa biofilms that cause their untimely deaths. In the same vein, people with a wide range of infections, such as those infected with biofilm during surgery, can likely restore their health with the MP.

    It is to be hoped that the clinical data emerging from the Marshall Protocol study site, which shows patients recovering from biofilm-related diseases, will inspire future researchers to invest a great deal of energy into further research aimed at identifying and studying the biofilm bacteria – bacteria that almost certainly form part of the microbiota of pathogens that cause inflammatory disease. In the coming years, as the technology to detect biofilms becomes even more sophisticated, it is almost certain that a great number of biofilms will be officially detected and documented in patients with a vast array of chronic diseases.

    REFERENCES

    1. Costerton, J. W., Stewart, P. S., & Greenberg, E. P. (1999). Bacterial biofilms: a common cause of persistent infections. Science, 284(5418), 1318-22.
    2. Higgins, D. A., Pomianek, M. E., Kraml, C. M., Taylor, R. K., Semmelhack, M. F., & Bassler, B. L. (2007). The major Vibrio cholerae autoinducer and its role in virulence factor production. Nature, 450(7171), 883-6.
    3. Singh, P. K., Schaefer, A. L., Parsek, M. R., Moninger, T. O., Welsh, M. J., & Greenberg, E. P. (2000). Quorum-sensing signals indicate that cystic fibrosis lungs are infected with bacterial biofilms. Nature, 407(6805), 762-4.
    4. Stoodley, P., Purevdorj-Gage, B., & Costerton, J. W. (2005). Clinical significance of seeding dispersal in biofilms: a response. Microbiology, 151(11), 3453.
    5. O’Toole, G. A., & Kolter, R. (1998). Flagellar and twitching motility are necessary for Pseudomonas aeruginosa biofilm development. Molecular Microbiology, 30(2), 295-304.
    6. Cho, H., Jönsson, H., Campbell, K., Melke, P., Williams, J. W., Jedynak, B., et al. (2007). Self-organization in high-density bacterial colonies: efficient crowd control. PLoS Biology, 5(11), e302.
    7. Brockhurst, M. A., Hochberg, M. E., Bell, T., & Buckling, A. (2006). Character displacement promotes cooperation in bacterial biofilms. Current Biology, 16(20), 2030-4.
    8. Parsek, M. R., & Singh, P. K. (2003). Bacterial biofilms: an emerging link to disease pathogenesis. Annual Review of Microbiology, 57, 677-701.
    9. Kraigsley, A., Ronney, P., & Finkel, S. Hydrodynamic effects on biofilm formation. Retrieved May 28, 2008.
    10. Hall-Stoodley, L., Costerton, J. W., & Stoodley, P. (2004). Bacterial biofilms: from the natural environment to infectious diseases. Nature Reviews Microbiology, 2(2), 95-108.
    11. Lewis, K. (2001). Riddle of biofilm resistance. Antimicrobial Agents and Chemotherapy, 45(4), 999-1007.
    12. Parsek, M. R., & Singh, P. K. (2003). Bacterial biofilms: an emerging link to disease pathogenesis. Annual Review of Microbiology, 57, 677-701.
    13. Trampuz, A., Piper, K. E., Jacobson, M. J., Hanssen, A. D., Unni, K. K., Osmon, D. R., et al. (2007). Sonication of removed hip and knee prostheses for diagnosis of infection. New England Journal of Medicine, 357(7), 654-663.
    14. Ristow, P., Bourhy, P., Kerneis, S., Schmitt, C., Prevost, M., Lilenbaum, W., et al. (2008). Biofilm formation by saprophytic and pathogenic leptospires. Microbiology, 154(5), 1309-1317.
    15. Moreau-Marquis, S., Stanton, B. A., & O’Toole, G. A. (2008). Pseudomonas aeruginosa biofilm formation in the cystic fibrosis airway. Pulmonary Pharmacology & Therapeutics.
    16. Hall-Stoodley, L., Hu, F. Z., Gieseke, A., Nistico, L., Nguyen, D., Hayes, J., et al. (2006). Direct detection of bacterial biofilms on the middle-ear mucosa of children with chronic otitis media. JAMA, 296(2), 202-211.
    17. Imamura, Y., Chandra, J., Mukherjee, P. K., Lattif, A. A., Szczotka-Flynn, L. B., Pearlman, E., et al. (2008). Fusarium and Candida albicans biofilms on soft contact lenses: model development, influence of lens type, and susceptibility to lens care solutions. Antimicrobial Agents and Chemotherapy, 52(1), 171-182.
    18. James, G. A., Swogger, E., Wolcott, R., Pulcini, E. D., Secor, P., Sestrich, J., et al. (2008). Biofilms in chronic wounds. Wound Repair and Regeneration, 16(1), 37-44.
    19. Marshall, T. G. (2006). A New Approach to Treating Intraphagocytic CWD Bacterial Pathogens in Sarcoidosis, CFS, Lyme and other Inflammatory Diseases.
    20. Marshall, T. G., & Marshall, F. E. (2004). Sarcoidosis succumbs to antibiotics: implications for autoimmune disease. Autoimmunity Reviews, 3(4), 295-300.
    21. Domingue, G. J., Sr., & Woody, H. B. (1997). Bacterial persistence and expression of disease. Clinical Microbiology Reviews, 10(2).
    22. Marshall, T. G. (2007). Bacterial Capnine Blocks Transcription of Human Antimicrobial Peptides. Nature Precedings.
    23. Morrison, H. I., Ellison, L. F., & Taylor, G. W. (1999). Periodontal disease and risk of fatal coronary heart and cerebrovascular diseases. Journal of Cardiovascular Risk, 6(1), 7-11.
    24. Stewart, R., & Hirani, V. (2007). Dental health and cognitive impairment in an English national survey population. Journal of the American Geriatrics Society, 55(9), 1410-1414.
    25. Falkinham, J. O., III, Iseman, M. D., de Haas, P., & van Soolingen, D. (2008). Mycobacterium avium in a shower linked to pulmonary disease. Journal of Water and Health, 6(2), 209-13.
    26. Lewis, K. (2001). Riddle of biofilm resistance. Antimicrobial Agents and Chemotherapy, 45(4), 999-1007.
    27. Starner, T. D., et al. (2008). Subinhibitory concentrations of azithromycin decrease nontypeable Haemophilus influenzae biofilm formation and diminish established biofilms. Antimicrobial Agents and Chemotherapy, 52(1), 137-45.
    28. Cogan, N. G., Cortez, R., & Fauci, L. (2005). Modeling physiological resistance in bacterial biofilms. Bulletin of Mathematical Biology, 67(4), 831-53.
    29. Marshall, T. G. (2006). VDR Nuclear Receptor Competence is the Key to Recovery from Chronic Inflammatory and Autoimmune Disease.
  • Comments Off on Understanding Biofilms
  • Filed under: biofilms, featured articles
    Patients with diabetic neuropathy may not notice minor injuries due to loss of feeling in their lower extremities. Since the Vitamin D Receptor is inactivated by bacterial ligands, a small cut or sore can become infected, and flare into a limb- or life-threatening condition in as little as three days. These wounds are so difficult to heal that most of medicine considers them a lost cause and treats them with amputation. Amputations are often considered to be the beginning of the end for patients with diabetes.

    Dr. Randall Wolcott

    70% of diabetics who undergo an amputation die within five years due to the stress placed on their hearts from their altered circulatory system. During those five years they are likely to have more amputations and to rate their quality of life worse than cancer patients, according to some studies.

    Nationally, an estimated 82,000 people with diabetes had lower-limb amputations in 2002, according to the Centers for Disease Control. But thanks to a doctor at the Southwest Regional Wound Care Center in Lubbock, Texas, who has teamed up with researchers from Montana State University’s Center for Biofilm Engineering, this situation is changing. After sending samples of the sludge on his patients’ wounds to the Center, Dr. Randall Wolcott was informed that his samples were largely composed of bacterial biofilms.

    This discovery eventually led to a paper on the findings published in the October issue of Wound Repair and Regeneration, an important step in convincing the medical community of biofilms’ importance in chronic wounds.

    In the meantime, with the help of other scientists, Wolcott created a series of treatments that allow him to successfully kill the biofilm bacteria that have taken over his patients’ wounds, saving most of them from the horrors of amputation.

    Before he began treating the biofilms on his patients’ wounds, Wolcott was admitting patients for an estimated 10 to 15 amputations a month. Now, he’s gone months without one of his patients receiving an amputation. He can confidently look patients in the eye and say he’s 80% certain that their wound is going to heal.

    Since patients with diabetes and a host of inflammatory diseases are also killing biofilm bacteria thanks to the Marshall Protocol, Dr. Wolcott’s work is yet another wake-up call as to the massive role these communities of bacteria play in causing all stages of chronic disease. I was lucky enough to speak with Dr. Wolcott and his laboratory research coordinator, Dan Rhoads. We spoke about their work, the importance of biofilm research, and the characteristics of biofilms in general.

    Could you explain what a biofilm is?

    Well, that’s a big question but I’ll do my best! Biofilms have been around for at least 3 billion years. They are essentially how organisms protect themselves from environmental attack – from chemicals, phages (viruses that infect bacteria), UV light, or other challenges. We now understand that early on, bacteria learned to act as a community. By doing so, they allowed their existence to become much more secure. When bacteria first started to be studied about 150 years ago, the idea of a bacterial biofilm was simply too complex for scientists at the time to grasp. Consequently, early microbiologists were only able to study single bacterial organisms, one at a time.

    However, today we have an array of new molecular tools that have opened up a whole new world when it comes to understanding how bacteria survive. We now realize that bacteria are hardly ever found individually (in what is referred to as a planktonic state), but instead frequently join communities. These communities are then able to secrete substances that allow them to grab substances from the surrounding environment in order to create a matrix that protects all the bacteria inside.

    Planktonic bacteria produce certain proteins, but once they join a biofilm, the biofilm community expresses vastly different proteins and genes. For example, studies have shown that when a single bacterium becomes part of a biofilm the expression of over 800 genes can change.

    When a bacterium is in its planktonic state (on its own), it’s generally able to be cultured in a laboratory. It can also usually be killed by antibiotics. But since biofilms are entities that form under specific conditions in the human body, it is often difficult or impossible to grow them in a laboratory setting. They can no longer be killed by the standard high-dose antibiotics that easily target most single, planktonic bacteria because the community works to protect its members.

    So planktonic bacteria and biofilms are as different as caterpillars and butterflies. The organisms have the same genotype, but totally different phenotypes.

    Do you believe that bacteria in biofilms could be causing what are now referred to as diseases of unknown cause?

    I believe that they could be. Once bacteria have joined into biofilm communities, they can no longer be effectively targeted by the immune system. This means that after a biofilm is created, it persists as a chronic infection. Like patients who suffer from chronic inflammatory disease, people with biofilm infections find that high-dose antibiotics or steroids may offer them temporary relief, yet their infection never actually goes away.

    Dan and I have been reading several review articles that link autoimmune disease to chronic inflammation, and the more we’ve read, the clearer it’s become that chronic inflammation is a result of bacterial infection. So we think there is a clear link between chronic inflammatory diseases and bacteria, and when we think, “chronic inflammation” we believe we are typically dealing with biofilm infections.

    How did you become interested in studying biofilms?

    I attended a lecture about biofilms in 2002 which piqued my interest in the subject. Then I used Google to search for further information on biofilms and came upon the Montana State University’s Center for Biofilm Engineering – the place that is, in my opinion, the keeper of all knowledge about biofilms. I called them and told them about what I was observing on the wounds of my diabetic patients. I highly suspected that much of the sludge that I was removing from the wounds was biofilm. The center agreed to work with our office, and we proceeded to send them 50 samples of material scraped off our patients’ chronic wounds. Their molecular techniques confirmed that the majority of the samples did contain bacterial biofilms.

    At this point, let me pause to say that diabetic foot ulcers kill tens of thousands of people. Over 100,000 limb amputations happen every year because of infected wounds. The suffering is tremendous and, if the infection from a wound spreads or if the limb is amputated, the patient has a high risk of death. So finding a way to quell the bacterial infections and to heal diabetic wounds is a matter of life or death. So when we realized that we had discovered a previously unrecognized bacterial cause that explains the chronicity of diabetic wounds – wounds that cause patients to lose their limbs – we went after the whole hog.

    How do you treat the biofilms on your patients’ wounds?

    First we use diagnostic tools to determine that biofilms are indeed present on the wound. The techniques also help us identify the species of bacteria in a particular biofilm.

    Oh – so you are using molecular tools; that was something I was going to ask you about later. Later I want to hear how you feel about standard culturing methods…

    Well, now that you’ve brought that up, let me address the question now. The agar cultures that most scientists still use today in order to grow bacteria in the lab are 150 years old. A century ago, Robert Koch first discovered that planktonic bacteria could grow on a plate of agar. He used agar because you can manipulate the plate, scrape out the contents, thin out the contents enough, and finally end up with just one single bacterial species growing on the plate. Koch is the founding father of medical microbiology, and we are now standing on his shoulders. However, our understanding of science and medicine has changed a lot in the last century. Based on his pure-culture techniques, he created a series of postulates which state that only one single species of bacteria can cause any one disease. His postulates also state that a disease pathogenesis can only be considered legitimate if the single bacterium connected to the disease can again be isolated alone on an agar plate.

    Of course, Koch did not isolate bacteria and try to grow them on a medium that wasn’t agar. This is because if he did – let’s say he had tried to grow a bacterial species on a potato or an egg – the single bacterium would have surely congregated with other bacteria in the environment to form a biofilm – a biofilm that Koch could not isolate and study. So growing bacteria on anything besides an agar plate meant dealing with a situation that was too difficult for Koch to understand. So it seems he chose not to deal with such matters.

    Unfortunately, Koch’s postulates caught on among other scientists and eventually became the rule of thumb for growing bacteria and accepting organisms as disease-causing agents. Agar was, and still is seen by many, as the only appropriate bacterial growth medium. Even today, doctors still rigorously adhere to Koch’s postulates, which I believe has significantly impeded their ability to study and understand how bacteria actually survive and cause disease in the body where they are seldom found as single entities.

    Of course, growing some strains on agar has helped us better understand diseases such as strep throat, but we’ve pretty much knocked such diseases out. What we are only starting to realize today is that at least 80% of all the infections we treat are caused by biofilm bacteria, not planktonic bacteria. Now the playing field has changed. We’ve taken care of the planktonic bacteria that cause infections. Now we need to start treating polymicrobial diseases – those caused by combinations of bacteria. Continuing to culture on agar and adhering to Koch’s postulates is going to hinder that line of research because it impedes us from looking at the real thing, or what actually happens in the body. In the body, bacteria group together in communities. Changing the way we look at infection will require a paradigm shift in the way doctors think about bacterial populations and the potential of biofilm bacteria to cause disease.

    Happily, PCR [polymerase chain reaction] has allowed us to detect many of the bacteria in the biofilms we have studied. There are also many other molecular tools that exist or are being created that will allow for better detection of biofilm bacteria and bacteria in general. Once our team started using some of these sensitive, DNA-based technologies to identify the composition of bacteria in wound biofilms, we detected hundreds of different species, most of which would never grow on an agar plate. And every time we run the tests over again, it seems like we come across even more sequences of DNA that indicate the existence of new pathogens. So, the more we use these molecular diagnostic tools, the more we are realizing what highly diverse populations are inside wound biofilms.

    Consider this. In one of our latest studies, we found that it is common for at least 10 bacterial species to comprise at least 1% of each wound’s microbiota. And there were over 40 different species of bacteria that comprised at least 1% of the population in one sample or another. When you look deeper at that 1%, you see that there can be 40, 50, 60 species of bacteria on every wound – an incredible amount of diversity.

    Our next step is to determine which of the bacteria we have identified are important and which are not. Perhaps it will turn out that all species detected are important contributors to the virulence of each biofilm, or maybe we will discover that some are key species that cause more harm than others. By continuing to identify the bacterial species in the biofilms of as many of our patients’ wounds as possible, we also hope to determine possible correlations between the component bacterial species in the biofilm population and wound severity. For example, some species may be found mainly on wounds that are more difficult to treat. Based on this information we may decide to treat different wounds in different ways.

    There are a lot of people in the biofilm community who argue about the importance of particular species of biofilm bacteria – many different research groups, each with its own opinion about which bacteria in a biofilm cause more harm than others. But our stance is that all the bacteria in a biofilm are important because they may act synergistically. Biofilms represent entire ecosystems, just like a forest. A forest isn’t made up of just squirrels, or just trees. Rather, all the entities that make up a forest work together, and all are important to the survival of the community.

    This brings me to the concept of functional equivalence – a phenomenon that explains why biofilms are able to resist so many sources of stress. Let’s say a single bacterial species such as Staphylococcus aureus is floating around as a single entity. It can be easily identified and attacked by the immune system. Even if it attaches to a surface and starts to form a protective matrix around itself – a biofilm – that biofilm is still relatively easy to break down, because Staphylococcus aureus has limited defense mechanisms on its own.

    But let’s say that when Staphylococcus aureus starts to form a biofilm, 10 other nearby bacterial species develop the ability to attach to the biofilm as well. Now, if the biofilm is attacked again (by the immune system or other chemicals) it will be much harder to break down. That’s because each species of bacteria in the biofilm possesses its own characteristics and its own strengths for combating the attack. If one species goes down, four others may still be able to fight and remain functional. A different form of challenge may take down those four species, but then some of the species that were not as effective against the first challenge may rise up with the capability to deal with the new attack. I think this phenomenon – functional equivalence – is a very, very important concept in chronic inflammatory infection.
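
    (A quick illustration of that redundancy argument: the toy simulation below is my own sketch, not Dr. Wolcott’s model, and every number in it – the number of challenge types, the chance that a species resists a given challenge – is an arbitrary assumption. It simply shows that a community whose members have different resistance profiles comes through a series of attacks far more often than a single species does.)

```python
# Toy model of functional equivalence: each species gets a random "resistance
# profile" over possible challenge types; the community survives a challenge
# as long as at least one remaining member resists it. All parameters are
# arbitrary, chosen only to illustrate the idea of redundancy.
import random

random.seed(1)

CHALLENGE_TYPES = 6        # e.g. different antibiotics or immune attacks
RESIST_PROBABILITY = 0.4   # chance a given species resists a given challenge type
CHALLENGES_PER_TRIAL = 5   # sequential attacks the community must survive
TRIALS = 10_000

def make_species():
    """Random resistance profile: the set of challenge types this species survives."""
    return {c for c in range(CHALLENGE_TYPES) if random.random() < RESIST_PROBABILITY}

def community_survives(n_species):
    """True if, for every challenge in the sequence, some member resists it."""
    community = [make_species() for _ in range(n_species)]
    for _ in range(CHALLENGES_PER_TRIAL):
        challenge = random.randrange(CHALLENGE_TYPES)
        # members that cannot resist this challenge are knocked out
        community = [species for species in community if challenge in species]
        if not community:
            return False
    return True

for n in (1, 10):
    survival = sum(community_survives(n) for _ in range(TRIALS)) / TRIALS
    print(f"{n:>2}-species biofilm survives all challenges in {survival:.0%} of trials")
```

    With these made-up settings, the multi-species community comes through the full series of challenges many times more often than the monoculture – the redundancy of the group, rather than the toughness of any one species, is what protects the film.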

    Functional equivalence has been documented in vaginal biofilms. Most vaginal biofilms seem to be composed of bacteria from a single genus, Lactobacillus. These biofilms adjust the vaginal environment so that it has a pH of around 4.5 – a pH that is most conducive to their survival and to the woman’s good health. But it has been found that functionally equivalent biofilms develop that mimic these Lactobacillus biofilms. In them, subgroups of different bacterial species come together and interact to create the same acidic environment. The difference is in composition: although these biofilms are made up of many different species of bacteria, they perform just like the Lactobacillus biofilm. The two genotypically different biofilms are phenotypically equivalent. They are functional equivalents.

    The same thing happens in wound biofilms. We see some that are predominantly colonized by single, well-known pathogens. But then we also see clinically similar biofilms made up of many different species of bacteria, and it is usually these diverse biofilms that display interesting growth patterns, interesting characteristics, and probably the best survival mechanisms.

    Since many diabetic patients end up with wounds covered in bacterial biofilms, do you think that biofilm bacteria might play a role in the entire pathogenesis of diabetes?

    We don’t know, although Dan and I have had several conversations about it. It’s mostly conjecture at the moment. We do know that a delayed immune response is what allows wound biofilms to become established. When wound biofilms start to form, they do so very quickly. Sometimes, they can be detected after just 20 minutes of growth. In other words, bacteria begin to form biofilms as soon as possible. If the immune systems of people with diabetes were working up to par, they would be able to delay or retard such quick establishment of the biofilm, as a healthy individual can.

    But you are usually able to kill the bacteria on your patient’s wounds. What procedures do you use?

    Yes, most of our patients’ wounds heal when we use various treatments to wear away at the biofilms that cover them. These treatments include applying lactoferrin and xylitol to the wound. Lactoferrin occurs naturally in tears, mucus and breast milk and appears to attack the bacteria from multiple angles. It is used commercially in meat-packing plants to prevent biofilms from growing on carcasses. Xylitol occurs in fruits, vegetables and other plants, and is also produced as part of normal human metabolism. It is used in toothpaste and chewing gum because of its anti-biofilm properties.

    An invaluable first step in treating a wound is debridement – scraping the biofilm, a yellow-greenish sludge, along with dead tissue off the top of the wound with a curette. For some patients, this can be painful even with an anesthetic. Others feel nothing, as diabetes has destroyed the nerve endings in their feet and legs. We also use five hyperbaric chambers, where patients spend hours in a super-oxygenated environment that’s good for healthy tissue and bad for biofilms. We also use an arsenal of antibiotics and a new lipid-based gel. We recently finished a study in which we used a cocktail of bacteriophages (viruses that infect bacteria) to fight the biofilms.

    How has your novel approach to treating diabetic wounds been accepted by your peers?

    I don’t mean to sound arrogant, but we know we’re right. We know that diabetic wounds are covered in biofilms and that it is biofilm growth that causes them to deteriorate to the point where most other doctors simply amputate. Once we realized that our patients’ wounds were covered with biofilm bacteria, we just knew, “This is it!” So we intend to spread the word about our discoveries as soon as possible. A staggering number of medical issues stem from biofilm infection. 500,000 people suffer from sinus infections caused by biofilms every year. There are dozens and dozens of chronic infections that are biofilm-related – infections that are now left uncured and thus force people to have heart valves replaced, entire colons removed, or tubes put in their ears – all kinds of things. It’s bad, and we need a different approach to the way we treat so many conditions.

    Yet we are still often met with skepticism. I try to take the approach that nothing is fun if it isn’t controversial. Once the presence of biofilms on diabetic wounds is accepted as truth, the excitement and ambition of working in the area will dwindle. What we want now is confrontation. We want to push our ideas. It’s up to us to prove this is the real thing. It’s a vetting process, but we can win.

    What we do need is for researchers and doctors to be open-minded. Instead of brushing us off, they need to look at the evidence we are presenting, absorb it, and at least argue with it if they think it’s wrong. When it comes to chronic biofilm infections, we are dealing with life-and-death situations, so it’s important that others take note of the facts and reasonable arguments that are currently on the table.

    Dr. Wolcott treating a wound

    But since your patients are still immunocompromised after you have treated their wound, don’t you find that many come back with new wounds that you must treat again?

    Well, when diabetic patients develop an infected wound that causes a limb to turn black, the trauma serves as a major wake-up call. Many of our patients start taking their illness much more seriously. Some buy insulin pumps. Others are careful to buy special diabetic shoes that offer better foot care, or finally make regular visits to their podiatrist to have difficult-to-cut nails sawed off. All these measures reduce the likelihood that they will develop another wound.

    We do realize that even when we heal a patient’s wound, our patients are still at risk for new wounds because they are immunocompromised. However, when a patient comes in with a wound on one limb, we can show that the same comorbidities are present in the other limb as well – yet that limb has no wound and appears intact and generally healthy. If we can heal the wounded limb, it should end up just as healthy as the patient’s unwounded limb.

    But yes, we do have patients that we have treated once for a wound who come back two or three years later with another wound. The good thing is that, based on their first experience, these patients know to come see us as quickly as possible. The sooner we can treat the wound, the less likely it is that the infection will spread to the bone, where it is much harder to manage.

    Do you believe biofilms are actively studied to the extent they should be? If not, why are they not getting more attention?

    Oh, don’t get me started! Here’s my favorite example. I went to a biofilm conference in 2005. The first talk I heard was given by researchers from Procter & Gamble. They had developed a compound – Compound 227 – that prevents biofilm growth in the mouth and thus prevents the accumulation of dental plaque. They had literally spent millions and millions of dollars on this research – tremendous amounts of money – so that they could use any discoveries to create a more effective toothpaste. Others spend millions to identify chemicals that can better remove and prevent the biofilms that often accumulate on toilets – you know, those ugly rings.

    The presentation that followed the dental and industrial presentations was about biofilms and medicine. There was next to nothing to report and practically no spending whatsoever in the area. I came away understanding that right now we are spending way more money on preventing biofilm growth on teeth and toilets than on finding ways to effectively treat the dozens of different serious (many life or death) medical conditions that result from biofilm infection.

    Why aren’t more doctors like you – searching for a curative treatment option rather than content to simply palliate symptoms?

    Well, as we’ve moved forward with our work, we’ve taken our share of hits from all different types of regulatory agencies as well as funding agencies. So for some doctors, it may seem like this kind of scrutiny is not worth the extra hassle.

    But I feel many other doctors do care. They are tired of telling their patients, “That’s just the way it is. I can’t make you better,” but often they don’t know how to get started. For example, a physician from England recently came to visit the clinic. She is interested in starting to treat the biofilm infections she encounters most often: chronic and recurring bladder infections. However, in order to take this new approach, she will have to work more independently of the National Health Service.

    I take it you are familiar with evidence-based medicine? It’s the increasingly accepted approach for making clinical decisions about how to treat a patient. Basically, doctors are trained to make a decision based on the most current evidence derived from research. But what such thinking boils down to is that I am supposed to do the same thing that has always been done – to treat my patient in the conventional manner – just because it’s become the most popular approach. However, when it comes to chronic wound biofilms, we are in the midst of a crisis – what has been done and is accepted as the standard treatment doesn’t work and doesn’t meet the needs of the patient.

    Thus, evidence-based medicine effectively militates against innovation. Essentially, doctors suffer if they step away from mainstream thinking. Sure, there are charlatans out there trying to sell treatments that don’t work, but there are also many good therapies that are not used because they are unconventional. It is only by considering new treatment options that we can progress.

    Have you conducted a double-blind, placebo-controlled trial of your findings? If not, why not?

    We know without a doubt that chronic diabetic wounds can be saved if the biofilm bacteria that cover them are eliminated. So we are simply unwilling to use a control group as guinea pigs when we know we’ve got the methods to save most of their limbs as well. Granted, we are using some medications for off-label purposes, but they are all approved by the FDA. This is not just experimental stuff. We know that what we are doing is right for the patient. So we simply refuse to do a study where the control group is not treated. The only double-blinded trial we’ve done tested the effect of bacteriophages on wound biofilms, but in that case, the control group still got treated with everything else in our arsenal except the bacteriophages.

    We hope we can come to a compromise. We have plenty of data, and even though it’s retrospective, it’s still very valuable. So we hope that the medical community will take this evidence as proof that we are doing the right thing, in lieu of a blinded trial. Right now, rather than focusing on a blinded trial, we are simply focusing on what is best for the patient. We are trying to heal as many patients’ wounds as possible. That’s our main priority – treating patients right here and right now. If you take time to look at the retrospective evidence, it is solid. Our patients do very well.

    Are you familiar with the Marshall Protocol?

    Only from what you’ve just told us, but we plan to investigate it further. The idea of pulsed antibiotics makes a lot of sense – essentially it may allow the antibiotics to target the growing or regenerating cells in the biofilm, which were previously persister cells. Please send us more information.

    Me: Great! I think that the Marshall pathogenesis will help you better understand why the diabetic patients you treat are so immunocompromised. As you know, we believe that the entire pathogenesis of diabetes is caused by L-form and biofilm bacteria, and that these bacteria are able to create substances that slow the Vitamin D Receptor and subsequently the activity of the innate immune system. Thus, we believe that restoring the competence of the Vitamin D Receptor is key to recovery from inflammatory disease. Activating the VDR, or putting your patients on the full Marshall Protocol, could help restore their innate immune function, which may go a long way in preventing them from developing new infected wounds. At least that’s my take!


    The following is an excerpt taken directly from an article on the Montana State University’s Center for Biofilm Engineering website. It describes the experience of just one of Dr. Wolcott’s patients.

    The fruits of this science can be seen in the story of Jerry Montemayor, a 38-year-old school administrator in Lubbock, who stubbed his toe on the corner of his bed one morning in December 2005 and nearly lost his foot.

    Initially, Montemayor ignored the bruise. A diabetic, Montemayor has poor blood circulation in his lower legs and feet. Three days later, his toe was discolored and he limped with discomfort. He went to an emergency room.

    Emergency room physicians told Montemayor his foot was severely infected and he must be admitted. He spent the next 12 days in the hospital. When his infection didn’t respond to treatment, Montemayor’s physicians told him his foot should be amputated, or he risked losing his entire leg, and possibly his life.

    “First they said it would be the top of my foot, then half of my foot, then my whole foot,” Montemayor said. “They kept telling me I needed to set a date and time for my amputation. Believe me, if it wasn’t for the power of prayer I don’t think I’d have gotten through this.”

    Montemayor sought a second opinion. The next day, two staff members from Wolcott’s center visited.

    “I’ll never forget that visit,” Montemayor said. “One of the girls said ‘We’ve seen worse. We suggest you do not get this amputated. We can treat this.’”

    It was Christmas Eve.

    Montemayor took their advice and began nearly a year’s worth of treatments at Wolcott’s clinic on Christmas Day. Today, he walks on both feet.

    “The clinic staff said they were going to do their best and they did,” Montemayor said. “I’m blessed to be walking.”

    “It’s hard to relive that experience in the hospital,” he said. “At the time I was thinking about my personal life. I was thinking how this would affect me meeting someone, or having a relationship with someone. Is she going to accept and support me? Is she going to be able to walk next to me and accept that I have a prosthetic limb?

    “I was thinking ‘If I have kids will I be able to run and play with them?’” Montemayor said. “I was thinking ‘Am I going to be a whole man?’”

    Update – May 24, 2009

    Since the time of this interview, Dr. Wolcott has continued to successfully treat his patients’ wounds. At around the time of this interview, Dr. Wolcott authored a paper, the abstract of which appears in PubMed: A study of biofilm-based wound management in subjects with critical limb ischaemia. Here is the money quote:

    When comparing the healing frequency in this study with a previously published study, [Biofilm-based wound care management] strategies significantly improved healing frequency. These findings demonstrate that effectively managing the biofilm in chronic wounds is an important component of consistently transforming ‘non-healable’ wounds into healable wounds.

    I am also including images Dr. Wolcott put online. Anyone who is interested can view the large PDF file which contains images of the patients Dr. Wolcott has treated according to his biofilm-based wound management strategy. Here’s a sample.

    wounds

  • Comments Off on Interview with Dr. Randall Wolcott, bacterial biofilm wound specialist
  • Filed under: biofilms, featured articles, interview (doctor/researcher)
  • Insights into horizontal gene transfer: conversations with Dr. Peter Gogarten and Dr. James Lake

    Although it may not seem like a topic immediately related to the Marshall Protocol, I believe that it’s difficult to truly envision the new bacterial pathogenesis of inflammatory disease without taking horizontal gene transfer – the ability of bacteria to swap DNA – into account. In other articles on this site, I’ve described how people with inflammatory disease gradually accumulate a “pea soup” of pathogens. I like the term because it hints at the fact that everybody’s bacterial load is unique and also brings to mind the image of something stirred or mixed. Everyone with Th1 disease acquires a large mix of different pathogens, but even the image of a great number of different but isolated pathogens does not do justice to the variety of bacteria that each patient harbors. Because bacteria can trade DNA, they are constantly exchanging genetic material, which allows for the continual creation of new species with new characteristics and new survival abilities. So the bacterial loads we harbor are probably much more complex than we envision, and certainly more complex than what conventional medicine envisions. After all, conventional medicine is still trying to tie one pathogen to one disease – and that’s only if it decides to factor bacteria into the picture at all.

    In order to better understand horizontal gene transfer, I spoke with Dr. Peter Gogarten at the University of Connecticut and Dr. James Lake at UCLA, both of whom are leaders in the field of gene transfer. Both of them were extremely friendly and seemed excited to speak with me about the phenomenon. I asked them the same questions. Here is how they responded:

    Can you tell me a little about why horizontal gene transfer (HGT) is so important?

    Lake:   Well, without taking horizontal gene transfer into account, how do we explain the fact that prokaryotes (bacterial organisms) continue to generate genetic diversity even though they have no means of sexual reproduction? The only way that new bacterial species can form, and that populations of bacteria can adapt to new circumstances, is if they exchange DNA or genes during their lifetimes.

    Dr. James Lake

    We now realize that organisms with similar characteristics find it much easier to swap DNA. But on occasion, a group of organisms, such as a species of bacteria, can trade DNA with a class of organisms that has very different characteristics. When this happens, it means that through the process of horizontal gene transfer, a bacterial species can acquire a host of new characteristics, even from organisms that are quite different from it. These newly acquired characteristics may or may not offer a survival advantage, but if they do, the bacteria may be able to survive in a new environment or infect a new species – anything along those lines.

    So, in my opinion, it turns out that the exchange of genes among prokaryotes is more fundamental than we’ve ever thought it to be in the past. Because of horizontal gene transfer, the evolution of many species – many different types of bacteria, and also multi-celled organisms – is entangled. Our genetic histories are definitely the result of our DNA mixing with the DNA of other species, including bacteria. Clearly this phenomenon plays a significant role in the body.

    Gogarten:   Horizontal gene transfer allows us to understand how organisms such as bacteria, which don’t have sex, are able to exchange genetic material and create genetic diversity within their populations. Human beings and most mammals reproduce via sex, passing genes to their offspring in what is called vertical gene transfer. In vertical gene transfer, two sets of different chromosomes (containing different genes), one from each parent, combine, so that the offspring has a combination of genes from both mother and father. Horizontal gene transfer – the type of gene transfer that occurs between bacteria, viruses, etc. – is another mode of sharing DNA that still fosters diversity. If bacteria and other organisms couldn’t trade genes via horizontal gene transfer, there would be no recombination at all; bacteria would not be able to change or acquire new characteristics from generation to generation. Species that have more in common are likely to trade genes more often. However, even species that are not of the same lineage can trade genetic material. For example, research has shown that over the past million years, species of bacteria have picked up DNA from the domain Archaea – prokaryotes that are very different from bacteria. These exchanges may be rare, but they still occur.

    The Marshall Protocol puts forth the idea that diseases of unknown cause are bacterial illnesses. How do you feel about this hypothesis?

    Lake:   I am open to the idea that bacteria may be behind diseases of unknown cause. Lately, I have been fascinated by many of the studies which have found that certain species of bacteria in the gut affect an individual’s tendency to gain weight. Even before these studies came out, I’d been thinking about such a possibility for years – I thought a connection would be found. I started to think about the possibility after taking a trip to Japan about four years ago. I was helping advise a steel plant which had just built two refineries for their waste (waste can easily be colonized by bacteria). They were identical – they had the exact same design, were the exact same size, and held the exact same contents (remnants of waste). Yet one of the refineries worked perfectly well and the second simply didn’t work. We ended up taking the bacteria from the refinery that didn’t work and transferring those populations over to the refinery that did work – much like the researchers in these obesity studies take bacteria from obese mice and implant them into thin mice. Once we did the bacterial transfer, the refinery that had been working stopped working in exactly the same manner as the other. So clearly, certain species of bacteria determined whether each refinery was able to function. I went away thinking, “If this can happen in a refinery at a steel mill, it’s got to be able to happen in the body!”

    Dr. Peter Gogarten

    Gogarten:   I do believe that in the future we will discover that many more diseases of unknown cause have a microbial component. I believe the fact that we have not implicated bacteria in more diseases is related to our inability to correctly culture so many different forms of pathogens. Current culturing methods are obviously very poor at detecting the presence of many bacteria. Once molecular technology is used more frequently, we will probably be able to detect more pathogens and recognize their association with disease. I also suspect that we will be hearing more about how an imbalance of bacteria in the body can cause disease.

    I’m very interested in this question. At what rate do you think horizontal gene transfer occurs in the body? Is it happening constantly? Does it only happen on occasion?

    Lake:   Right now we can only estimate and guess at the exact rate of horizontal gene transfer that occurs between organisms. As I mentioned before, it is much easier for similar organisms to trade genes, so HGT happens more frequently between such organisms than between organisms with very different characteristics. I can’t tell you an exact rate, but I do believe that gene transfer occurs very frequently among similar organisms, because such transfer happens relatively easily and there are several different routes by which DNA transfer can occur. Given all these options, it’s probable that transfer happens quite often. There are three fundamental ways that organisms exchange DNA:

    The first is called conjugation. In this process, one organism transfers genetic material directly into another through cell-to-cell contact. Whether the genetic material is actually incorporated by the recipient organism is harder to track. It used to be thought that conjugation explained how bacteria like E. coli might have “sex” and foster new organisms with different genetic characteristics. But researchers soon realized that E. coli can also exchange genetic material through conjugation with organisms that have very different characteristics (like cyanobacteria). So it’s not technically sex if it can happen between very different organisms. Still, this is one of the easiest ways to exchange DNA and can be performed in the lab.

    The second way that organisms exchange DNA is through a process known as transformation. I’m really interested in transformation. Several decades ago it was mistakenly thought that organisms actually resisted taking up strange DNA – DNA from organisms not like themselves. However, we now realize that this is definitely not the case. We know that, in the lab, it’s possible to take organisms with very different DNA, put them in solution, and apply an electrical shock. During the shock, the organisms trade much of their DNA. This suggests that under stressful conditions in particular, organisms are more likely to engage in gene transfer. So, in the body, gene transfer may be particularly common under conditions of stress or starvation. If an organism finds itself in a cell that isn’t getting adequate nutrients, it’s logical that it would try to swap DNA with another nearby organism on the chance that the swap might offer it some sort of survival advantage. It’s quite possible that the DNA swap would have no effect, but then again, maybe the swap could give the bacterial species an enzyme that would allow it to use an alternate energy source still available in the cell. This may be how bacteria remain alive when they are forced to go into “survival mode.”

    Last, but not least, gene transfer can occur through transduction – a process in which viruses infect bacteria or human cells and, in the process, integrate their DNA into the organism they have infected. So it’s perfectly plausible that if a person is infected with both bacteria and viruses, the two kinds of organisms can swap DNA.

    Gogarten:   This is a difficult question. It’s very hard to estimate the rate of HGT. But several studies have been eye-openers for me, suggesting that horizontal gene transfer happens quite frequently. I remember a study in which researchers looked at three different E. coli genomes. Basically, they took the three genomes and sequenced their DNA. Without taking HGT into consideration, one would expect the genes in each genome to be essentially identical, because the strains are all the same species. But when the researchers compared the sequences, they found that only 40% of the genes were shared by all three genomes; 60% of the genes differed between the E. coli genomes sequenced. This suggests that among E. coli, and other similar bacteria, an enormous amount of horizontal gene transfer is taking place over just a short period of time. A really amazing amount of transfer.
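
    (To make that kind of comparison concrete: the sketch below uses a handful of invented gene labels – not the actual study data, and it will not reproduce the 40%/60% figures – just to show the set arithmetic behind a statement like “only 40% of the genes were shared by all three genomes.”)

```python
# Compare the gene content of three hypothetical E. coli strains.
# The gene labels are invented for illustration only.
genomes = {
    "E. coli strain A": {"geneA", "geneB", "geneC", "geneD", "geneE", "geneF", "geneG"},
    "E. coli strain B": {"geneA", "geneB", "geneC", "geneH", "geneI", "geneJ"},
    "E. coli strain C": {"geneA", "geneB", "geneD", "geneK", "geneL", "geneM", "geneN"},
}

# "Core genome": genes present in every strain
core = set.intersection(*genomes.values())

# "Pan-genome": every gene seen in at least one strain
pan = set.union(*genomes.values())

print(f"Core genome: {len(core)} genes shared by all three strains -> {sorted(core)}")
print(f"Pan-genome:  {len(pan)} genes across the three strains")
print(f"Fraction of the pan-genome shared by all strains: {len(core) / len(pan):.0%}")

for name, genes in genomes.items():
    others = set.union(*(g for other, g in genomes.items() if other != name))
    unique = genes - others
    print(f"{name}: {len(genes)} genes, {len(unique)} found in no other strain")
```

    The shared set is what microbiologists call the core genome, the union across strains is the pan-genome, and the gap between the two is where horizontal transfer (and gene loss) leaves its mark.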

    One must understand that when bacteria and other organisms swap DNA through HGT, most of the changes that occur are unimportant and fail to give the organism that acquires the new DNA a survival advantage. But in some cases the swap endows the recipient bacterium with a plasmid or a protein that does provide an advantage, allowing the organism to live in a new ecological niche or survive new environmental conditions. This is how bacteria end up adapting to new, challenging circumstances. But for the most part, it is hit or miss. I’d say the occasions on which organisms are actually conferred a serious survival advantage thanks to HGT are rare. They are, for the most part, the exceptions. But when they do occur, the organism involved in the transfer can really benefit.

    Also take, for example, the amount of HGT that probably goes on inside bacterial biofilms. Biofilms are definitely environments that foster HGT. I remember a study where researchers took several plasmids and tagged them with a fluorescent marker. They did this so that if a plasmid were taken up by another organism, you could see it glowing inside and know the plasmid DNA had indeed been incorporated. In this particular case, the researchers started with two bacterial biofilms and introduced several of these plasmids. Soon, both biofilms were glowing with light. So it was obviously quite easy for the organisms in the biofilms to pick up new genes.

    Right now, I am very interested in studying xenobacteria, single-celled organisms that live in oceans. There are many strains of these bacteria, and all of them differ because, over time, each acquired different genes. Some of the genes that all xenobacteria picked up allowed them to change from normal bacteria to organisms that are able to perform photosynthesis under the water. Even now, the different strains of xenobacteria still shuttle genes back and forth. It’s almost like trying to study and classify Darwin’s finches. Each bird acquired (through vertical gene transfer) genes that allowed it to adapt to its own niche and find unique sources of food. In the same sense, HGT has allowed xenobacteria to do the same thing, and tracking their diversity is just as interesting as looking at the differences among Darwin’s finches. Studying xenobacteria has convinced me of the tremendous amount of horizontal gene transfer that takes place in the ocean.

    Do you think that researchers currently underestimate the amount of horizontal gene transfer occurring in the body, or is the concept of horizontal gene transfer taken adequately into consideration in the studies you read?

    Lake:   It’s hard to tell because the phenomenon is so complex. For the most part, it seems that researchers are making an effort to account for horizontal gene transfer but some of the genetic changes that occur due to the phenomenon may definitely be too subtle for us to pick up. The whole process is likely too complex to be fully accounted for in the average study.

    Gogarten:   I think the pendulum swings back and forth. In the 1940s, scientists had largely given up on the idea that we could create a tree of life – a chart that would show us the lineage of organisms on Earth. Researchers figured that because so much HGT was going on, it would be impossible to separate species into distinct lineages. Then, over the past decades, researchers like myself have given thought to the possibility that we may indeed be able to classify organisms, at least to some extent, despite the fact that they so frequently trade DNA. Yet rather than a tree of life, I think what we have to envision when we think of connections between species is more like a web or a network – there are main lines of ancestry, yet also cross-connections, and some species don’t fall strictly into any single lineage. In this area, I think we are just scratching the surface of what we will find when we really start to learn more about how HGT has affected the evolution of organisms over the course of history.

    Any parting thoughts on HGT?

    Lake:   These are very exciting times. I feel that in the next five years our whole view of the evolution of life may change as we continue to take this phenomenon into account. I’m glad you’re taking a close look at the characteristics of bacteria on your site.

    Gogarten:   My work has shown me that HGT is capable of endowing organisms with dramatically new traits and of completely changing the capabilities of microorganisms.

  • Comments Off on Insights into horizontal gene transfer: conversations with Dr. Peter Gogarten and Dr. James Lake
  • Filed under: featured articles, horizontal gene transfer, interview (doctor/researcher)
  • Voices of reason in the vitamin D debate

    Maybe vitamin D isn’t the answer after all.

    Not only does the above statement ring true, it’s also the title of a recent post on “Dr. Len’s Cancer Blog” – a website written by Dr. Len Lichtenfeld, Deputy Chief Medical Officer for the national office of the American Cancer Society, in order to facilitate communication with the public on important issues related to cancer.

    Dr. Lichtenfeld, as described by his website, is a frequent spokesperson on a variety of cancer-related subjects, and serves as a liaison for the Society with many professional and public organizations. He’s also a board certified medical oncologist and internist who was a practicing physician for nearly 20 years and serves on several national committees focused on physician payment, the quality of medical care, and the role of health information technology in healthcare delivery.

    In the blog entry described above, Lichtenfeld attempts to explain to the public why the American Cancer Society does not plan to advise the American public to take extra vitamin D supplements in the name of preventing cancer (this is in contrast to the Canadian Cancer Society which has, unfortunately, urged citizens to ingest more of the secosteroid).

    Dr. Lichtenfeld

    Lichtenfeld begins his discussion by taking a close look at one of the most recent studies on cancer and vitamin D – a study conducted by the National Cancer Institute. The first study to actually examine the relationship between measured vitamin D in the blood and subsequent total cancer deaths, it failed to show an association between baseline vitamin D status and overall cancer risk in men, women, non-Hispanic whites, non-Hispanic blacks, Mexican Americans, and persons younger than 70 as well as those 70 years or older.

    “The key finding of the study was that there was no impact of vitamin D levels on the overall risk of dying from cancer, when comparing groups based on where they lived or what season their blood test was drawn (spring and summer would be expected to increase vitamin D levels, compared to winter),” Lichtenfeld explains. “Vitamin D had no impact on cancer deaths when various racial/ethnic groups were examined.”

    Of course, Lichtenfeld does acknowledge that the research team found a significant reduction in colorectal cancer among subjects with higher levels of vitamin D (25-D) in their blood. Yet, in a decision that reflects his neutrality on the subject, Lichtenfeld makes it clear that such findings will need to be confirmed by future studies before the American Cancer Society considers vitamin D as a possible remedy for colorectal cancer.

    No doubt he is aware of a similar study led by Jacques Rossouw at the National Institutes of Health, whose group tracked the effects of vitamin D on colorectal cancer in 46,282 postmenopausal women, monitoring them over a long period of time. Rossouw’s team found “absolutely no indication of an effect of calcium or vitamin D [on cancer] — zero.”

    With such conflicting data emerging on vitamin D and colorectal cancer, no wonder leaders such as Lichtenfeld are taking a step back to see if they might be missing part of the vitamin D puzzle.

    Such contradictions may also be why, with good reason, Lichtenfeld appears to be taking a long, hard look at how several other studies on vitamin D have been conducted, with a keen eye towards bias.

    “Many of the other studies have tried to infer vitamin D levels through a variety of means, such as asking about dietary habits or inferring a vitamin D level based on descriptions of outdoor activities,” comments Lichtenfeld. His concerns about such research methods are well-grounded, as studies that attempt to infer levels of vitamin D rather than measure them are notoriously bad at coming up with accurate results.

    Thus, Lichtenfeld suggests that the recent study by the National Cancer Institute, a study which found that vitamin D offers no overall benefit in fending off cancer, should bear more weight than other studies on the subject, as it was done prospectively – meaning that participants were followed looking forward, and actual blood tests were used to measure the amount of vitamin D in their blood.

    Furthermore, Lichtenfeld seems to understand the urgent need for long-term studies on vitamin D. He agrees with editorialists who have suggested that it may take longer than 6-12 years to accurately assess the effects of vitamin D on study subjects – especially since, as he comments, it can take many years for a cancer to develop.

    Those of us familiar with the Marshall Protocol wholeheartedly agree with Lichtenfeld in this regard. It’s clear that future studies on vitamin D and cancer will have to follow their subjects for at least a decade or two in order to accurately gauge the relationship between intake of the secosteroid and cancer rates. If such studies actually take place, they will almost certainly highlight the drawbacks of vitamin D rather than any purported “benefits”, as the negative consequences of immunosuppression become increasingly apparent over longer periods of time.

    Lichtenfeld proceeds to comment on several editorials written in response to the National Cancer Institute study, arguing they “point out that we need to know more about how vitamin D levels change from season to season, and how that impacts our health.”

    He also warns readers to heed the following editorial comment, stating that he “couldn’t agree more” with their conclusions:

    “Whether vitamin D reduces cancer risks and, if it does, whether these amounts suffice are actively being debated. Randomized clinical trials of the effects of vitamin D on the incidence of colonic polyps and invasive cancer are needed. While vitamin D may well have multiple benefits beyond bone, health professionals and the public should not, in a rush to judgment, assume that vitamin D is a magic bullet and consume high amounts of vitamin D. More definitive data on both benefits and potential adverse effects of high doses are urgently needed.”

    Indeed, Lichtenfeld seems wise enough to have realized that treatment options that are suspiciously simplistic enough to be dubbed “magic bullets” have seldom if ever held up to medical scrutiny, especially when researchers start to examine the substance at the molecular level.

    “We have consistently called for more research into this topic [vitamin D],” he argues. “This is especially important given our past experience with other vitamins, such as vitamin C and beta-carotene, where well-qualified experts touted the benefit of those vitamins in reducing cancer risk. When the studies were actually done, we discovered that the vitamins had either no effect or, for some people, may have actually increased their risk of cancer.”

    As with any other blog, readers are able to write responses to Dr. Lichtenfeld’s pieces. The very first person to respond to “Maybe Vitamin D isn’t the Answer After All” was none other than Dr. John Cannell – head of the “Vitamin D Council” – an organization that seeks to promote the consumption of vitamin D, and when I say promote I mean promote. Although the group presents itself as a scientific body, even a quick glance at their website assures the reader that the members of this Council have failed to read, evaluate, or even consider any of the alternate hypotheses proposed about vitamin D – hypotheses based on research clearly showing that extra levels of the secosteroid are harming rather than helping people with chronic disease.

    “Perhaps you could explain what residual confounding is?” writes a livid Dr. Cannell. “If so, your readers might feel you fully understand the study. What was the relative risk of breast cancer? I know the sample size was too small for signifigance [sic] but you might want to say what it was? Is it true that the relative risk of breast cancer was almost four times higher in the group with the lower levels?…..What you are actually doing is defending the American Cancer Society’s decision not to follow the Canadian Cancer Society’s recommendation of 1000 IU per day of vitamin D. Say you are wrong and Canada is right? On whose hands will that blood be?”

    Apparently for the Vitamin D Council, this is what passes for professional discourse.

    Lichtenfeld kept his cool, responding, “What Dr. Cannell has not said is that similar circumstances in the past–with other vitamins that were thought to be harmless and able to reduce the risk of cancer–showed evidence of harm and/or lack of efficacy when subjected to appropriate study. To say that my opinion is equivalent to having blood on my hands is an ad hominem attack not worthy of consideration. His cause would be better served to advocate on behalf of people who need to be screened for colorectal cancer (which would save thousands of lives, based on solid evidence), and join us in encouraging appropriate review of the data and research to definitively answer the issue at hand.”

    Lichtenfeld then ended the discussion with a statement that just about sums up one of the biggest problems arising from the fact that the public is getting its information about vitamin D straight from the mouths of people like Cannell:

    “When we succumb to making every medical decision solely on the basis of the strongest advocate’s voice, we run the risk of moving medical practice back into an era similar to that from which we are trying to emerge. If the review and research studies confirm Dr. Cannell’s position, that will be welcome. But we need to once and for all establish the science-based evidence that will conclusively answer the question one way or the other, rather than relying on advocacy to establish dietary and medical practice recommendations for the world.”

    Steven Strauss takes a look at the ethical issues affecting the bulk of vitamin D research.

    At about the same time that Lichtenfeld was advising the public to wait for more research before popping extra vitamin D supplements, author Steven Strauss was addressing similar issues in an entry published on the CBC News blog. To me, it is a remarkable piece, because it comes from one of the few voices that actually says in the midst of what can only be described as vitamin D hysteria, “Hey, wait a minute.” In his online bio, Strauss expresses admiration for the motto of Austrian writer Karl Kraus – “Say what is.” I think it’s pretty clear that Strauss does just that.

    Strauss begins his discussion of vitamin D by describing the pressures put not only on himself, but on the average Canadian citizen to purchase vitamin D. “It’s been cold and remarkably un-sunny in my neck of Canada recently — climatic conditions which I have been repeatedly told in the past year should lead me to start scarfing down vitamin D pills, and do it in amounts which likely exceed Health Canada’s daily recommended dosage,” he writes.

    Along with his fellow citizens, he’s also been urged by numerous vitamin D advocates – who might be better characterized as zealots – to ignore the Canadian government’s guidelines on vitamin D. These advocates, who include researchers such as Reinhold Vieth, Michael Holick and Cedric Garland, have encouraged Canadian citizens to “strike out on a vitamin D health path of their own” by taking five times the amount of vitamin D suggested by the government.

    “And if I don’t, it is my fault — well ‘my’ as in all the media — if you readers get cancer, multiple sclerosis, flu, autism, depression, diabetes, loose teeth, stroke, heart disease, osteoporosis, fractures and God knows what else,” remarks Strauss.

    Strauss’ comment is laced with sarcasm as he is well aware of an editorial published last year in the American Journal of Clinical Nutrition in which 15 of the top vitamin D proponents from around the world scolded journalists for not encouraging the public to consume the high amounts of vitamin D recommended by… themselves.

    “Well, one should take this kind of criticism to heart,” remarks Strauss. Indeed, the situation caused Strauss to examine other papers on vitamin D.

    Among these papers was a study by researchers at Creighton University in Nebraska, one of the papers cited by the Canadian government in an effort to rationalize its decision to recommend that people in Canada take much more vitamin D on a daily basis – upwards of two-and-a-half times today’s recommended 400 International Units.

    The study, which looked at the cancer rates of women taking vitamin D, women taking calcium without vitamin D, and women taking nothing over a four-year period, reported a 60 per cent decrease in collective cancer rates among the vitamin D takers, who took somewhat more than twice the currently recommended dosage.

    When Strauss took a closer look at the study, he wasn’t pleased with what he discovered. I would now like to share with my readers an extended portion of Strauss’s post, in his own words, starting with his discussion of the Creighton study. The following is reproduced from the CBC News site. Strauss’s argument is too cogent, too compelling, not to share.

    While the cancer numbers were small — only 50 cases in total — the CCS decision meant there was lots of coverage of the research. I found upwards of 50 reports in magazines, newspapers, radio and television, but absolutely zero coverage of the criticism of the paper that appeared in the journal in recent months.

    In one letter, three scientists in Texas pointed out a number of issues, not the least of which being an Iowa study which suggested that when breast cancer was looked at there was indeed a fall in cancer numbers for the first five years when a vitamin D supplement was taken. But this balanced out at 10 years and there actually seemed to be more breast cancers among women taking vitamin D after 15 years.

    It is precisely these sorts of yes/no/maybe results that make science and medical writers very, very, very, very cautious about blithely recommending dose rate increases.

    Then there were the questions raised by Manish Sood of the Toronto General Hospital and Amy Sood of the University of Toronto faculty of pharmacy. They pointed out that some had suggested the incidence of heart disease might grow as a result of increasing the vitamin D dosages and recommending, as the CCS did, that supplements be taken year round or during fall and winter months depending on skin colour and other factors.

    In light of the CCS recommendation and a possible heart disease side-effect, they concluded their letter saying: “As Canadians, we ask the question — have we just traded one problem for another?”

    Sounds reasonable, but their concern was brushed back by paper authors Robert Heaney and Joan Lappe of Creighton, who responded that there is no evidence of heart problems with vitamin D doses up to 10 times what they had given people. They added, “The issue of vitamin D toxicity was exhaustively reviewed in this Journal just a few months ago and Sood and Sood may find some reassurance in that report.”

    Given this disagreement I, too, needed reassurance and so I went to the review where I found something very non-reassuring. Heaney and Vieth had co-authored the toxicity study with two employees of the Council For Responsible Nutrition, a Washington D.C.-based lobby group and trade association for ingredient suppliers and manufacturers in the dietary supplement industry — that is to say, the official representatives of the people who would make vitamin D.

    And their roles were anything but minor. One applied “risk assessment methodology” to the results and the other “searched literature and summarized relevant findings.” Ultimately what the four wrote looks extremely authoritative, and might well be so, but to my mind this collaboration represents not an apparent conflict of interest, but a genuine conflict of interest.

    And let me explain it with a simple equation. Let us assume that one-third of the people in North America decide, based on the CCS recommendation, to more than double their vitamin D dosage and this costs a bare $20 per person a year. That translates into an extra $2 billion going to vitamin D manufacturers and sellers.

    All of this made me go back to the original Creighton paper and look to see if there was any indication of specific conflicts of interest among the researchers in it. The paper says no, with resounding vehemence: “None of the authors was affiliated in any way with an entity involved in the manufacture or marketing of vitamin D.”

    Then it goes on to mention that one author, Robert Recker, was on the scientific advisory boards of Roche and Procter & Gamble, and Heaney was on the scientific advisory board for the International Dairy Food Association and the speaker’s bureau for P & G.

    It’s true that Roche doesn’t make vitamins today — but it sold the business in 2003, a time that the Creighton experiment was ongoing. The sale, by the way, was announced at the same time Roche said it had resolved lawsuits growing out of its involvement in vitamin price fixing.

    But Procter remains in the business, in that it has licensed its Olay name to another company to produce Olay vitamins, which include vitamin D in a multivitamin supplement. Not to mention the fact that Heaney reported in 2006 that he had a “financial relationship with SmithKlineGlaxo” — a company which directly produces vitamin D.

    And oh, yes, it seems almost everyone doing vitamin D research — Vieth included — gets money from dairy farmers associations in either Canada or the U.S.

    So I sent Recker and Heaney an e-mail asking for an explanation and Recker responded: “Neither Dr. Heaney nor I have any affiliation with the company that supplied the vitamin D for the study. We have not had affiliation with the vitamin D work for the companies you mention. I have been a scientific adviser to Roche, P & G and Smith-Kline-Glaxo, but not in their vitamin D work.”

    Interestingly finely parsed, but when I Google “Recker and Glaxo” I find him quoted in a company press release endorsing an osteoporosis website the company supports — a site that advocates taking vitamin D and which points out that if you have problems getting it naturally, you can buy supplements that will fill in the gap.

    Recker responded in his e-mail to me that, “I do not include the statement in the press release as a potential conflict of interest since I was not making the statement out of any affiliation with GSK. I have not participated in any of the studies nor in any advisory capacity to GSK regarding any vitamin D product. There is often some confusion about what constitutes a potential conflict of interest, as might be the case here. My institution does not require that I list this as a potential conflict of interest in its management of faculty relations with industry.”

    Parsing a parse, if you ask me.

    I then had a lengthy discussion with Vieth who quite candidly said he had been delighted to join up with the manufacturers’ association employees in the toxicity review paper because he had long admired them for being good scientists. “I was honoured when they asked,” he told me.

    As to money conflicts he doesn’t think that was a big issue because vitamin D is a generic product and can be made for very little. He said the pure form of the substance costs about $3,000 a kilogram to make, a figure that translates into the dose each of the women in Nebraska took to ward off cancer costing about 3.5 cents a year to make.

    Then he told me he had been angered when his name had been taken off some scientific papers after he, in complete openness, told agencies and journals that he and his wife have set up a vitamin D company in Toronto called Ddrops Inc. She is now the company’s president and it sells a year’s supply of 1,000 IU liquid vitamin D for about $20. “I was told my name was being taken off papers because of my wife’s occupation. That is something I find infuriating and upsetting,” he said.

    A little additional research found that Elaine Vieth has told the Hamilton Spectator that pharmacies initially had little interest in selling her product, which can be sprinkled on food or in drinks, but that after the Creighton cancer study appeared she sold 30,000 bottles within two days.

    I am not often struck speechless by life’s contradictions, but here I am. Who would have thought that the research pertaining to what Ddrops markets as “the sunshine vitamin in just one drop” could be so conflicted?

    Nonetheless, let me be absolutely clear. I cannot say that any of the findings of any of the researchers I cite — particularly when it comes to vitamin D’s cancer preventative effects — are erroneous because of the scientists’ commercial connections. Vitamin D may indeed turn out to be the next best thing since free e-mail and ballpoint pens, but I will say that a careful journalist, a prudent journalist, a wise journalist would look at this tangled mess of conflicted interests and results and proceed exceedingly carefully in promoting a massive change in vitamin D dosage levels.

    I will say that Health Canada should not be stampeded into doing anything reflexive when it comes to raising vitamin D dosage levels.

    And I might also suggest that if university scientists are looking for a less conspiratorial explanation for their perception that media has been loath to join a crusade to raise the dosage levels, they would do well to consider how it looks to outside observers when researchers blithely associate with those who benefit financially from these changes.

    And that advice is good on both the sunniest and the cloudiest of days.

    In Strauss’ case, an inflamed Vieth wrote back in response to the piece, arguing: “Is there any conceivable way that a new discovery in nutrition, health or therapeutics could make a difference to the public without involving a commercial interest? Compared to the private sector, government agencies usually move at a snail’s pace. If vitamin D is the example being discussed, then it is foolish to imagine that government will reflect anything newer than what was known ten years ago. Government does not make products for consumers. There can be no progress without the private sector.”

    I beg to differ. Mixing commercialism with science is a dangerous endeavor, one that is sure to mislead the public with biased opinions and deliberately cheerful results. It’s the attitude Vieth describes above that has taken the public to the place they are now. They are a group sadly misled by a handful of researchers who zealously advocate for their preconceived beliefs, while refusing to acknowledge even the most valid of scientific research if it proves them wrong.

  • Comments Off on Voices of reason in the vitamin D debate
  • Filed under: featured articles, vitamin D
  • About Amy Proal

    Amy and Zeus

    Amy Proal graduated from Georgetown University in 2005 with a degree in biology. While at Georgetown, she wrote her senior thesis on Chronic Fatigue Syndrome and the Marshall Protocol.