Science fiction fans know that technology is a double-edged sword. On one hand, advances in science offer us fantastic powers to solve difficult problems (think flying cars). On the other hand, the potential for catastrophe is also greater. With better technology comes a greater responsibility to prevent its misuse.
Early botanical scientists understood both the power and the limitations of science to describe a complex natural world. Carl Linnaeus, who developed the original system for classifying plants and animals, recognized that organisms are not necessarily discrete species but exist on a continuous spectrum of life.
Today, scientists in academia work to identify and quantify the diverse array of chemical constituents in botanical products, while industry works to ensure a safe, effective and consistent product. At our disposal is an alphabet soup of analytical technologies offering increasingly better detection of constituents, down to the picogram, which relative to a gram can be visualized as a drop of water in a thousand swimming pools.
But with picoscale resolution comes a lot of noise (one part per trillion, to be exact) and even more responsibility to reliably separate a signal from it. Even at the parts-per-million (ppm) level, equivalent to a cup of water in a swimming pool, we often observe results that defy explanation.
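As a back-of-the-envelope check on these scales, the unit fractions can be computed directly. The drop and pool volumes below are my own rough assumptions, chosen only for order of magnitude, not figures from the text:

```python
# Rough scale comparison for trace-level detection limits.

def parts_per(analyte, total):
    """Fraction of analyte in total (both in the same units)."""
    return analyte / total

# One picogram per gram is one part per trillion;
# one microgram per gram is one part per million (ppm).
ppt = parts_per(1e-12, 1.0)   # picogram / gram
ppm = parts_per(1e-6, 1.0)    # microgram / gram

# Water analogy, assuming a ~0.05 mL drop and a ~2.5 million L pool
# (both assumed values): the result lands within a couple of orders
# of magnitude of one part per trillion, so the image is a fair one.
drop_L, pool_L = 0.05e-3, 2.5e6
drop_in_1000_pools = parts_per(drop_L, 1000 * pool_L)

print(f"pg/g = {ppt:.0e}, ug/g = {ppm:.0e}, "
      f"drop in 1,000 pools = {drop_in_1000_pools:.0e}")
```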
For example, only today’s best and most expensive instruments are able to account for the matrix effects that occur when testing complex mixtures. One such setup is LC/MS/MS, also known as tandem MS, in which two mass spectrometers are linked to a chromatograph; the first MS removes much of the "junk" that would otherwise interfere with the result from the second. Complex mixtures are so difficult to examine because they contain so many different compounds that the chances are relatively high one of them appears at the same retention time (or peak) on the chromatogram as the compound a scientist is trying to quantify. Also, because the sample is injected into heated, high-pressure instruments, chemical reactions can create new interfering compounds. Matrix effects can significantly skew results in ways that cannot be resolved without further work. Results should always be questioned and replicated, and ultimately, investment in method development is required to generate confidence.
Validation of matrix-specific methods across multiple laboratories addresses these challenges; however, few methods have been validated to the extent required to be confident in the results. Consider an example from the nutrition field: the inherent challenges in quantifying vitamin D (a pure compound and age-old vitamin, no less!).
Both the best and worst thing about good science is that with each answer comes another question. There is always more work to be done to achieve the greater goal: reproducible results. Needless to say, rigorous analysis of complex mixtures such as botanical products is often not straightforward. Unfortunately, the aims of science often oppose the aims of high-throughput lab testing.
How do you know whether a lab is focused on getting the right results? Here are some criteria to help decide whether or not to work with an independent laboratory:
- Is it transparent? Does it share methods, chromatograms, observations, historical data and control charts?
- Does it perform validation? Does it verify methods using appropriate controls such as calibration curves and spike recovery? What steps are taken when it initially sets up a method?
- Does it have a process for dealing with out-of-specification results, and will it share that process? Does it have an internal recordkeeping system that tracks method precision and alerts staff when a method or system is out of calibration?
- Does it run internal control samples? Does it run samples at least in duplicate, ideally in triplicate, and does it report statistical analysis on the certificate of analysis (CoA), such as the standard deviation across multiple runs?
- How does it validate the purity of reference standards? When it gets a new batch of reference standard, does it run it against an internal control sample? How often does it make fresh reference standard solution?
- Is it a proactive communicator? For example, does it advise on the best methods to use and alert its customers to new developments in methods?
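To make two of the checklist items concrete, here is a hypothetical sketch of how spike recovery and replicate precision might be computed. All numbers are illustrative, not real assay data, and the acceptance ranges in the comments are common rules of thumb rather than anything prescribed here:

```python
# Illustrative QC calculations: spike recovery and replicate precision.
import statistics

def spike_recovery(unspiked, spiked, spike_added):
    """Percent recovery of a known spike added to the sample matrix."""
    return (spiked - unspiked) / spike_added * 100.0

def replicate_stats(results):
    """Mean, standard deviation, and %RSD for replicate runs."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)
    return mean, sd, 100.0 * sd / mean

# Hypothetical triplicate assay results (mg/g) for a marker compound.
runs = [24.8, 25.3, 25.1]
mean, sd, rsd = replicate_stats(runs)

# Hypothetical spike: 10 mg/g added to a sample measuring 25.0 mg/g.
recovery = spike_recovery(unspiked=25.0, spiked=34.6, spike_added=10.0)

# A lab might accept, say, 90-110% recovery and an RSD of a few percent;
# values outside such limits would trigger an out-of-specification review.
print(f"mean = {mean:.1f} mg/g, SD = {sd:.2f}, RSD = {rsd:.1f}%")
print(f"spike recovery = {recovery:.0f}%")
```

Numbers like these, reported alongside the result on the CoA, are what let a customer judge whether a method is in control rather than taking a single value on faith.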
Not all testing needs to be expensive or high-tech, but every method needs to be rigorous enough to provide results that are reproducible in another lab. For example, thin-layer chromatography (TLC) is not high-tech, but it can be valid for determining botanical identity with the right mix of expertise, a rigorous and validated set of reference standards, and enough trial and error to develop the method and gain confidence in the reproducibility of its results. High-performance liquid chromatography (HPLC) is great when the method has actually been validated and the reference standards have been certified for purity.
The true test of scientific validity is when multiple labs running different methods achieve the same result, especially when they are blinded as to the expected result.
Despite all of the challenges in quality control (QC) testing of botanicals, the world is changing, and our industry is rapidly improving. With scientific validity mandated by supplement GMPs (good manufacturing practices), and increasing demands for transparency and validity from all stakeholders, everyone is upping their game. Good science, not science fiction, provides reproducible results we can all be confident in.
Blake Ebersole is technical director at Verdure Sciences (vs-corp.com), where he has led botanical quality initiatives and formed collaborations with dozens of universities and research centers focused on preclinical and clinical development of botanical extracts. Originally trained in analytical and biochemistry, Ebersole also holds graduate degrees in marketing and international business. Follow him on Twitter, @NaturalBlake.