Modern environmental science faces a curious paradox. We have more data than ever, but less certainty. For scientists, policymakers, and the public alike, the sheer volume of studies, each with its own assumptions, experimental conditions, and interpretations, can be overwhelming. Which studies are trustworthy? Which deserve more weight when making decisions about environmental safety? This question has haunted environmental toxicologists trying to determine whether pesticides harm pollinators such as honeybees. Some studies showed significant impacts while others found minimal effects. Such inconsistencies fueled debate over insecticides like neonicotinoids and sowed public confusion. To address this, Professor Keith Solomon, an environmental toxicologist at the University of Guelph, and colleagues set out to bring structure and clarity to the field. Their goal was not to silence debate, but to create a rigorous, transparent, and quantitative framework for evaluating scientific evidence. The result was a methodology called the Quantitative Weight of Evidence, or QWoE.
The origins of the QWoE approach lay in the recognition that environmental science is rarely black and white. Every study has strengths and weaknesses. Laboratory experiments can control conditions tightly but may not reflect real-world complexity. Field studies can capture natural variability but sometimes lack sufficient replication or statistical power. Regulators, companies, and environmental groups all had data, but no agreed-upon way to compare the reliability of those data.
Solomon’s insight was that judgment in science can itself be made systematic. Rather than treating expert opinion as irreducibly subjective, he proposed to score and document it numerically, systematically, and transparently.
In a 2017 study, Solomon and Dr. Gladys Stephenson of Aquaterra Environmental Consulting described the development of this QWoE process. They began with problem formulation: defining the question the evidence should answer. In their initial work, that question concerned the potential effects of neonicotinoid insecticides (clothianidin, imidacloprid, and thiamethoxam) on honeybees. These chemicals, widely used as seed treatments, had come under intense scrutiny because of suspected links to declines in bee populations.
Solomon and Stephenson built conceptual models that mapped how bees could be exposed to these insecticides: through nectar, pollen, droplets exuded by leaves (guttation), dust generated during planting, or other water sources. They examined which exposure routes were realistic and which were negligible. These models grounded their later analyses in biological reality rather than speculation.
Next, they developed criteria for quality and relevance. Each study was scored from 0 to 4 on how well it was designed, conducted, and reported (quality), and on how directly its findings applied to the real-world question at hand (relevance). A high-quality study might have strong controls, standardized methods, and full reporting; a high-relevance study might use field conditions fully representative of agricultural practice.
Importantly, the approach did not discard “imperfect” studies. Instead, it acknowledged uncertainty quantitatively. A study with some weakness might still provide useful information, just with a lower weight in the overall synthesis. The idea was not to cherry-pick evidence but to make the weighting explicit and transparent.
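To make the mechanics concrete, here is a minimal sketch in Python of how such scoring and weighting might be tallied. The study names, scores, and the product-based weighting rule are illustrative assumptions for this article, not values or formulas taken from the QWoE papers.

```python
from dataclasses import dataclass

@dataclass
class StudyScore:
    """One line of evidence, scored on the 0-4 QWoE scales."""
    name: str
    quality: float    # 0 = poorly designed/reported, 4 = excellent
    relevance: float  # 0 = not applicable to the question, 4 = directly applicable

def weight(s: StudyScore) -> float:
    # Illustrative rule (an assumption, not the published formula): weight
    # evidence by the product of normalized quality and relevance, so that
    # imperfect studies still count, just less.
    return (s.quality / 4.0) * (s.relevance / 4.0)

studies = [
    StudyScore("laboratory feeding study", quality=3.5, relevance=2.0),
    StudyScore("semi-field tunnel study", quality=3.0, relevance=3.0),
    StudyScore("field monitoring study", quality=2.0, relevance=4.0),
]

for s in studies:
    print(f"{s.name}: weight = {weight(s):.2f}")
```

The value of even so simple a scheme is traceability: every weight in the synthesis points back to two documented scores rather than to an unstated impression.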
Finally, the researchers used graphical summaries to show, at a glance, the distribution of study quality and relevance. This visual synthesis helped decision-makers see patterns rather than isolated results. If most studies of a given pesticide showed high quality and low risk, confidence in safety was strong. If results were scattered or uncertain, more research might be needed.
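As a simplified stand-in for such a summary, one might plot each study's quality score against its relevance score, so that clusters and outliers become visible at a glance. The scores below are invented, and the published QWoE figures are richer than this sketch.

```python
import matplotlib.pyplot as plt

# Hypothetical scores: each point is one study, on the 0-4 scales.
quality   = [3.5, 3.0, 2.0, 4.0, 1.5, 3.2]
relevance = [2.0, 3.0, 4.0, 3.5, 1.0, 2.8]

fig, ax = plt.subplots()
ax.scatter(quality, relevance)
ax.set_xlim(0, 4)
ax.set_ylim(0, 4)
ax.set_xlabel("Quality score (0-4)")
ax.set_ylabel("Relevance score (0-4)")
ax.set_title("Evidence map: one point per study")
plt.show()
```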
By combining structured scoring, transparency, and statistical summaries, the QWoE method gave environmental scientists a new language for trust. It made evidence measurable and, therefore, discussable.
Years after the original development, Solomon and collaborators applied this method to a new and timely question: Are pollinators at risk from exposure to sulfoxaflor? Sulfoxaflor is a relatively new insecticide developed to control sap-feeding pests such as aphids. Like neonicotinoids, it works on insects’ nervous systems, but its chemical structure and environmental behavior differ. It breaks down rapidly in soil and is used primarily as a foliar spray, not a seed treatment. Nevertheless, because it is systemic, traces can move into nectar and pollen, raising concerns about potential effects on bees.
In the 2025 paper “Evaluating Pollinator Exposures to Sulfoxaflor via Bee-Relevant Matrices: A Systems-Level Approach,” Solomon and colleagues used QWoE to assess the data available from both regulatory submissions and published studies.
The approach was both quantitative and biologically realistic. The team analyzed residue data from 16 different crops, from alfalfa and canola to citrus, corn, cotton, and strawberries, across multiple countries and climates. To determine risk, Solomon’s team used a two-tiered process.
Tier-1 involved conservative screening, in which the researchers compared the highest measured concentration of sulfoxaflor in nectar or pollen to the No-Observed-Adverse-Effect Concentration (or NOAEC) from 10-day feeding studies on adult and larval honeybees. If all values were below this threshold, the exposure was deemed too small to matter biologically.
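In code, the Tier-1 decision reduces to a single conservative comparison. The residue value, NOAEC, and units below are placeholders for illustration, not measurements from the study.

```python
def tier1_screen(peak_residue: float, noaec: float) -> bool:
    """Conservative Tier-1 screen: pass if the single highest measured
    residue stays below the NOAEC from the 10-day feeding studies."""
    return peak_residue < noaec

# Hypothetical values in micrograms per kilogram, for illustration only.
noaec = 50.0
peak_residue_in_nectar = 80.0
print("Tier-1 pass:", tier1_screen(peak_residue_in_nectar, noaec))  # False -> escalate to Tier-2
```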
Tier-2 screening was conducted when some exposures exceeded the NOAEC. This involved computing time-weighted average (TWA) concentrations in pollen and nectar. The researchers fitted dissipation curves to characterize how residues declined over time, allowing them to estimate the average exposure over 10 days, mirroring the duration of the toxicity test. If this average was below the NOAEC, the risk was considered negligible.
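Assuming first-order (exponential) dissipation, a common default in residue kinetics, the 10-day time-weighted average has a closed form: TWA = C0 * (1 - e^(-kT)) / (kT), where k is the dissipation rate constant and T the exposure window. The peak concentration and half-life in this Python sketch are invented, not drawn from the sulfoxaflor data.

```python
import math

def twa_first_order(c0: float, half_life_days: float, window_days: float = 10.0) -> float:
    """Time-weighted average of C(t) = c0 * exp(-k*t) over the exposure window:
    TWA = c0 * (1 - exp(-k*T)) / (k*T)."""
    k = math.log(2) / half_life_days
    return c0 * (1 - math.exp(-k * window_days)) / (k * window_days)

# Hypothetical: an 80 ug/kg peak residue that dissipates with a 2-day half-life.
twa = twa_first_order(c0=80.0, half_life_days=2.0)
print(f"10-day TWA = {twa:.1f} ug/kg")  # ~22.4 ug/kg, below the hypothetical 50 ug/kg NOAEC
```

In practice, k would be estimated by fitting the dissipation curve to residues measured over time; here it is derived directly from an assumed half-life.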
This structured, tiered reasoning embodied the spirit of QWoE: transparent, quantitative, and grounded in biology. It acknowledged variability and uncertainty without succumbing to alarmism.
Across 90 crop-use scenarios, 35 initially failed the Tier-1 screen, meaning that, at some point, measured concentrations exceeded the NOAEC. But when the team applied the more realistic Tier-2 time-weighted analysis, only three of the 90 scenarios (about 3%) remained above the benchmark. These were for alfalfa in California, strawberries in France, and peaches in Michigan. In these cases, further testing or mitigations (such as avoiding spraying during bloom) could be recommended. In all other cases, residues dissipated rapidly, and exposure levels fell well below thresholds of concern.
The analysis therefore demonstrated both the power of the QWoE method and its nuance. Rather than offering a blanket “safe” or “unsafe” verdict, it provided evidence-weighted confidence. It showed where the data were strong, where uncertainty remained, and where action might be needed.
The implications of Solomon’s work extend far beyond any single pesticide. The QWoE framework is a model for how environmental risk assessment can evolve, from a patchwork of opinions into a transparent, evidence-based discipline.
For pollinator protection, this approach bridges a crucial gap between laboratory science and field reality. Bees live complex lives in dynamic environments; they encounter not just one pesticide but many, alongside weather changes, parasites, and nutritional stress. QWoE helps scientists integrate this complexity without losing rigor.
For regulators, it provides a defensible, documented rationale for decisions. For farmers, it builds confidence that approved products are being evaluated fairly. And for the public, it offers reassurance that science is not a black box but a transparent process for reasoning.
Perhaps most importantly, the QWoE method embodies the idea that scientific uncertainty is not ignorance, and acknowledging uncertainty is the first step toward understanding it. In an era when environmental debates are often polarized, the QWoE approach offers a middle path: a structured way to argue, not just to assert.