Metabolomics studies have long faced a fundamental tradeoff: they could either measure a handful of molecules with great precision (targeted analysis), or survey thousands of molecules but with less certainty about their identities and exact quantities (untargeted analysis). It was like choosing between a magnifying glass and a wide-angle lens: one could examine a few things in great detail, or scan the whole landscape, but not both at once.
What is semi-targeted metabolomics?
Semi-targeted metabolomics (also called hybrid metabolomics) combines rigorous quantitation for a predefined panel of metabolites with broad high-resolution profiling for discovery – in a single analytical run. In practice, you get confident concentrations for selected biomarkers while still collecting untargeted data that can reveal unexpected patterns, new markers, or pathway-level shifts. It’s designed to reduce the classic tradeoff between precision and discovery without requiring separate experiments. We describe our semi-targeted workflow and deliverables in more detail on the Semi-Targeted Metabolomics page.
A new era is approaching. Modern instrumentation is finally letting researchers have their cake and eat it too.
The Old Divide of Targeted vs. Untargeted Analysis
To understand why this matters, consider what metabolomics tries to accomplish. When doctors take a blood or urine sample, it contains thousands of different molecules: amino acids, sugars, fats, vitamins, waste products, and countless others. These molecules tell stories about what’s happening inside our bodies: how we’re processing food, fighting infections, responding to medications, or developing diseases.
Traditionally, metabolomics had two fundamentally different approaches to reading these molecular profiles. Targeted analysis was like looking for specific sentences in a book when you already know what you’re searching for, but without reading the rest of the book. If researchers suspected that, say, vitamin D levels or certain amino acids were important for a disease, they could design their analysis to measure those exact molecules with high precision. Learn more about our quantitative targeted metabolomics services.
Untargeted analysis took the opposite approach. Instead of looking for specific molecules, it captured signals from everything present in the sample, then tried to figure out what was what and what mattered. This approach is essential for discovery: it is the only way to find new or unexpected molecular patterns or to identify completely new biomarkers. But without knowing exactly what needs to be measured, it’s not possible to tell what most of these molecules are or to deduce their precise amounts. It’s like reading an entire book in a language you don’t know very well: you catch familiar words here and there, and can guess the meaning in a few places, but you are unsure about the exact meaning of any one sentence and don’t grasp the book as a whole. Still, if you decide that something is important and learn new words in that language, you can go back and reexamine the book with fresh context.
Each approach has its place. To develop a diagnostic test for a known disease, targeted analysis gives the accuracy needed for clinical decisions. To explore what molecules might be involved in a poorly understood condition, untargeted analysis allows surveying the whole molecular landscape. But getting a complete picture requires both, and that meant running separate experiments, doubling the cost and complexity.
For a concise step-by-step map of the overall process, see Key Stages of the Metabolomics Workflow.
What Changed: Faster Acquisition and Better MS/MS Sampling Across LC Peaks
Recent advances in mass spectrometry, the core technology used in metabolomics, have changed the game. The latest generation of instruments can scan molecular signals fast enough and with sufficient resolution to do both types of analysis simultaneously.
The critical difference is scanning rate. In older workflows, instruments simply couldn’t collect enough high-quality MS/MS information across fast chromatographic peaks without sacrificing either coverage or quantitative rigor. Modern systems can acquire data much faster and more efficiently, leaving room to prioritize targeted metabolites while still capturing broad high-resolution profiles in parallel.
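To see why scanning rate is the bottleneck, a back-of-the-envelope calculation helps. The sketch below uses purely illustrative numbers (a 3-second peak, 10 data points per peak, 5 Hz versus 40 Hz acquisition rates); none of them are specifications of any particular instrument:

```python
# Illustrative duty-cycle arithmetic for MS/MS sampling across an LC peak.
# All numbers are assumptions chosen for illustration, not instrument specs.

peak_width_s = 3.0     # chromatographic peak width (seconds)
points_per_peak = 10   # data points needed across the peak for reliable quantitation

# Each full acquisition cycle must fit within this time budget:
max_cycle_time_s = peak_width_s / points_per_peak  # 0.3 s

for ms2_rate_hz in (5, 40):  # assumed older vs. modern acquisition rates
    ms2_per_cycle = int(max_cycle_time_s * ms2_rate_hz)
    print(f"At {ms2_rate_hz} Hz: {ms2_per_cycle} MS/MS event(s) per cycle")

# At 5 Hz, only one MS/MS scan fits per cycle, forcing a choice between
# targeted rigor and untargeted coverage. At 40 Hz, roughly a dozen scans
# fit, so a targeted panel and exploratory precursors can share each cycle.
```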
Here’s how it works in practice (you can also review the full metabolomics workflow here: Metabolomics Workflow: From Sample Collection to Data Interpretation). The instrument is programmed with a list of specific molecules to watch for: perhaps 10, 50, or even 500 compounds, depending on the application. For these targeted molecules, the analysis applies the same rigorous quantitation methods used in traditional targeted work: the instrument isolates each molecule and precisely measures its signal. In combination with calibration standards, raw signals can be recalculated into precise concentrations. A doctor could use these numbers to make clinical decisions with confidence.
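For readers who like to see the arithmetic, here is a minimal sketch of the external-calibration step. The standard concentrations and peak areas are hypothetical, chosen only to show the shape of the calculation:

```python
# Minimal sketch of external-calibration quantitation for one targeted
# metabolite. All concentrations and peak areas below are hypothetical.
import numpy as np

# Calibration standards: known concentrations (uM) and measured peak areas.
conc_std = np.array([0.5, 1.0, 5.0, 10.0, 50.0])
area_std = np.array([1.2e4, 2.3e4, 1.15e5, 2.4e5, 1.18e6])

# Fit a linear calibration curve: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc_std, area_std, 1)

def area_to_conc(area):
    """Convert a raw peak area into a concentration via the fitted curve."""
    return (area - intercept) / slope

sample_area = 3.1e5
print(f"Estimated concentration: {area_to_conc(sample_area):.2f} uM")
```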
But in parallel, during that same analysis, the instrument is also recording high-resolution data for thousands or tens of thousands of other molecules in the sample. None of this comprehensive data collection sacrifices the quality of the targeted measurements, and it all happens simultaneously. The result is a dataset that can be used for both immediate clinical or diagnostic applications and open-ended scientific exploration.
Quantitation Propagation: The Anchoring Effect
There’s another subtle but important benefit to collecting both types of data together. The precisely measured targeted molecules serve as internal reference points for estimating the quantities of molecules detected in untargeted mode.
Consider trying to estimate the weight of different fish in a photograph. If the exact weight of one fish in the picture is known, it can be used as a reference to estimate the others based on their relative sizes. The accuracy of these estimates would vary, to be sure, since fish differ in shape and body density, but it is still better than having no reference at all. Similarly, when there are accurate measurements for some molecules in a sample, they can be used as calibration points to semi-quantify related molecules that weren’t part of the targeted panel.
This “anchoring” can make the untargeted data more reliable and interpretable. Instead of just knowing that molecule X is present, it may be possible to estimate its approximate concentration, which helps distinguish biologically meaningful changes from background noise. It bridges the gap between pure discovery and quantitative science. This concept is called quantitation propagation.
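As a toy illustration, the sketch below propagates the response factor of one quantified anchor to a related untargeted feature. The numbers are hypothetical, and the underlying assumption, that structurally related molecules ionize with comparable efficiency, holds only approximately in real data:

```python
# Sketch of quantitation propagation ("anchoring"): a precisely quantified
# targeted metabolite lends its response factor to a structurally related
# untargeted feature. All values below are hypothetical.

# Targeted anchor: known concentration (uM) and its measured peak area.
anchor_conc_um = 12.5
anchor_area = 4.0e5
response_factor = anchor_area / anchor_conc_um  # area units per uM

# Untargeted feature assumed to respond like the anchor (e.g., same lipid class).
feature_area = 9.0e4
feature_conc_estimate = feature_area / response_factor

print(f"Semi-quantitative estimate: ~{feature_conc_estimate:.1f} uM")
# 9.0e4 / (4.0e5 / 12.5) = ~2.8 uM: an order-of-magnitude estimate,
# not a validated measurement.
```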
Because semi-quantitative interpretation depends strongly on analytical stability, we summarize best practices for reproducibility and QC in metabolomics in our Metabolomics Quality Control, Reproducibility & Method Validation Guide.
Same Cost, Double the Value
Perhaps most importantly, this dual approach doesn’t cost more than traditional targeted analysis. Since both data types come from a single analytical run of each sample, there are no additional instrument time, sample preparation, or labor costs. For practical guidance on budgeting, turnaround time, and study planning, see Metabolomics Study Cost, Turnaround Time & Planning Guide. In fact, because modern data acquisition and processing platforms are more efficient, the actual per-sample cost can be lower than older targeted-only methods, despite delivering far more information.
For researchers, this eliminates what used to be a difficult strategic decision. The choice between getting precise measurements of known biomarkers or maintaining the flexibility to discover something unexpected no longer needs to be made. Both outcomes are achieved for the same investment of time and money.
Applications of Semi-Targeted Metabolomics
This convergence of capabilities has implications across medical research and diagnostics. In drug development, researchers can track the targeted molecules they know matter while watching for unexpected metabolic changes. In personalized medicine, doctors can monitor established biomarkers while building datasets that might reveal new subtypes of disease. In environmental health, scientists can quantify known contaminants while discovering novel exposures.
Consider cancer diagnostics as an example. Cancer is profoundly heterogeneous: even within what looks like a single cancer type, there are often multiple molecular subtypes with distinct prognoses and treatment responses. Current metabolomic biomarkers, such as long-chain aldehydes and other lipid peroxidation products, show associations with certain cancers but lack sufficient diagnostic specificity. Building a diagnostic test solely on imperfect biomarkers provides limited clinical utility.
This is where semi-targeted metabolomics offers a strategic advantage. While measuring the current panel of 50-100 suspected biomarkers with precision, the same analysis captures comprehensive molecular profiles across thousands of metabolites. As researchers correlate these profiles with patient outcomes, treatment responses, and disease subtypes, patterns may emerge that weren’t part of the original hypothesis. The exploratory data enables continuous refinement of the biomarker panel, both identifying which molecules actually drive diagnostic accuracy and discovering new markers that better capture disease heterogeneity. What starts as a test based on imperfect knowledge can gradually evolve into a more powerful diagnostic tool, all built from the same set of samples.
Why Semi-Targeted Metabolomics Is Becoming the Default
The field is entering an era where the technical limitations that shaped how metabolism is studied are fading away. The choice between precision and discovery was never really a scientific preference, but rather a technological constraint. As that constraint is gradually removed, research can become more comprehensive without becoming more expensive or complicated.
This isn’t just about more data, but rather about reconsidering the status quo of an artificial barrier between different types of questions. Clinical diagnostics and basic research can now draw from the same well. Studies designed for one purpose can be reanalyzed years later to answer questions that hadn’t even been asked when the samples were collected. The line between studying what we know and discovering what we don’t has become wonderfully blurred.
At Arome Science, the metabolomics platform is built around this convergence. Using the latest Orbitrap Astral mass spectrometry technology, it delivers semi-targeted metabolomics that combines QqQ-like (triple-quadrupole-style) quantitation of 50-500 targeted metabolites with comprehensive untargeted profiling, all from a single analytical run. Whether you are developing diagnostics, training machine learning models, or exploring disease mechanisms, the choice between precision and discovery no longer needs to be made.
If you’re ready to move from concept to execution, see our How to Start a Metabolomics Project with Arome Science step-by-step guide.

