Real-world evidence is evidence collected outside the controlled environment of randomized clinical trials. It helps us understand disease burden, treatment patterns, patient behaviours, and product performance in settings and populations that are representative of everyday clinical practice.
Real-world data is collected in 3 broad buckets:
Clinical – such as adherence to therapy or cure rates in real life.
Economic – perhaps the costs associated with medical resource utilization.
And Humanistic – which could be data on the health-related quality of life of the patient.
But where does the data actually come from? Well, typically there are 2 sources – existing data and new data.
Existing data might be claims or administrative data, medical charts, or disease registries.
New data could be post-marketing observational studies or case-series data.
Just like clinical trials, real-world evidence has its advantages and limitations.
One advantage is that real-world evidence is based on everyday clinical practice, and this can show us how physicians actually prescribe a product and how patients actually use it. And real-world evidence can allow us to look at large and diverse populations, including patients often excluded from clinical trials, giving us a broader picture.
Another advantage of real-world evidence is that it helps us evaluate comparisons that may not have been made in a randomized controlled trial.
For example, comparisons between a range of different medicines. And real-world evidence allows us to collect data typically not gathered in a randomized controlled trial, such as the cost of the disease burden or the impact of treatment on health care costs.
Of course, real-world evidence has some limitations. The main one is the potential for bias. Given the diversity of the real-world population, it is important to ensure that real-world evidence data sources are a good representation of the population as a whole.
Another issue is the quality of data available in the real world. Data may not be complete enough or of high enough quality to answer specific questions. For example, a database may contain information on hundreds of cancer patients, but some may be missing information about the dose of the medication received.
Ideally, clinical trials and real-world evidence could be used together to provide a more comprehensive understanding of a treatment or disease. It is an approach that can provide powerful insights into health care delivery.
The problem with randomized controlled trials:
Randomized controlled trials were first used mostly in medicine, but they have become very widely used not only in economics but also in other social sciences. “Evidence-based policy” is one of the magic phrases of the modern medical world. Of course, everybody knows that evidence is integral to a good scientific footing, but some argue that there should not be a special status conferred on the randomized clinical trial. It should not be given a special status just by existing, and its evidence is not always better than other evidence.
Usually in healthcare, as in economics, you have to control for other things. If people with more money behave differently, it might be because they are richer, or better educated, or a man rather than a woman; there might be lots of different factors, and standard economic procedure is to control for these various things. The same applies in healthcare: there are multiple factors outside the usual enrollment criteria of ECOG score or previous lines of therapy. Age, comorbidities, inherited diseases, lifestyle, diet, and exercise habits throughout patients' lives should all be taken into account, but they are not always, because the desperation to enlist enough patients outweighs the “perfect fit” that physicians would most value.
In randomized controlled trials, it appears that this awareness of confounding factors is eschewed in favor of reaching the endpoint. The rhetoric is: “The only difference between these 2 groups is the treatment, because they have been randomly assigned.” And this is a pure misunderstanding. Randomization does not equalize the 2 groups.
What we should really strive for is that these 2 groups are evenly balanced on all the things that matter, because randomization in itself does not achieve this. More fundamentally, there is the question of what use a good estimate from a specific trial actually is in policy. How do you get from “Well, it worked here” to “Therefore, it will work somewhere else”?
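To make the balance point concrete, here is a minimal, purely illustrative sketch (not from any study described here; the sample size, covariates, and variable names are hypothetical assumptions) showing that a single random assignment in a modestly sized trial can still leave baseline covariates such as age or comorbidity count visibly imbalanced.

```python
# Illustrative simulation: randomization balances covariates only on average,
# not necessarily in any single draw. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

n = 60                                   # a modestly sized two-arm trial
age = rng.normal(65, 10, size=n)         # hypothetical baseline covariate (years)
comorbidities = rng.poisson(2, size=n)   # hypothetical comorbidity count

# One random 1:1 assignment to treatment (1) vs control (0)
assignment = rng.permutation(np.repeat([0, 1], n // 2))

for name, covariate in [("age", age), ("comorbidities", comorbidities)]:
    diff = covariate[assignment == 1].mean() - covariate[assignment == 0].mean()
    print(f"mean {name} difference (treated - control): {diff:+.2f}")

# Repeating the assignment many times would show the difference is zero only
# on average; any single draw can be noticeably off, which is the point above.
```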
Of course, some would argue that the goal of randomized controlled trials has never been to make sure that the control and target groups are “equalized”, but rather to make sure that treatment assignment is not driven by any confounding variable and is the result of pure chance alone. And methods based on conditional probabilities, such as Propensity Score Matching, have been introduced to help ensure that the balance of covariates exists.
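As a rough illustration of that idea, the sketch below estimates a propensity score with logistic regression and performs a simple 1:1 nearest-neighbour match, then checks covariate balance in the matched sample. The data, variable names, and the use of scikit-learn are assumptions made for the example, not part of any trial or methodology discussed here.

```python
# Minimal propensity score matching sketch (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical observational data: treatment uptake depends on age and comorbidities.
n = 500
age = rng.normal(65, 10, size=n)
comorbidities = rng.poisson(2, size=n)
X = np.column_stack([age, comorbidities])
p_treat = 1 / (1 + np.exp(-(0.05 * (age - 65) + 0.3 * (comorbidities - 2))))
treated = (rng.random(n) < p_treat).astype(int)

# Step 1: estimate the propensity score P(treated | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated patient to the control patient with the
# closest propensity score (1:1, with replacement, for simplicity).
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = [control_idx[np.argmin(np.abs(ps[control_idx] - ps[i]))] for i in treated_idx]

# Step 3: check covariate balance in the matched sample.
for name, cov in [("age", age), ("comorbidities", comorbidities)]:
    diff = cov[treated_idx].mean() - cov[matches].mean()
    print(f"matched mean {name} difference: {diff:+.2f}")
```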
If a policy is going to work, it often needs an environment in which it can work, and it will not work in other environments. The randomized controlled trial more or less only tells you that this thing worked in one place.
There is nothing “magic” about randomized controlled trials. It is not like saying: well, this is the washing powder that washed whiter than any other washing powder, so buy it! These studies have real advantages, but also many qualitative and quantitative disadvantages: the quality of the patients, the quantity of patients enlisted, and so on.
If we are going to be serious, as policymakers, about using the endpoints from these clinical trials as a clear indication that a drug must be bought, the question must move toward: OK, the experiment is over, how do I transfer the results into policy-changing decisions? We have seen this work well with your patient population, but what about mine? This has to be thought through before the experiment. Unfortunately, there continues to be a ‘bulldozing’ of new drugs into the ‘network’ by powerful pharmaceutical companies; lily-livered governments and health decision-makers are only too happy to fast-track these products, which have flimsy real-world evidence to back them up, and it is the physician and patient who ultimately bear the brunt.