PART I. META-ANALYSIS FUNDAMENTALS




In 1976, Gene Glass introduced the term 'meta-analysis' to describe the statistical analysis of a comprehensive collection of research findings from individual studies. This process, which involves integrating the findings from a group of empirical studies focused on the same research question, calculates the average and variability of overall population effects (Field & Gillett, 2010; Glass, 1976; O'Rourke, 2007).

The growth of science depends on accumulating knowledge and building on the past work of others. As scientific development quickens and the amount of information in the literature continues to explode (for example, about 500,000 new articles are added to the National Library of Medicine's PubMed database each year), scientists find it increasingly difficult to keep up with the latest research and recommended practices (Fig. 1).

In the past, professionals depended on experts to summarize the literature and provide recommendations. However, over time, researchers started to examine the accuracy of these review articles and discovered that the evidence often did not support the recommendations. They began to promote a more scientific approach to reviews that did not rely on the subjective opinion of a single expert. This new approach required documented evidence to support claims and a systematic process conducted by a diverse team to ensure a comprehensive review of all evidence. This process is now referred to as a systematic review.



A systematic review involves a thorough analysis of a specific research question. It systematically identifies, selects, evaluates, and synthesizes all relevant, high-quality research evidence to address the question. This process combines the results of multiple interconnected primary studies using methods that reduce biases and random errors. A well-conducted systematic review provides high-quality evidence and is widely regarded as the standard for guiding clinical practice (Yusuff, 2023).

A systematic literature review is an essential research method for evidence-based reasoning. It involves gathering information from multiple studies, which leads to a comprehensive understanding of a topic. Unlike a narrative review, a systematic review specifies the criteria for selecting articles and uses explicit, standardized search methods, giving readers a transparent account of how the evidence was gathered. This method is based on predetermined criteria and aims to help researchers choose studies and tools for developing articles with original information.

While systematic literature reviews are commonly used in medicine, they can be adapted for other research areas. However, researchers from different fields must follow relevant guidelines to ensure their studies effectively address research questions and meet their objectives. Conducting a systematic literature review in business fields like management, marketing, and information systems typically adheres to a standardized approach, albeit with some variations and adjustments. These steps are designed to yield the most pertinent findings for the research.

A systematic review of research must be impartial and transparent in its methodology. The general principles that should underpin all systematic reviews are the following:

Transparency is critical in systematic literature reviews to ensure the accuracy of conclusions and the methodological approach. This transparency safeguards against misrepresentation by evaluating each research phase and clarifying its relevance and quality.

The initial framework of a systematic review is essential in guiding and maintaining the integrity of the process, keeping the focus on research objectives, and preventing the influence of literature characteristics on the procedure. An exhaustive search aims to uncover all relevant studies, reducing bias and simplifying access to research content. Thus, it ensures that a limited set of studies does not unduly influence conclusions.

Synthesizing search results leads to concise and accessible conclusions regarding the quality of research on a given topic.

The PRISMA flowchart in Fig. 4 gives the reader a better understanding of the review process. The overall goal of the coding procedure is to provide a comprehensive description of the studies considered and to obtain an overview of the study sample quickly. The coding sheet supports this procedure.



This assessment can be carried out using various approaches, such as those used in medicine, including the JBI (Joanna Briggs Institute) checklist. However, depending on the concrete objectives of the studies in question, this assessment is optional for some systematic literature reviews.

Systematic reviews employ a rigorous, scientific approach to thoroughly search for and assess all evidence using established and predetermined analytical methods (Committee on Standards, 2011). A systematic review involves a methodical literature search to consolidate information from various studies using a specific protocol to address a focused research question. The process aims to locate and utilize all accessible published and unpublished evidence, meticulously evaluate it, and present an objective summary to formulate sound recommendations. The synthesis can be qualitative or quantitative, but its defining characteristic is adherence to guidelines that allow for reproducibility. The widespread adoption of systematic reviews has transformed the evaluation of practices and how practitioners acquire information about which interventions to employ. Table 1 outlines some critical distinctions between narrative and systematic reviews.

The concept of the modern systematic review can be traced back to a 1976 paper by Gene Glass in psychology. In this paper, Glass provided a quantitative summary of all the studies that evaluated the effectiveness of psychotherapy (Glass, 1976). He also introduced the term "meta-analysis" in educational psychology to describe the statistical analysis of an extensive collection of results from individual studies in order to integrate the findings (Cheung, 2015, p. 44). Today, systematic reviews are widely used across various scientific disciplines. In healthcare, however, "meta-analysis" primarily refers to the quantitative data analysis within a systematic review. This means that systematic reviews without a quantitative analysis are not typically labelled as meta-analyses in healthcare, although this distinction is not yet firmly established in other fields. We will maintain these distinct terms, using "meta-analysis" to denote the statistical analysis of data collected in a systematic review.

Systematic reviews generally involve six significant components: topic preparation, literature search, study screening, data extraction, analysis, and report preparation (Schmid et al., 2020). Each involves multiple steps, and a well-conducted review should carefully attend to all of them (Fig. 2).

 



Meta-analysis is a widely accepted and collaborative method to synthesize research findings across various disciplines (Cheung & Vijayakumar, 2016). It is a fundamental tool that combines outcome data from individual trials to produce pooled effect estimates for different outcomes of interest. This process increases the sample size, improves the statistical power of the findings, and enhances the precision of the effect estimates. Synthesizing results across studies is crucial for understanding a problem and identifying sources of variation in outcomes, making it an essential part of the scientific process (Gurevitch et al., 2018). The reliability of the information presented relies on the calibre of the studies included and the thoroughness of the meta-analytical procedure. Some concerns have been expressed about the ultimate usefulness of such a complex and time-consuming procedure in establishing timely, valid evidence on various specified topics throughout the evolution of the current meta-analytic methodology (Papakostidis & Giannoudis, 2023).

Meta-analysis is a robust method for consolidating data from multiple studies to generate evidence on a specific topic. It is a statistical technique used to combine the findings of several studies (Gurevitch et al., 2018). However, there are various crucial considerations when interpreting the results of a meta-analysis.

Meta-analysis is a scientific research approach that objectively evaluates the literature on a given subject. As a collection of statistical methods for aggregating the effect sizes across different datasets addressing the same research question, meta-analysis provides a potent, informative, and unbiased set of tools for summarizing study results on the same topic. It offers several advantages over narrative reviews, vote counting, and combining probabilities (Table 1). Meta-analysis is based on expressing the outcome of each study on a standard scale. This "effect size" outcome measure includes information on the sign and magnitude of the effect of interest in each study. In many cases, the variance of this effect size can also be calculated (Koricheva et al., 2013).
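To make the idea of an effect size and its variance concrete, the sketch below computes a standardized mean difference (Hedges' g) and its sampling variance from group summary statistics, following standard meta-analytic formulas. The means, standard deviations, and sample sizes are invented for illustration and do not come from any study cited here.

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Hedges' g) and its sampling variance."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)                # small-sample correction factor
    g = j * d                                      # Hedges' g
    # Approximate sampling variance of g
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

# Illustrative summary statistics from a hypothetical two-group study
g, var_g = hedges_g(mean1=10.2, sd1=2.1, n1=40, mean2=9.1, sd2=2.4, n2=38)
print(f"g = {g:.3f}, variance = {var_g:.4f}, SE = {math.sqrt(var_g):.4f}")
```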

Meta-analysis involves combining the findings of several studies to estimate a population parameter, usually an effect size, by calculating point and interval estimates. In addition, meta-analyses are important for identifying gaps in the literature, highlighting areas where more research is needed and areas where the answer is definitive, and no new studies of the same type are necessary. This aspect of meta-analysis helps keep the audience informed about the research landscape, guiding them towards areas that require further exploration.

Meta-analyses are fundamental tools of Evidence-Based Medicine (EBM) that synthesize outcome data from individual trials to produce pooled effect estimates for various outcomes of interest. Combining summary data from several studies increases the sample size, improving the statistical power and precision of the obtained effect estimates. Meta-analyses are considered to provide the best evidence to support clinical practice guidelines. The quality of the evidence presented relies on the calibre of the studies included and the thoroughness of the meta-analytic procedure. Some concerns about the usefulness of such a complex and time-consuming procedure in establishing timely, valid evidence on various specified topics have been expressed.

A systematic review is a consistent and reproducible qualitative process of identifying and appraising all relevant literature to a specific question. Meta-analysis takes this process further by using specific statistical techniques that allow for a quantitative pooling of data from studies identified through the systematic review process.

A meta-analysis can be carried out if the systematic review uncovers sufficient and suitable quantitative information from the summarised studies (Gurevitch et al., 2018).

Meta-analysis is now a popular statistical technique for synthesizing research findings in many disciplines, including the educational, social, and medical sciences (Cheung, 2015). Google Scholar indexed over 107,000 meta-analyses published in 2022 alone (Irsova et al., 2023). Classical meta-analysis is aggregated person data meta-analysis, in which multiple studies are the units of analysis. Compared to the original studies, the analysis of multiple studies has more power and reduces uncertainty. Different meta-analysis approaches have since been developed, and prior knowledge of the differences between them makes it clear which approach should be used for the data aggregation. For example, in the early days, different meta-analytic approaches aggregated different types of effect sizes (e.g., d, r); today, the transformation of effect sizes is common (Kaufmann & Reips, 2024).
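As a brief illustration of such transformations, the sketch below converts a correlation coefficient r to Fisher's z (with its approximate variance) and converts a standardized mean difference d to r, using the standard conversion formulas. The numbers are invented for illustration only.

```python
import math

def r_to_fishers_z(r, n):
    """Convert a correlation to Fisher's z and its approximate variance."""
    z = 0.5 * math.log((1 + r) / (1 - r))
    var_z = 1 / (n - 3)
    return z, var_z

def d_to_r(d, n1, n2):
    """Convert a standardized mean difference d to a correlation r."""
    a = (n1 + n2) ** 2 / (n1 * n2)   # correction term for unequal group sizes
    return d / math.sqrt(d**2 + a)

z, var_z = r_to_fishers_z(r=0.30, n=50)
r_equiv = d_to_r(d=0.5, n1=40, n2=40)
print(f"Fisher z = {z:.3f} (var {var_z:.4f}); d = 0.5 corresponds to r = {r_equiv:.3f}")
```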

It's important to note that meta-analysis has two distinct aggregation models: the fixed and the random effects model. The fixed effects model operates under the assumption that all studies in the meta-analysis stem from the same population and that the true magnitude of an effect remains consistent across all studies. Therefore, any variance in the effect size is believed to result from differences within each study, such as sampling errors.

Unlike the fixed-effects model, the random-effects model assumes that the population effects differ from one study to another.

The idea behind this assumption is that the observed studies are samples drawn from a universe of studies. Random-effects models therefore have two sources of variation in a given effect size: variation within studies and variation between studies.
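A minimal sketch of the two aggregation models, assuming effect size estimates and their within-study variances are already available (the numbers below are invented): the fixed-effect pooled estimate uses inverse-variance weights, while the random-effects version adds a between-study variance component tau² estimated here with the common DerSimonian-Laird method.

```python
import math

# Hypothetical effect sizes and within-study variances from k = 5 studies
effects   = [0.42, 0.15, 0.51, 0.30, 0.22]
variances = [0.04, 0.02, 0.09, 0.03, 0.05]

# Fixed-effect model: inverse-variance weights
w_fixed = [1 / v for v in variances]
mu_fixed = sum(w * y for w, y in zip(w_fixed, effects)) / sum(w_fixed)
se_fixed = math.sqrt(1 / sum(w_fixed))

# DerSimonian-Laird estimate of the between-study variance tau^2
k = len(effects)
q = sum(w * (y - mu_fixed) ** 2 for w, y in zip(w_fixed, effects))  # Cochran's Q
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects model: weights incorporate tau^2
w_rand = [1 / (v + tau2) for v in variances]
mu_rand = sum(w * y for w, y in zip(w_rand, effects)) / sum(w_rand)
se_rand = math.sqrt(1 / sum(w_rand))

print(f"Fixed effect:   {mu_fixed:.3f} (SE {se_fixed:.3f})")
print(f"Random effects: {mu_rand:.3f} (SE {se_rand:.3f}), tau^2 = {tau2:.3f}")
```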

Evidence from a meta-analysis is inherently associated with the quality of the primary studies. Meta-analyses based on low-quality primary studies tend to overestimate the treatment effect.

Consider this: Why should we conduct a meta-analysis instead of relying solely on leading experts' reviews or primary single-study investigations as sources of the best evidence? This question prompts us to delve deeper into the unique benefits and insights that meta-analysis can offer.

While meta-analysis presents numerous benefits, including increased precision, the ability to address new questions, and resolving conflicting claims, it's crucial to tread carefully. If not conducted with meticulous attention, meta-analyses can lead to misinterpretations, mainly if study designs, biases, variation across studies, and reporting biases are not thoroughly considered (Higgins et al., 2023).

Understanding the type of data resulting from measuring an outcome in a study and selecting appropriate effect measures for comparing intervention groups is of the utmost importance. Most meta-analysis methods compute a weighted average of effect estimates from different studies, and the choice of weighting rests with the researcher.

Studies with no events provide no information about risk or odds ratios. The Peto method is considered less biased and more powerful for rare events. Heterogeneity across studies must be considered, although many reviews do not have enough studies to investigate its causes reliably. Random-effects meta-analyses address variability by assuming that the underlying effects are normally distributed, but it is essential to interpret their findings cautiously. Prediction intervals from random-effects meta-analyses, which give a range of values likely to include the true effect in a new study, help illustrate the extent of between-study variation.
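A sketch of how such a prediction interval can be computed, following the commonly used formulation based on a t-distribution with k - 2 degrees of freedom; the pooled estimate, its standard error, tau², and the number of studies are assumed to be already available, and the values below are illustrative only.

```python
import math
from scipy import stats

def prediction_interval(mu, se, tau2, k, level=0.95):
    """Approximate prediction interval for the effect in a new study."""
    t_crit = stats.t.ppf(1 - (1 - level) / 2, df=k - 2)
    half_width = t_crit * math.sqrt(tau2 + se**2)
    return mu - half_width, mu + half_width

# Illustrative random-effects summary: pooled estimate, SE, tau^2, number of studies
low, high = prediction_interval(mu=0.32, se=0.08, tau2=0.05, k=12)
print(f"95% prediction interval: ({low:.2f}, {high:.2f})")
```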

Preparing a meta-analysis requires many judgments. Sensitivity analyses, a crucial tool, should examine whether overall findings are robust to potentially influential decisions, ensuring the reliability and robustness of your research (Deeks et al., 2023).

Many leading journals feature review articles penned by experts on specific topics. While these narrative reviews are highly informative and comprehensive, they express the subjective views of the author(s), who may selectively use the literature to support personal views. Consequently, they are susceptible to numerous sources of bias, relegating them to the bottom of the level-of-evidence hierarchy. This underscores the critical importance of conducting high-quality meta-analyses, which can provide a more objective and comprehensive view of the available evidence.

Systematic reviews and meta-analyses are meticulously designed to minimize bias in a marked departure from narrative reviews. They achieve this by identifying, appraising, and synthesizing all relevant literature using a transparent and reproducible methodology. This rigorous approach ensures that the evidence obtained is the most reliable, establishing systematic reviews and meta-analyses as the gold standard at the pinnacle of the hierarchy of evidence.

However, given the massive production of flawed and unreliable synthesized evidence, a major overhaul is required in how future meta-analyses are generated. The quality of the chosen studies should receive strong attention, as should the consistency and transparency in conducting and reporting the meta-analysis process.

Conducting a meta-analysis properly involves combining data from multiple individual studies, ideally randomized control trials, to calculate combined effect estimates for different outcomes of interest. This is particularly useful for reconciling conflicting results from the primary studies and obtaining a single pooled effect estimate that is thought to represent the best current evidence for clinical practice. Moreover, through significantly expanding the sample size, meta-analyses enhance the statistical strength of their results and, ultimately, offer more accurate effect assessments.

Meta-analyses can be classified as cumulative/retrospective or prospective. The predominant approach in the literature is cumulative. However, in a prospective meta-analysis (PMA), study selection criteria, hypotheses, and analyses are established before the results from studies pertaining to the PMA research question are available. This approach reduces many issues associated with a traditional (retrospective) meta-analysis (Seidler et al., 2019).

The results of a meta-analysis are presented graphically in a forest plot (see Fig. 5). A forest plot displays the effect size estimates and confidence intervals for every study included in the meta-analysis. The meta-analysis should also assess the heterogeneity of the included studies. Commonly, heterogeneity is assessed using statistical tests; the χ² (Cochran's Q) test and the I² statistic are widely used. A χ² test with a P-value < 0.05 or an I² greater than 75% indicates substantial heterogeneity. In conducting a meta-analysis, you can utilize either a fixed-effect model or a random-effects model: if there is no heterogeneity, a fixed-effect model is used; otherwise, a random-effects model is applied. An assessment of publication bias is also required to check that the results are not unduly influenced by positive, significant, or small studies. Publication bias is displayed graphically in a funnel plot (see Fig. 5), recommended where more than ten studies have been included in the meta-analysis (Yusuff, 2023).
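As an illustration of how the Q test and I² statistic described above can be computed, and how study-level results might be laid out in a simple forest plot, here is a sketch using made-up effect sizes and standard errors; it is a minimal example rather than the method of any paper cited here.

```python
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical study-level effect sizes and standard errors
labels  = ["Study A", "Study B", "Study C", "Study D", "Study E"]
effects = [0.42, 0.15, 0.51, 0.30, 0.22]
ses     = [0.20, 0.14, 0.30, 0.17, 0.22]

# Cochran's Q, its P-value, and I^2
weights = [1 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
df = len(effects) - 1
p_value = 1 - stats.chi2.cdf(q, df)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} (P = {p_value:.3f}), I^2 = {i2:.1f}%")

# Minimal forest plot: point estimates with 95% confidence intervals
fig, ax = plt.subplots()
for i, (y, se) in enumerate(zip(effects, ses)):
    ax.errorbar(y, i, xerr=1.96 * se, fmt="s", color="black", capsize=3)
ax.axvline(pooled, linestyle="--", color="grey")   # pooled (fixed-effect) estimate
ax.set_yticks(range(len(labels)))
ax.set_yticklabels(labels)
ax.set_xlabel("Effect size (95% CI)")
ax.invert_yaxis()
plt.tight_layout()
plt.show()
```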

Despite the ongoing methodological deficits in currently published meta-analyses, there is a clear path to improvement. When conducted in adherence to strict and transparent rules, systematic reviews and meta-analyses can ensure the reproducibility and robustness of the search process, the reliability and validity of their findings, and the clarity of reporting.

The meta-analysis process involves a thorough approach, considering all potential influences on the results. For example, the random-effects model assumes that the true effect estimate varies among the primary studies due to differences in their clinical characteristics. This model's combined effect size estimate represents an average estimate of all the individual study estimates. Choosing the correct statistical model for combining data is a complex decision that hinges on the degree of variation between studies. However, there are no clear thresholds regarding the amount of variation that would determine which model to use.

Moreover, the statistical tests for variation often lack the power to detect significant differences. The fixed-effects model is generally used when there is no variation in a meta-analysis, especially when many studies with large sample sizes are included. In such cases, there is confidence in the ability of the variation test to detect significant differences. Results from this model usually have narrower confidence intervals. On the other hand, when there are concerns about variation, the random-effects model is considered a better choice. It generates wider confidence intervals around the estimates and is a more conservative option for the analysis. In a meta-analysis with many studies and adequate sample sizes, where statistical variation is not detected, using the fixed-effects model is justified (Papakostidis & Giannoudis, 2023).

Finally, the quality of evidence obtained through a meta-analysis should be evaluated using one of three tools: GRADE (Grading of Recommendations Assessment, Development and Evaluation)*, PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)** or AMSTAR (A Measurement Tool to Assess Systematic Reviews)***. All these tools assess confidence in the effect estimate for each specific outcome of interest. Their use significantly enhances the strength and dependability of the findings, offering researchers assurance about the quality of their research. Therefore, they are a crucial component of meta-analysis that should be considered.

Even though meta-analyses, particularly those based on high-quality RCTs, are regarded as providing the best evidence, the problem of inconclusiveness of a meta-analysis is not necessarily associated with diminished methodological quality or a lack of adherence to the accepted standards of conducting and reporting a proper meta-analysis. The problem is that most systematic reviews are flawed, misleading, redundant, useless, or all of the above (Ioannidis, 2017).

Papakostidis and Giannoudis (2023) point out that innovative types of systematic reviews and meta-analyses (some of them stemming from older ideas) are likely to attract renewed interest soon in the pursuit of more reliable evidence synthesis. There are four types of such innovative meta-analyses:

  • Prospective meta-analysis, a method based on designing prospective trials with a predefined purpose, offers a promising approach. When these trials are completed, they can serve as primary studies for a meta-analysis. This method can address a wide range of research questions, from focused clinical inquiries to comprehensive research agendas, demonstrating its versatility and potential impact.
  • Meta-analysis of individual participants' data, while offering a more robust approach to handling confounders and formulating new hypotheses, presents its own challenges. These include potential time constraints and logistical complexities. Moreover, the risk of selective reporting bias should be seriously considered, underscoring the need for meticulous planning and execution.
  • Network meta-analyses allow the analytic process to be extended to more than two treatment groups, utilizing direct and indirect comparisons between them. This approach not only provides a more comprehensive understanding of the treatment landscape but also allows for the comparison of treatments that have not been directly compared in individual studies. Although most are based on already published data, they can still build on prospective meta-analytic designs or individual-level data.
  • Umbrella meta-analyses, which synthesize evidence from all relevant systematic reviews and meta-analyses on a specific topic, constitute an attractive way to distil and translate large amounts of evidence.
* https://www.gradeworkinggroup.org/
** https://www.prisma-statement.org/
*** https://amstar.ca/index.php


Meta-analysis is a statistical approach widely used in the research community to combine data from multiple studies. Its primary purpose is to provide a comprehensive understanding of a particular phenomenon by identifying patterns, trends, and inconsistencies that may not be apparent in individual studies. Meta-analysis is advantageous in reconciling contradictory findings from different studies and increasing statistical power.

However, it is essential to recognize the potential biases associated with meta-analysis, such as publication bias and the quality of included studies. Rigorous planning and execution of several vital steps are required to conduct a reliable meta-analysis. There are various meta-analysis methods, each with unique strengths and limitations. Lastly, it is crucial to report the results of a meta-analysis transparently and accurately to enhance interpretability and reproducibility, contributing to the advancement of knowledge in respective fields.

The fundamentals of meta-analysis can be summarized as follows:

  • Definition: Meta-analysis is a statistical technique that combines the results of multiple primary studies to calculate point and interval estimates of a population parameter, usually an effect size.
  • Applications: This versatile statistical technique is used in a multitude of fields, from psychology to international business, from medicine to clinical research. It provides a quantitative synthesis of literature and estimates summary effect sizes.
  • Methodology: Proper methodology application is crucial, including bibliographical search, appropriate study combination, and correct result representation to ensure validity.
  • Challenges: Issues such as heterogeneity of primary studies, publication bias, and interpretation difficulties are fundamental aspects that need to be addressed for the internal validity of meta-analyses.
  • Teaching and Guidance: The complexity of meta-analysis necessitates the availability of guidelines and practical examples to improve the quality of published meta-analyses, making it achievable for junior researchers and clinicians with expert guidance.

In conclusion, the fundamentals of meta-analysis encompass its definition, applications, methodology, challenges, and available guidance for conducting high-quality research. However, it is essential to note that while this overview provides a broad understanding of meta-analysis fundamentals, it does not delve into advanced methods or specific statistical techniques for meta-analysis.



Meta-analysis is a research synthesis method that involves reviewing primary research on a specific topic to integrate findings. This process is crucial to the scientific enterprise as it allows for properly evaluating evidence for different hypotheses and formulating generalizations. Research synthesis can be conducted qualitatively through narrative reviews or quantitatively using statistical methods to integrate results from individual studies (Koricheva et al., 2013).

Meta-analysis has had a transformative effect in many scientific fields, leading the way in establishing evidence-based practice. More importantly, it has been instrumental in resolving seemingly contradictory research outcomes, showcasing its problem-solving capability and revolutionary impact.

Meta-analysis is more than just a technique; it is a well-regarded and favoured approach for combining research results across different fields. Based on current studies, it offers a comprehensive evaluation of the magnitude of an effect, thereby strengthening its reliability and significance.



Pooling data from multiple studies increases the sample size and enhances the statistical power of the results and the accuracy of the calculated effect estimates. It is considered the most effective way to assess and examine the evidence for a specific issue, offering a high level of evidence and forming recommendations for clinical practice. However, the strength of the evidence provided depends closely on the quality of the studies included and the thoroughness of the meta-analytic process (Papakostidis & Giannoudis, 2023).

While meta-analysis has numerous advantages, it also has methodological weaknesses and potential difficulties in interpreting overall results. This underscores the need for readers to maintain a critical approach.

The field of meta-analysis is not without its ongoing debates and limitations, which continue to attract attention. These include issues such as publication bias and omitted variable bias, which are essential to consider in the context of meta-analytic research.

Meta-analysis has many advantages over other research synthesis methods. Does this mean that meta-analysis is always preferred and that narrative reviews, combining probabilities, and vote-counting procedures must be abandoned altogether?

Among the various advantages, it is worth highlighting (Deeks et al., 2023; Koricheva et al., 2013):

  • Meta-analysis provides a comprehensive literature assessment, offers a high level of evidence, and helps establish practice recommendations.
  • Meta-analysis provides a more objective, informative, and powerful means of summarizing individual studies' results than narrative/qualitative reviews and vote counting.
  • While the use of meta-analysis is on the rise, it is essential to note that understanding the method is valuable even if you are not planning to conduct your own meta-analyses. This knowledge will enable researchers to follow and evaluate the literature in their field effectively.
  • Applying meta-analysis to applied fields (e.g., conservation and environmental management) can make results more valuable for policymakers.
  • Mastering the fundamentals of meta-analysis can significantly enhance the quality of data presentation in original research, making it possible to incorporate the findings into future research reviews.
  • Conducting meta-analysis changes the way one reads and evaluates primary studies. It makes one acutely aware that the statistical significance of the results depends on statistical power and, in general, improves one's ability to evaluate evidence critically.
  • To enhance precision: Many individual studies are too small to provide conclusive evidence about the effects of interventions. Precision is typically improved when estimates are based on a larger data pool.
  • Primary studies typically target specific participants and well-defined interventions. Combining studies with varying characteristics makes it possible to address questions beyond the scope of individual studies and to explore the consistency of effects across a broader range of populations and interventions. This approach can also help identify reasons for differences in effect estimates.
  • Combining study results through statistical synthesis allows for a formal assessment of conflicting findings and exploration of reasons for varying results to resolve disputes from seemingly contradictory studies or to generate new hypotheses.

Meta-analysis alone or in combination with other research synthesis methods should be used whenever estimating the magnitude of an effect and understanding sources of variation in that effect is of interest and when at least some of the primary studies gathered provide sufficient data to carry out the analysis.

Emphasizing the importance of a critical approach, it becomes evident that it is crucial to identify deficiencies in methodology and interpret overall findings in meta-analyses. This approach addresses concerns about publication bias and the potential for erroneous findings when dissimilar studies with varying outcome data are included.

It is essential to note some of its drawbacks, such as the inclusion of low-quality studies. As an alternative to meta-analysis, "best evidence synthesis" would consider only reputable studies. The challenge here is determining the criteria for distinguishing between good and bad. It is advisable to include as many papers as possible and to give importance to various aspects of study design based on widely approved methodological practice. This allows us to explore how different methods impact the estimated effects (border effects, in Havranek and Irsova's case). The impact factor of the publication vehicle and the number of citations each study receives must also be considered (Havranek & Irsova, 2016).

Replicability in research is of the utmost importance, as it enables other researchers to verify the findings and build upon the existing knowledge. To enable other researchers to reproduce the analysis, the approach is to seek out studies that assess the effect of interest (in this example, the impact of borders). It is acceptable to omit certain studies if their results do not systematically differ from those included in the analysis.

Studies reporting numerous estimates significantly influence the meta-analysis. When each estimate is given equal weight, the imbalanced nature of data in meta-analysis means that studies with numerous estimates dictate the results. One potential solution is the mixed-effects multilevel model, which assigns approximately equal weight to each study if the estimates within the study are highly correlated. However, this method introduces random effects at the study level, which may be correlated with explanatory variables.

Authors' preferred estimates should carry more weight. Studies examining the border effect typically present numerous estimates and often favour a subset of these estimates (many results are presented as robustness checks). While some authors explicitly state their preferences, the preferred estimates can be determined for only some studies. Instead, a researcher must control for characteristics of data and methodology, which are more straightforward to code and should capture most of the authors' preferences, such as controlling for multilateral resistance (Havranek & Irsova, 2016).

It is important to note that individual estimates are only partially independent because authors utilize similar data. When conducting a meta-analysis, it is crucial to consider that individual clinical trials can be largely independent, particularly in medical research. In economics, however, the regression results and observations within most datasets are not independent. The dependence among observations is addressed by clustering the standard errors at the level of individual studies and datasets, as sketched below.
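A minimal sketch of how such clustering might be implemented with statsmodels, regressing reported effect sizes on study characteristics and clustering standard errors by study. The variable names and data are hypothetical, and this is only one possible way to account for within-study dependence, not the specification used in any paper cited here.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical meta-analytic dataset: several estimates per study
data = pd.DataFrame({
    "study_id":    [1, 1, 1, 2, 2, 3, 3, 3, 4, 4],
    "effect_size": [0.42, 0.38, 0.45, 0.15, 0.12, 0.51, 0.49, 0.55, 0.30, 0.28],
    "std_error":   [0.10, 0.11, 0.09, 0.07, 0.08, 0.15, 0.14, 0.16, 0.09, 0.10],
    "panel_data":  [1, 1, 1, 0, 0, 1, 1, 1, 0, 0],   # example methodology dummy
})

# Simple meta-regression of effect sizes on study characteristics
X = sm.add_constant(data[["std_error", "panel_data"]])
model = sm.OLS(data["effect_size"], X)

# Cluster standard errors at the study level to account for within-study dependence
result = model.fit(cov_type="cluster", cov_kwds={"groups": data["study_id"]})
print(result.summary())
```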

There are too many potential explanatory variables, and it is unclear which ones should be included. With numerous aspects of study design, finding a theory that substantiates the inclusion of all of them is challenging. For instance, an option is to assign more weight to extensive studies published in reputable journals, but it is not evident why they should consistently report different results.

Meta-analysis compares dissimilar findings. In economics, meta-analysis examines heterogeneous estimates. Various estimates are produced using different methods, and it is necessary to account for differences in the design of primary studies. To enhance the comparability of the estimates in a dataset, one can choose to include only the results concerning the impact of specific common variables and exclude the extensive literature on the others.

Errors in data coding are inevitable. Compiling data for meta-analysis involves months of reading and coding. Research assistants should not be used for this assignment because of the risk of moving straight to the regression tables and coding the data without thoroughly reviewing the primary studies. Even so, it is impossible to eliminate errors entirely; they can only be minimized by independently collecting, comparing, and correcting the datasets, which ensures the reliability of the research.

Publication bias undermines the validity of meta-analysis. When researchers preferentially report estimates displaying a particular sign or statistical significance, the mean reported effect size overestimates the true effect size and does not accurately represent it.
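One common way to probe for such bias is a funnel-asymmetry (Egger-type) regression of reported effect sizes on their standard errors: a significant slope suggests that smaller, noisier studies report systematically different effects. The sketch below is an illustrative implementation with invented numbers, not a complete publication-bias workup.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical reported effect sizes and their standard errors
df = pd.DataFrame({
    "effect":    [0.42, 0.15, 0.51, 0.30, 0.22, 0.60, 0.05, 0.48],
    "std_error": [0.20, 0.08, 0.30, 0.12, 0.15, 0.35, 0.05, 0.28],
})

# Egger-type funnel asymmetry test: regress effects on standard errors,
# weighting by inverse variance (precision). A significant coefficient on
# the standard error term signals possible publication bias; the constant
# can be read as an estimate of the effect corrected for that asymmetry.
X = sm.add_constant(df["std_error"])
weights = 1 / df["std_error"] ** 2
result = sm.WLS(df["effect"], X, weights=weights).fit()
print(result.params)
print(result.pvalues)
```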

In conclusion, meta-analysis involves critical steps such as question definition, data collection, analysis, and reporting results. Defining the question is crucial in shaping the focus and direction of the research. While it offers high-level evidence and informs clinical practice, it also faces challenges related to methodological weaknesses, publication bias, and potential limitations in achieving its objectives. Despite these limitations, meta-analysis significantly contributes to evidence-based practice in healthcare by providing a comprehensive synthesis of available research.



Online versus offline differences in meta-analysis data collection must be considered. Internet-based research can collect large data sets from a diverse world population. Therefore, it is necessary to describe the sample of participants in detail to verify whether this potential of Internet-based research is used and how.
Relevant sample information includes the country in which and the languages in which the study was carried out, the age of the participants, and whether only university students were considered, in order to assess the heterogeneity and generalizability of the results (Kaufmann, 2024).

As with meta-analyses of traditional studies, meta-analyses of Internet-based research require collecting the number of participants and the effect sizes for the output variables of interest. Especially for Internet-based surveys, the number of participants who dropped out is also valuable to record and consider in meta-analyses.

Ideally, the coding procedure is conducted by a team of experts in the research area to be meta-analysed, who agree on the different codes. At least two coders are required for any subsequent calculation of intercoder reliability values.

Freelon's (2010, 2013) ReCal software is ideal for intercoder reliability estimation and provides a data set quality value for subsequent analysis*. ReCal comprises three separate modules, each designed to handle specific types of data, whether nominal, ordinal, or interval/ratio-level. A complementary strategy is based on an online survey requesting study coding, sent to the first authors of the primary studies; this strategy saves time and increases reliability in future meta-analyses. Additionally, Kaufmann & Reips (2024) provide a survey model for meta-analyses (Univ. Konstanz)**.
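For readers who want to compute a simple intercoder reliability statistic outside ReCal, the sketch below calculates percent agreement and Cohen's kappa for two coders on a nominal coding variable. The codes are invented for illustration, and kappa is only one of several possible indices (Krippendorff's alpha is another common choice).

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Percent agreement and Cohen's kappa for two coders (nominal codes)."""
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected agreement under independence, from each coder's marginal distribution
    freq1, freq2 = Counter(coder1), Counter(coder2)
    categories = set(coder1) | set(coder2)
    expected = sum((freq1[c] / n) * (freq2[c] / n) for c in categories)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical codes assigned by two coders to ten studies (e.g., study design)
coder_a = ["RCT", "RCT", "obs", "obs", "RCT", "obs", "RCT", "obs", "obs", "RCT"]
coder_b = ["RCT", "obs", "obs", "obs", "RCT", "obs", "RCT", "obs", "RCT", "RCT"]

agreement, kappa = cohens_kappa(coder_a, coder_b)
print(f"Percent agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```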

Text mining is a valuable support tool in the coding procedure of systematic reviews, as it can potentially increase the objectivity of the review process.
Before performing any data aggregation analysis, a data description must be provided first, typically summarized in a table.

Thus, the general steps to follow are:

  • Identify the objectives and formulate the research question.
  • Develop a protocol.
  • Conduct a literature search.
  • Define inclusion and exclusion criteria.
  • Select articles according to the defined inclusion and exclusion criteria.
  • Explore and interpret the selected articles.
  • Analyse and report the results obtained.

*https://ln.run/PEGc4
**https://acesse.dev/dDDv5