Meta-analysis


Editor-In-Chief: C. Michael Gibson, M.S., M.D.


Overview

In statistics, a meta-analysis is a sub-type of systematic review that combines the results of several studies addressing a set of related research hypotheses. The first meta-analysis was performed by Karl Pearson in 1904, in an attempt to overcome the problem of reduced statistical power in studies with small sample sizes; analyzing the results from a group of studies allows more accurate estimation of the effect of interest.

Although meta-analysis is widely used in epidemiology and evidence-based medicine today, a meta-analysis of a medical treatment was not published until 1955. In the 1970s, more sophisticated analytical techniques were introduced in educational research, starting with the work of Gene V. Glass, Frank L. Schmidt, and John E. Hunter.

The online Oxford English Dictionary lists the first usage of the term in the statistical sense as 1976 by Glass. The statistical theory surrounding meta-analysis was greatly advanced by the work of Nambury S. Raju, Larry V. Hedges, Ingram Olkin, John E. Hunter, and Frank L. Schmidt.

Uses in modern science

Because the results from different studies investigating different independent variables are measured on different scales, the dependent variable in a meta-analysis is some standardized measure of effect size. For comparative experiments, the usual effect size indicator is the standardized mean difference (d), which is the standard-score equivalent of the difference between means, or an odds ratio if the outcome of the experiments is a dichotomous variable (success versus failure). A meta-analysis can also be performed on studies that report their findings as correlation coefficients, for example studies of the correlation between familial relationships and intelligence. In these cases, the correlation itself is the indicator of effect size.
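As an illustration, a minimal sketch of these two effect size measures, using hypothetical group summaries and 2x2 counts (not taken from any particular study):

```python
import math

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d: difference between group means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def odds_ratio(events_treated, n_treated, events_control, n_control):
    """Odds ratio for a dichotomous (success versus failure) outcome."""
    a, b = events_treated, n_treated - events_treated
    c, d = events_control, n_control - events_control
    return (a * d) / (b * c)

# Hypothetical values
print(standardized_mean_difference(10.2, 2.1, 40, 9.1, 2.3, 38))  # d is about 0.5
print(odds_ratio(12, 50, 20, 50))                                 # OR is about 0.47
```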

The method is not restricted to situations in which one or more variables is defined as "dependent." For example, a meta-analysis could be performed on a collection of studies each of which attempts to estimate the incidence of left-handedness in various groups of people.

Researchers should be aware that variations in sampling schemes can introduce heterogeneity into the results, that is, the presence of more than one underlying effect in the solution. For instance, if some studies used 30 mg of a drug and others used 50 mg, we would plausibly expect two clusters to be present in the data, each varying around the mean effect of one dosage or the other. This can be modelled using a "random effects model."

Results from studies are combined using different approaches. One approach frequently used in meta-analysis in health care research is termed the 'inverse variance method'. The average effect size across all studies is computed as a weighted mean, whereby the weights are equal to the inverse variance of each study's effect estimator. Larger studies and studies with less random variation are given greater weight than smaller studies. Other common approaches include the Mantel Haenszel method and the Peto method.
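A minimal sketch of the inverse variance method, assuming each study supplies an effect estimate and its standard error (the values below are hypothetical):

```python
import math

def inverse_variance_pool(effects, ses):
    """Fixed-effect pooled estimate; each study's weight is the inverse of its variance."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical log odds ratios and standard errors from three studies
effects = [-0.35, -0.10, -0.22]
ses = [0.12, 0.20, 0.15]
print(inverse_variance_pool(effects, ses))  # the most precise studies dominate the average
```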

A free Excel-based calculator to perform Mantel Haenszel analysis is available at: http://www.pitt.edu/~super1/lecture/lec1171/014.htm.

The same site provides a free Excel-based Peto method calculator at: http://www.pitt.edu/~super1/lecture/lec1171/015.htm
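A minimal sketch of the Mantel Haenszel pooled odds ratio that such calculators implement, using hypothetical 2x2 tables (one per study):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across studies.

    Each table is (a, b, c, d): events and non-events in the treated group,
    then events and non-events in the control group.
    """
    numerator = denominator = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        numerator += a * d / n
        denominator += b * c / n
    return numerator / denominator

# Hypothetical 2x2 tables from three studies
tables = [(12, 38, 20, 30), (8, 42, 15, 35), (25, 75, 40, 60)]
print(mantel_haenszel_or(tables))  # about 0.48
```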

Cochrane and other sources provide a useful discussion of the differences between these two approaches.

Question: Why not just add up all the results across studies?

Answer: There is concern about Simpson's paradox, in which the direction of an effect in naively pooled data can differ from its direction within each individual study.

Note, however, that Mantel Haenszel analysis and Peto analysis introduce their own biases and distortions of the data results.

A recent approach to studying the influence that weighting schemes can have on results has been proposed through the construct of gravity, which is a special case of combinatorial meta-analysis.

Modern meta-analysis does more than just combine the effect sizes of a set of studies. It can test whether the studies' outcomes show more variation than would be expected from the sampling of different research participants. If that is the case, study characteristics such as the measurement instrument used, the population sampled, or aspects of the studies' design are coded. These characteristics are then used as predictor variables to analyze the excess variation in the effect sizes. Some methodological weaknesses in studies can be corrected statistically. For example, it is possible to correct effect sizes or correlations for the downward bias due to measurement error or restriction of score ranges.
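For example, a minimal sketch of the classical correction of an observed correlation for attenuation due to measurement error, assuming the reliabilities of both measures are known:

```python
import math

def disattenuate(r_observed, reliability_x, reliability_y):
    """Correct an observed correlation for measurement error in both variables."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

# Hypothetical observed correlation and reliabilities
print(disattenuate(0.30, 0.80, 0.70))  # about 0.40
```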

Meta-analysis leads to a shift of emphasis from single studies to multiple studies. It emphasizes the practical importance of the effect size instead of the statistical significance of individual studies. This shift in thinking has been termed "meta-analytic thinking".

The results of a meta-analysis are often shown in a forest plot.

Methods

Literature searching

Studies of automated literature searching show that important search concepts are the exposure (E) and outcome (O) in the study abstract[1].


Risk of bias in studies

Frameworks exist for assessing the quality of individual studies and groups of studies.[2]


Assessing the quality of a trial by only using the published report may lead to inaccurate conclusions.[3]

A weakness of the method is that sources of bias are not controlled by the method itself: a good meta-analysis of badly designed studies will still result in bad statistics. Robert Slavin has argued that only methodologically sound studies should be included in a meta-analysis, a practice he calls 'best evidence meta-analysis'. Other meta-analysts would include weaker studies and add a study-level predictor variable that reflects the methodological quality of the studies, in order to examine the effect of study quality on the effect size. Another weakness of the method is its heavy reliance on published studies, which may inflate the apparent effect because it is very hard to publish studies that show no significant results. This publication bias or "file-drawer effect" (where non-significant studies end up in the desk drawer instead of in the public domain) should be seriously considered when interpreting the outcomes of a meta-analysis. Because of the risk of publication bias, many meta-analyses now include a "fail-safe N" statistic that calculates the number of studies with null results that would need to be added to the meta-analysis in order for an effect to no longer be reliable.
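A minimal sketch of one common formulation of the fail-safe N (Rosenthal's method), assuming each included study is summarized by a standard normal deviate (z score):

```python
def failsafe_n(z_scores, z_alpha=1.645):
    """Number of unpublished null-result studies needed to make the combined
    one-tailed result non-significant at p = .05 (Rosenthal's formulation)."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    return (z_sum**2 / z_alpha**2) - k

# Hypothetical z scores from five published studies
print(failsafe_n([2.1, 1.8, 2.5, 1.2, 2.9]))  # about 36 hidden null studies
```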

Small study effect and publication bias

The small study effect is the observation that small studies tend to report more positive results.[4][5][6] This is especially a threat when the original studies in a meta-analysis include fewer than 50 patients.[7]

Statistical methods

Measuring consistency of study results

Consistency can be statistically tested using either Cochran's Q or the I2 statistic.[8][9] The I2 is the "percentage of total variation across studies that is due to heterogeneity rather than chance."[8] These numbers are usually displayed for each group of studies on a forest plot.

When interpreting Cochran's Q, heterogeneity is considered present if its p-value is < 0.05, or by a more lenient threshold, < 0.10[10][11].

The following has been proposed for interpreting I2:[8]

  • Low heterogeneity is I2 = 25%
  • Moderate heterogeneity is I2 = 50%
  • High heterogeneity is I2 = 75%

or according to the Handbook of the Cochrane Collaboration:[12]

  • 0%-40%: might not be important
  • 30%-60%: may represent moderate heterogeneity
  • 50%-90%: may represent substantial heterogeneity
  • 75%-100%: considerable heterogeneity

However, I2, even when the value is 0%, can be misleading if the confidence intervals around the value are not provided.[13][14]
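A minimal sketch of how Cochran's Q and I2 can be computed from each study's effect estimate and standard error (the values below are hypothetical):

```python
def heterogeneity(effects, ses):
    """Cochran's Q and I2: the percentage of variation across studies
    attributed to heterogeneity rather than chance."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical effect estimates and standard errors from four studies
effects = [-0.35, -0.10, -0.22, 0.05]
ses = [0.12, 0.20, 0.15, 0.18]
print(heterogeneity(effects, ses))
```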

Statistical methods exist for assessing the importance of subgroups.[15]
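For two subgroups, one such method is the test of the difference between two estimates described by Altman and Bland;[15] a minimal sketch, using hypothetical subgroup estimates and standard errors:

```python
import math

def subgroup_difference(estimate1, se1, estimate2, se2):
    """z test for the difference between two independent subgroup estimates."""
    difference = estimate1 - estimate2
    se_difference = math.sqrt(se1**2 + se2**2)
    return difference, se_difference, difference / se_difference

# Hypothetical subgroup estimates (e.g., log odds ratios) and standard errors
print(subgroup_difference(-0.40, 0.15, -0.05, 0.20))  # z of about -1.4
```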

Types of meta-analyses

A conventional meta-analysis is also called a pairwise meta-analysis.

Network meta-analysis

Network meta-analysis[16] and Bayesian hierarchical models[17] pool studies in order to compare treatments that have not been directly compared.[18][19] Network meta-analyses are often not well performed.[20] Network meta-analyses of both randomized controlled trials[21][22][23] and diagnostic test assessments[24] can have misleading results. Network meta-analyses have been conducted by the Cochrane Collaboration.[25][26]

Network meta-analyses can be conducted with BUGS and OpenBUGS software.
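The full network models are usually fitted in such Bayesian software. The simplest building block, an adjusted indirect comparison of two treatments through a common comparator (the Bucher method, a simplification rather than a full network model), can be sketched as follows, with hypothetical log odds ratios:

```python
import math

def indirect_comparison(log_or_ab, se_ab, log_or_cb, se_cb):
    """Adjusted indirect comparison of treatment A versus C via the common comparator B."""
    log_or_ac = log_or_ab - log_or_cb
    se_ac = math.sqrt(se_ab**2 + se_cb**2)
    return log_or_ac, se_ac

# Hypothetical pooled log odds ratios from A-vs-B trials and C-vs-B trials
print(indirect_comparison(-0.30, 0.10, -0.10, 0.12))  # A vs C: log OR about -0.20
```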

Network meta-analyses are of two types:

  • Arm-based
  • Contrast-based

Individual patient data meta-analysis

Individual patient data meta-analysis can be done with a one-stage or a two-stage approach. The two-stage approach, which does not require true pooling of the individual patient data, can yield very similar results when confounders or effect modifiers are statistically controlled for[27].
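A minimal sketch of the two-stage approach, in which each study's individual patient data are first reduced to a treatment effect estimate and the estimates are then pooled by inverse variance (the patient-level values below are hypothetical):

```python
import math

def stage_one(treated, control):
    """Stage one: per-study mean difference and standard error from individual patient data."""
    def mean_and_variance(values):
        m = sum(values) / len(values)
        v = sum((x - m)**2 for x in values) / (len(values) - 1)
        return m, v
    mt, vt = mean_and_variance(treated)
    mc, vc = mean_and_variance(control)
    difference = mt - mc
    se = math.sqrt(vt / len(treated) + vc / len(control))
    return difference, se

def stage_two(estimates):
    """Stage two: inverse-variance pooling of the per-study estimates."""
    weights = [1.0 / se**2 for _, se in estimates]
    pooled = sum(w * d for w, (d, _) in zip(weights, estimates)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

# Hypothetical individual patient outcomes from two studies
study1 = stage_one([5.1, 6.0, 4.8, 5.5], [4.2, 4.9, 4.1, 4.6])
study2 = stage_one([6.2, 5.8, 6.5, 5.9, 6.1], [5.0, 5.4, 4.8, 5.2, 5.3])
print(stage_two([study1, study2]))
```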

Narrative or qualitative data

A meta-narrative combines qualitative or narrative data.[28][29] Three approaches have been described[30]:

  • Classic Glaserian grounded theory.[31] This method may have difficulty when the goal is explication[30].
  • Straussian grounded theory[32]. This approach may be easiest for new projects.[30]
  • Constructivist grounded theory

Grounded theory can guide a meta-narrative of qualitative data[33][31][32]; an example is the study by Seltz et al.[34]

Grounded thematic analysis can guide the collection of qualitative or narrative observations[35]. This approach can also be used in a positive deviance approach to quality improvement assessments.[36]

Guiding principles are[37]:

  1. Pragmatism. "what to include is not self-evident. The reviewer must be guided by what will be most useful to the intended audience(s), for example, what is likely to promote sense making."
  2. Pluralism. "the topic should be illuminated from multiple angles and perspectives"
  3. Historicity. "research traditions are often best described as they unfolded over time, highlighting significant individual scientists, events and discoveries which shaped the tradition."
  4. Contestation. "'conflicting data' from different research traditions should be examined to generate higher-order insights"
  5. Reflexivity. "throughout the review, reviewers must continually reflect, individually and as a team, on the emerging findings."
  6. Peer review. "emerging findings should be presented to an external audience"

Reporting standards

The RAMESES publication standards guide methods[38]:

  • Wong. RAMESES publication standards for meta-narrative reviews. BMC Med. 2013[38] Table
  • Wong. RAMESES II reporting standards for realist evaluations. BMC Med. 2016[39] Table

Problems with meta-analyses

Obsolescence and duplication

The conclusions of a meta-analysis may be superseded by research published after its search date; this can occur even by the time the meta-analysis itself is published.[40][41] Strategies have been developed for updating meta-analyses.[42]

Meta-analyses may also be redundant.[43]

References

  1. Tsafnat G, Glasziou P, Karystianis G, Coiera E (2018). "Automated screening of research studies for systematic reviews using study characteristics". Syst Rev. 7 (1): 64. doi:10.1186/s13643-018-0724-7. PMC 5918752. PMID 29695296.
  2. openMetaAnalysis contributors. Assessing quality of individual studies (PRISMA Item 12)
  3. Vale CL, Tierney JF, Burdett S (2013). "Can trial quality be reliably assessed from published reports of cancer trials: evaluation of risk of bias assessments in systematic reviews". BMJ. 346: f1798. doi:10.1136/bmj.f1798. PMID 23610376.
  4. Dechartres A, Trinquart L, Boutron I, Ravaud P (2013). "Influence of trial sample size on treatment effect estimates: meta-epidemiological study". BMJ. 346: f2304. doi:10.1136/bmj.f2304. PMC 3634626. PMID 23616031.
  5. Nüesch E, Trelle S, Reichenbach S, Rutjes AW, Tschannen B, Altman DG; et al. (2010). "Small study effects in meta-analyses of osteoarthritis trials: meta-epidemiological study". BMJ. 341: c3515. doi:10.1136/bmj.c3515. PMC 2905513. PMID 20639294.
  6. Sterne JA, Egger M, Smith GD (2001). "Systematic reviews in health care: Investigating and dealing with publication and other biases in meta-analysis". BMJ. 323 (7304): 101–5. PMC 1120714. PMID 11451790.
  7. Richy F, Ethgen O, Bruyere O, Deceulaer F, Reginster J (2004). "From Sample Size to Effect-Size: Small Study Effect Investigation (SSEi)". The Internet Journal of Epidemiology. 1 (2).
  8. Higgins JP, Thompson SG, Deeks JJ, Altman DG (2003). "Measuring inconsistency in meta-analyses". BMJ. 327 (7414): 557–60. doi:10.1136/bmj.327.7414.557. PMC 192859. PMID 12958120.
  9. Higgins JP, Thompson SG (2002). "Quantifying heterogeneity in a meta-analysis". Stat Med. 21 (11): 1539–58. doi:10.1002/sim.1186. PMID 12111919.
  10. Fleiss JL (1986). "Analysis of data from multiclinic trials". Control Clin Trials. 7 (4): 267–75. PMID 3802849.
  11. Dickersin K, Berlin JA (1992). "Meta-analysis: state-of-the-science". Epidemiol Rev. 14: 154–76. PMID 1289110.
  12. Higgins JPT, Green S, ed. (2008). "Cochrane Handbook for Systematic Reviews of Interventions". Cochrane Collaboration. Retrieved 2016-10-04.
  13. Ioannidis JP, Patsopoulos NA, Evangelou E (2007). "Uncertainty in heterogeneity estimates in meta-analyses". BMJ. 335 (7626): 914–6. doi:10.1136/bmj.39343.408449.80. PMID 17974687.
  14. Ioannidis JP (2008). "Interpretation of tests of heterogeneity and bias in meta-analysis". J Eval Clin Pract. 14 (5): 951–7. doi:10.1111/j.1365-2753.2008.00986.x. PMID 19018930.
  15. Altman DG, Bland JM (2003). "Interaction revisited: the difference between two estimates". BMJ. 326 (7382): 219. PMC 1125071. PMID 12543843.
  16. Lumley T (2002). "Network meta-analysis for indirect treatment comparisons". Stat Med. 21 (16): 2313–24. doi:10.1002/sim.1201. PMID 12210616.
  17. Lu G, Ades AE (2004). "Combination of direct and indirect evidence in mixed treatment comparisons". Stat Med. 23 (20): 3105–24. PMID 15449338.
  18. Salanti G, Kavvoura FK, Ioannidis JP (2008). "Exploring the geometry of treatment networks". Ann. Intern. Med. 148 (7): 544–53. PMID 18378949.
  19. Ioannidis JP (2009). "Integration of evidence from multiple meta-analyses: a primer on umbrella reviews, treatment networks and multiple treatments meta-analyses". CMAJ. 181 (8): 488–93. doi:10.1503/cmaj.081086. PMC 2761440. PMID 19654195.
  20. Song F, Loke YK, Walsh T, Glenny AM, Eastwood AJ, Altman DG (2009). "Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews". BMJ. 338: b1147. PMC 2665205. PMID 19346285.
  21. Kent DM, Thaler DE (2008). "Stroke prevention--insights from incoherence". N. Engl. J. Med. 359 (12): 1287–9. doi:10.1056/NEJMe0806806. PMID 18753641.
  22. Thijs V, Lemmens R, Fieuws S (2008). "Network meta-analysis: simultaneous meta-analysis of common antiplatelet regimens after transient ischaemic attack or stroke". Eur Heart J. 29 (9): 1086–92. PMID 18349026.
  23. Chou, Roger (2009). "Gabapentin Versus Tricyclic Antidepressants for Diabetic Neuropathy and Post-Herpetic Neuralgia: Discrepancies Between Direct and Indirect Meta-Analyses of Randomized Controlled Trials". Journal of General Internal Medicine. 24 (2): 178–188. doi:10.1007/s11606-008-0877-5. PMID 19089502.
  24. Takwoingi, Yemisi (2013). "Empirical Evidence of the Importance of Comparative Studies of Diagnostic Test Accuracy". Annals of Internal Medicine. 158 (7): 544–554. doi:10.7326/0003-4819-158-7-201304020-00006. ISSN 0003-4819.
  25. Singh JA, Christensen R, Wells GA, Suarez-Almazor ME, Buchbinder R, Lopez-Olivo MA; et al. (2009). "A network meta-analysis of randomized controlled trials of biologics for rheumatoid arthritis: a Cochrane overview". CMAJ. 181 (11): 787–96. doi:10.1503/cmaj.091391. PMC 2780484. PMID 19884297.
  26. Singh JA, Christensen R, Wells GA, Suarez-Almazor ME, Buchbinder R, Lopez-Olivo MA; et al. (2009). "Biologics for rheumatoid arthritis: an overview of Cochrane reviews". Cochrane Database Syst Rev (4): CD007848. doi:10.1002/14651858.CD007848.pub2. PMID 19821440.
  27. Scotti L, Rea F, Corrao G (2018). "One-stage and two-stage meta-analysis of individual participant data led to consistent summarized evidence: lessons learned from combining multiple databases". J Clin Epidemiol. 95: 19–27. doi:10.1016/j.jclinepi.2017.11.020. PMID 29197646.
  28. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O (2004). "Diffusion of innovations in service organizations: systematic review and recommendations". Milbank Q. 82 (4): 581–629. doi:10.1111/j.0887-378X.2004.00325.x. PMC 2690184. PMID 15595944.
  29. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O, Peacock R (2005). "Storylines of research in diffusion of innovation: a meta-narrative approach to systematic review". Soc Sci Med. 61 (2): 417–30. doi:10.1016/j.socscimed.2004.12.001. PMID 15893056.
  30. Rieger KL (2019). "Discriminating among grounded theory approaches". Nurs Inq. 26 (1): e12261. doi:10.1111/nin.12261. PMC 6559166. PMID 30123965.
  31. Glaser, Barney G.; Strauss, Anselm L. (1967). The discovery of grounded theory: strategies for qualitative research. Chicago. ISBN 978-0-202-30028-3. OCLC 253912.
  32. Corbin, Juliet M.; Strauss, Anselm L. (2015). Basics of qualitative research: techniques and procedures for developing grounded theory (4 ed.). Thousand Oaks, California. ISBN 1-4129-9746-1. OCLC 898334340.
  33. Hanson JL, Balmer DF, Giardino AP (2011). "Qualitative research methods for medical educators". Acad Pediatr. 11 (5): 375–86. doi:10.1016/j.acap.2011.05.001. PMID 21783450.
  34. Seltz LB, Preloger E, Hanson JL, Lane L (2016). "Ward Rounds With or Without an Attending Physician: How Interns Learn Most Successfully". Acad Pediatr. 16 (7): 638–44. doi:10.1016/j.acap.2016.05.149. PMID 27283038.
  35. Patton, Michael Quinn (2015). Qualitative research & evaluation methods : integrating theory and practice. Thousand Oaks, California. ISBN 1-4129-7212-4. OCLC 890080219.
  36. Rose AJ, Petrakis BA, Callahan P, Mambourg S, Patel D, Hylek EM; et al. (2012). "Organizational characteristics of high- and low-performing anticoagulation clinics in the Veterans Health Administration". Health Serv Res. 47 (4): 1541–60. doi:10.1111/j.1475-6773.2011.01377.x. PMC 3401398. PMID 22299722.
  37. Wong G, Greenhalgh T, Westhorp G, Pawson R (2014). "Development of methodological guidance, publication standards and training materials for realist and meta-narrative reviews: the RAMESES (Realist And Meta-narrative Evidence Syntheses – Evolving Standards) project". Health Services and Delivery Research. doi:10.3310/hsdr02300. PMID 25642521.
  38. Wong G, Greenhalgh T, Westhorp G, Buckingham J, Pawson R (2013). "RAMESES publication standards: meta-narrative reviews". BMC Med. 11: 20. doi:10.1186/1741-7015-11-20. PMC 3558334. PMID 23360661.
  39. Wong G, Westhorp G, Manzano A, Greenhalgh J, Jagosh J, Greenhalgh T (2016). "RAMESES II reporting standards for realist evaluations". BMC Med. 14 (1): 96. doi:10.1186/s12916-016-0643-1. PMC 4920991. PMID 27342217.
  40. Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, Moher D (2007). "How quickly do systematic reviews go out of date? A survival analysis". Ann. Intern. Med. 147 (4): 224–33. PMID 17638714.
  41. "The use of older studies in meta-analyses of medical interventions: a survey" (2009-05-11). Retrieved 2009-06-04.
  42. Sampson M, Shojania KG, McGowan J; et al. (2008). "Surveillance search techniques identified the need to update systematic reviews". J Clin Epidemiol. 61 (8): 755–62. doi:10.1016/j.jclinepi.2007.10.003. PMID 18586179.
  43. Siontis, K. C. (2013). "Overlapping meta-analyses on the same topic: survey of published studies". BMJ. 347: f4501. doi:10.1136/bmj.f4501. ISSN 1756-1833.
