Systematic Reviews: A Logical Methodological Extension of Evidence-based Medicine




ACADEMIC EMERGENCY MEDICINE • December 1999, Volume 6, Number 12


Systematic Reviews: A Logical Methodological Extension of Evidence-based Medicine E. JOHN GALLAGHER, MD

Abstract. Academic Emergency Medicine is interested in publishing systematic reviews (also known as meta-analyses) containing explicit, quantitative compilations of valid, critically appraised evidence targeted at topics relevant to the science and practice of emergency medicine. This article presents a strategy for the performance and reporting of systematic reviews, based on established recommendations. A well-performed systematic review of the literature may provide clinically relevant information otherwise unobtainable, and has the potential to alter patient care. Systematic reviews are logical methodological extensions of evidence-based medicine. Key words: systematic reviews; evidence-based medicine; meta-analysis; Cochrane Collaboration; Academic Emergency Medicine. ACADEMIC EMERGENCY MEDICINE 1999; 6:1255–1260

Received from the Department of Emergency Medicine, Montefiore Medical Center, Bronx, NY (EJG). Received August 10, 1999; accepted August 25, 1999. Address for correspondence and reprints: E. John Gallagher, MD, Albert Einstein College of Medicine, Department of Emergency Medicine, Montefiore Medical Center, 111 East 210th Street, Bronx, NY 10467-2490. Fax: 718-798-6084; e-mail: [email protected].

ANALOGOUS to the "CONsolidated Standards Of Reporting Trials" (CONSORT) statement, which calls for uniform reporting of randomized controlled/clinical trials (RCTs),1 authors of systematic reviews should adhere, insofar as possible, to the annotated outline provided here. What follows draws heavily on the Cochrane Collaboration's recommendations for the performance of a Cochrane systematic review.2 More detailed information is available in the Cochrane Reviewers' Handbook, which can be downloaded from http://www.update-software.com/ccweb/cochrane/hbook.htm. Academic Emergency Medicine is specifically interested in systematic reviews that critically analyze therapeutic interventions or diagnostic testing. Both content areas, although facing somewhat different statistical challenges, share at least six defining features common to all well-executed systematic reviews: 1) clear specification of main objective(s); 2) identification of eligible studies as the primary analytic unit; 3) critical appraisal of studies selected for inclusion; 4) merger of combinable data across studies, according to rigorous methodologic and statistical standards; 5) quantitative summary of the main findings, appropriately constrained by the evidence presented; and 6) discussion of evidence-based implications for clinical practice ("clinical bottom line") and future research.2

Although the Cochrane format was originally developed for assessment of treatment, cause, prognosis, or prevention of disease,3 this conceptual approach can be readily adapted to a meta-analysis of diagnostic testing strategies.4 The principal distinction between systematic reviews of diagnosis and treatment is the need for different statistical methods when aggregating data across studies. If the goal is to determine whether treatment A is superior to treatment B, combinable endpoints from each eligible study are first stratified, then compiled in an effort to reconcile contradictory or underpowered conclusions reached by individual prior investigations.5 Combining diagnostic tests with the goal of identifying the best available test for a particular target disorder requires an additional step to adjust summary findings for the interdependence of sensitivity and specificity intrinsic to any diagnostic test.6 Below is an outline of the preferred format for a Cochrane systematic review.2 Selected annotations follow, corresponding to numbered items in the outline.
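One widely used way to make this threshold adjustment is the logit-based summary ROC approach of Moses et al.24 A minimal illustrative sketch follows; the data, function names, and the simple unweighted least-squares fit are assumptions made for illustration, not methods prescribed by this article:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_sroc(tprs, fprs):
    """Fit the line D = a + b*S by ordinary least squares, where
    D = logit(TPR) - logit(FPR) and S = logit(TPR) + logit(FPR)."""
    d = [logit(t) - logit(f) for t, f in zip(tprs, fprs)]
    s = [logit(t) + logit(f) for t, f in zip(tprs, fprs)]
    n = len(d)
    s_bar, d_bar = sum(s) / n, sum(d) / n
    ss = sum((si - s_bar) ** 2 for si in s)
    b = sum((si - s_bar) * (di - d_bar) for si, di in zip(s, d)) / ss
    a = d_bar - b * s_bar
    return a, b

def sroc_tpr(fpr, a, b):
    """Back-transform the fitted line into the summary ROC curve:
    logit(TPR) = (a + (1 + b) * logit(FPR)) / (1 - b)."""
    return expit((a + (1.0 + b) * logit(fpr)) / (1.0 - b))

# Hypothetical studies lying on one symmetric underlying curve,
# each using a different positivity threshold (different FPR)
fprs = [0.10, 0.20, 0.30]
tprs = [expit(2.0 + logit(f)) for f in fprs]
a, b = fit_sroc(tprs, fprs)
```

In practice the fit is often weighted by study size and checked for fit, but the back-transformed curve above is what lets summary sensitivity be read off at any chosen false-positive rate.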

OUTLINE OF PREFERRED FORMAT FOR SYSTEMATIC REVIEWS

1. Structured Abstract:
   1.1. Objective(s)
   1.2. Methods:
      1.2.1. Selection criteria
      1.2.2. Application of selection criteria
      1.2.3. Assessment of study quality
      1.2.4. Search strategy
      1.2.5. Data synthesis
   1.3. Results
   1.4. Conclusions:
      1.4.1. Implications for clinical practice
      1.4.2. Implications for future research
2. Body of Manuscript:
   2.1. Introduction:
      2.1.1. Background
      2.1.2. Objective(s)
   2.2. Methods:
      2.2.1. Selection criteria:
         2.2.1.1. Types of studies
         2.2.1.2. Types of participants
         2.2.1.3. Types of intervention
         2.2.1.4. Types of outcome measures
      2.2.2. Application of selection criteria:
         2.2.2.1. Number of independent reviewers
         2.2.2.2. Assessment of interreviewer concordance
      2.2.3. Criteria for assessment of study quality:
         2.2.3.1. Selection bias
         2.2.3.2. Performance bias
         2.2.3.3. Attrition bias
         2.2.3.4. Detection bias
         2.2.3.5. Omnibus index of study quality:
            2.2.3.5.1. Allocation concealment
            2.2.3.5.2. Blinding
      2.2.4. Search strategy:
         2.2.4.1. Bibliographic databases
         2.2.4.2. Reference lists
         2.2.4.3. Conference proceedings
         2.2.4.4. Hand searches
         2.2.4.5. Constraints
      2.2.5. Method of data synthesis:
         2.2.5.1. Statistical techniques
         2.2.5.2. Sensitivity analyses
   2.3. Results:
      2.3.1. Description of studies:
         2.3.1.1. Tables of included and excluded studies (3.2 and 3.3 below)
         2.3.1.2. Description of:
            2.3.1.2.1. Study participants
            2.3.1.2.2. Interventions
            2.3.1.2.3. Outcome measures
            2.3.1.2.4. Important differences among studies
      2.3.2. Methodologic quality of selected studies:
         2.3.2.1. Overall quality score of included studies
         2.3.2.2. Important flaws
      2.3.3. Findings:
         2.3.3.1. Main results of review
         2.3.3.2. Results of sensitivity analyses
   2.4. Discussion
   2.5. Limitations and Future Questions
   2.6. Conclusions:
      2.6.1. Implications for practice
      2.6.2. Implications for research
3. Tables and Figures:
   3.1. Table of comparisons:
      3.1.1. Lists comparisons made in data tables (3.4 below)
      3.1.2. Comparisons correspond to questions or hypotheses posed under objectives above (2.1.2)
   3.2. Table of included studies:
      3.2.1. Study
      3.2.2. Methods:
         3.2.2.1. Concealment of allocation must be scored
         3.2.2.2. Data sources must be indicated
      3.2.3. Participants
      3.2.4. Interventions
      3.2.5. Outcomes
   3.3. Table of excluded studies:
      3.3.1. List excluded studies
      3.3.2. Indicate basis for exclusion of each
   3.4. Data tables and graphs:
      3.4.1. One table and graph for each comparison listed in table of comparisons above (3.1)
      3.4.2. Types of data tables and graphs:
         3.4.2.1. Dichotomous data
         3.4.2.2. Continuous data
         3.4.2.3. Summary receiver operating characteristic (SROC) curve
4. References:
   4.1. Standard references
   4.2. References to studies relevant to this review:
      4.2.1. Included studies
      4.2.2. Excluded studies
      4.2.3. Unassessed completed studies
      4.2.4. Ongoing studies

ANNOTATIONS

1. Structured Abstract:

1.1. Objective(s): Explicitly identify the work as a systematic review. Describe the focus of the review (treatment, etiology, prognosis, prevention, or diagnostic testing) and its principal objective(s) within that focus.


1.2. Methods: The Methods section of the abstract may be abbreviated as necessary to limit abstract length to 250 words. Selection criteria should describe the types of studies, participants, intervention(s), and outcome measures. Briefly mention the criteria for assessment of study quality and the means of measuring interreviewer concordance in the application of these criteria. Search strategies should include all sources examined, including databases (electronic and other), reference lists, conference proceedings, and hand searches. Constraints such as years and languages searched should be specified. Data synthesis should briefly describe the main statistical techniques used for data merger and assessment of interstudy heterogeneity. These differ with the data type and focus of the review.

1.3. Results: State the number of included and excluded studies, important differences among studies, and the overall quality score of included studies. The main results should be expressed as data-appropriate summary statistic(s), accompanied by measures of precision (confidence intervals),7 and results of any sensitivity analyses. Dichotomous endpoints are preferred wherever possible, and should be reported in both relative and absolute terms, with an estimate of the number needed to treat or harm (NNT/NNH).8

1.4. Conclusions: Evidence-based implications for clinical practice ("clinical bottom line") and for future research.

2. Body of Manuscript:

2.1. Introduction: Background: Establish a context by briefly reviewing prior work, followed by a rationale of the need for a systematic review of this topic. Typically, different studies have arrived at contradictory conclusions, or no single study has had sufficient statistical "power" to exclude a clinically important difference. Objective(s): State the focus of the systematic review being undertaken (treatment, etiology, prognosis, prevention, or diagnostic testing) and clearly identify the primary, secondary, and other endpoint(s).

2.2. Methods:

2.2.1. Selection criteria9: This includes the types of studies meeting criteria for entry into the systematic review. For assessment of health care effectiveness, particularly comparison of treatments, these are usually experimental, i.e., RCTs, since these have been shown to provide the least biased information.10 However, many important clinical questions, particularly toxic exposures (e.g., smoking), can be examined only through observational studies. The most common types of observational studies include prospective or retrospective cohort, case–control, and cross-sectional designs. Types of participants may be intrinsically restricted on the basis of age (e.g., bronchiolitis) or gender (e.g., ectopic pregnancy), or the investigators may wish to examine a specific population of interest (e.g., by age group, gender, race, ethnicity, educational status, or socioeconomic status). Alternatively, participants may be defined on the basis of illness (e.g., asthma). The intervention should be common to all studies meeting entry criteria. Endpoints chosen should be important health care outcomes, explicitly defined, and preferably identified a priori. Primary outcomes of a systematic review need not necessarily be the same ones used in the original studies. Indeed, systematic reviews often provide sufficient power to facilitate examination of dichotomous outcomes drawn from individual studies reporting continuous endpoints (which intrinsically require much smaller sample sizes).11 Outcomes may also be combined.

2.2.2. Application of selection criteria incorporates two main features: the number of independent reviewers applying the criteria (typically two), and a method of assessing interreviewer concordance. This is usually measured with the kappa statistic,12 unless the marginal totals of the table are markedly unbalanced, in which case a proportionate measure of adjusted agreement is preferred.13,14 Adjudication by a third party or by discussion typically resolves disagreements between the two principal reviewers. There is some evidence that reviewer bias may be reduced by pairing a content expert with a second individual who is not a content expert on the topic under examination.15

2.2.3. Study quality is traditionally assessed on the basis of the presence or absence of any of the four main types of intrinsic bias that threaten validity.16 These include: 2.2.3.1.
Selection bias: Systematic distortion in the assignment of individuals to groups under comparison, such that their susceptibility to the target outcome is unequal. This form of bias is offset by randomization with allocation concealment.17

2.2.3.2. Performance bias: Systematic differences in care provided to the groups under comparison other than the target intervention. This is reduced by blinding of patients and providers to group allocation.18 Performance bias commonly takes one of two forms: contamination (application of the maneuver intended only for the treatment group to some portion of the controls) or cointervention (provision of unintended care to either arm of the trial).

2.2.3.3. Attrition bias: Systematic differences in losses of individuals from the groups under comparison. Theoretically this is offset by scrupulous tracking of those withdrawing from the trial, accompanied by an explicit description of protocol deviations. However, this has not been clearly demonstrated to affect validity.18

2.2.3.4. Detection bias: Systematic differences in assessment of the target outcome. This is offset by blinding.

2.2.3.5. An evidence-based omnibus index of study quality for systematic reviews should therefore be based principally on the adequacy of allocation concealment17 and double blinding.18 The following is a suggested adaptation of the Cochrane Collaboration's simple three-tiered classification of all studies entered into a systematic review2: Class A: both major criteria of allocation concealment and double blinding met. Class B: major criteria partly but mostly met. Class C: major criteria mostly not met.

2.2.4. Search strategy: Bibliographic databases searched should include MEDLINE (hard copy, Index Medicus, readily accessible at no cost) and EMBASE (hard copy, Excerpta Medica, neither readily accessible nor free). Because the journal overlap between these two major electronic databases is on the order of only 30–35%,19 parallel searches are strongly encouraged. Of the many other searchable databases available, the Cochrane Library and the Science Citation Index are among the most popular. Other sources of information include reference lists from identified citations, conference proceedings, and results of hand searches of relevant journals.20 Constraints such as years and languages searched should be clearly stated.

2.2.5. Methods of data synthesis typically use stratification of each study followed by aggregation.
Results of individual studies are not combined without prior stratification, since lumping these together as if they were part of one large study would: 1) undermine validity by destroying the protection against selection bias afforded by randomization, and 2) falsely inflate precision.2 This is one of several important differences between a pooled analysis and a meta-analysis (systematic review). It is generally recommended that, for dichotomous data, a measure of both relative and absolute ‘‘risk’’ be provided, e.g., a relative risk and an absolute risk difference. The reciprocal of the latter in a study of competing therapies then becomes the number needed to treat (NNT). This is arguably the single most useful piece of information to emerge from either an RCT or a systematic review of an intervention.8
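To make the recommended summary measures concrete, here is a minimal sketch of per-study relative and absolute "risk" measures and a simple pooled risk difference. The counts are hypothetical, and inverse-variance weighting is shown as one common choice, not necessarily the weighting any particular review would use:

```python
import math

def study_summary(et, nt, ec, nc):
    """Relative risk, absolute risk difference, and NNT for one trial
    (et/nt = events and sample size in the treatment arm; ec/nc = control)."""
    rt, rc = et / nt, ec / nc
    rr = rt / rc
    ard = rc - rt                                   # absolute risk reduction
    nnt = math.inf if ard == 0 else 1.0 / ard       # NNT = reciprocal of ARD
    var_ard = rt * (1 - rt) / nt + rc * (1 - rc) / nc
    return rr, ard, nnt, var_ard

def pooled_risk_difference(studies):
    """Inverse-variance weighted risk difference across stratified studies."""
    weights, ards = [], []
    for et, nt, ec, nc in studies:
        _, ard, _, var_ard = study_summary(et, nt, ec, nc)
        weights.append(1.0 / var_ard)
        ards.append(ard)
    return sum(w * a for w, a in zip(weights, ards)) / sum(weights)

# Hypothetical trials: (events_treatment, n_treatment, events_control, n_control)
trials = [(10, 100, 20, 100), (5, 50, 10, 50)]
rr, ard, nnt, _ = study_summary(*trials[0])
```

For the first hypothetical trial the relative risk is 0.5 and the absolute risk difference is 0.1, giving an NNT of 10, which illustrates why absolute measures are reported alongside relative ones.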

For continuous data, a weighted mean difference may be used (with adjustment for differences in sample size using the inverse of the variance for each individual study), or an overall "effect size"21 may be calculated. One advantage of an effect size is that it is unit-free: it consists of a numerator reflecting central tendency (usually a mean) divided by a denominator reflecting dispersion (usually a standard deviation). Because both are expressed in the same units of measurement, the units cancel, leaving a unitless quantitative estimate of the overall effect of the target intervention. An effect size is particularly useful in combining data from studies reporting their findings in different units thought to measure similar phenomena. For example, peak expiratory flow rate (L/min) has been combined with forced expiratory volume (mL) across otherwise comparable studies of airflow obstruction, using effect size as a common metric.22

Usually a test of heterogeneity is applied to assess the combinability of studies.23 If heterogeneity is present, sensitivity analyses should be conducted to determine whether it can be accounted for by differences in study quality or methodology. If it cannot, it may be inadvisable to merge the data.2 Because most of the available tests for detecting heterogeneity have relatively low power, absence of statistical evidence of heterogeneity does not necessarily mean that the data are homogeneous; heterogeneity may simply have gone undetected. On the other hand, if heterogeneity is found, it is likely to be real. Even if heterogeneity is not detected, sensitivity analyses should be undertaken by regrouping studies in different ways, followed by repeated analysis of these new groupings. The main goal of sensitivity analysis is to determine whether the overall findings of the systematic review are robust or brittle.
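The effect size and heterogeneity calculations just described can be sketched as follows; the numbers are hypothetical, and the standardized mean difference shown (difference in means over pooled standard deviation) and Cochran's Q are illustrative choices among several variants in use:

```python
import math

def effect_size(m1, s1, n1, m2, s2, n2):
    """Unit-free standardized mean difference: the difference in group means
    divided by the pooled standard deviation, so the units cancel."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def cochran_q(effects, weights):
    """Cochran's Q statistic for interstudy heterogeneity:
    Q = sum of w_i * (theta_i - theta_bar)^2, with an
    inverse-variance-style weighted mean theta_bar."""
    theta_bar = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - theta_bar) ** 2 for w, e in zip(weights, effects))

# Hypothetical studies reporting in different units, placed on a common metric
d1 = effect_size(310, 80, 20, 270, 80, 20)    # e.g., peak flow in L/min
d2 = effect_size(1.9, 0.5, 25, 1.6, 0.5, 25)  # e.g., a volume measure in L
q = cochran_q([d1, d2], [1.0, 1.0])
```

A small Q relative to its chi-square reference (degrees of freedom = number of studies minus one) is weak evidence against heterogeneity, which is exactly why the low power of such tests is stressed above.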
For systematic reviews of diagnostic tests, data aggregation initially follows the same format as above, i.e., stratification, aggregation, heterogeneity assessment, and appropriate sensitivity analyses. However, an additional step is necessary to adjust for interstudy variation in the threshold chosen for declaring a test result positive.6,24 This requires a "logit (logistic) transformation" of the data from each study, which is then manipulated and back-transformed to provide a summary receiver operating characteristic (SROC) curve that fits the same graphical coordinates as the familiar ROC curve, i.e., true-positive rate (sensitivity) on the vertical axis plotted against false-positive rate (1 − specificity) on the horizontal axis. Summary test properties of sensitivity and specificity for the combined data can then be read directly off the SROC curve. Although the need for this may not be intuitively evident, the simpler alternative, i.e., pooling the unadjusted data, may yield misleading summary test properties, because pooling without adjustment does not take into account the tradeoff between sensitivity and specificity that is an inherent feature of any diagnostic test. Further details of this method are clearly described in the papers by Irwig et al.6 and Moses et al.24 The method has been applied in the emergency medicine literature to a systematic review of the utility of spiral CT in the diagnosis of pulmonary embolism.25 In general, test properties summarized in systematic reviews should be expressed as measures that are largely independent of the prevalence of the target disorder, i.e., as sensitivity/specificity or positive/negative likelihood ratios.26

2.3. Results: Because the included and excluded studies will be listed in tabular format (3.2 below), it is necessary to mention here only the number of each and any important differences between and among studies.

2.3.1. Description of studies: Pertinent demographic and other features of study participants should be noted and all interventions explicitly delineated. Outcome measures should be clearly ordered (e.g., primary target outcome, secondary, etc.), carefully described, and identified as a priori or post hoc.

2.3.2. Methodologic quality of selected studies: Provide an overall assessment of study quality, using the proposed Cochrane classification noted above (2.2.3.5.).2 Many reviewers also use a simple rating scale devised by Jadad et al.27 The quality scores for individual studies should be recorded in the table of included studies. Important methodologic flaws in included studies, if present, need to be reported here and explained in appropriate detail in the Discussion or Limitations and Future Questions section.

2.3.3. Main findings: 2.3.3.1. The main results of the review should be a logical product of the Methods outlined above (2.2.5), presented in a format determined by the focus of the review and the type of summary data analyzed. 2.3.3.2. The results of any sensitivity analyses should be recorded. Statistical summaries of individual trials should be included in the data tables (3.4 below).

2.4. Discussion: The Discussion of a systematic review is similar to the Discussion section of any other original contribution. The principal difference is that the analytic unit of interest is not an individual patient or test, grouped for purposes of analysis within a single study, but rather individual studies, grouped according to the methodology outlined above for purposes of analysis within a systematic review.

2.5. Limitations and Future Questions: Limitations will typically center on the validity of the component studies, their suitability for merger, evidence of heterogeneity, and other limitations identified by sensitivity analyses. The future questions, in parallel with the Conclusions offered below, should be divided into those pertinent to clinical activity and those relevant to further investigation of the topic of the systematic review.

2.6. Conclusions: 2.6.1. Implications for practice: This should focus on a summary recommendation, constrained by the data as presented, expressed as an evidence-based "clinical bottom line." The latter is often best stated as a single sentence. Alternatively, it is entirely acceptable, if supported by the data, to state that no clinically useful inferences can be drawn from the systematic review at this time. 2.6.2. Implications for research: This should be as specific as possible. A statement that "further work is needed," without qualification, is not generally helpful to readers or other investigators.

2.7. Acknowledgments

2.8. Conflicts of interest

3. Tables and Figures:

3.1. Table of comparisons: List all comparisons made in the data tables. These should correspond directly to the questions or hypotheses posed under Objectives (2.1.2. above).

3.2. Table of included studies: All included studies are listed, using the following column format from left to right: first author with reference; methodology rating, specifically indicating the Cochrane score (A, B, C) based on allocation concealment and blinding2 (the Jadad score may also be listed here for each study27); participants, summarized numerically; interventions (if any); and outcomes, clearly ordered as indicated previously into primary, secondary, etc.

3.3.
Table of excluded studies: All studies meeting, or appearing to meet, inclusion criteria that were subsequently excluded, with the basis for exclusion clearly stated.

3.4. Data tables and graphs: Generally, there is one data table and one graph for each comparison listed above in the table of comparisons (3.1.). Graphs typically display each study with a measure of central tendency corresponding to the units of the horizontal axis (weighted mean, simple proportion, relative risk, or odds ratio), bounded by 95% confidence intervals (CIs) expressed as a horizontal line extending to the limits of the interval. For ease of inspection, a vertical line is drawn at the null point (zero for proportions and means, one for ratios), in order to display clearly those studies whose CIs embrace or exclude the null. At the bottom, beneath the individual studies, is an overall summary measure of the effect of the intervention, displayed horizontally. The CI of the summary statistic will be narrower than that of any component study, since it reflects a much larger N.

The graph of a systematic review of a diagnostic test will be displayed as an SROC curve, from which the true-positive and false-positive rates can be read from the vertical and horizontal axes, respectively. These values incorporate the tradeoff between sensitivity and specificity, adjusted through the SROC method for the different thresholds used in each constituent study included in the analysis. As with standard ROC curves, a sense of overall test performance can be obtained simply by integrating under the curve to determine the area it encloses (AUC).

3.4.2. Types of data tables and graphs: As noted above (2.2.5 and 2.3.3), endpoints expressed as dichotomous data are managed very differently from endpoints expressed as continuous data. Data summarizing diagnostic testing strategies are handled in yet a third fashion, as summarized above (2.2.5).

4. References:

4.1. Standard references

4.2. References to studies relevant to this review:
4.2.1. Studies included in this review
4.2.2. Studies excluded from this review
4.2.3. Studies awaiting assessment
4.2.4. Ongoing studies

References

1. Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996; 276:637–9.
2. Mulrow CD, Oxman AD (eds). Cochrane Collaboration Handbook. In: The Cochrane Library [database on CD-ROM]. The Cochrane Collaboration, Issue 1. Oxford: Update Software, 1997.
3. Cochrane AL. Effectiveness and Efficiency: Random Reflections on Health Services [reprint of 1st ed]. London: Royal Society of Medicine Press, 1999.
4. Lau J, Ioannidis JP, Schmid CH.
Quantitative synthesis in systematic reviews. Ann Intern Med. 1997; 127:820–6.

5. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997; 126:376–80.
6. Irwig L, Tosteson AN, Gatsonis C, et al. Guidelines for meta-analyses evaluating diagnostic tests. Ann Intern Med. 1994; 120:667–76.
7. Gardner MJ, Altman DG. Statistics With Confidence: Confidence Intervals and Statistical Guidelines. London: British Medical Journal Publishing, 1990.
8. Laupacis A, Sackett DL, Roberts RS. An assessment of clinically useful measures of the consequences of treatment. N Engl J Med. 1988; 318:1728–33.
9. Dickersin K, Scherer R, Lefebvre C. Identifying relevant studies for systematic reviews. BMJ. 1994; 309:1286–91.
10. Sacks HS, Chalmers TC, Smith H. Randomized versus historical controls for clinical trials. Am J Med. 1982; 72:233–40.
11. Hintze JL. PASS 6.0 User's Guide. Kaysville, UT: Number Cruncher Statistical Systems, 1996, pp 55–76, 155–62.
12. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Measurement. 1960; 20:37–46.
13. Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990; 43:543–9.
14. Cicchetti DV, Feinstein AR. High agreement but low kappa: II. Resolving the paradoxes. J Clin Epidemiol. 1990; 43:551–8.
15. Cooper HM, Ribble RG. Influences on the outcome of literature searches for integrative research reviews. Knowledge. 1989; 10:179–201.
16. Feinstein AR. Clinical Epidemiology: The Architecture of Clinical Research. Philadelphia: Saunders, 1985, pp 39–52.
17. Chalmers TC, Celano P, Sacks HS, Smith H. Bias in treatment assignment in controlled clinical trials. N Engl J Med. 1983; 309:1358–61.
18. Schulz KF, Chalmers I, Hayes RJ, Altman D. Empirical evidence of bias. JAMA. 1995; 273:408–12.
19. Smith BJ, Darzins PJ, Quinn M, Heller RF. Modern methods of searching the medical literature. Med J Aust. 1992; 157:603–11.
20. Langham J, Thompson E, Rowan K. Identification of randomized controlled trials from the emergency medicine literature: comparison of hand searching versus MEDLINE searching. Ann Emerg Med. 1999; 34:25–34.
21. Dawson-Saunders B, Trapp RG. Basic and Clinical Biostatistics (2nd ed). Norwalk, CT: Appleton & Lange, 1994, pp 224–7.
22. Stoodley RG, Aaron SD, Dales RE. The role of ipratropium bromide in the emergency management of acute asthma exacerbation: a meta-analysis of randomized clinical trials. Ann Emerg Med. 1999; 33:8–18.
23. Cochran W. The combination of estimates from different experiments. Biometrics. 1954; 10:101–29.
24. Moses LE, Shapiro D, Littenberg B. Combining independent studies of a diagnostic test into a summary ROC curve: data-analytic approaches and some additional considerations. Stat Med. 1993; 12:1293–316.
25. Vo TT, Jackson RE. Helical CT as a diagnostic tool for pulmonary embolism: a meta-analysis [abstract]. Acad Emerg Med. 1999; 6:540.
26. Sackett DL, Haynes RB, Tugwell P. Clinical Epidemiology: A Basic Science for Clinical Medicine. Toronto: Little, Brown and Company, 1985, p 76.
27. Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996; 17:1–12.
