
Kim, Kim, In, Lee, Lee, and Kang: Assessment of risk of bias in quasi-randomized controlled trials and randomized controlled trials reported in the Korean Journal of Anesthesiology between 2010 and 2016

Abstract

Bias distorts the estimated intervention effect in randomized controlled trials (RCTs), making the results unreliable. We evaluated the risk of bias (ROB) of quasi-RCTs and RCTs reported in the Korean Journal of Anesthesiology (KJA) between 2010 and 2016. Six kinds of bias (selection, performance, detection, attrition, reporting, and other biases) were evaluated by determining low, unclear, or high ROB for eight domains (random sequence generation, allocation concealment, blinding of participants, blinding of personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other bias) according to publication year. We identified 296 quasi-RCTs or RCTs. Random sequence generation was performed better than allocation concealment (51.7% vs. 20.9% for the proportion of low ROB; P < 0.001 and P = 0.943 for trend, respectively). Blinding of outcome assessment was superior to blinding of participants and personnel (42.9% vs. 15.5% and 23.0% for the proportion of low ROB; P = 0.026 vs. P = 0.003 and P = 0.896 for trend, respectively). Handling of incomplete outcome data was performed best, with the highest proportion of low ROB (84.8%). Selective reporting had the lowest proportion of low ROB (4.7%), although its ROB improved year by year (P < 0.001 for trend). Authors and reviewers should consider allocation concealment after random sequence generation, blinding of participants and personnel, and full reporting of results to improve the quality of RCTs submitted hereafter for publication in the KJA.

Introduction

Randomized controlled trials (RCTs) and their meta-analyses provide the most reliable evidence for medical interventions. Given a sufficient number of participants, random assignment to control or intervention groups produces baseline characteristics that are comparable between the two groups. The difference in the outcomes of interest produced by the control or treatment interventions given to the respective groups therefore represents the causal effect of the intervention of interest on the outcomes. However, the scientific reliability of the results of RCTs is challenged by recognized or unrecognized flaws in their design, conduct, analysis, and reporting, which can lead to underestimation or overestimation of the true intervention effect [1].
Bias is defined as systematic error, or deviation from the population parameter, that affects the estimate of the true intervention effect [2]. It is completely different from imprecision, which refers to random error, that is, the variation in effect estimates across multiple repetitions of the same trial due to sampling variation. Bias caused by inadequate concealment or reporting of randomization exaggerates the estimated intervention effect compared with adequate concealment or reporting [3]. Even the mere absence of a description of double blinding produces a similar, albeit smaller, effect. Hence, avoiding bias maintains the reliability of RCT results. According to the Cochrane handbook, bias can be classified into six categories, namely, selection, performance, detection, attrition, reporting, and other biases [2]. Therefore, we determined the validity of the quasi-RCTs and RCTs reported in the Korean Journal of Anesthesiology (KJA) between 2010 and 2016 by appraising the risk of these six biases.

Materials and Methods

Identification of quasi-randomized controlled trials or randomized controlled trials

Quasi-RCTs and RCTs with human subjects were identified by six independent board members of the Statistical Round of the KJA (KJH, KTK, IJ, LDK, LS, and KH) after excluding editorials, review articles, experimental studies using non-human species, case reports, letters to the editor, corrigenda or errata, and opinions from the papers published in the KJA between 2010 and 2016. The strategy for selecting quasi-RCTs and RCTs was based on the study Design Algorithm for Medical literature of Intervention [4], which was modified by consensus of the board members of the Statistical Round of the KJA before beginning this analysis. A study was determined to be a quasi-RCT or RCT if it featured a prospective comparison of an exposure or intervention with respect to the outcomes of interest between groups that had been randomly allocated by the investigators, regardless of the adequacy of the random allocation (Fig. 1). We included quasi-RCTs in this analysis because excluding them would have made the assessment of random sequence generation redundant. Three groups, each consisting of two independent authors blinded to each other's identification results, identified the quasi-RCTs and RCTs published between 2010 and 2011 (KJH and LS), between 2012 and 2013 (LDK and KH), and between 2014 and 2016 (KTK and IJ). If any inconsistencies in the identification results were detected, the final decision was made following intensive discussion among all six authors in consensus meetings held several times.

Assessment of risk of bias

Six kinds of bias (selection, performance, detection, attrition, reporting, and other biases) were assessed by determining the level of risk of bias (low, unclear, or high) for seven domains (random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other bias) in each quasi-RCT or RCT by publication year, based on the criteria for judging risk of bias in the ‘Risk of bias’ assessment tool provided in the Cochrane handbook [2,5]. The risk of bias was judged to be low if the process to reduce bias was performed appropriately, unclear if sufficient information about the process was not provided, and high if the process was performed inappropriately. Because some of the criteria were ambiguous and open to controversy, we produced additional guidelines, including specific examples, and separated the domain “blinding of participants and personnel” into the two domains “blinding of participants” and “blinding of personnel” (Table 1). The authors assessed the risk of bias of the quasi-RCTs and RCTs that they had identified. Agreement in the results of the assessments between assessors was achieved following several consensus meetings.

Statistical analysis

The agreement between the two independent authors of each group regarding the identification of quasi-RCTs and RCTs and the level of risk of bias for the eight domains (random sequence generation, allocation concealment, blinding of participants, blinding of personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other bias) was assessed using Cohen's kappa, interpreted as 1 for complete agreement, 0.81–0.99 for almost perfect agreement, 0.61–0.80 for substantial agreement, 0.41–0.60 for moderate agreement, 0.21–0.40 for fair agreement, 0.01–0.20 for slight agreement, 0 for agreement expected by chance, and < 0 for less agreement than would be expected by chance [6]. The annual change in the proportion of ‘Low risk’ of bias for the eight domains was assessed using the chi-square test for linear trend. The statistical analysis was performed using IBM SPSS Statistics software, version 23.0 (IBM Corp., Armonk, NY, USA). A P value < 0.05 was considered statistically significant.
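For readers who wish to reproduce these calculations outside SPSS, the following Python sketch (not the authors' code) illustrates the two statistics used here: Cohen's kappa for inter-rater agreement and a chi-square test for linear trend (Cochran-Armitage) applied to the yearly proportion of ‘Low risk’ of bias. All ratings and counts in the example are illustrative placeholders, not the study data.

```python
# Illustrative sketch of the two statistics used in this section.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import cohen_kappa_score

# Inter-rater agreement: each element is one study's rating by each assessor (placeholder data).
rater1 = ["low", "low", "unclear", "high", "low", "unclear"]
rater2 = ["low", "unclear", "unclear", "high", "low", "low"]
print(f"Cohen's kappa = {cohen_kappa_score(rater1, rater2):.3f}")

def chi2_linear_trend(successes, totals, scores=None):
    """Cochran-Armitage chi-square test for a linear trend in proportions (1 df)."""
    r = np.asarray(successes, dtype=float)   # 'Low risk' counts per year
    n = np.asarray(totals, dtype=float)      # trials assessed per year
    t = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    N, p_bar = n.sum(), r.sum() / n.sum()
    T = np.sum(t * (r - n * p_bar))
    var_T = p_bar * (1 - p_bar) * (np.sum(n * t**2) - np.sum(n * t)**2 / N)
    stat = T**2 / var_T
    return stat, chi2.sf(stat, df=1)

# Hypothetical yearly counts of 'Low risk' ratings out of trials assessed, 2010-2016.
low_risk = [5, 8, 12, 15, 20, 24, 30]
assessed = [45, 44, 46, 40, 42, 41, 38]
stat, p = chi2_linear_trend(low_risk, assessed)
print(f"chi-square for trend = {stat:.2f}, P = {p:.4f}")
```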

Results

Of the 497 studies retrieved after excluding editorials, review articles, experimental studies using non-human species, case reports, letters to the editor, corrigenda or errata, and opinions from the 1,431 papers published in the KJA between 2010 and 2016, 296 quasi-RCTs and RCTs were finally identified following intensive discussion (Cohen's kappa = 0.929). The corresponding Cohen's kappa values for the risk-of-bias assessments were 0.927, 0.322, 0.616, 0.676, 0.630, 0.673, 0.874, and 0.472 for random sequence generation, allocation concealment, blinding of participants, blinding of personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other bias, respectively.
The proportion of ‘Low risk’ of bias was highest in incomplete outcome data (84.8%) and lowest in selective reporting (4.7%) over the entire period (2010–2016) (Fig. 2). The proportion of ‘Unclear risk’ of bias was highest in selective reporting (94.3%) and lowest in incomplete outcome data (11.1%). The highest proportion of ‘High risk’ of bias was observed in blinding of personnel (20.6%), while the lowest proportion was observed in random sequence generation (0.7%). Compared to random sequence generation, in which the proportions of ‘Low risk’ and ‘Unclear risk’ of bias were 51.7% and 47.6%, respectively, allocation concealment had a lower proportion of ‘Low risk’ of bias (20.9%) and a higher proportion of ‘Unclear risk’ of bias (69.6%). Blinding of outcome assessment was performed better than blinding of participants and blinding of personnel (42.9% vs. 15.5% and 23.0% for the percentage of ‘Low risk’ of bias, respectively).
The risk of bias for random sequence generation improved significantly from the beginning of the analysis period, reaching 100% ‘Low risk’ of bias in 2016 (P for trend < 0.001) (Fig. 3), whereas that for allocation concealment remained unimproved (P for trend = 0.943), with the proportion of ‘Low risk’ of bias < 40% and that of ‘Unclear risk’ of bias > 60% (Fig. 4). Although the proportion of ‘Low risk’ of bias for blinding of participants remained lower than that of ‘Unclear risk’ of bias, it increased from the earliest year of the analysis (2010) (P for trend = 0.003) (Fig. 5). Similarly, the proportion of ‘Low risk’ of bias was lower than that of ‘Unclear risk’ of bias for blinding of personnel; however, no improvement in the risk of bias was detected (P for trend = 0.896) (Fig. 6). The proportion of ‘Low risk’ of bias was higher for blinding of outcome assessment than for blinding of participants and personnel, with a significant improvement over time (P for trend = 0.026) (Fig. 7). The percentage of ‘Low risk’ of bias for incomplete outcome data was maintained at a high level (≥ 80%) throughout the analysis period (Fig. 8). The percentage of ‘Low risk’ of bias for selective reporting was 0% until 2012 and increased to 27.3% in 2016 (P for trend < 0.001) (Fig. 9). The risk of bias for other bias improved significantly between 2011 and 2012 (P for trend < 0.001) (Fig. 10).

Discussion

This analysis shows that authors who contributed to the KJA between 2010 and 2016 are aware of the importance of random sequence generation, blinding of outcome assessment, and incomplete outcome data when conducting RCTs. In contrast, they overlooked the importance of allocation concealment, blinding of participants and personnel, and selective reporting. The risk of bias for random sequence generation, blinding of participants, blinding of outcome assessment, selective reporting, and other bias improved during the analysis period, whereas that for allocation concealment and blinding of personnel did not. Authors have dealt with incomplete outcome data appropriately since the earliest year of the analysis.
In an RCT designed to prove that a promising new treatment is more effective than the control treatment, if the next allocation is revealed to medical personnel or patients before the assigned treatment is applied, the personnel may subconsciously try to enroll a next patient who is expected to produce a favorable result, and that patient may want to receive the new treatment. Accordingly, inadequate concealment of allocation exaggerates the estimate of the intervention effect, particularly in trials evaluating a subjective outcome [1]. Therefore, once an unpredictable random allocation sequence is generated to create groups comparable for any known or unknown potential confounding factors, it should be implemented without foreknowledge of upcoming assignments to prevent selection bias [7]. In this analysis, although random sequence generation was performed adequately in more than 50% of the quasi-RCTs or RCTs during the analysis period, reaching 100% in 2016, allocation concealment was not performed by the majority of authors and showed no significant improvement.
Participants, medical personnel, and outcome assessors who are aware of the group allocation during and after the assigned treatment may anticipate the beneficial effects of a promising new treatment and the negative effects of the control treatment, causing behavior to differ between the experimental and control groups, such as differences in drop-out or in the administration of co-interventions, which ultimately affects the measured outcomes. Indeed, biased intervention estimates have been observed in RCTs lacking blinding [8]. This bias is more pronounced in trials assessing subjective outcomes [1]. Our analysis shows poorer performance of blinding of participants and personnel compared with blinding of outcome assessment, judging from the proportions of ‘Low risk’ of bias. In particular, the risk of bias for blinding of personnel did not improve significantly, despite a significant yearly improvement in the risk of bias for the other blinding procedures. This may mean that most authors remain unaware of the importance of blinding of personnel. In addition, the highest proportion of ‘Unclear risk’ of bias for blinding of participants suggests that many anesthesiologists do not intentionally blind patients to the group assignment because patients participating in anesthesia studies are usually unconscious. However, the annual improvement in this risk of bias is promising. In contrast, the higher proportions of ‘High risk’ of bias for blinding of personnel and of outcome assessment suggest that, in many anesthesia studies, group assignments cannot by their nature be blinded to personnel and outcome assessors.
Statistically significant outcomes are reported more often than non-significant outcomes [9,10,11]. One previous study showed that in 62% of published RCTs, at least one primary outcome was changed, newly introduced, or omitted relative to the protocol [9]. In another analysis, more than 20% of the outcomes planned to be measured in the methods section were incompletely reported in the results section of the same study [11]. Selectively withholding non-significant results from publication leaves a meta-analysis with insufficient information. In May 2010, the KJA began to recommend that authors submitting clinical trials register their study in a clinical trial registry. In August 2013, the recommendation became “strong.” In accordance with this change in the strength of the recommendation, the rate of registration in a clinical trial registry, which had been 0%, started to increase.
Studying an entire population is rarely practical or feasible. Hence, the characteristics of the population are statistically inferred by studying a sample, a set of participants extracted from, and representing, the population [12]. Thus, sample size estimation is an essential aspect of planning a clinical study. However, the criteria for judging risk of bias in the ‘Risk of bias’ assessment tool do not assess the adequacy of sample size estimation. Therefore, we assessed whether sample size estimation was performed appropriately, in addition to the original criteria belonging to the domain of other bias. We found that many authors estimated the sample size appropriately, with a yearly improvement.
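As an illustration of the kind of calculation assessed here, the sketch below (a minimal example, not any author's actual calculation) estimates the per-group sample size for a two-sided comparison of two means using the normal approximation and then inflates it for an anticipated drop-out rate by dividing by (1 − rate), one common convention; with 30 analyzable subjects per group and a 10% drop-out rate this yields 34 to recruit, whereas the simpler multiplication by (1 + rate) gives the 33 cited in the Table 1 example. The effect size, alpha, power, and drop-out rate are assumed values.

```python
# Illustrative two-group sample size estimate with drop-out adjustment.
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Per-group n for a two-sided two-sample comparison of means (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

n = n_per_group(effect_size=0.73)            # assumed standardized difference (Cohen's d)
drop_out = 0.10
n_recruit = math.ceil(n / (1 - drop_out))    # inflate for an anticipated 10% drop-out
print(n, n_recruit)                          # 30 analyzable per group -> 34 to recruit
```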
Several limitations should be considered in this analysis. First, although the additional guidelines built by the members of the Statistical Round of the KJA editorial board were used to compensate for the ambiguity of the original criteria for judging risk of bias in the ‘Risk of bias’ assessment tool [2], there was still inconsistency in the results of the assessments between independent authors, particularly in the domains of allocation concealment and other bias, which was resolved through consensus meetings. In addition, because the additional guidelines did not undergo critical review by peer professionals, they should not be regarded as mandatory for the assessment of risk of bias of studies in a meta-analysis. Second, because assessment of all 497 studies by each author would have been highly challenging, not all independent authors were involved in the assessment of all the studies published during the entire 7-year analysis period. Instead, albeit less desirably, each pair of independent authors evaluated the studies published over a 2–3-year period. Third, we analyzed only the yearly trend of ‘Low risk’ of bias for each domain to facilitate an intuitive understanding of the yearly improvement in the risk of bias for each domain; the trends in the other levels of risk (‘Unclear risk’ or ‘High risk’) were not analyzed. Fourth, the comparisons between the risks of bias for the different domains and between the levels of risk of bias within one domain were non-statistical (i.e., arithmetic), possibly leading to controversies over the results of this analysis. Last, it is unknown whether researchers actually conducted their studies according to what they described in their published reports.
In summary, authors contributing to the KJA dealt with incomplete outcome data appropriately. However, although random allocation sequences were generated adequately by more than half of the authors, with a yearly improvement in the risk of bias, many did not subsequently conceal the group allocation of subjects. Outcome assessors were blinded to group assignments better than participants and personnel were. The risk of bias for blinding of personnel did not improve significantly, despite the yearly improvement in that for the other blinding procedures. Approximately 75% of the studies did not have study protocols available from clinical trial registries or, when available, the registered protocols were inconsistent with the reports published in the journal. In conclusion, it is expected that a comprehensive understanding of the current status of the risk of bias in the quasi-RCTs and RCTs published in the KJA will raise awareness among reviewers and potential authors of the effects of the risk of bias on the quality of the manuscripts they review or submit, thereby making the journal one of the best in its field.

Acknowledgments

We thank Ms. Ji Youn Ha, the manuscript editor of the Korean Journal of Anesthesiology (KJA), for her great efforts in providing the journal data required for writing this manuscript.
All the authors contributing to this manuscript belong to the Statistical Round of KJA.

References

1. Wood L, Egger M, Gluud LL, Schulz KF, Jüni P, Altman DG, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ. 2008; 336:601–605. PMID: 18316340.
2. Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions, version 5.1.0. The Cochrane Collaboration; 2011 [updated 2011 Mar; cited 2017 Jun]. Available from: www.handbook.cochrane.org.
3. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995; 273:408–412. PMID: 7823387.
4. Kim SY, Park JE, Seo HJ, Lee YJ, Jang BH, Son HJ, et al. NECA's guidance for undertaking systematic reviews and meta-analysis for intervention. National Evidence-based Healthcare Collaborating Agency; 2011.
5. Kang H. How to understand and conduct evidence-based medicine. Korean J Anesthesiol. 2016; 69:435–445. PMID: 27703623.
6. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005; 37:360–363. PMID: 15883903.
7. Dettori J. The random allocation process: two things you need to know. Evid Based Spine Care J. 2010; 1:7–9.
8. Pildal J, Hróbjartsson A, Jørgensen KJ, Hilden J, Altman DG, Gøtzsche PC. Impact of allocation concealment on conclusions drawn from meta-analyses of randomized trials. Int J Epidemiol. 2007; 36:847–857. PMID: 17517809.
9. Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004; 291:2457–2465. PMID: 15161896.
10. Chan AW, Krleza-Jerić K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ. 2004; 171:735–740. PMID: 15451835.
11. Chan AW, Altman DG. Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ. 2005; 330:753. PMID: 15681569.
12. Kadam P, Bhalerao S. Sample size calculation. Int J Ayurveda Res. 2010; 1:55–57. PMID: 20532100.
Fig. 1

Modified study Design Algorithm for Medical literature of Intervention [4]. The adequacy of the random allocation process is not considered when retrieving quasi-randomized controlled trials and randomized controlled trials because the risk of bias for random sequence generation should be assessed.

Fig. 2

Risk of bias for all the domains during the whole analysis period (2010–2016). RSG: random sequence generation, AC: allocation concealment, Bpa: blinding of participants, Bpe: blinding of personnel, BA: blinding of outcome assessment, IO: incomplete outcome data, SR: selective reporting, Other: other bias.

Fig. 3

Risk of bias for random sequence generation. P for trend of low risk of bias < 0.001.

Fig. 4

Risk of bias for allocation concealment. P for trend of low risk of bias = 0.943.

Fig. 5

Risk of bias for blinding of participants. P for trend of low risk of bias = 0.003.

Fig. 6

Risk of bias for blinding of personnel. P for trend of low risk of bias = 0.896.

Fig. 7

Risk of bias for blinding of outcome assessment. P for trend of low risk of bias = 0.026.

Fig. 8

Risk of bias for incomplete outcome data. P for trend of low risk of bias = 0.150.

Fig. 9

Risk of bias for selective reporting. P for trend of low risk of bias < 0.001.

Fig. 10

Risk of bias for other bias. P for trend of low risk of bias < 0.001.

Table 1

Additional Guidelines Used to Determine the Level of Risk of Bias

RANDOM SEQUENCE GENERATION AND ALLOCATION CONCEALMENT
 Criterion for a judgment of ‘Unclear risk’ of bias in terms of random sequence generation: No method for random sequence generation is found, although the study is described as randomized.
 Criterion for a judgment of ‘High risk’ of bias in terms of allocation concealment: A random number table, which may be blinded inappropriately, is used for randomization without concurrent use of a sealed envelope.
Examples
 1. No randomization method is described in the presence of the following example sentences
  “This study is a prospective, double-blinded, clinical study...”
  “Patients were randomly divided...”
  – ‘Unclear risk’ of bias in terms of random sequence generation
2. Simple random sampling – ‘Unclear risk’ of bias in terms of random sequence generation and ‘Unclear risk’ of bias in terms of allocation concealment
3. “Patients were randomly allocated to one of two groups by the investigator using a sealed envelope system” – ‘Unclear risk’ of bias in terms of random sequence generation and ‘Low risk’ of bias in terms of allocation concealment
4. “According to a concealed random number table” – ‘Low risk’ of bias in terms of random sequence generation and allocation concealment
BLINDING OF PARTICIPANTS, PERSONNEL, AND OUTCOME ASSESSMENT
 Criteria for a judgment of ‘Unclear risk’ of bias: 1. Only the presence of the words “double-blinded” or “triple-blinded” without any comments on blinding does not suffice for ‘Low risk’ of bias.
2. Placebo is not appropriately blinded.
Examples
1. No statement for blinding except for the sentence “This prospective, double-blinded, clinical study is….” – ‘Unclear risk’ of bias
2. “Placebo drug was administered to the control group using a syringe identical to that used in the experimental group” – ‘Low risk’ of bias
3. “Placebo drug was administered to the control group” – ‘Unclear risk’ of bias
BLINDING OF PARTICIPANTS
 Criterion for a judgment of ‘High risk’ of bias: The design of a study does not allow blinding of participants.
Examples
1. A comparison between general and spinal anesthesia or between sedation and no sedation does not permit blinding of participants – ‘High risk’ of bias
2. A comparison between sedatives – ‘Low risk’ of bias if a blinding process is available or ‘High risk’ of bias otherwise
BLINDING OF PERSONNEL
 Criterion for a judgment of ‘High risk’ of bias: The design of a study does not allow blinding of personnel.
Examples
1. A comparison between laryngeal mask airway and streamlined liner of the pharynx airway or between patients’ positions – ‘High risk’ of bias
2. “A single anesthesiologist, who was blinded to the group allocation, managed patients…” – ‘Low risk’ of bias
3. “A single anesthesiologist managed patients…” – ‘Unclear risk’ of bias (only reduces inter-experimenter bias by providing uniform patient management)
BLINDING OF OUTCOME ASSESSMENT*
 Criterion for a judgment of ‘High risk’ of bias: The design of a study does not allow blinding of outcome assessment.
Examples
1. A comparison of airway sealing pressure between the laryngeal mask airway and streamlined liner of the pharynx – ‘High risk’ of bias
2. A comparison of sore throat between laryngeal mask airway and streamlined liner of the pharynx – ‘Low risk’ of bias if a blinding process is available or ‘Unclear risk’ of bias otherwise
3. “A single anesthesiologist who was blinded to the group allocation evaluated…” – ‘Low risk’ of bias
4. “A single anesthesiologist evaluated…” – ‘Unclear risk’ of bias (only reduces inter-assessor bias)
INCOMPLETE OUTCOME DATA
 If drop-out rate is considered during sample size estimation
  Criterion for a judgment of ‘Low risk’ of bias: The number of analyzed subjects is equal to or greater than the minimum required number of subjects calculated without the drop-out rate, and the reasons for the drop-out are available.
  Criteria for a judgment of ‘High risk’ of bias: 1. The number of analyzed subjects is less than the minimum required number of subjects calculated without the drop-out rate, causing loss of statistical power.
2. Although the number of analyzed subjects is equal to or greater than the minimum required number of subjects calculated without the drop-out rate, the reasons for the drop-out are not available.
3. The number of analyzed subjects is more than the sample size calculated with the drop-out rate.
  Examples
  1. Thirty-nine subjects were analyzed although at least 40 subjects were required without the drop-out rate (10%), application of which would produce a final sample size of 45 – ‘High risk’ of bias
  2. Forty-one subjects were analyzed when at least 40 subjects were required without the drop-out rate (10%), application of which would produce a final sample size of 45 – ‘High risk’ of bias (if the reasons for drop-out are unavailable), ‘Low risk’ of bias (if the reasons for drop-out are available)
 If drop-out rate is not considered during sample size estimation
  Criteria for a judgment of ‘Low risk’ of bias: 1. There is no drop-out from the final analysis.
2. The actual drop-out rate for all groups (not for each group) is less than 5% in the presence of the reasons for the drop-out.
  Criterion for a judgment of ‘Unclear risk’ of bias: The actual drop-out rate for all groups (not for each group) is 5%–20% in the presence of the reasons for the drop-out.
  Criteria for a judgment of ‘High risk’ of bias: 1. No reason for the drop-out from the final analysis is presented in the presence of drop-out.
2. The actual drop-out rate for all groups (not for each group) is more than 20% in the presence of the reasons for the drop-out.
  Example
  Six, four, and two subjects dropped out of groups A, B, and C, respectively, under the estimated sample size of 40 per group in the presence of the reasons for the drop-out – ‘Unclear risk’ of bias (a total of 12 subjects (10%) were dropped from all groups)
OTHER BIAS
 Criterion for a judgment of ‘Low risk’ of bias: Sample size estimation is adequate except for errors in the use of the drop-out rate.
 Criterion for a judgment of ‘High risk’ of bias: Sample size is inadequately estimated or is not estimated.
Example
Thirty-three subjects (not thirty-four subjects) were required based on the drop-out rate of 10% for 30 subjects required to achieve the expected statistical power at the pre-determined significance level – ‘Low risk’ of bias
MISCELLANEOUS
 Abstracts are not evaluated for determining the level of risk of bias.
Example
Although the abstract includes the sentence “The patients were randomly allocated to….”, the materials and methods section does not mention any comments about random sequence generation – ‘Unclear risk’ of bias in terms of random sequence generation

*Only blinding of the primary outcome is assessed, regardless of blinding of secondary outcomes. Inadequate estimation of the sample size, or a number of analyzed subjects greater than the estimated sample size, is assessed under “OTHER BIAS.”
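To make the decision rules above concrete, the following sketch (illustrative only, not part of the published guidelines) codifies the branch of the incomplete-outcome-data criteria that applies when the drop-out rate was not considered during sample size estimation; the thresholds follow Table 1, while the function name and interface are hypothetical.

```python
# Illustrative codification of the incomplete-outcome-data criteria (drop-out
# rate not considered during sample size estimation) from Table 1.
def incomplete_outcome_risk(n_enrolled: int, n_dropped: int, reasons_given: bool) -> str:
    """Return 'Low', 'Unclear', or 'High' risk of bias for incomplete outcome data."""
    if n_dropped == 0:
        return "Low"                  # no drop-out from the final analysis
    if not reasons_given:
        return "High"                 # drop-out without reported reasons
    rate = n_dropped / n_enrolled     # pooled drop-out rate across all groups
    if rate < 0.05:
        return "Low"
    if rate <= 0.20:
        return "Unclear"
    return "High"

# Table 1 example: 6 + 4 + 2 drop-outs from three groups of 40, with reasons given.
print(incomplete_outcome_risk(n_enrolled=120, n_dropped=12, reasons_given=True))  # Unclear
```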
