### INTRODUCTION

### STATISTICAL ANALYSES SECTION

“… χ^{2} or Fisher's exact test, when appropriate. A value of *P* < 0.05 was considered significant.”
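For readers unfamiliar with what such a Methods statement implies computationally, the following is a minimal sketch of the hypergeometric calculation behind Fisher's exact test for a 2×2 table (the function name is illustrative, not from any cited source):

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact P value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables (with the same
    margins) that are no more probable than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p_of(x):
        # probability of a table whose top-left cell equals x
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = p_of(a)
    lo = max(0, col1 - (n - row1))  # smallest feasible top-left cell
    hi = min(row1, col1)            # largest feasible top-left cell
    return sum(p_of(x) for x in range(lo, hi + 1)
               if p_of(x) <= p_obs * (1 + 1e-9))
```

In practice one would use a vetted implementation (e.g., `scipy.stats.fisher_exact`); the sketch only shows where the exact *P* value comes from.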

### PRECISION OF NUMBERS

### REPORTING MEAN, MEDIAN, SD, IQR, STANDARD ERROR OF THE MEAN (SEM), and 95% CONFIDENCE INTERVAL (CI)

For example, for a reported “OR, 2.6; 95% CI, 1.3 to 5.2,” one can check whether OR^{2} = 1.3 × 5.2 (here, 2.6^{2} = 6.76 = 1.3 × 5.2). If it is not, then the results are again inconsistent. The same is true for RR.
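This check exploits the fact that CIs for ratio measures are symmetric on the log scale, so the point estimate should be (approximately) the geometric mean of the CI limits. A minimal sketch, with a hypothetical tolerance to absorb rounding in reported values:

```python
def ci_consistent(ratio, lo, hi, tol=0.1):
    """Check that a reported OR or RR is consistent with its 95% CI.
    On the log scale the estimate should sit midway between the limits,
    which is equivalent to ratio**2 ~= lo * hi."""
    return abs(ratio**2 - lo * hi) / (lo * hi) < tol
```

For the example above, `ci_consistent(2.6, 1.3, 5.2)` passes, since 2.6² = 6.76 = 1.3 × 5.2.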

### REPORTING DIAGNOSTIC TEST RESULTS

### ERROR BARS IN GRAPHS

### UNITS OF MEASUREMENT

### REPORTING *P* VALUE

The *P* value is by far the most commonly reported statistic. Editorial policies on reporting *P* values differ: some journals use the well-known, arbitrarily chosen threshold of 0.05 and report *P* values as either “*P* < 0.05” or “non-significant.” Most experts suggest reporting the exact *P* value; for example, instead of “*P* < 0.05,” it is suggested to report “*P* = 0.032” (3, 13).

A *P* value is conventionally reported to no more than three decimal places; “*P* = 0.0234” is therefore inappropriate, whereas “*P* = 0.023” is better. Sometimes a highly significant *P* value is mistakenly reported as “*P* = 0.000” (e.g., for *P* = 0.0000123). In such cases, the value should be reported as “*P* < 0.001” (4).
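These formatting rules can be collected into a small helper (a hypothetical function, not part of any cited guideline):

```python
def format_p(p):
    """Format a P value for reporting: the exact value to three decimal
    places, with very small values floored at '< 0.001' rather than the
    misleading 'P = 0.000'."""
    if p < 0.001:
        return "P < 0.001"
    return f"P = {p:.3f}"
```

For example, `format_p(0.0234)` gives `"P = 0.023"` and `format_p(0.0000123)` gives `"P < 0.001"`.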

*P* values should only be reported when a hypothesis is tested. Believing that significant *P* values are important for positive editorial decisions, authors without a clear hypothesis inappropriately report several *P* values to dress up their manuscripts and make them look scientific. For example, part of the “Results” section of a submitted manuscript reads, “Mean age of patients with wheezing 11.6 months (*P* = 0.001),” but it is unclear whether any hypothesis was tested.

Effect estimates with 95% CIs are generally preferred to bare *P* values. For example, it is advisable to report “Smoking was associated with a higher incidence of lung cancer (OR, 2.6; 95% CI, 1.3 to 5.2)” instead of “Smoking was significantly (*P* = 0.04) associated with a higher incidence of lung cancer (OR, 2.6).” Reporting both the *P* value and the 95% CI is no more informative than reporting the 95% CI alone: while the *P* value indicates only whether the observed effect is significant, the 95% CI additionally delineates the magnitude of the effect (i.e., the effect size). Editors can omit a *P* value when the corresponding 95% CI is reported, as the latter is more informative.
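The point that the CI carries all the information in the *P* value can be illustrated numerically: assuming the CI was constructed on the log-odds scale (the usual convention), an approximate two-sided *P* value can be recovered from the reported OR and CI alone. A sketch under that assumption:

```python
import math

def p_from_or_ci(or_, lo, hi, z95=1.959964):
    """Recover an approximate two-sided P value from an OR and its
    95% CI, assuming the CI was built as exp(log(OR) +/- z95 * SE)."""
    se = (math.log(hi) - math.log(lo)) / (2 * z95)  # SE of log(OR)
    z = abs(math.log(or_)) / se
    # two-sided normal tail probability via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
```

For “OR, 2.6; 95% CI, 1.3 to 5.2” this gives *P* < 0.05, as the CI (which excludes 1) already told us; the reverse reconstruction, CI from *P* value, is not possible.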

Another common problem is the dichotomous interpretation of *P* values (14). Considering the well-established threshold of 0.05, *P* = 0.049 is considered statistically significant, while *P* = 0.051 is not. This has led some authors to coin interesting terms such as “partially significant” or “marginally significant,” and to interpret non-significant results (*P* = 0.06, for example) in the manuscript's “Discussion” section as if the difference were in fact significant. If we accept the set cut-off value of 0.05, we should abide by it: consider all results with a *P* value equal to or greater than 0.05 non-significant, interpret them as showing that, based on the observed data, there is no evidence that the observed effect exists in the population (it likely results from sampling error), and refrain from discussing the observed effect. Interpretations by representatives of the dominant school of statistics, so-called frequentist statistics, may differ from those by representatives of Bayesian statistics (15, 16). For example, frequentist statistics tests whether the null/alternative hypothesis can be rejected or accepted, considering the data collected from a representative sample (using a pre-defined cut-off of, say, 0.05 for the *P* value). Bayesian statistics gives the post-test probability (odds) of a hypothesis being true, based on the pre-test probability (odds) of the hypothesis and the collected data. No hypothesis is rejected or accepted; what researchers have is only a change in likelihoods, which seems more natural. Researchers investigate the available evidence to report an increased or decreased probability of a hypothesis.
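The Bayesian updating described above can be sketched in a few lines, assuming the evidence is summarized as a Bayes factor (the ratio by which the data shift the odds; the function name is illustrative):

```python
def posterior_prob(prior_prob, bayes_factor):
    """Update the probability that a hypothesis is true:
    posterior odds = prior odds * Bayes factor,
    then convert the posterior odds back to a probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)
```

For example, a hypothesis held with pre-test probability 0.2 and data with a Bayes factor of 4 yields a post-test probability of 0.5: the evidence increases the probability of the hypothesis but neither accepts nor rejects it.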