Validation of the R statistical package has been a hot topic since 2015, when the FDA issued its Statistical Software Clarifying Statement, officially stating that no specific software is required for submissions and that any tool may be used as long as it is reliable and appropriately documented. This immediately drew the attention of the pharmaceutical industry. A number of companies independently attempted to fulfil the validation requirements and bring R into controlled environments. In addition, the combined efforts of the largest pharmaceutical companies resulted in the launch of the R Validation Hub project. While most of these initiatives focus on documentation and package quality assessment, relying on the unit tests delivered "as is" by the authors of R packages, we at 2KMM CRO set different priorities, driven by the importance of performing exhaustive numerical validation first. Without it, there is a risk that all the effort spent on documentation and quality assurance will pertain to routines whose results differ from those obtained with other trusted software in ways that cannot be adequately justified. While we do not question the importance of documentation and early unit testing, we believe that numerical validation, going far beyond running those tests, is mandatory to achieve a satisfactory level of reliability. We would like to share our findings in this area, including the choice of reference input data and results used during validation, the sources of discrepancies between R and other software, and the interpretation and acceptance of the results.