Cell-penetrating peptides (CPPs) are emerging as an alternative to small-molecule drugs, expanding the range of biomolecules that can be targeted for therapeutic purposes. Given the importance of identifying and designing new CPPs, a great variety of predictors have been developed for these goals. To rank these predictors, a couple of recent studies compared their performance on specific datasets, yet their conclusions cannot determine whether the resulting ranking is due to the model, the set of descriptors, or the datasets used to test the predictors. We present a systematic study of the influence of the sequence similarity within the datasets on the predictors' performance. The analysis reveals that the datasets used for training have a stronger influence on predictor performance than the model or the descriptors employed. We show that datasets with low sequence similarity between the positive and negative examples are easily separated, and the tested classifiers showed good performance on them. Conversely, a dataset with high sequence similarity between CPPs and non-CPPs constitutes a hard dataset, and it should be the one used to assess the performance of new predictors.
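To make the notion of dataset similarity concrete, the following is a minimal sketch, not the study's actual pipeline, of one common way to quantify sequence similarity between the positive (CPP) and negative (non-CPP) examples of a dataset: pairwise global-alignment identity computed with a simple Needleman-Wunsch dynamic program. The scoring parameters and the helper names (`global_identity`, `mean_cross_identity`) are illustrative assumptions, and any peptide sequences passed in would come from the dataset under evaluation.

```python
def global_identity(a: str, b: str, match=1, mismatch=-1, gap=-1) -> float:
    """Fraction of identical residues in a global (Needleman-Wunsch)
    alignment of sequences a and b, using simple illustrative scores."""
    n, m = len(a), len(b)
    # Fill the dynamic-programming score table.
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Trace back through the table, counting matched positions
    # and the total alignment length (including gaps).
    i, j, matches, aln_len = n, m, 0, 0
    while i > 0 and j > 0:
        diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
        if score[i][j] == diag:
            matches += a[i - 1] == b[j - 1]
            i, j = i - 1, j - 1
        elif score[i][j] == score[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
        aln_len += 1
    aln_len += i + j  # any remaining leading gaps
    return matches / aln_len if aln_len else 0.0


def mean_cross_identity(positives, negatives) -> float:
    """Average pairwise identity over all positive/negative pairs; a low
    value suggests an easily separable dataset, a high value a hard one."""
    pairs = [(p, q) for p in positives for q in negatives]
    return sum(global_identity(p, q) for p, q in pairs) / len(pairs)
```

In practice a study of this kind would more likely use an established alignment tool or clustering at a fixed identity threshold; the sketch above only illustrates the quantity being discussed, namely how similar the two classes of a benchmark dataset are to each other.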