This repository provides a minimal benchmark framework for comparing commonly used machine learning libraries in terms of scalability, speed, and classification accuracy. The focus is on binary classification with no missing data, where inputs may be numeric or categorical (one-hot encoded before training). To stress-test the implementations, the benchmarks vary the number of observations (n) up to the millions and the number of features after expansion (p) to roughly one thousand.

The algorithms covered are logistic regression, random forests, gradient boosting, and deep neural networks, compared across toolkits such as scikit-learn, R packages, xgboost, H2O, and Spark MLlib. The repository is organized into folders by algorithm category (e.g. “1-linear”, “2-rf”, “3-boosting”, “4-DL”).
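To make the task concrete, here is a minimal, hypothetical sketch (not taken from the repository's scripts) of the kind of binary classification problem being benchmarked: mixed numeric and categorical inputs, with the categorical columns one-hot encoded before fitting a model. All column names, sizes, and parameters below are invented for illustration.

```python
# Illustrative sketch only (hypothetical data, not the repository's benchmark code):
# a binary classification task with numeric and categorical inputs, where the
# categorical column is one-hot encoded before training.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000  # the benchmarks vary n from roughly 10K up to 10M rows

df = pd.DataFrame({
    "x1": rng.normal(size=n),                             # numeric feature
    "x2": rng.normal(size=n),                             # numeric feature
    "dept": rng.choice([f"d{i}" for i in range(50)], n),  # categorical feature
})
y = (df["x1"] + (df["dept"] == "d0") > 0).astype(int)     # binary target

# One-hot encoding expands the categorical column into indicator columns;
# this is how p grows toward ~1K in the larger benchmark settings.
X = pd.get_dummies(df, columns=["dept"])
print(X.shape)  # roughly (10000, 52): 2 numeric + 50 one-hot columns

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.score(X, y))
```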
Features
- Comparative benchmarks across ML toolkits (scikit-learn, R, H2O, xgboost, Spark MLlib)
- Algorithm coverage: logistic regression, random forests, boosting, deep neural nets
- Scalable testing with large n (e.g. 10K → 10M) and p (~1K)
- Synthetic data generation and real dataset integration (e.g. Higgs)
- Structured folder organization by algorithm type
- Runtime, memory, and accuracy measurement tools to compare implementations (see the sketch below this list)
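As a hedged illustration of how runtime, peak memory, and accuracy could be recorded for several implementations on the same data, the sketch below times scikit-learn's gradient boosting and xgboost and reports test AUC. It is an assumption about what such a harness might look like, not the repository's actual measurement code; the peak-RSS reading via the `resource` module is Unix-only.

```python
# Hypothetical measurement harness (a sketch, not the repository's scripts):
# fit two implementations on the same data and record train time, peak RSS,
# and test AUC so results are comparable across libraries.
import resource  # Unix-only; reports peak resident set size
import time

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(20_000, 50))              # stand-in for the expanded feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "sklearn GBM": GradientBoostingClassifier(n_estimators=100, max_depth=6),
    "xgboost": XGBClassifier(n_estimators=100, max_depth=6, n_jobs=-1),
}

for name, model in models.items():
    start = time.time()
    model.fit(X_tr, y_tr)
    train_time = time.time() - start
    # ru_maxrss is in KB on Linux (bytes on macOS); divide accordingly.
    peak_mb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:12s}  time: {train_time:6.1f}s  peak RSS: {peak_mb:7.0f} MB  AUC: {auc:.3f}")
```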