PyHard#

Instance Hardness analysis in Python, with a twofold objective: to provide insights into data quality issues, and to better understand the strengths and weaknesses of different algorithms.

PyHard employs a methodology known as Instance Space Analysis (ISA) to analyse performance at the instance level rather than at the dataset level. The result is an alternative way of visualizing algorithm performance, relating the predictive performance on each instance within a dataset to hardness measures estimated from the data. This analysis reveals regions of strength and weakness of the predictors (known as footprints) and highlights individual instances that warrant further investigation, whether due to their unique properties or to potential data quality issues.
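
As a toy illustration of this instance-level view, the sketch below (plain scikit-learn, not PyHard's own API) approximates the hardness of each instance as the fraction of a pool of classifiers that misclassify it under cross-validation, in the spirit of Smith et al. (2014):

# A minimal sketch (not PyHard's pipeline): estimate instance hardness as the
# fraction of classifiers in a pool that misclassify each instance out-of-fold.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_breast_cancer(return_X_y=True)
pool = [
    LogisticRegression(max_iter=5000),
    RandomForestClassifier(random_state=0),
    GradientBoostingClassifier(random_state=0),
]

# Out-of-fold predictions: each instance is predicted by models that never
# saw it during training.
errors = np.stack([cross_val_predict(clf, X, y, cv=5) != y for clf in pool])

# Hardness in [0, 1]: 0 means every model classifies the instance correctly,
# 1 means every model fails on it.
hardness = errors.mean(axis=0)
print("Hardest instances:", np.argsort(hardness)[-5:])

Instances with hardness near 1 are natural candidates for the closer, per-instance inspection that PyHard supports.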

Citing#

If you use PyHard in your research or applications, please cite our paper:

Paiva, P. Y. A., Moreno, C. C., Smith-Miles, K., Valeriano, M. G., & Lorena, A. C. (2022). Relating instance hardness to classification performance in a dataset: a visual approach. Machine Learning, 111(8), 3085-3123. https://doi.org/10.1007/s10994-022-06205-9

@article{paiva2022relating,
   title={Relating instance hardness to classification performance in a dataset: a visual approach},
   author={Paiva, Pedro Yuri Arbs and Moreno, Camila Castro and Smith-Miles, Kate and Valeriano, Maria Gabriela and Lorena, Ana Carolina},
   journal={Machine Learning},
   volume={111},
   number={8},
   pages={3085--3123},
   year={2022},
   publisher={Springer}
}

References#

Base

  1. Smith, M. R., Martinez, T., & Giraud-Carrier, C. (2014). An instance level analysis of data complexity. Machine Learning, 95(2), 225–256.

  2. Lorena, A. C., Garcia, L. P. F., Lehmann, J., Souto, M. C. P., & Ho, T. K. (2019). How complex is your classification problem? A survey on measuring classification complexity. ACM Computing Surveys, 52(5), Article 107.

  3. Muñoz, M. A., Villanova, L., Baatar, D., & Smith-Miles, K. (2018). Instance spaces for machine learning classification. Machine Learning, 107(1), 109–147.
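
One classic measure from these references, k-Disagreeing Neighbors (kDN) from Smith et al. (2014), is simple enough to sketch directly; the snippet below is an illustration, not PyHard's implementation:

# k-Disagreeing Neighbors (kDN): the fraction of an instance's k nearest
# neighbors that carry a different label. High kDN suggests a hard instance.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)
k = 5

# Ask for k + 1 neighbors because each point is its own nearest neighbor.
_, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
kdn = (y[idx[:, 1:]] != y[:, None]).mean(axis=1)
print("Mean kDN:", kdn.mean().round(3))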

Feature selection

  1. Lorena, L. H., Carvalho, A. C., & Lorena, A. C. (2015). Filter feature selection for one-class classification. Journal of Intelligent and Robotic Systems, 80(1), 227–243.

  2. Goldberger, J., Hinton, G., Roweis, S., & Salakhutdinov, R. (2005). Neighbourhood components analysis. Advances in Neural Information Processing Systems, 17, 513–520.

  3. Yang, W., Wang, K., & Zuo, W. (2012). Neighborhood component feature selection for high-dimensional data. Journal of Computers, 7(1), 161–168.

  4. Amankwaa-Kyeremeh, B., Greet, C., Zanin, M., Skinner, W., & Asamoah, R. K. (2020). Selecting key predictor parameters for regression analysis using modified neighbourhood component analysis (NCA) algorithm. In Proceedings of the 6th UMaT Biennial International Mining and Mineral Conference (pp. 320–325). Tarkwa, Ghana.

  5. Ferreira, A. J., & Figueiredo, M. A. T. (2012). Efficient feature selection filters for high-dimensional data. Pattern Recognition Letters, 33(13), 1794–1804.

  6. Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R. P., Tang, J., & Liu, H. (2017). Feature selection: A data perspective. ACM Computing Surveys, 50(6), Article 94.

  7. Gao, S., Ver Steeg, G., & Galstyan, A. (2015). Efficient estimation of mutual information for strongly dependent variables. In Artificial Intelligence and Statistics (AISTATS). http://arxiv.org/abs/1411.2003
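
As a small illustration of the filter approach surveyed in references 5 to 7 above, features can be ranked by their estimated mutual information with the target and only the top k kept. The sketch below uses scikit-learn and is not the exact criterion PyHard applies:

# Filter feature selection sketch: rank features by estimated mutual
# information with the class label and keep the k highest-ranked ones.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
mi = mutual_info_classif(X, y, random_state=0)

k = 10
top_k = np.argsort(mi)[::-1][:k]  # indices of the k most informative features
X_reduced = X[:, top_k]
print(X_reduced.shape)  # (569, 10)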

Hyper-parameter optimization

  1. Bergstra, J., Bardenet, R., Bengio, Y., & Kégl, B. (2011). Algorithms for hyper-parameter optimization. In Proceedings of the 24th International Conference on Neural Information Processing Systems (NIPS’11) (pp. 2546–2554). Curran Associates.

  2. Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS’12) (pp. 2951–2959). Curran Associates.

  3. Bergstra, J., Yamins, D., & Cox, D. D. (2013). Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In Proceedings of the 30th International Conference on Machine Learning (ICML’13) (pp. I-115–I-123). JMLR.org.
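
The Tree-structured Parzen Estimator (TPE) described in references 1 and 3 is implemented in the hyperopt library. The sketch below shows basic usage with an illustrative search space; it is not PyHard's internal configuration:

# Minimal TPE example with hyperopt; the search space and objective are
# illustrative only.
from hyperopt import Trials, fmin, hp, tpe
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

space = {
    "C": hp.loguniform("C", -3, 3),         # SVM regularization strength
    "gamma": hp.loguniform("gamma", -4, 1), # RBF kernel width
}

def objective(params):
    # TPE minimizes the objective, so return the negated CV accuracy.
    return -cross_val_score(SVC(**params), X, y, cv=5).mean()

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=30, trials=trials)
print(best)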
