Working Groups/PRESC

<big>'''Performance Robustness Evaluation for Statistical Classifiers (PRESC)'''</big><br />
PRESC was launched in the pilot cohort of Mozilla’s Building Trustworthy AI working group. It is a toolkit for evaluating machine learning classification models. Its goal is to provide insights into model performance that extend beyond standard scalar accuracy-based measures and into areas that tend to be underexplored in practice (see the illustrative sketch after this list), including:
* Generalizability of the model to unseen data for which the training set may not be representative
* Sensitivity to statistical error and methodological choices
* Performance evaluation localized to meaningful subsets of the feature space
* In-depth analysis of misclassifications and their distribution in the feature space
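The sketch below uses plain scikit-learn, not the PRESC API, to illustrate the kinds of questions above: it contrasts scalar accuracy with per-class metrics, the confusion matrix, and performance localized to a subset of the feature space. The dataset and model choices are arbitrary.
<syntaxhighlight lang="python">
# Illustrative only: plain scikit-learn, not the PRESC API, showing the kind
# of evaluation PRESC aims to automate. Dataset and model choices are arbitrary.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
y_pred = model.predict(X_test)

# A single scalar hides where the model fails.
print("Accuracy:", model.score(X_test, y_test))

# Per-class precision/recall and the confusion matrix show which classes
# are misclassified, and as what.
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))

# Localize performance to a meaningful subset of the feature space, e.g.
# samples with an above-median value of the first feature.
mask = X_test[:, 0] > np.median(X_test[:, 0])
print("Accuracy on subset:", (y_pred[mask] == y_test[mask]).mean())
</syntaxhighlight>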
'''Problem'''<br />
Watch [https://youtu.be/Z92yAAm7cl8 this video] to learn more about the problem PRESC solves.
'''Solution'''<br />
We believe that these evaluations are essential for developing confidence in the selection and tuning of machine learning models intended to address user needs, and are important prerequisites for building trustworthy AI.
PRESC is intended for use by ML engineers to assist in developing and updating models, and more broadly to help data scientists, developers, academics, and activists evaluate the performance of classification models in areas that tend to be under-explored, such as generalizability and bias. Our current focus on misclassifications, robustness, and stability will help bring bias and fairness analyses into the performance reports so that these can be taken into account when crafting or choosing between models.
'''Examples'''<br />
An example script demonstrating how to run a report is available [https://github.com/mozilla/PRESC/blob/master/examples/report/sample_report.py here].
There are a number of notebooks and explorations in the examples/ dir, but they are not guaranteed to run or be up to date, as the package has recently undergone major changes and we have not yet finished updating them.
Some well-known datasets are provided in CSV format in the datasets/ dir for exploration purposes.
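To give a sense of the surrounding workflow, the sketch below prepares a model and a held-out test set for evaluation. The CSV filename and label column are assumptions for illustration, and the report invocation itself is left to sample_report.py rather than guessed at here.
<syntaxhighlight lang="python">
# Sketch only: the CSV filename and label column are assumptions, not part of
# the PRESC API; see examples/report/sample_report.py for the real entry point.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load one of the bundled CSV datasets (hypothetical filename/label column).
df = pd.read_csv("datasets/winequality.csv")
X, y = df.drop(columns="label"), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the classifier that the report will evaluate.
model = Pipeline([("scale", StandardScaler()), ("clf", SVC())]).fit(X_train, y_train)

# The PRESC report is then generated from the fitted model and the held-out
# test set; sample_report.py shows the actual invocation.
</syntaxhighlight>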
'''Contributors'''<br />
All contributors can be found [https://github.com/mozilla/PRESC/graphs/contributors here].
'''Resources'''<br />
https://github.com/mozilla/PRESC<br />
https://mozilla.github.io/PRESC/index.html