v0.9.0 (2023-xx-xx)
- New feature `PermutationImportance` explainer implementing the permutation feature importance global explanations. Also included is a `plot_permutation_importance` utility function for flexible plotting of the resulting feature importance scores (docs, #798).
- New feature `PartialDependenceVariance` explainer implementing partial dependence variance global explanations. Also included is a `plot_pd_variance` utility function for flexible plotting of the resulting PD variance plots (docs, #758).
- `GradientSimilarity` explainer now automatically handles sparse tensors in the model by converting the gradient tensors to dense ones before calculating similarity. This used to be a source of bugs when calculating similarity for models with embedding layers, for which gradient tensors are sparse by default. Additionally, it now filters out any non-trainable parameters and doesn't consider those in the calculation, as no gradients exist for them. A warning is raised if any non-trainable layers or parameters are detected (#829).
- Updated the discussion of the interpretation of `ALE`. The previous examples and documentation made some misleading claims; these have been removed and reworked with an emphasis on the mostly qualitative interpretation of `ALE` plots (#838, #846).
- Deprecated the use of the legacy Boston housing dataset in examples and testing. The new examples now use the California housing dataset (#838, #834).
- Modularized the computation of prototype importances and plotting for `ProtoSelect`, allowing greater flexibility to the end user (#826).
- Roadmap documentation page removed as it had gone out of date (#842).
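Permutation feature importance, the idea behind the new `PermutationImportance` explainer, can be sketched in a few lines. This is a conceptual illustration only, not alibi's implementation; the function and model names are made up:

```python
import numpy as np

def permutation_importance(predict, score, X, y, n_repeats=5, seed=0):
    """A feature's importance is the average drop in model score when
    that feature's column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    base = score(y, predict(X))  # score on the intact data
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            # shuffling a column breaks its association with the target
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(base - score(y, predict(Xp)))
        importances.append(float(np.mean(drops)))
    return importances

# Toy model that only uses feature 0, so only feature 0 should matter.
X = np.random.default_rng(1).normal(size=(200, 2))
y = X[:, 0]
neg_mse = lambda y_true, y_pred: -np.mean((y_true - y_pred) ** 2)
imps = permutation_importance(lambda X_: X_[:, 0], neg_mse, X, y)
```

Here the importance of the unused feature comes out as zero, while shuffling the used feature degrades the score.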
- Tests added for `tensorflow` models used in `CounterfactualRL` (#793).
- Tests added for `pytorch` models used in `CounterfactualRL` (#799).
- Tests added for `ALE` plotting functionality (#816).
- Tests added for `PartialDependence` plotting functionality (#819).
- Tests added for `PartialDependenceVariance` plotting functionality (#820).
- Tests added for `PermutationImportance` plotting functionality (#824).
- Tests added for `ProtoSelect` plotting functionality (#841).
- Tests added for the `datasets` subpackage (#814).
- Fixed optional dependency installation during CI to make sure dependencies are consistent (#817).
- Synchronize notebook CI workflow with the main CI workflow (#818).
- Version of `pytest-cov` bumped to `4.x` (#794).
- Version of `pytest-xdist` bumped to `3.x` (#808).
- Version of `tox` bumped to `4.x` (#832).
v0.8.0 (2022-09-26)
- New feature `PartialDependence` and `TreePartialDependence` explainers implementing partial dependence (PD) global explanations. Also included is a `plot_pd` utility function for flexible plotting of the resulting PD plots (docs, #721).
- New `exceptions.NotFittedError` exception which is raised whenever a compulsory call to a `fit` method has not been carried out. Specifically, this is now raised in `AnchorTabular.explain` when `AnchorTabular.fit` has been skipped (#732).
- Various improvements to docs and examples (#695, #701, #698, #703, #717, #711, #750, #784).
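For context, the core of a one-dimensional partial dependence computation can be sketched as follows. This is a conceptual illustration, not alibi's implementation; all names are illustrative:

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Average the model's prediction over the data while clamping one
    feature to each value on a grid."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v  # clamp the feature of interest
        pd_vals.append(float(np.mean(predict(Xv))))
    return pd_vals

# A linear toy model: the PD curve for feature 0 should have slope 2.
X = np.random.default_rng(0).normal(size=(100, 2))
predict = lambda X_: 2.0 * X_[:, 0] + X_[:, 1]
pd = partial_dependence(predict, X, feature=0, grid=[-1.0, 0.0, 1.0])
```

The resulting curve recovers the model's marginal dependence on the clamped feature.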
- Fixed an edge case in `AnchorTabular` where an error was raised during an `explain` call if the instance contained a categorical feature value not seen in the training data (#742).
- Improved handling of custom `grid_points` for the `ALE` explainer (#731).
- Renamed our custom exception classes to remove the verbose `Alibi*` prefix and standardised the `*Error` suffix. Concretely, `exceptions.AlibiPredictorCallException` is now `exceptions.PredictorCallError`, and `exceptions.AlibiPredictorReturnTypeError` is now `exceptions.PredictorReturnTypeError`. Backwards compatibility has been maintained by making the old exception classes subclasses of the new ones, but the old classes will likely be removed in a future version (#733).
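This backwards-compatibility mechanism can be illustrated with a minimal sketch (hypothetical stand-in classes, not alibi's actual definitions): keeping the old name as a subclass of the new one means handlers written against the new name still catch exceptions raised under the old one:

```python
class PredictorCallError(Exception):
    """New-style exception name (stand-in for illustration)."""

class AlibiPredictorCallException(PredictorCallError):
    """Deprecated alias retained for backwards compatibility."""

def legacy_raise():
    # older code may still raise the old name
    raise AlibiPredictorCallException("predictor failed")

try:
    legacy_raise()
except PredictorCallError as e:  # new-style handler catches the old exception too
    caught = type(e).__name__
```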
- Warn users when `TreeShap` is used with more than 100 samples in the background dataset, which is due to a limitation in the upstream `shap` package (#710).
- Minimum version of `scikit-learn` bumped to `1.0.0`, mainly due to upcoming deprecations (#776).
- Minimum version of `scikit-image` bumped to `0.17.2` to fix a possible bug when using the `slic` segmentation function with `AnchorImage` (#753).
- Maximum supported version of `attrs` bumped to `22.x` (#727).
- Maximum supported version of `tensorflow` bumped to `2.10.x` (#745).
- Maximum supported version of `ray` bumped to `2.x` (#740).
- Maximum supported version of `numba` bumped to `0.56.x` (#724).
- Maximum supported version of `shap` bumped to `0.41.x` (#702).
- Updated `shap` example notebooks to recommend installing `matplotlib==3.5.3` due to failure of `shap` plotting functions with `matplotlib==3.6.0` (#776).
- Extend optional dependency checks to ensure the correct submodules are present (#714).
- Introduce `pytest-custom_exit_code` to let notebook CI pass when no notebooks are selected for tests (#728).
- Use UTF-8 encoding when loading `README.md` in `setup.py` to avoid a possible installation failure for some users (#744).
- Updated guidance for class docstrings (#743).
- Reinstate `ray` tests (#756).
- We now exclude test files from test coverage for a more accurate representation of coverage (#751). Note that this has led to a drop in code covered, which will be addressed in due course (#760).
- The Python `3.10.x` version on CI has been pinned to `3.10.6` due to typechecking failures, pending a new release of `mypy` (#761).
- The `test_changed_notebooks` workflow can now be triggered manually and is run on push/PR for any branch (#762).
- Use `codecov` flags for more granular reporting of code coverage (#759).
- Option to ssh into GitHub Actions runs for remote debugging of CI pipelines (#770).
- Version of `sphinx` bumped to `5.x` but capped at `<5.1.0` to avoid CI failures (#722).
- Version of `myst-parser` bumped to `0.18.x` (#693).
- Version of `flake8` bumped to `5.x` (#729).
- Version of `ipykernel` bumped to `6.x` (#431).
- Version of `ipython` bumped to `8.x` (#572).
- Version of `pytest` bumped to `7.x` (#591).
- Version of `sphinx-design` bumped to `0.3.0` (#739).
- Version of `nbconvert` bumped to `7.x` (#738).
v0.7.0 (2022-05-18)
This release introduces two new methods, a `GradientSimilarity` explainer and a `ProtoSelect` data summarisation algorithm.
- New feature `GradientSimilarity` explainer for explaining predictions of gradient-based (PyTorch and TensorFlow) models by returning the most similar training data points from the point of view of the model (docs).
- New feature We have introduced a new subpackage `alibi.prototypes` which contains the `ProtoSelect` algorithm for summarising datasets with a representative set of "prototypes" (docs).
- The `ALE` explainer can now take custom grid points per feature to evaluate the `ALE` on. This can help in certain situations where grid points defined by quantiles might not be the best choice (docs).
- Extended the `IntegratedGradients` method target selection to handle explaining any scalar dimension of tensors of any rank (previously only rank-1 and rank-2 were supported). See #635.
- Python 3.10 support. Note that `PyTorch` at the time of writing doesn't support Python 3.10 on Windows.
- Fixed a bug which incorrectly handled multi-dimensional scaling in `CounterfactualProto` (#646).
- Fixed a bug in the example using `CounterfactualRLTabular` (#651).
- `tensorflow` is now an optional dependency. To use methods that require `tensorflow` you can install `alibi` using `pip install alibi[tensorflow]`, which will pull in a supported version. For full instructions on the recommended way of installing optional dependencies please refer to the Installation docs.
- Updated `sklearn` version bounds to `scikit-learn>=0.22.0, <2.0.0`.
- Updated `tensorflow` maximum allowed version to `2.9.x`.
- This release introduces a way to manage the absence of optional dependencies. In short, if an optional dependency is required for an algorithm but missing, at import time the corresponding public algorithm class (or private class, in the case of an optional dependency required for only a subset of a private class's functionality) will be replaced by a `MissingDependency` object. For full details on developing `alibi` with optional dependencies see Contributing: Optional Dependencies.
- The CONTRIBUTING.md has been updated with further instructions for managing optional dependencies (see point above) and more conventions around docstrings.
- We have split the `Explainer` base class into `Base` and `Explainer` to facilitate reusability and better class hierarchy semantics when introducing methods that are not explainers (#649).
- `mypy` has been updated to `~=0.900`, which requires additional development dependencies for type stubs; currently only `types-requests` has been necessary to add to `requirements/dev.txt`.
- From this release onwards we exclude the directories `doc/` and `examples/` from the source distribution (by adding `prune` directives in `MANIFEST.in`). This results in considerably smaller file sizes for the source distribution.
v0.6.5 (2022-03-18)
This is a patch release to correct a regression in `CounterfactualProto` introduced in `v0.6.3`.
- Added a Frequently Asked Questions page to the docs.
- Fix a bug introduced in `v0.6.3` which prevented `CounterfactualProto` working with categorical features (#612).
- Fix an issue with the `LanguageModelSampler` where it would sometimes sample punctuation (#585).
- The maximum `tensorflow` version has been bumped from 2.7 to 2.8 (#588).
v0.6.4 (2022-02-28)
This is a patch release to correct a regression in `AnchorImage` introduced in `v0.6.3`.
- Fix a bug introduced in `v0.6.3` where `AnchorImage` would ignore user `segmentation_kwargs` (#581).
- The maximum versions of `Pillow` and `scikit-image` have been bumped to 9.x and 0.19.x respectively.
v0.6.3 (2022-01-18)
- New feature A callback can now be passed to `IntegratedGradients` via the `target_fn` argument, in order to calculate the scalar target dimension from the model output. This is to bypass the requirement of passing `target` directly to `explain` when the `target` of interest may depend on the prediction output. See the example in the docs (#523).
- A new comprehensive Introduction to explainability added to the documentation (#510).
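As an illustration, a `target_fn` callback of this kind is just a function from the raw model output to the target dimension of interest, e.g. the predicted class. This is a hedged sketch of the idea only; the exact expected signature is described in the docs:

```python
import numpy as np

def target_fn(predictions):
    # select the highest-scoring class for each instance in the batch
    return np.argmax(predictions, axis=1)

# e.g. for a batch of two softmax outputs:
probs = np.array([[0.1, 0.9],
                  [0.8, 0.2]])
targets = target_fn(probs)
```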
- Python 3.6 has been deprecated from the supported versions as it has reached end-of-life.
- Fix a bug with passing background images to `AnchorImage` leading to an error (#542).
- Fix a bug with rounding errors being introduced in `CounterfactualRLTabular` (#550).
- Docstrings have been updated and consolidated (#548). For developers, docstring conventions have been documented in CONTRIBUTING.md.
- `numpy` typing has been updated to be compatible with `numpy 1.22` (#543). This is a prerequisite for upgrading to `tensorflow 2.7`.
- To further improve reliability, strict `Optional` type-checking with `mypy` has been reinstated (#541).
- The Alibi CI tests now include Windows and MacOS platforms (#575).
- The maximum `tensorflow` version has been bumped from 2.6 to 2.7 (#377).
v0.6.2 (2021-11-18)
- Documentation on using black-box and white-box models in the context of alibi, see here.
- `AnchorTabular`, `AnchorImage` and `AnchorText` now expose an additional `dtype` keyword argument with a default value of `np.float32`. This ensures a correct data type whenever a user `predictor` is called internally with dummy data (#506).
- Custom exceptions. A new public module `alibi.exceptions` defining the `alibi` exception hierarchy. This introduces two exceptions, `AlibiPredictorCallException` and `AlibiPredictorReturnTypeError`. See #520 for more details.
- For `AnchorImage`, coerce the `image_shape` argument into a tuple to implicitly allow passing a list input, which eases the use of configuration files. In the future the typing will be improved to be more explicit about allowed types, with runtime type checking.
- Updated the minimum `shap` version to the latest `0.40.0`, as this fixes an installation issue when `alibi` and `shap` are installed with the same command.
- Fix a bug with version saving being overwritten on subsequent saves (#481).
- Fix a bug in the Integrated Gradients notebook with transformer models due to a regression in the upstream `transformers` library (#528).
- Fix a bug in `IntegratedGradients` with `forward_kwargs` not always being correctly passed (#525).
- Fix a bug resetting the `TreeShap` predictor (#534).
- Now using the `readthedocs` Docker image in our CI to replicate the doc building environment exactly. Also enabled the `readthedocs` build-on-PR feature, which allows browsing the built docs on every PR.
- New notebook execution testing framework via GitHub Actions. There are two new GA workflows: test_all_notebooks, which is run once a week and can be triggered manually, and test_changed_notebooks, which detects whether any notebooks have been modified in a PR and executes only those. Not all notebooks are amenable to automatic testing due to long running times or complex software/hardware dependencies. We maintain a list of notebooks to be excluded in the testing script under testing/test_notebooks.py.
- Now using `myst` (a markdown superset) for more flexible documentation (#482).
- Added a CITATION.cff file.
v0.6.1 (2021-09-02)
- New feature An implementation of Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning is now available via the `alibi.explainers.CounterfactualRL` and `alibi.explainers.CounterfactualRLTabular` classes. The method is model-agnostic and the implementation is written in both PyTorch and TensorFlow. See docs for more information.
- Future breaking change The names of the `CounterFactual` and `CounterFactualProto` classes have been changed to `Counterfactual` and `CounterfactualProto` respectively, for consistency and correctness. The old class names continue working for now but emit a deprecation warning message and will be removed in an upcoming version.
- `dill` behaviour was changed to not extend the `pickle` protocol so that standard usage of `pickle` in a session with `alibi` does not change expected `pickle` behaviour. See discussion.
- `AnchorImage` internals refactored to avoid persistent state between `explain` calls.
- A PR checklist is available under CONTRIBUTING.md. In the future many of these may be turned into automated checks.
- `pandoc` version for docs building updated to `1.19.2`, which is what is used on `readthedocs`.
- Citation updated to the JMLR paper.
v0.6.0 (2021-07-08)
- New feature `AnchorText` now supports sampling according to masked language models via the `transformers` library. See docs and the example for using the new functionality.
- Breaking change Due to the new masked language model sampling for `AnchorText`, the public API for the constructor has changed. See docs for a full description of the new API.
- `AnchorTabular` now supports one-hot encoded categorical variables in addition to the default ordinal/label encoded representation of categorical variables.
- `IntegratedGradients` changes to allow explaining a wider variety of models. In particular, a new `forward_kwargs` argument to `explain` allows passing additional arguments to the model, and an `attribute_to_layer_inputs` flag allows calculating attributions with respect to layer input instead of output if set to `True`. The API and capabilities now track the captum.ai `PyTorch` implementation more closely.
- Example of using `IntegratedGradients` to explain `transformer` models.
- Python 3.9 support.
- `IntegratedGradients` - fix the path definition for attributions calculated with respect to an internal layer. Previously the paths were defined in terms of the inputs and baselines; now they are correctly defined in terms of the corresponding layer input/output.
v0.5.8 (2021-04-29)
- Experimental explainer serialization support using `dill`. See docs for more details.
- Handle layers which are not part of `model.layers` for `IntegratedGradients`.
- Update type hints to be compatible with `numpy` 1.20.
- Separate licence build step in CI; only check licences against the latest Python version.
v0.5.7 (2021-03-31)
- Support for `KernelShap` and `TreeShap` now requires installing the `shap` dependency explicitly after installing `alibi`. This can be achieved by running `pip install alibi && pip install alibi[shap]`. The reason for this is that the build process for the upstream `shap` package is not well configured, resulting in broken installations as detailed in SeldonIO#376 and shap/shap#1802. We expect this to be a temporary change until changes are made upstream.
- A `reset_predictor` method for black-box explainers. The intended use case for this is deploying an already configured explainer to work with a remote predictor endpoint instead of the local predictor used in development.
- `alibi.datasets.load_cats` function which loads a small sample of cat images shipped with the library to be used in examples.
- Deprecated the `alibi.datasets.fetch_imagenet` function as the Imagenet API is no longer available.
- `IntegratedGradients` now works with subclassed TensorFlow models.
- Removed support for calculating attributions wrt multiple layers in `IntegratedGradients` as this was not working properly and is difficult to do in the general case.
- Fixed an issue with `AnchorTabular` tests not being picked up due to a name change of test data fixtures.
v0.5.6 (2021-02-18)
- Breaking change `IntegratedGradients` now supports models with multiple inputs. For each input of the model, attributions are calculated and returned in a list. Also extends the method to allow calculating attributions for multiple internal layers. If a list of layers is passed, a list of attributions is returned. See SeldonIO#321.
- `ALE` now supports selecting a subset of features to explain. This can be useful to reduce runtime if only some features are of interest, and also indirectly helps with categorical variables by being able to exclude them (as `ALE` does not support categorical variables).
- `AnchorTabular` coverage calculation was incorrect, caused by incorrectly indexing a list; this is now resolved.
- `ALE` was causing an error when a constant feature was present. This is now handled explicitly and the user has control over how to handle these features. See https://docs.seldon.io/projects/alibi/en/stable/api/alibi.explainers.ale.html#alibi.explainers.ale.ALE for more details.
- The release of Spacy 3.0 broke the `AnchorText` functionality as the way `lexeme_prob` tables are loaded was changed. This is now fixed by explicitly handling the loading depending on the `spacy` version.
- Fixed documentation to refer to the `Explanation` object instead of the old `dict` object.
- Added warning boxes to `CounterFactual`, `CounterFactualProto` and `CEM` docs to explain the necessity of clearing the TensorFlow graph if switching to a new model in the same session.
- Introduced lower and upper bounds for library and development dependencies to limit the potential for breaking functionality upon new releases of dependencies.
- Added dependabot support to automatically monitor new releases of dependencies (both library and development).
- Switched from Travis CI to Github Actions as the former limited their free tier.
- Removed unused CI provider configs from the repo to reduce clutter.
- Simplified development dependencies to just two files, `requirements/dev.txt` and `requirements/docs.txt`.
- Split out the docs building stage as a separate step on CI as it doesn't need to run on every Python version, thus saving time.
- Added `.readthedocs.yml` to control how user-facing docs are built directly from the repo.
- Removed testing-related entries from `setup.py` as the workflow is both unused and outdated.
- Avoid `shap==0.38.1` as a dependency as it assumes `IPython` is installed and breaks the installation.
v0.5.5 (2020-10-20)
- New feature Distributed backend using `ray`. To use, install `ray` using `pip install alibi[ray]`.
- New feature `KernelShap` distributed version using the new distributed backend.
- For anchor methods, added an explanation field `data['raw']['instances']` which is a batch-wise version of the existing `data['raw']['instance']`. This is in preparation for the eventual batch support for anchor methods.
- Pre-commit hook for `pyupgrade` via `nbqa` for formatting example notebooks using Python 3.6+ syntax.
- Fixed a flaky test for distributed anchors (note: this is the old non-batchwise implementation) by dropping the precision threshold.
- Notebook string formatting upgraded to Python 3.6+ f-strings.
- Breaking change For anchor methods, the returned explanation field `data['raw']['prediction']` is now batch-wise, i.e. for `AnchorTabular` and `AnchorImage` it is a 1-dimensional `numpy` array whilst for `AnchorText` it is a list of strings. This is in preparation for the eventual batch support for anchor methods.
- Removed dependency on `prettyprinter` and substituted a slightly modified standard library version of `PrettyPrinter`. This is to prepare for a `conda` release, which requires all dependencies to also be published on `conda`.
v0.5.4 (2020-09-03)
- `update_metadata` method for any `Explainer` object to enable easy book-keeping for algorithm parameters.
- Updated the `KernelShap` wrapper to work with the newest `shap>=0.36` library.
- Fix some missing metadata parameters in `KernelShap` and `TreeShap`.
v0.5.3 (2020-09-01)
- Updated roadmap
- Bug in integrated gradients where incorrect layer handling led to output shape mismatch when explaining layer outputs
- Remove `tf.logging` calls in example notebooks as the TF 2.x API no longer supports `tf.logging`.
- Pin `shap` to `0.35.0`, pending a `shap` `0.36.0` patch release to support `shap` API updates/library refactoring.
v0.5.2 (2020-08-05)
This release changes the required TensorFlow version from <2.0 to >=2.0. This means that `alibi` code depends on TensorFlow>=2.0; however, the explainer algorithms are compatible with models trained with both TF1.x and TF2.x.
The `alibi` code that depends on TensorFlow itself has not been fully migrated, in the sense that the code is still not idiomatic TF2.x code; we now simply use the `tf.compat.v1` package provided by TF2.x internally. This does mean that, for the time being, running algorithms which depend on TensorFlow (`CounterFactual`, `CEM` and `CounterFactualProto`) requires disabling TF2.x behaviour by running `tf.compat.v1.disable_v2_behavior()`. This is documented in the example notebooks. Work is underway to re-write the TensorFlow dependent components in idiomatic TF2.x code so that this will not be necessary in a future release.
The upgrade to TensorFlow 2.x also enables this to be the first release with Python 3.8 support.
Finally, white-box explainers are now tested with pre-trained models from both TF1.x and TF2.x. The binaries for the models, along with loading functionality and the datasets used to train them, are available in the `alibi-testing` helper package, which is now a requirement for running tests.
- Minimum required TensorFlow version is now 2.0
- Tests depending on trained models are now run using pre-trained models hosted under the `alibi-testing` helper package.
- A bug in `AnchorText` resulting from missing string hash entries in some spacy models (SeldonIO#276).
- Explicitly import `lazy_fixture` in tests instead of relying on the deprecated usage of the `pytest` namespace (SeldonIO#281).
- A few bugs in example notebooks.
v0.5.1 (2020-07-10)
This is a bug fix release.
- Fix an issue with `AnchorText` not working on text instances with commas due to not checking for empty synonym lists.
- Enable correct behaviour of `AnchorText` with `spacy>=2.3.0`; this now requires installing `spacy[lookups]` as an additional dependency, which contains the model probability tables.
- Update the `expected_value` attribute of `TreeSHAP`, which is internally updated after a call to `explain`.
- Fix some links in Integrated Gradients examples.
- Coverage after running tests on Travis is now correctly reported as the reports are merged for different `pytest` runs.
- Old `Keras` tests now require `Keras<2.4.0` as the new release requires `tensorflow>=2.2`.
- Bump `typing_extensions>=3.7.2`, which includes the type `Literal`.
v0.5.0 (2020-06-10)
This version supports Python 3.6 and 3.7 as support for Python 3.5 is dropped.
- New feature `TreeSHAP` explainer for white-box, tree based model SHAP value computation.
- New feature `ALE` explainer for computing feature effects for black-box, tabular data models.
- New feature `IntegratedGradients` explainer for computing feature attributions for TensorFlow and Keras models.
- Experimental `utils.visualization` module currently containing visualization functions for `IntegratedGradients` on image datasets. The location, implementation and content of the module and functions therein are subject to change.
- Extend `datasets.fetch_imagenet` to work with any class.
- Extend `utils.data.gen_category_map` to take a list of strings of column names.
- Internal refactoring of `KernelSHAP` to reuse functionality for `TreeSHAP`. Both SHAP wrappers are now under `explainers.shap_wrappers`.
- Tests are now split into two runs, one with TensorFlow in eager mode, which is necessary for using `IntegratedGradients`.
- Added the `typing-extensions` library as a requirement to take advantage of more precise types.
- Pinned `scikit-image<0.17` due to a regression upstream.
- Pinned `Sphinx<3.0` for documentation builds due to some issues with the `m2r` plugin.
- Various improvements to documentation
- Some tests were importing old `keras` functions instead of `tensorflow.keras`.
v0.4.0 (2020-03-20)
NB: This is the last version supporting Python 3.5.
- New feature `KernelSHAP` explainer for black-box model SHAP scores.
- Documentation for the `LinearityMeasure` algorithm.
- Breaking change New API for explainer and explanation objects. Explainer objects now inherit from the `Explainer` base class as a minimum. When calling the `.explain` method, an `Explanation` object is returned (previously a dictionary). This contains two dictionaries, `meta` and `data`, accessed as attributes of the object, detailing the metadata and the data of the returned explanation. The common interfaces are under `api.interfaces` and the default return metadata and data for each explainer are under `api.defaults`.
- Complete refactoring of the Anchors algorithms, many code improvements.
- Explainer tests are now more modular, utilizing scoped fixtures defined in `explainers.tests.conftest` and various utility functions.
- Tests are now run sequentially instead of in parallel due to the overhead of launching new processes.
v0.3.2 (2019-10-17)
- All explanations return a metadata field `meta` with a `name` subfield which is currently the name of the class.
- Provide URL options for fetching some datasets, by default now fetches from a public Seldon bucket
v0.3.1 (2019-10-01)
- Pin `tensorflow` dependency to versions 1.x as the new 2.0 release introduces breaking changes.
v0.3.0 (2019-09-25)
- New feature `LinearityMeasure` class and `linearity_measure` function for measuring the linearity of a classifier/regressor.
- New feature `CounterFactualProto` now supports categorical variables for tabular data.
- Breaking change Remove the need for the user to manage TensorFlow sessions for the explanation methods that use TF internally (`CEM`, `CounterFactual`, `CounterFactualProto`). The session is now inferred or created depending on what is passed to `predict`. For finer control the `sess` parameter can still be passed in directly.
- Breaking change Expose low-level arguments to `AnchorText` to the user for finer control of the explanation algorithm; also rename some arguments for consistency.
- Various improvements to existing notebook examples.
- Fixed a `CounterFactualProto` and `CEM` bug: when the class was initialized a second time it wouldn't run, as the TF graph would become disconnected.
- Provide more useful error messages if external data APIs are down.
- Skip tests using external data APIs if they are down
v0.2.3 (2019-07-29)
- `gen_category_map` utility function to facilitate using the `AnchorTabular` explainer.
- Extend `CounterFactualProto` with a more flexible choice of prototypes using the k closest encoded instances.
- Allow the user to specify a hard target class for `CounterFactualProto`.
- Distributed tests using `pytest-xdist` to overcome the TF global session interfering with tests running in the same process.
- Sample datasets now return a `Bunch` object by default, bundling all necessary and optional attributes for each dataset.
- Loading sample datasets is now invoked via the `fetch_` functions to indicate that a network download is being made.
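The `Bunch` pattern can be sketched minimally (a hypothetical stand-in modelled on scikit-learn's `Bunch`, not alibi's exact class): a dictionary whose keys are also reachable as attributes:

```python
class Bunch(dict):
    """Dictionary exposing its keys as attributes."""
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as err:
            raise AttributeError(key) from err

# both access styles work on the same object
dataset = Bunch(data=[[0.1, 0.2]], target=[1], target_names=["positive"])
```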
- Remove `Home` from the docs sidebar as this was causing the sidebar logo to not show up on the landing page.
v0.2.2 (2019-07-05)
- `codecov` support added to CI.
- Remove lexemes without word vectors in `spacy` models for `AnchorTabular`. This suppresses `spacy` warnings and also makes the method (and tests) run a lot faster.
v0.2.1 (2019-07-02)
- Remove `Keras` and `seaborn` from install requirements and create an optional `[examples]` `extras_require`.
- Remove `python-opencv` dependency in favour of `PIL`.
- Improve type checking with unimported modules - now requires `python>3.5.1`.
- Add some tests for `alibi.datasets`.
v0.2.0 (2019-05-24)
New features:
Implemented enhancements:
- Return nearest not predicted class for trust scores #63
- Migrate Keras dependency to tf.keras #51
- Add warning when no anchor is found #30
- add anchor warning #74 (arnaudvl)
- Return closest not predicted class for trust scores #67 (arnaudvl)
Closed issues:
Merged pull requests:
- Update example #100 (jklaise)
- Revert "Don't mock keras for docs" #99 (jklaise)
- Don't mock keras for docs #98 (jklaise)
- Cf #97 (jklaise)
- Cf #96 (jklaise)
- Cf #95 (jklaise)
- Cf #94 (jklaise)
- Cf #92 (jklaise)
- Cf #90 (jklaise)
- Cf #88 (jklaise)
- Add return type for counterfactuals #87 (jklaise)
- prototypical counterfactuals #86 (arnaudvl)
- Remove unnecessary method, rename loss minimization #85 (jklaise)
- Cf #84 (jklaise)
- Fix linting and remove old statsmodels tests #82 (jklaise)
- Some style and test fixes #81 (jklaise)
- Influence functions current work #79 (jklaise)
- WIP: Counterfactual instances #78 (jklaise)
- Counterfactual work so far #77 (jklaise)
- Add additional Python versions to CI #73 (jklaise)
- Add building docs and the Python package in CI #72 (jklaise)
- Bump master version to 0.1.1dev #68 (jklaise)
v0.1.0 (2019-05-03)
Closed issues:
- Migrate CI to Travis post release #46
- Trust scores #39
- Make explicit Python>=3.5 requirement before release #18
- Remove dependency on LIME #17
- Set up CI #5
- Set up docs #4
Merged pull requests:
- Update theme_overrides.css #66 (ahousley)
- Add logo and trustscore example #65 (jklaise)
- Readme #64 (jklaise)
- Trustscore MNIST example #62 (arnaudvl)
- Fix broken links to methods notebooks #61 (jklaise)
- Initial Travis integration #60 (jklaise)
- Add tensorflow to doc generation for type information #59 (jklaise)
- Add numpy as a dependency to doc building for type information #58 (jklaise)
- Autodoc mocking imports #57 (jklaise)
- Avoid importing library for version #56 (jklaise)
- Add full requirement file for documentation builds #55 (jklaise)
- Focus linting and type checking on the actual library #54 (jklaise)
- Trust score high level docs and exposing confidence in alibi #53 (jklaise)
- fix bug getting imagenet data #52 (arnaudvl)
- WIP: Flatten explainer hierarchy #50 (jklaise)
- Add missing version file #48 (jklaise)
- Fix package version from failing install #47 (jklaise)
- trust scores #44 (arnaudvl)
- WIP: High level docs #43 (jklaise)
- WIP: CEM and Anchor docs #40 (arnaudvl)
- WIP: CEM #36 (arnaudvl)
- Counterfactuals #34 (gipster)
- Refactoring counterfactuals to split work #32 (jklaise)
- Counterfactuals #31 (gipster)
- Clean up #29 (jklaise)
- Make minimum requirements versions for testing and CI #27 (jklaise)
- Add Python >= 3.5 requirement #26 (jklaise)
- Change CI test commands to use correct dependencies #21 (jklaise)
- add anchor image #20 (arnaudvl)
- Anchor text #15 (arnaudvl)
- Add support for rendering notebooks using nbsphinx and nbsphinx-link #14 (jklaise)
- WIP: Sphinx configuration #11 (jklaise)
- Ignore missing mypy imports globally for now #10 (jklaise)
- Add mypy to CI and create code style guidelines #9 (jklaise)
- Flake8 setup #8 (jklaise)
- Initial CI & docs setup #6 (jklaise)
- Anchor #3 (arnaudvl)
- Create initial package skeleton #2 (jklaise)
- Add licence #1 (jklaise)
* This Change Log was automatically generated by github_changelog_generator