AIX360 - Interpretability and explainability of data and machine learning models

  •    Python

The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.
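
For readers who want a concrete starting point, below is a minimal usage sketch of one of the toolkit's explainers (Protodash, which selects prototypical samples that summarize a dataset). The toy data, argument order, and return values are assumptions and should be checked against the AIX360 documentation.

```python
# Minimal sketch, assuming the ProtodashExplainer API from the AIX360 docs;
# the toy dataset, argument order and return values below are illustrative.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

X = np.random.rand(200, 10)        # toy dataset: 200 samples, 10 features

explainer = ProtodashExplainer()
# select m=5 prototypes that summarize X (here X serves as both the dataset
# being explained and the pool that prototypes are drawn from)
weights, indices, _ = explainer.explain(X, X, m=5)
print("prototype rows:", indices, "with weights:", weights)
```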

Image-Captioning-Attack - Code for reproducing the adversarial attacks on image captioning systems in "Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning," ACL 2018

  •    Python

This paper was accepted at the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). Hongge Chen and Huan Zhang contributed equally to this work. The Show-and-Fool model is designed to generate adversarial examples for neural image captioning; it is based on Show and Tell.

cc-dbp - A dataset for knowledge base population research using Common Crawl and DBpedia.

  •    Java

A dataset for knowledge base population research using Common Crawl and DBpedia. For a quick introduction, see configSmall.properties and createSmall.sh; the script downloads 1/80th of the December 2017 Common Crawl and creates a KBP dataset from it.

AIF360 - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models

  •    Python

The AI Fairness 360 toolkit is an open-source library to help detect and remove bias in machine learning models. The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models. The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.
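
To illustrate the workflow the package supports (compute a fairness metric, apply a mitigation algorithm, re-compute the metric), here is a small self-contained sketch; the toy data, column names, and group definitions are invented for the example and are not part of the library.

```python
# Minimal sketch of an AIF360 workflow: wrap a toy DataFrame as a
# BinaryLabelDataset, measure one fairness metric, then apply the
# Reweighing pre-processing algorithm and re-check the metric.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 0],   # protected attribute (1 = privileged), illustrative
    "score": [0.2, 0.5, 0.4, 0.9, 0.7, 0.8, 0.6, 0.3],
    "label": [0, 0, 1, 1, 1, 1, 0, 0],
})
ds = BinaryLabelDataset(df=df, label_names=["label"],
                        protected_attribute_names=["sex"],
                        favorable_label=1, unfavorable_label=0)

groups = dict(privileged_groups=[{"sex": 1}], unprivileged_groups=[{"sex": 0}])
print(BinaryLabelDatasetMetric(ds, **groups).statistical_parity_difference())

# Mitigate by reweighing instances, then re-check the metric on the transformed data
ds_rw = Reweighing(**groups).fit_transform(ds)
print(BinaryLabelDatasetMetric(ds_rw, **groups).statistical_parity_difference())
```

Values of statistical parity difference close to zero indicate similar favorable-outcome rates across the two groups.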




Semantic-Search-for-Sustainable-Development - Semantic Search for Sustainable Development is experimental code for searching documents for text that "semantically" corresponds to any of the UN's Sustainable Development Goals/targets

  •    Python

Semantic Search for Sustainable Development is experimental code for searching documents for text that "semantically" corresponds to any of the UN's Sustainable Development Goals (SDGs) and their targets. For example, it can be used to mine a country's national development plan documents and identify passages of text that correspond to any of the SDGs, in order to verify the plan's alignment with the SDGs. A generic sketch of this kind of semantic matching follows.
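
The repository's own matching pipeline is not reproduced here, but one generic way to implement semantic matching is to embed SDG target descriptions and document sentences and rank sentences by cosine similarity, e.g. with the sentence-transformers library. The model name and example texts below are illustrative, not taken from the repository.

```python
# Generic semantic-matching sketch (not necessarily this repository's method):
# embed SDG target descriptions and document sentences, then rank sentences by
# cosine similarity to each target. Requires the sentence-transformers package.
from sentence_transformers import SentenceTransformer, util

sdg_targets = [
    "End poverty in all its forms everywhere",                                        # SDG 1 (illustrative)
    "Ensure access to affordable, reliable, sustainable and modern energy for all",   # SDG 7 (illustrative)
]
doc_sentences = [
    "The plan expands rural electrification to 95% of households by 2030.",
    "A new highway will connect the two largest cities.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")        # any sentence-embedding model
target_emb = model.encode(sdg_targets, convert_to_tensor=True)
sent_emb = model.encode(doc_sentences, convert_to_tensor=True)

scores = util.cos_sim(sent_emb, target_emb)            # sentences x targets similarity matrix
for i, sent in enumerate(doc_sentences):
    j = int(scores[i].argmax())
    print(f"{sent!r} -> best-matching target: {sdg_targets[j]!r} "
          f"(similarity {scores[i][j].item():.2f})")
```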

EAD-Attack - Code for reproducing the white-box adversarial attacks in "EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples," AAAI 2018

  •    Python

EAD is an elastic-net attack on deep neural networks (DNNs). We formulate the attack process as an elastic-net regularized optimization problem, yielding an attack that produces L1-oriented adversarial examples and includes the state-of-the-art L2 attack (C&W) as a special case. Experimental results on MNIST, CIFAR-10, and ImageNet show that EAD yields a distinct set of adversarial examples and attains attack performance similar to state-of-the-art methods in different attack scenarios. More importantly, EAD improves attack transferability and complements adversarial training for DNNs, suggesting novel insights into leveraging L1 distortion for generating robust adversarial examples.
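
Concretely, the elastic-net formulation minimizes c·f(x) + β·‖x − x0‖1 + ‖x − x0‖2², where f is a C&W-style classification loss; setting β = 0 recovers the plain L2 attack. A sketch of the targeted objective is shown below; `logits_fn` and the parameter names are illustrative stand-ins, not the repository's code.

```python
# Minimal sketch of the elastic-net attack objective (targeted case),
# assuming `logits_fn` returns the model's pre-softmax logits for an input.
import numpy as np

def ead_objective(x_adv, x_orig, target, logits_fn, c=1.0, beta=1e-2, kappa=0.0):
    """c * f(x_adv) + beta * ||delta||_1 + ||delta||_2^2."""
    delta = x_adv - x_orig
    logits = logits_fn(x_adv)
    other = np.max(np.delete(logits, target))   # best non-target logit
    f = max(other - logits[target], -kappa)     # C&W-style targeted loss
    return c * f + beta * np.sum(np.abs(delta)) + np.sum(delta ** 2)
```

In the paper this objective is minimized with iterative shrinkage-thresholding (ISTA/FISTA), which handles the non-smooth L1 term.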

Autozoom-Attack - Code for reproducing the query-efficient black-box attacks in "AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks," published at AAAI 2019

  •    Python

The images and labels are stored in NumPy format. Please download them and place them under the AutoZOOM folder. The class ImageNetDataNp, defined in setup_inception.py, is used to load these two files. This will download the inception_v3 model pre-trained on ImageNet.
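
The attack itself is built on zeroth-order (gradient-free) optimization: gradients are estimated purely from model queries using scaled random directions, and an autoencoder is used to reduce the dimension of the search space. Below is a simplified sketch of the random-vector gradient estimator alone; `loss_fn` and the parameter values are illustrative, and this is not the repository's actual implementation.

```python
# Simplified sketch of the scaled random-vector gradient estimator behind
# AutoZOOM-style zeroth-order attacks; the real code additionally works in
# the latent space of an autoencoder to reduce the number of queries.
import numpy as np

def estimate_gradient(loss_fn, x, q=10, beta=1e-3, rng=np.random.default_rng(0)):
    d = x.size
    f0 = loss_fn(x)                              # one query at the current point
    grad = np.zeros_like(x)
    for _ in range(q):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)                   # random unit direction
        grad += (d * (loss_fn(x + beta * u) - f0) / beta) * u
    return grad / q                              # averaged gradient estimate
```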

CLEVER-Robustness-Score - Code for reproducing the robustness evaluation scores in "Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach," ICLR 2018

  •    Python

CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a metric for measuring the robustness of deep neural networks. It estimates a robustness lower bound by sampling the norm of gradients and fitting a limit distribution using extreme value theory. The CLEVER score is attack-agnostic; a higher score indicates that the network is likely to be less vulnerable to adversarial examples. CLEVER can be efficiently computed even for large state-of-the-art ImageNet models such as ResNet-50 and Inception-v3.

We received some inquiries about Ian Goodfellow's comment "Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size" on our paper. We thank Ian for the discussion, but the comments are inappropriate and not applicable to our paper. CLEVER is intended to be a tool for network designers to evaluate network robustness in the "white-box" setting. In particular, the argument that on digital computers all functions are not Lipschitz continuous and behave like staircase functions (where the gradient is zero almost everywhere) is incorrect: under the white-box setting, gradients can be computed via automatic differentiation, which is well supported by mature packages such as TensorFlow. See our reply and discussions with Ian Goodfellow on gradient masking and implementation on digital computers.
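
A condensed sketch of the estimation procedure (sample gradient norms around the input, fit a reverse Weibull distribution to the batch maxima, and divide the classification margin by the estimated local Lipschitz constant) is shown below. `margin_fn`, `grad_norm_fn`, the sampling scheme, and the hyperparameters are illustrative stand-ins, not the repository's code.

```python
# Condensed sketch of CLEVER-style estimation, assuming `margin_fn(x)` returns
# the classification margin g(x) and `grad_norm_fn(x)` returns ||grad g(x)||
# (computed via automatic differentiation in the white-box setting).
import numpy as np
from scipy.stats import weibull_max

def clever_score(x0, margin_fn, grad_norm_fn, R=0.5, n_batches=50, batch_size=100,
                 rng=np.random.default_rng(0)):
    batch_maxima = []
    for _ in range(n_batches):
        # sample points around x0 (a uniform box here for simplicity;
        # the paper samples from an Lp ball of radius R)
        pts = x0 + rng.uniform(-R, R, size=(batch_size,) + x0.shape)
        batch_maxima.append(max(grad_norm_fn(p) for p in pts))
    # fit a reverse Weibull distribution to the batch maxima; its location
    # parameter estimates the local cross-Lipschitz constant
    _, loc, _ = weibull_max.fit(batch_maxima)
    return min(margin_fn(x0) / loc, R)           # estimated robustness lower bound
```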


ZOO-Attack - Code for reproducing the black-box adversarial attacks in "ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models," ACM CCS Workshop on AI-Security, 2017

  •    Python

ZOO is a zeroth-order-optimization-based attack on deep neural networks (DNNs). We propose an effective black-box attack that only requires access to the input (images) and the output (confidence scores) of a targeted DNN. We formulate the attack as an optimization problem (similar to Carlini and Wagner's attack) and propose a new loss function suitable for the black-box setting. We use zeroth-order stochastic coordinate descent to optimize the attack objective on the target DNN directly, along with dimension reduction, hierarchical attack, and importance sampling techniques to make the attack efficient. No transferability or substitute model is required. There are two variants of ZOO, ZOO-ADAM and ZOO-Newton, corresponding to different solvers (ADAM and Newton) used to find the best coordinate update. In practice ZOO-ADAM usually works better with fine-tuned parameters, but ZOO-Newton is more stable close to the optimal solution.
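
The core idea of estimating one partial derivative at a time from two model queries and taking a coordinate descent step can be sketched as follows; `loss_fn` and the parameters are illustrative, and the actual code adds ADAM/Newton coordinate updates, hierarchical attacks, and importance sampling.

```python
# Minimal sketch of the coordinate-wise zeroth-order gradient estimate at the
# heart of ZOO, assuming `loss_fn` is a black-box attack loss computed from
# the target model's confidence scores.
import numpy as np

def zoo_coordinate_step(loss_fn, x, lr=0.01, h=1e-4, rng=np.random.default_rng(0)):
    i = rng.integers(x.size)                 # pick one coordinate to update
    e = np.zeros(x.size)
    e[i] = h
    e = e.reshape(x.shape)
    # symmetric difference quotient estimates the partial derivative from 2 queries
    g_i = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    x_new = x.copy()
    x_new.flat[i] -= lr * g_i                # plain coordinate descent step
    return x_new
```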

Contrastive-Explanation-Method - Code for reproducing the contrastive explanations in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives"

  •    Python

This would find the pertinent positive (PP) of image ID 2953 among the test images of the MNIST dataset. From left to right: the original image and the pertinent positive component. The PP of image 2953 is by itself sufficient for the image to be classified as a 5.
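
For reference, the pertinent-positive search in the paper minimizes an elastic-net-regularized objective over a sparse part δ of the input that must, on its own, still be classified as the original class. A condensed sketch of that objective is below; `logits_fn` and the parameter names are illustrative, not the repository's code.

```python
# Condensed sketch of the pertinent-positive (PP) objective, assuming
# `logits_fn` returns model logits and `t0` is the class predicted for the
# full image. Feasibility (0 <= delta <= x elementwise for MNIST) is assumed
# to be enforced by the optimizer.
import numpy as np

def pp_objective(delta, t0, logits_fn, c=1.0, beta=0.1, kappa=0.0):
    """Low when the sparse part `delta` alone is still classified as t0."""
    logits = logits_fn(delta)
    other = np.max(np.delete(logits, t0))
    f_pos = max(other - logits[t0], -kappa)           # hinge on remaining class t0
    return c * f_pos + beta * np.sum(np.abs(delta)) + np.sum(delta ** 2)
```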

CROWN-Robustness-Certification - CROWN: A Neural Network Verification Framework for Networks with General Activation Functions

  •    Python

We propose a new framework, CROWN, to certify the robustness of neural networks with general activation functions, including but not limited to ReLU, tanh, sigmoid, and arctan. CROWN is efficient and can deliver lower bounds on the minimum adversarial distortion with guarantees (so-called certified lower bounds, or certified robustness). We compare CROWN with other certified lower-bound methods, including the global Lipschitz constant approach and Fast-Lin, and show that CROWN certifies much larger lower bounds than the global-Lipschitz-constant-based approach while improving the quality of the robustness lower bound by up to 28% on ReLU networks over the state-of-the-art robustness certification algorithm Fast-Lin. We also compare CROWN with the robustness score estimate CLEVER and with adversarial attack methods (CW, EAD). Please see Section 4 and Appendix E of our paper for more details.
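
To give a flavor of what a certified bound computation looks like, here is a deliberately simplified interval-propagation sketch. CROWN itself replaces these loose intervals with tighter linear upper and lower bounds on each activation, which is what yields larger certified radii; the network structure below is illustrative.

```python
# Simplified interval-propagation sketch (coarser than CROWN's linear bounds):
# propagate an L-infinity ball [x - eps, x + eps] through a fully connected
# ReLU network and return guaranteed bounds on the output logits.
import numpy as np

def interval_bounds(weights, biases, x, eps):
    lo, hi = x - eps, x + eps
    for k, (W, b) in enumerate(zip(weights, biases)):
        center, radius = (lo + hi) / 2, (hi - lo) / 2
        mid = W @ center + b
        rad = np.abs(W) @ radius                 # worst-case spread per unit
        lo, hi = mid - rad, mid + rad
        if k < len(weights) - 1:                 # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# If lo[true_class] exceeds hi[j] for every other class j, the radius eps is
# certified: no perturbation within the ball can change the prediction.
```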

commonsense-rl - Knowledge-Aware RL agents with Commonsense Reasoning

  •    Inform

TextWorld Commonsense (TWC) is a new text-based environment for RL agents that requires the use of commonsense knowledge from external knowledge sources to solve challenging problems. This repository provides the TWC dataset/environment and the code for the sample RL agents reported in the paper "Text-based RL Agents with Commonsense Knowledge: New Challenges, Environments and Baselines."

bridging-resolution

  •    Python

This repository contains code for bridging resolution and its sub-tasks (i.e., bridging anaphora recognition and bridging anaphora resolution or antecedent selection for bridging anaphors).





