Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

January 4, 2015

AdversariaLib

Filed under: Algorithms, Machine Learning, Python — Patrick Durusau @ 5:35 pm

AdversariaLib

Speaking of combat machine learning environments:

AdversariaLib is an open-source Python library for the security evaluation of machine learning (ML)-based classifiers under adversarial attacks. It comes with a set of powerful features:

  • Easy-to-use. Running sophisticated experiments is as easy as launching a single script. Experimental settings can be defined through a single setup file.
  • Wide range of supported ML algorithms. All supervised learning algorithms supported by scikit-learn are available, as well as Neural Networks (NNs), by means of our scikit-learn wrapper for FANN. In the current implementation, the library allows for the security evaluation of SVMs with linear, rbf, and polynomial kernels, and NNs with one hidden layer, against evasion attacks.
  • Fast Learning and Evaluation. Thanks to scikit-learn and FANN, all supported ML algorithms are optimized and written in C/C++.
  • Built-in attack algorithms. Evasion attacks based on gradient-descent optimization.
  • Extensible. Other attack algorithms can be easily added to the library.
  • Multi-processing. Do you want to further save time? The built-in attack algorithms can run concurrently on multiple processors.

Last, but not least, AdversariaLib is free software, released under the GNU General Public License version 3!
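
To make the gradient-descent evasion attack mentioned in the feature list concrete, here is a minimal sketch of such an attack against a linear SVM. It uses plain scikit-learn, not AdversariaLib's own API, and the evade() function with its step and max_iter parameters is a hypothetical name for illustration:

    # Minimal sketch of a gradient-descent evasion attack on a linear SVM.
    # This uses plain scikit-learn, NOT AdversariaLib's API; evade() and its
    # step/max_iter parameters are hypothetical names for illustration.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    # Train a linear SVM on synthetic two-class data.
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    clf = LinearSVC(C=1.0).fit(X, y)

    def evade(clf, x, step=0.1, max_iter=100):
        # For a linear SVM, decision_function(x) = w.x + b, so its gradient
        # with respect to x is just the weight vector w. Stepping against w
        # walks a positive ("malicious") sample across the decision boundary.
        w = clf.coef_.ravel()
        x_adv = x.copy()
        for _ in range(max_iter):
            if clf.decision_function(x_adv.reshape(1, -1))[0] < 0:
                break  # the sample now evades the classifier
            x_adv -= step * w / np.linalg.norm(w)
        return x_adv

    x0 = X[y == 1][0]  # a sample from the positive class
    x_adv = evade(clf, x0)
    print("before:", clf.predict(x0.reshape(1, -1))[0],
          "after:", clf.predict(x_adv.reshape(1, -1))[0])

Evading SVMs with rbf or polynomial kernels, or a neural network, requires the gradient of a nonlinear discriminant function, which is the sort of machinery AdversariaLib provides.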

The “full documentation” link on the homepage returns a “page not found” error. I puzzled over it until I realized that the failing link reads:

http://comsec.diee.unica.it/adversarialib/

and the successful link reads:

https://comsec.diee.unica.it/adversarialib/advlib.html

I have pinged the site owners.

The SourceForge link for the code still works: http://sourceforge.net/projects/adversarialib/

The full documentation page notes:

However, learning algorithms typically assume data stationarity: that is, both the data used to train the classifier and the operational data it classifies are sampled from the same (though possibly unknown) distribution. Meanwhile, in adversarial settings such as the above mentioned ones, intelligent and adaptive adversaries may purposely manipulate data (violating stationarity) to exploit existing vulnerabilities of learning algorithms, and to impair the entire system.
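
The point is easy to demonstrate. Here is a minimal sketch, again using scikit-learn rather than anything from AdversariaLib, in which a classifier is trained under stationarity and then scored on operational data whose positive class has been shifted toward the negative class, a crude stand-in for adversarial manipulation:

    # Minimal sketch of a stationarity violation (not AdversariaLib code).
    # A classifier trained on one distribution is scored on operational data
    # whose positive class has drifted, as an adversary might arrange.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    y = np.array([0] * 200 + [1] * 200)

    def sample(pos_mean):
        # Two Gaussian classes; pos_mean sets where the positive class sits.
        return np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
                          rng.normal(pos_mean, 1.0, (200, 2))])

    clf = LogisticRegression().fit(sample(+1.0), y)
    print("same distribution:   ", clf.score(sample(+1.0), y))  # stays high
    print("shifted distribution:", clf.score(sample(-0.5), y))  # drops sharply

A gradient-descent evasion attack, like the one sketched earlier, is the targeted, per-sample version of the same violation.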

This is not quite the case of reactive data that changes its representation depending upon the source of a query, but it is certainly a move in that direction.

Do you have a data stability assumption?
