Why does this webpage exist?

This webpage was put together for the FUZZ-IEEE 2017 tutorial "Fuzzy Set Theory in Computer Vision" (link), hosted by Derek T. Anderson, James M. Keller and Chee Seng Chan. It is a combination of theory and application. Our goal is to produce two open source fuzzy set theory and computer vision libraries for the community (with the hope that others will continue this trend for reproducible research and benchmarking in the future!). Our aim is NOT to optimize every last aspect of our code at the expense of its readability. Our goal is to educate and promote reproducible research. If you find anything that is not described well enough, any errors, etc., please contact us and we will get it fixed/updated. This is definitely a work in progress. Enjoy!

UPDATE: 10/1/2017

When this website was first created (for the FUZZ-IEEE 2017 tutorial), Octave/Matlab and MatConvNet were supported. We are currently in the process of porting examples and codes to Python and TensorFlow. Bear with us as we beef up the documentation, tutorials, etc.

2016 WCCI slides

In 2016, we (Chan, Jim and myself) held a related CV tutorial at WCCI in Canada, covering computational intelligence more broadly rather than just fuzzy set theory. You can find the slides at Part I and Part II.

2017 FUZZ-IEEE slides

Since there are three presenters in this tutorial, we divided it into “parts”. Part 1 is an introduction by Jim Keller. Part 2 is a series of fuzzy qualitative reasoning CV research discussions (with accompanying code) by Chee Seng Chan. Part 3 is a series of fusion in CV research discussions (with accompanying code examples for the community) by Derek Anderson. Part 4 is a wrap up by Jim Keller.

Part 1: Introduction slides by Jim.

Part 2: Chee Seng Chan's slides.

Part 3: Derek's slides are below; broken up and presented relative to the provided examples.

Part 4: Conclusion slides by Jim.

Chee Seng Chan's fuzzy set theory and computer vision library

Chee has developed an open source library in Python and OpenCV; it can be accessed via GitHub at link.

Reference

If this library is used in your research, please reference it as follows:


@article{LimRC14,
  author    = {Chern Hong Lim and Anhar Risnumawan and Chee Seng Chan},
  title     = {Scene Image is Non-Mutually Exclusive - {A} Fuzzy Qualitative Scene Understanding},
  journal   = {{IEEE} Trans. Fuzzy Systems},
  volume    = {22},
  number    = {6},
  pages     = {1541--1556},
  year      = {2014},
  url       = {https://doi.org/10.1109/TFUZZ.2014.2298233},
  doi       = {10.1109/TFUZZ.2014.2298233},
}

Derek's fuzzy set theory and computer vision library

The goal of this library is to explore some of the different roles of fusion, via the fuzzy integral, in computer vision. As such, we provide examples of fuzzy integration relative to "human engineered" features and "machine learned" features.

License

This file is part of FuzzyIntegralComputerVisionLibrary. FuzzyIntegralComputerVisionLibrary is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. FuzzyIntegralComputerVisionLibrary is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with FuzzyIntegralComputerVisionLibrary. If not, see here.

Reference

If this library and/or tutorial is used in your research, please reference it as follows:


@misc{FuzzyIntegralComputerVisionLibrary,
  author  = {Derek Anderson and James Keller and Chee Seng Chan},
  title   = {Fuzzy Integral and Computer Vision Library},
  url     = {http://derektanderson.com/FuzzyLibrary/},
  urldate = {2017-07-03},
  year    = {2017}
}

@misc{FuzzySetTheoryFuzzIeeeTutorial,
  author  = {Derek Anderson and James Keller and Chee Seng Chan},
  title   = {{FUZZ-IEEE} 2017 Tutorial: Fuzzy Set Theory in Computer Vision},
  url     = {http://derektanderson.com/FuzzyLibrary/},
  urldate = {2017-07-03},
  year    = {2017}
}

Credit

This fusion and computer vision library was primarily written by Derek T. Anderson and Muhammad Islam at Mississippi State University (MSU). However, Timothy C. Havens, Anthony Pinar, Christian Wagner, James M. Keller, Melissa Anderson, Daniel Wescott, Chee Seng Chan, Lequn Hu, Ryan Smith, Charlie Veal, Alina Zare, Xiaoxiao Du, Fred Petry and Paul Elmore (sorry if I left anyone out!) played a significant role in the tutorial, the core theories/articles, and/or the development of various codes (in full or in part). So credit goes to everyone. However, I have yet to decide who deserves the blame if any bugs are found! On a serious note, if you find any mistakes, please contact us and let us know and we will make the respective changes.

Getting started (Octave/Matlab, VLFeat and MatConvNet)

Much of this code will run in Octave (link), which is free! However, some parts are written in Matlab (link), namely those codes that require toolboxes we did not have time to write ourselves (e.g., genetic algorithms, some signal/image processing, etc.).

Our codes: first, download our code: general library, examples, MKL, DeFIMKL, MICI, CNN pre-trained and CNN self-trained.

Libraries: the version of VLFeat (link) that we used is here and the MatConvNet (link) version is here. Of course we recommend that you download the newest and greatest versions of those codes; but, because some codes become unsupported or change over time, we link to the versions we tested with. VLFeat is used to extract some "human engineered" features (e.g., HOG and LBP). MatConvNet is a toolbox (from VLFeat!) for implementing convolutional neural networks (CNNs) in Matlab. It has support for building and training your own nets and for loading pretrained nets. Note, these are obviously not the only two libraries out there! However, their ease of use and documentation made them desirable for integration into our tutorial. Without loss of generality, you can of course extend what we have done to other deep learning frameworks (e.g., TensorFlow), other languages (e.g., C/C++) and other libraries (e.g., OpenCV and its various codes).
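If you are following along in Python (see the update above) rather than Octave/Matlab, a rough analogue of the "human engineered" feature step can be sketched with scikit-image, which also ships HOG and LBP implementations. To be clear, this is a substitution/assumption on our part and not the VLFeat code used in the examples below; it is only meant to show the shape of the computation.

import numpy as np
from skimage import data
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern

# Any grayscale image will do; scikit-image ships a few test images.
img = rgb2gray(data.astronaut())

# HOG: gradient orientation histograms over a dense grid of cells.
hog_vec = hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# LBP: per-pixel binary pattern codes, summarized here by a histogram.
lbp = local_binary_pattern(img, P=8, R=1.0, method='uniform')
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

print(hog_vec.shape, lbp_hist.shape)  # feature vectors to feed a classifier or fusion step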

Data sets: Our codes work for a variety of computer vision data sets; however, we picked a few well-known but adequately sized data sets that we can, in theory, run during our tutorial. You can download the MNIST data set here. Note, on that site you can find published classifier rates to compare your algorithm against, e.g., SK-SVM, CNN, KNN, etc. For completeness' sake, we include a link to the data we used (link). Charlie also used some custom imagery (versus working with ALL of ImageNet) in testing the CNNs, which you can find here.
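For those working in Python, here is a minimal sketch of reading the raw MNIST image file in its IDX format (as described on that site); the gzipped file name below is only assumed here, and this is not the loading code used in our Matlab examples.

import gzip
import numpy as np

def load_mnist_images(path):
    # IDX3 layout: a 16-byte header (magic, count, rows, cols as big-endian int32)
    # followed by count*rows*cols unsigned bytes of pixel data.
    with gzip.open(path, 'rb') as f:
        raw = f.read()
    n, rows, cols = np.frombuffer(raw[4:16], dtype='>i4')
    return np.frombuffer(raw[16:], dtype=np.uint8).reshape(n, rows, cols)

# Example usage (file name as distributed on the MNIST site):
# images = load_mnist_images('train-images-idx3-ubyte.gz')
# print(images.shape)  # (60000, 28, 28)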

Fusion examples (Octave/Matlab)

First and foremost, our goal was NOT to develop all-inclusive tutorials/examples, i.e., ones from which anyone can learn without any prior knowledge of the subjects at hand. We do not have the time to cover every aspect of fuzzy integrals and fuzzy measures (how to specify them, learn them, impute them, etc.). We obviously do not have time to cover the VAST sea of what is nowadays computer vision. Our GOAL is to provide some codes and examples that accompany what we are teaching in our FUZZ-IEEE 2017 tutorial. We have provided a few relevant links on fuzzy measures and integrals at the bottom of this website. We also selected a few papers on HOG, LBP and deep learning. If you are just coming across this website, you might want to read some of those articles first and then navigate the examples below.

Example 1: creating a basic fuzzy measure and calling the Choquet integral (ChI); see the Python sketch after this list. (Slides)
Example 2: interval-valued ChI.
Example 3: set-valued ChI (NDFI).
Example 4: set-valued ChI (gFI).
Example 5: learning the fuzzy measure and ChI from data using quadratic programming and lp-norm regularization. (Slides)
Example 6: DeFIMKL on benchmark data for DIDO fusion. (Slides)
Example 7: GAMKLp on benchmark data for FIFO fusion. (Slides)
Example 8: Möbius-based ChI.
Example 9: Missing data fuzzy integral.
Example 10: Shapley and interaction indices. (Slides)
Example 11: Visualization of a data-driven fuzzy integral (courtesy of Dr. Tony Pinar).
Example 12: Different density driven fuzzy measure methods. (Slides)
Example 13: GpChI (aka, fusion of fusions).
Example 14: Multiple instance learning-based ChI (courtesy of Xiaoxiao and Dr. Alina Zare).
Example 15: Efficient binary ChI.
Example 16: Optimization of data supported variables and data unsupported imputation.
Example 17: Indices of introspection (what aggregation operator is our FM/ChI?).
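To make the list above a little more concrete (see Examples 1 and 12), below is a minimal Python sketch of a density-driven (Sugeno lambda) fuzzy measure and the discrete Choquet integral. This is not our Octave/Matlab library code, just the core calculation under the standard definitions; SciPy is assumed for the root finding.

import numpy as np
from scipy.optimize import brentq

def sugeno_lambda(densities):
    # Solve 1 + lam = prod_i(1 + lam * g_i) for lam > -1, lam != 0.
    g = np.asarray(densities, dtype=float)
    if np.isclose(g.sum(), 1.0):
        return 0.0  # densities sum to one: the measure is simply additive
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    if g.sum() > 1.0:
        return brentq(f, -1.0 + 1e-12, -1e-12)  # root lies in (-1, 0)
    return brentq(f, 1e-12, 1e12)               # root lies in (0, inf)

def choquet_integral(h, densities):
    # Discrete Choquet integral of the inputs h (e.g., classifier confidences)
    # with respect to the Sugeno lambda-measure built from the densities.
    h = np.asarray(h, dtype=float)
    g = np.asarray(densities, dtype=float)
    lam = sugeno_lambda(g)
    order = np.argsort(-h)  # sort sources so h_(1) >= h_(2) >= ... >= h_(n)
    g_prev, g_cur, result = 0.0, 0.0, 0.0
    for i in order:
        # grow the coalition: g(A u {x_i}) = g(A) + g_i + lam * g(A) * g_i
        g_cur = g_cur + g[i] + lam * g_cur * g[i]
        result += h[i] * (g_cur - g_prev)
        g_prev = g_cur
    return result

# Fuse three sources with confidences 0.9, 0.4, 0.6 and densities 0.3, 0.4, 0.2
print(choquet_integral([0.9, 0.4, 0.6], [0.3, 0.4, 0.2]))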

"Human engineered" computer vision examples (Matlab and VLFeat)

Example 1: VLFeat basics; HOG and LBP. (Slides)
Example 2: VLFeat and MKL on MNIST data set (so feature extraction, fusion and then SVM classification).

"Machine learned" CNN computer vision examples in MatConvNet

Example 1: MatConvNet installation, high-level comments and good links
Example 2: MatConvNet for pretrained networks (Slide for pretrained) and (slides for intro to CNNs) (courtesy of Charlie Veal).
Example 3: MatConvNet for creating and training your own custom CNN (MNIST) (network diagram) (courtesy of Charlie Veal).
Example 4: MatConvNet for creating and training your own custom CNN (CIFAR) (network diagram) (courtesy of Charlie Veal).
Example 5: MatConvNet and batch normalization. (Slides) (network diagram) (courtesy of Charlie Veal).
Example 6: Fusion of deep learners. (coming soon)

"Machine learned" CNN computer vision examples in TensorFlow

Example 1: TensorFlow install (courtesy of Charlie Veal).
Example 2: Import Data, Matlab to Tensorflow (courtesy of Charlie Veal).
Example 3: Creating a custom network (MNIST) (Basic Example) (courtesy of Charlie Veal); see the sketch after this list.
Example 4: Creating a custom network (CIFAR) (batch-normalization and visualization (work in progress)) (courtesy of Charlie Veal).
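As a companion to Example 3 above, here is a minimal TensorFlow sketch of a small MNIST CNN using the tf.keras API (available in newer TensorFlow releases). The layer sizes and training settings are arbitrary choices on our part, and this is not Charlie's tutorial code; it only shows the overall structure.

import tensorflow as tf

# Load MNIST (bundled with tf.keras) and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# A small convolutional network: conv -> pool -> conv -> pool -> dense softmax.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=1, batch_size=128,
          validation_data=(x_test, y_test))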

Some of OUR recent articles (relative to the above examples):

  • H. Tahani and J. M. Keller, "Information fusion in computer vision using the fuzzy integral," IEEE Transactions on Systems, Man, and Cybernetics, 1990
  • M. Islam, D. T. Anderson, A. Pinar, T. Havens, "Data-Driven Compression and Efficient Learning of the Choquet Integral," IEEE Transactions on Fuzzy Systems, accepted Sept. 2017
  • Fuzzy Qualitative Rank Classifier (FQRC) (link)
  • A. Pinar, D. T. Anderson, T. Havens, A. Zare, T. Adeyeba, "Measure of the Shapley Index for Learning Lower Complexity Fuzzy Integrals," Granular Computing, 2017
  • A. Pinar, J. Rice, L. Hu, D. T. Anderson, T. C. Havens, "Efficient Multiple Kernel Classification using Feature and Decision Level Fusion," IEEE Trans. on Fuzzy Systems, 2016
  • D. T. Anderson, P. Elmore, F. Petry, T. Havens, "Fuzzy Choquet Integration of Homogeneous Possibility and Probability Distributions," Information Sciences, 2016
  • T. C. Havens, D. T. Anderson, C. Wagner, "Constructing Meta-Measures From Data-Informed Fuzzy Measures for Fuzzy Integration of Interval Inputs and Fuzzy Number Inputs," IEEE Transactions on Fuzzy Systems, 2015
  • D. T. Anderson, T. C. Havens, C. Wagner, J. M. Keller, M. F. Anderson, D. J. Wescott, "Extension of the Fuzzy Integral for General Fuzzy Set-Valued Information," IEEE Transactions on Fuzzy Systems, 2014
  • D. T. Anderson, M. Islam, R. King, N. H. Younan, J. Fairley, S. Howington, F. Petry, P. Elmore, and A. Zare, "Binary Fuzzy Measures and Choquet Integration for Multi-Source Fusion," ICMT, 2017
  • R. Smith, D. T. Anderson, A. Zare, J. E. Ball, B. Alvey, "Genetic Programming Based Choquet Integral for Multi-Source Fusion," FUZZ-IEEE, 2017
  • M. Islam, D. T. Anderson, F. Petry, D. Smith, P. Elmore, "The Fuzzy Integral for Missing Data," FUZZ-IEEE, 2017
  • A. Pinar, T. Havens, M. Islam, D. T. Anderson, "Visualization and Learning of the Choquet Integral With Limited Training Data," FUZZ-IEEE 2017
  • L. Tomlin, D. T. Anderson, C. Wagner, T. C. Havens, J. M. Keller, "Fuzzy Integral for Rule Aggregation in Fuzzy Inference Systems," IPMU, 2016
  • X. Du, A. Zare, J. M. Keller, D. T. Anderson, "Multiple Instance Choquet Integral for Classifier Fusion," FUZZ-IEEE, 2016
  • S. R. Price, B. Murray, L. Hu, D. T. Anderson, T. Havens, R. Luke, J. M. Keller, "Multiple kernel based feature and decision level fusion of iECO individuals for explosive hazard detection in FLIR imagery," SPIE Defense, Security, and Sensing, 2016
  • A. Pinar, T. Havens, D. T. Anderson, L. Hu, "Feature and decision level fusion using multiple kernel learning and fuzzy integrals," IEEE International Conference on Fuzzy Systems, 2015
  • M. Islam, D. T. Anderson, T. Havens, "Multi-Criteria Based Learning of the Choquet Integral using Goal Programming," submitted to North American Fuzzy Information Processing Society, 2015

More links can be found at derektanderson.com

Other useful links (not comprehensive):

  • The evolution of the concept of fuzzy measure, by Luis Garmendia
  • Fuzzy integrals—what are they?, by Radko Mesiar and Andrea Mesiarová
  • The fuzzy integral, by Dan Ralescu and Gregory Adams
  • The application of fuzzy integrals in Multicriteria Decision Making, by M. Grabisch
  • Estimation of adult skeletal age-at-death using the Sugeno fuzzy integral, by Melissa Anderson et al.
  • Linguistic Description of Adult Skeletal Age-at-Death Estimations from Fuzzy Integral Acquired Fuzzy Sets, by Derek Anderson et al.
  • A Way to Choquet Calculus, by Michio Sugeno
  • Indices for Introspection of the Choquet Integral, by S. Price et al.
  • A Fuzzy Choquet Integral with an Interval Type-2 Fuzzy-Valued Integrand, by T. Havens et al.
  • Learning fuzzy valued fuzzy measures for the fuzzy valued Sugeno fuzzy integral, by D. Anderson et al.

Some useful computer vision links related to our examples (not comprehensive):

  • Grant Scott, Richard Marcum, Curt Davis, and Tyler Nivin, "Fusion of Deep Convolutional Neural Networks for Land Cover Classification of High-Resolution Imagery," 2017
  • David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, 60, 2 (2004), pp. 91-110
  • David G. Lowe, "Object recognition from local scale-invariant features," International Conference on Computer Vision, Corfu, Greece (September 1999), pp. 1150-1157
  • David G. Lowe, "Local feature view clustering for 3D object recognition," IEEE Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii (December 2001), pp. 682-688
  • Histograms of Oriented Gradients for Human Detection, Navneet Dalal and Bill Triggs
  • "An HOG-LBP Human Detector with Partial Occlusion Handling", Xiaoyu Wang, Tony X. Han, Shuicheng Yan, ICCV 2009
  • T. Ojala, M. Pietikäinen, and D. Harwood (1996), "A Comparative Study of Texture Measures with Classification Based on Feature Distributions", Pattern Recognition, vol. 29, pp. 51-59
  • ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, NIPS 2012
  • Going Deeper with Convolutions, Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, 19-Sept-2014
  • LeCun, Y., Bengio, Y. and Hinton, G. E. (2015), Deep Learning, Nature, Vol. 521, pp 436-444
  • Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R. (2014), Dropout: A simple way to prevent neural networks from overfitting, The Journal of Machine Learning Research, 15(1), pp 1929-1958
  • Graves, A., Mohamed, A. and Hinton, G. E. (2013), Speech Recognition with Deep Recurrent Neural Networks, In IEEE International Conference on Acoustic Speech and Signal Processing (ICASSP 2013) Vancouver, 2013
  • J. Schmidhuber, “Deep Learning in neural networks: An overview,” Neural Networks 61, 85–117 (2015). DOI:10.1016/j.neunet.2014.09.003
  • I. Arel, D. C. Rose, and T. P. Karnowski, “Deep Machine Learning - A New Frontier in Artificial Intelligence Research,” IEEE Computational Intelligence Magazine 5(4), 13–18 (2010). DOI:10.1109/MCI.2010.938364
  • H. Wang and B. Raj, “A survey: Time travel in deep learning space: An introduction to deep learning models and how deep learning models evolved from the initial ideas,” arXiv preprint arXiv:1510.04781 abs/1510.04781 (2015).
  • M. Chen, Z. Xu, K. Weinberger, et al., “Marginalized denoising autoencoders for domain adaptation,” arXiv preprint arXiv:1206.4683 (2012).
  • G. Hinton and R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313(5786), 504 – 507 (2006).
  • S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation 9(8), 1735–1780 (1997).
  • M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European conference on computer vision, 818–833, Springer (2014).
  • M. D. Zeiler, G. W. Taylor, and R. Fergus, “Adaptive deconvolutional networks for mid and high level feature learning,” in 2011 International Conference on Computer Vision, 2018–2025 (2011).
  • M. D. Zeiler, D. Krishnan, G. W. Taylor, et al., “Deconvolutional networks,” in In CVPR, (2010).
  • H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” CoRR abs/1505.04366 (2015).
  • R. K. Srivastava, K. Greff, and J. Schmidhuber, “Highway networks,” CoRR abs/1505.00387 (2015).
  • K. Greff, R. K. Srivastava, and J. Schmidhuber, “Highway and residual networks learn unrolled iterative estimation,” CoRR abs/1612.07771 (2016).
  • J. G. Zilly, R. K. Srivastava, J. Koutník, et al., “Recurrent highway networks,” CoRR abs/1607.03474 (2016).
  • R. K. Srivastava, K. Greff, and J. Schmidhuber, “Training very deep networks,” CoRR abs/1507.06228 (2015)