Welcome

You have reached the homepage of Derek T. Anderson. I am an Assistant Professor in the Electrical and Computer Engineering (ECE) Department at Mississippi State University (MSU). On this page, you can find details about upcoming events and my most recent articles. A full list of publications, my CV, and funding and laboratory information can be found via the links at the top of this page.


Plenary Talk at FUZZ-IEEE 2017, Naples Italy (link)

Title: Fusion here, there and almost everywhere in computer vision - driving new advances in fuzzy integrals

Computer vision is a well-known area where computational intelligence has made a significant impact. In general, the field is diverse and objectives range from filtering to object detection, image understanding and linguistic summarization/description, to name a few. As simple as it may sound, we have been trying to make a computer “describe what it saw” since the 1960s. In an attempt to achieve this goal, researchers have looked to data/information fusion. However, most classical aggregation strategies are additive and assume independence among inputs. On the other hand, fuzzy measure theory provides a powerful parametric way to specify or learn input interactions (when/if available). More importantly, the fuzzy integral utilizes the fuzzy measure to achieve nonlinear aggregation. In this talk, I will discuss the role of nonlinear aggregation via fuzzy integrals at the signal, spectrum, feature and decision levels of fusion. In particular, I highlight recently established extensions of fuzzy integrals designed to address key challenges in computer vision. These extensions focus on spatial and/or distribution-level uncertainty and are embedded into pattern recognition or automated decision making via multiple kernel learning and/or fuzzy logic. Applications are discussed for multi-sensor humanitarian demining, hyperspectral image analysis and remote sensing.
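To make the idea of measure-based nonlinear aggregation concrete, here is a minimal sketch of the discrete Choquet fuzzy integral on a toy two-source fusion problem. The source names and measure values below are illustrative assumptions, not taken from the talk.

```python
def choquet_integral(inputs, measure):
    """Discrete Choquet integral of `inputs` with respect to fuzzy measure `measure`.

    `inputs` maps each source name to a value in [0, 1]; `measure` maps
    frozensets of sources to their "worth" in [0, 1], and must be monotone
    with the full set having worth 1. Interactions among sources are encoded
    in the measure, which is what makes the aggregation nonlinear.
    """
    # Sort sources by descending input value: x_(1) >= x_(2) >= ...
    order = sorted(inputs, key=inputs.get, reverse=True)
    total, prev, acc = 0.0, 0.0, set()
    for s in order:
        acc.add(s)
        g = measure[frozenset(acc)]      # worth of the accumulated set A_i
        total += inputs[s] * (g - prev)  # x_(i) * (g(A_i) - g(A_{i-1}))
        prev = g
    return total

# Toy non-additive measure: g({a}) + g({b}) < g({a, b}), so the sources interact.
g = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.3, frozenset({"a", "b"}): 1.0}
result = choquet_integral({"a": 0.8, "b": 0.4}, g)  # ≈ 0.6
```

Note that when the measure is additive, the integral collapses to an ordinary weighted mean; the extra freedom in the non-additive case is exactly what the talk's extensions exploit.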


Special Issue in JARS:
Feature and Deep Learning in Remote Sensing Applications (link)

Call for Papers: The shift from ‘human features’ to machine-learned features has produced phenomenal results in numerous signal/image processing applications, from computer vision to speech recognition. Well-known examples of deep learning include deep belief nets (DBNs), convolutional neural networks (CNNs) and morphological shared weight neural networks (MSNNs), whereas feature learning in general includes techniques such as evolutionary constructed features (ECO) and improved ECO (iECO). Recently, feature and deep learning (FaDL) has made its way into numerous remote sensing applications, including analysis using sensors such as synthetic aperture radar (SAR), light detection and ranging (LiDAR), hyperspectral imaging, etc. These sensors provide heterogeneous data and represent different regions of the electromagnetic spectrum. While FaDL has seen success in applications where large amounts of diverse data exist, FaDL in remote sensing is plagued by spectral, spatial, and temporal dimensionality, and usually has few training samples available due to the high cost of providing labeled data. In addition, most FaDL tools have a large number of parameters to estimate and take substantial hardware and time to train and test, which is often not realistic for many remotely sensed applications due to cost or time constraints.

The Journal of Applied Remote Sensing (JARS) will publish a special section on feature and deep learning applied to remote sensing applications. The scope includes, but is not limited to:

  • Remote sensing applications: agriculture, automated target detection, autonomy, change detection, disaster assessment, environmental sensing, forestry, hydrology, land cover classification, soil analysis, ocean sensing, urban analysis/planning, water resource analysis, and water control assessment.
  • Sensors: multi/hyperspectral, LiDAR, radar, synthetic aperture radar, automotive radar, stereo cameras, infrared (thermal), and sonar.
  • Multimodality: multisensor fusion at different stages in the data-processing lifetime.
  • FaDL challenges in remote sensing: limited training data, high spectral dimensionality, multisensor fusion, multiresolution data, and robust performance under degradation effects such as dust, rain, and fog.

Guest Editors:

John E. Ball
Mississippi State University
Bagley College of Engineering
Electrical and Computer Engineering Department
Mississippi State, Mississippi, United States
E-mail: jeball@ece.msstate.edu

Derek T. Anderson
Mississippi State University
Bagley College of Engineering
Electrical and Computer Engineering Department
Mississippi State, Mississippi, United States
E-mail: anderson@ece.msstate.edu

Chee Seng Chan
University of Malaya
Faculty of Computer Science and Information Technology
Kuala Lumpur, Malaysia
E-mail: cs.chan@um.edu.my


10/3/2016

SFPA Laboratory in ECE and SAIL at CAVS

SFPA: To learn more about the Bagley College of Engineering (BCoE) Sensor Fusion and Pattern Analysis (SFPA) laboratory, directed by Prof Anderson, click here.

SAIL: The Center for Advanced Vehicular Systems (CAVS) at MSU is an interdisciplinary center composed of research, engineering design and development, and technology transfer teams for industry and government partners. Prof Anderson (ECE), Prof Ball (ECE) and Prof Archibald (CSE) co-direct the Sensor Analysis and Intelligence Laboratory (SAIL) at CAVS. SAIL has a range of unique state-of-the-art equipment, from an extremely lightweight push-broom hyperspectral sensor to thermal, stereoscopic, LiDAR and radar sensors. SAIL is interested in collaborative research and is focused on tasks like multi-sensor fusion, scene understanding, sensor characterization and sensor applications. Example applications to date include humanitarian demining (in conjunction with U.S. Army NVESD), autonomous vehicles (in conjunction with U.S. Army ERDC), robotics (in conjunction with Prof Bethel and the STARS lab) and remote sensing using unmanned aerial vehicles (in conjunction with Prof Moorhead and the Geosystems Research Institute (GRI) at MSU).


10/3/2016

Article: IEEE TFS 2016: Efficient Multiple Kernel Classification using Feature and Decision Level Fusion

Abstract: Kernel methods for classification are a well-studied area in which data are implicitly mapped from a lower-dimensional space to a higher-dimensional space to improve classification accuracy. However, for most kernel methods, one must still choose a kernel to use for the problem. Since there is, in general, no way of knowing which kernel is the best, multiple kernel learning (MKL) is a technique used to learn the aggregation of a set of valid kernels into a single (ideally) superior kernel. The aggregation can be done using weighted sums of the pre-computed kernels, but determining the summation weights is not a trivial task. Furthermore, MKL does not work well with large datasets because of limited storage space and prediction speed. In this article, we address all three of these multiple kernel (MK) challenges. First, we introduce a new linear feature level fusion technique and learning algorithm, GAMKLp. Second, we put forth three new algorithms, DeFIMKL, DeGAMKL and DeLSMKL, for non-linear fusion of kernels at the decision level. To address MKL’s storage and speed drawbacks, we apply the Nyström approximation to the kernel matrices. We compare our methods to a successful state-of-the-art technique called MKL-group lasso (MKLGL), and experiments on several benchmark datasets show that some of our proposed algorithms outperform MKLGL when applied to support vector machine (SVM)-based classification. However, unsurprisingly, there does not seem to be a global winner, but instead different strategies that a user can employ. Experiments with our kernel approximation method show that we can routinely discard most of the training data and at least double prediction speed without sacrificing classification accuracy. These results suggest that MKL-based classification techniques can be applied to big data efficiently, which is confirmed by an experiment using a large dataset.
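For readers unfamiliar with the Nyström approximation mentioned in the abstract, here is a small self-contained sketch (an illustration of the general technique, not the paper's implementation): an n x n RBF kernel matrix is approximated as K ≈ C W⁺ Cᵀ from an n x m cross-kernel C against m "landmark" points, so only n x m kernel entries ever need to be computed or stored. The function name and parameters are my own for illustration.

```python
import numpy as np

def nystrom_kernel(X, landmark_idx, gamma=1.0):
    """Nyström approximation of the RBF kernel matrix of rows of X.

    `landmark_idx` selects the m landmark points. Returns an n x n matrix
    approximating the full Gram matrix; if all points are landmarks, the
    approximation is exact (up to numerical precision).
    """
    def rbf(A, B):
        # Pairwise squared distances, then the Gaussian (RBF) kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    L = X[landmark_idx]                 # m landmark points
    C = rbf(X, L)                       # n x m cross-kernel
    W = rbf(L, L)                       # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T  # rank-m approximation of K
```

In practice one would choose m much smaller than n (e.g., by uniform sampling), trading a controlled approximation error for large savings in storage and prediction time.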


4/13/2016

FUZZ-IEEE Tutorial at WCCI 2016 (link)

Slides for Part I and Part II

Title: Computer Vision: A Computational Intelligence Perspective

Brief Description: In this tutorial we discussed a number of challenges in modern computer vision (CV) and possible directions, tools and novel ideas that the computational intelligence (CI) community could bring to the table. We covered artificial neural networks (ANNs), evolutionary algorithms (EAs) and fuzzy set theory (and combinations thereof). In terms of CV, we discussed challenges at various so-called levels: low-level, mid-level and high-level. We reviewed standard and modern CV approaches, discussed data sets currently used by the CV community, and we presented some CI techniques employed in each area, always with an eye towards where soft computing can make the best impact. This event was not meant to be a survey of all techniques; if yours is left out, please do not be sad. The intent was to provide an assessment, from our perspective, of the power, limitations, and potential of CI algorithms, with thoughts about the challenges for those of us in the CI family to have our technologies be better accepted by the CV community. In Part I, we specifically emphasized linguistic summarization of video, feature learning (deep learners and EA-based approaches), fuzzy integrals for data or information fusion at different levels (spectrum, feature and decision), and multiple kernel fusion and learning for advanced classification. In Part II, we focused specifically on human motion analysis and CNN-based deep learning (spatially, but also spatio-temporally).


Special Issue on: Fuzzy Fusion of Sensors, Data, and Information

Guest Editors: Timothy C. Havens, Derek T. Anderson, Christian Wagner, Jonathan Garibaldi

Manuscripts due: Friday, 26 November 2016
First round of reviews: Friday, 17 February 2017
Publication date: Friday, 14 April 2017
Submission link: here
Call for papers: pdf link

One of the basic problems in science is the ability to measure the environment, with the goal of supporting a hypothesis. Conventional ideas for solving this problem have commonly focused on developing the best sensor or algorithm for making the appropriate measurement, for example, building a larger telescope, a better radar, or a more sensitive CCD. However, in most cases, this pursuit is very expensive, produces diminishing returns with increased development, and/or becomes impossible. Thus, many practical problems are solved through fusion: the combination of multiple sources—sensors, data, and information—to enable representations that are better than the sum of their parts.

Sensor, data, and information fusion has become an essential and integral part of many modern technologies, including Internet of Things (IoT), wireless sensor networks, multimodal surveillance and sensing systems, and recommender systems, to name a few. Fusion ideally combines multiple data or information into a robust, accurate, and useful representation to enable high-quality, informed, and comprehensive decisions. Furthermore, modern computing has enabled the fusion of big data sources, bringing up topics such as scalability, timeliness, and privacy.

The challenge of fusion is that data from different sources are often subject to mixed types of uncertainty and come in different representations, at multiple scales and at various densities. Hence, conventional processing and analysis algorithms are inappropriate for these data, and new methods must be developed that are capable of effectively combining the heterogeneous information. The main idea of fusion is thus to leverage the varying aspects of the multiple sources to better measure, infer, and, ultimately, understand the environment to a better level than is possible by designing the perfect (single) system for the job.

The focus of this special issue is the application of fuzzy logic and fuzzy systems to the problem of sensor, data, and information fusion. Potential topics include, but are not limited to:

  • Multimodal sensor data
  • Array processing and analysis
  • Wireless sensor networks
  • Crowd-sourcing
  • Heterogeneous data mining
  • Internet of Things (IoT)
  • Web data fusion and mining
  • Scalable and multimedia big data fusion
  • Uncertainty modeling in heterogeneous data
  • Measure and capacity integrals
  • Interpretable and transparent fusion algorithms
  • Definition and generation of comprehensive models for heterogeneous fused data


4/13/2016

Special Session at FUZZ-IEEE in July 2016 (link)

Title: Special Session on Fuzzy Set Theory in Computer Vision

Brief Description: Fuzzy set theory is the subject of intense investigation in fields like control theory, robotics, biomedical engineering, computing with words, knowledge discovery, remote sensing and socioeconomics, to name a few. However, in the area of computer vision, other fields, e.g., machine learning, and communities, e.g., PAMI, ICCV, CVPR, ECCV, NIPS, are arguably state-of-the-art. In particular, the vast majority of top performing techniques on public datasets are steeped in probability theory. Important questions to the fuzzy set community include the following. What is the role of fuzzy set theory in computer vision? Does fuzzy set theory make the most sense and biggest impact in terms of low-, mid- or high-level computer vision? Furthermore, do current performance measures favor machine learning approaches? Last, is there additional benefit that fuzzy set theory brings, and if so, how is it measured?

This special session invites new research in fuzzy set theory in computer vision. It is a follow-up to the 2013 FUZZ-IEEE workshop View of Computer Vision Research and Challenges for the Fuzzy Set Community and the Fuzzy Set Theory in Computer Vision special sessions in 2014 and 2015. In particular, we encourage authors to investigate their research using public datasets and to compare their results to both fuzzy and non-fuzzy methods. Topics of interest include all areas in computer vision and image/video understanding. Example topics include, but are definitely not limited to, the following:

  • Detection and recognition
  • Categorization, classification, indexing and matching
  • 3D-based computer vision
  • Advanced image features and descriptors
  • Motion analysis and tracking
  • Linguistic description and summarization
  • Video: events, activities and surveillance
  • Intelligent change detection
  • Face and gesture
  • Low-level, mid-level and high-level computer vision
  • Data fusion for computer vision
  • Medical and biological image analysis
  • Vision for Robotics

Important Dates:

  • Deadline for Paper Submission: Jan 15th 2016
  • Notification of Acceptance/Rejection: March 15th 2016
  • Deadline for Final Paper Submission: April 15th 2016
  • Conference Dates: July 25-29 2016

Organizers: Derek T. Anderson, Chee Seng Chan and James M. Keller


4/13/2016

Cross-Disciplinary Special Session at WCCI in July 2016 (link)

Title: Special Session on Computational Intelligence for Security, Surveillance, and Defense (CISSD)

Brief Description: Given the rapidly changing and increasingly complex nature of global security, we continue to witness a remarkable interest within the security and defense communities in novel, adaptive and resilient techniques that can cope with the challenging problems arising in this domain. These challenges are brought forth not only by the overwhelming amount of data reported by a plethora of sensing and tracking modalities, but also by the emergence of innovative classes of decentralized, mass-scale communication protocols and connectivity frameworks such as cloud computing, vehicular networks, and the Internet of Things (IoT). Realizing that traditional techniques have left many important problems unsolved, and in some cases not addressed at all, further efforts must be undertaken in the quest for algorithms and methodologies that can detect and easily adapt to emerging threats.

The purpose of this Special Session is to provide a forum for the exchange and discussion of current solutions in Computational Intelligence (e.g., neural networks, fuzzy systems, evolutionary computation, swarm intelligence, and other emerging learning or optimization techniques) as applied to security, surveillance, and defense problems. High-quality technical papers addressing research challenges in these areas are solicited. Papers should present original work validated via analysis, simulation, or experimentation, pertaining, but not limited, to the following topics:

  • Computational Intelligence for Advanced Architectures for Defense Operations
    • Multi-Sensor Data Fusion
    • Employment of Autonomous Vehicles
    • Intelligence Gathering and Exploitation
    • Mine Detection
    • Situation Assessment
    • Automatic Target Recognition
    • Mission Weapon Pairing and Assignment
    • Sensor Cueing and Tasking
    • Computational Red Teaming
    • Trusted Autonomous Systems
  • Computational Intelligence for Modeling and Simulation of Defense Operations
    • Logistics Support
    • Mission Planning and Execution
    • Resource Management
    • Course of Action Generation
    • Models for War Games
    • Multi-Agent Based Simulation
    • Strategic Planning
    • Joint Operations
    • Red-Blue Modeling and Simulation
  • Computational Intelligence for Security Applications
    • Surveillance
    • Human Modeling: Behavior, Emotion, Motion
    • Suspect Behavior Profiling
    • Automated Handling of Dangerous Situations
    • Stationary or Mobile Object Detection, Recognition and Classification
    • Intrusion Detection Systems
    • Cyber-Security
    • Air, Maritime and Land Security
    • Network Security, Biometrics Security and Authentication Technologies
    • Trusted Autonomous Software Systems
    • Spectrum Management

Important Dates:

  • Deadline for Paper Submission: Jan 15th 2016
  • Notification of Acceptance/Rejection: March 15th 2016
  • Deadline for Final Paper Submission: April 15th 2016
  • Conference Dates: July 25-29 2016

Organizers: Derek T. Anderson, Timothy C. Havens and Hussein Abbass


4/13/2016

SPIE Newsroom Article: Computational Intelligence in Explosive Hazard Detection

Recent SPIE newsroom article highlighting some of our research (myself, colleague Prof. Havens and students Stanton Price and Tony Pinar), link (doi:10.1117/2.1201512.006239).


4/13/2016

Article: WCCI 2016: Multiple Instance Choquet Integral for Classifier Fusion

Abstract: The Multiple Instance Choquet Integral (MICI) for classifier fusion and an evolutionary algorithm for parameter estimation are presented. The Choquet integral has a long history of providing an effective framework for non-linear fusion. Previous methods to learn an appropriate measure for the Choquet integral assumed accurate and precise training labels (with low levels of noise). However, in many applications, data-point-specific labels are unavailable or infeasible to obtain. The proposed MICI algorithm allows for training with uncertain labels, in which class labels are provided for sets of data points (i.e., bags) instead of individual data points (i.e., instances). The proposed algorithm is able to fuse multiple two-class classifier outputs by learning a monotonic and normalized fuzzy measure from uncertain training labels using an evolutionary algorithm. It produces enhanced classification performance by computing the Choquet integral with the learned fuzzy measure. Results on both simulated and real hyperspectral data are presented.


9/15/2016

Article: WHISPERS 2016: Fusion of diverse features and kernels using lp-norm multiple kernel learning in hyperspectral image processing

Abstract: Multiple kernel learning (MKL) is an elegant tool for heterogeneous fusion. In support vector machine (SVM) based classification, MK is a homogenization transform and it provides flexibility in searching for high-quality linearly separable solutions in the reproducing kernel Hilbert space (RKHS). However, performance often depends on input and kernel diversity. Herein, we explore a new way to extract diverse features from hyperspectral imagery using different proximity measures and band grouping. The output is fed to lp-norm MKL for feature-level fusion, where larger p's are preferred for diverse versus sparse solutions. Preliminary results on benchmark data indicate that lp-norm MKSVM applied to diverse features and kernels leads to a noticeable performance gain.
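As a small illustration of the fusion step with fixed weights (a sketch under my own toy setup, not the paper's learning algorithm), precomputed kernels can be combined as a weighted sum with the weight vector normalized to unit lp norm. For a fixed set of nonnegative scores, larger p pulls the normalized weights toward uniformity, favoring diverse combinations, while p near 1 concentrates mass on the strongest kernels.

```python
import numpy as np

def combine_kernels(kernels, weights, p=2.0):
    """Weighted-sum kernel K = sum_k w_k K_k, with w normalized so ||w||_p = 1.

    `kernels` is a list of precomputed n x n Gram matrices; `weights` is a
    nonnegative score per kernel. These fixed weights are hypothetical; in
    lp-norm MKL they would be learned jointly with the SVM.
    """
    w = np.asarray(weights, dtype=float)
    w = w / np.linalg.norm(w, ord=p)  # project onto the unit lp sphere
    return sum(wk * Kk for wk, Kk in zip(w, kernels))
```

Because a nonnegative combination of valid (positive semi-definite) kernels is itself a valid kernel, the result can be passed directly to a standard SVM.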


4/13/2016

Article: IGARSS 2016: CLODD Based Band Group Selection

Abstract: Herein, we explore new supervised and unsupervised techniques for dimensionality reduction or multispectral sensor design via band group selection in hyperspectral imaging. Specifically, we investigate two algorithms: one based on the improved visual assessment of clustering tendency (iVAT) and the other based on the automatic extraction of block-like structure in a dissimilarity matrix (the CLODD algorithm). In particular, the iVAT algorithm allows for the identification of non-contiguous band groups. Experiments are conducted on a benchmark data set and results are compared to existing algorithms based on hierarchical and c-means clustering. Our results demonstrate the effectiveness of the proposed methods.


4/13/2016

Article: IPMU 2016: Fuzzy Integral for Rule Aggregation in Fuzzy Inference Systems

Abstract: The fuzzy inference system (FIS) has been tuned and revamped many times over and applied to numerous domains. New and improved techniques have been presented for fuzzification, implication, rule composition and defuzzification, leaving one key component, rule aggregation, relatively underrepresented. Current FIS aggregation operators are relatively simple and have remained more-or-less unchanged over the years. For many problems, these simple aggregation operators produce intuitive, useful and meaningful results. However, there exists a wide class of problems for which quality aggregation requires non-additivity and exploitation of interactions between rules. Herein, we show how the fuzzy integral, a parametric non-linear aggregation operator, can be used to fill this gap. Specifically, recent advancements in extensions of the fuzzy integral to unrestricted fuzzy sets, i.e., subnormal and non-convex, make this now possible. We explore the role of two extensions, the gFI and the NDFI, discuss when and where to apply these aggregations, and present efficient algorithms to approximate their solutions.


4/13/2016

Article: SPIE 2016: Multiple kernel based feature and decision level fusion of iECO individuals for explosive hazard detection in FLIR imagery

Abstract: Buried and above-ground explosive hazards are a serious threat to civilians and soldiers, and the automatic detection of such threats is highly desired. Many methods exist for explosive hazard detection, e.g., hand-held sensors, downward and forward looking vehicle mounted platforms, etc. In addition, multiple sensors are used to tackle this extreme problem, such as radar and infrared (IR) imagery. In this article, we explore the utility of feature and decision level fusion of learned features for forward looking explosive hazard detection in IR imagery. Specifically, we investigate different ways to fuse learned iECO features before and after multiple kernel (MK) support vector machine (SVM) based classification. Three MK strategies are explored: fixed rule, heuristic and optimization-based. Performance is assessed in the context of receiver operating characteristic (ROC) curves on data from a U.S. Army test site that contains multiple target and clutter types, burial depths and times of day. The results reveal two interesting things. First, the different MK strategies appear to indicate that the different iECO individuals are all more-or-less important and there is no single dominant feature. This is reinforcing, as our hypothesis was that iECO provides different ways to approach target detection. Second, we observe that while optimization-based MK is mathematically appealing, i.e., it connects the learning of the fusion to the underlying classification problem we are trying to solve, it appears to be highly susceptible to overfitting, and simpler approaches, e.g., fixed rule and heuristics, help us realize more generalizable iECO solutions.


4/13/2016

Article: SPIE 2016: Comparison of spatial frequency domain features for the detection of side attack explosive ballistics in synthetic aperture acoustics

Abstract: Explosive hazards in current and former conflict zones are a threat to both military and civilian personnel. As a result, much effort has been dedicated to identifying automated algorithms and systems to detect these threats. However, robust detection is complicated due to factors like the varied composition and anatomy of such hazards. In order to solve this challenge, a number of platforms (vehicle-based, handheld, etc.) and sensors (infrared, ground penetrating radar, acoustics, etc.) are being explored. In this article, we investigate the detection of side attack explosive ballistics via a vehicle-mounted acoustic sensor. In particular, we explore three acoustic features, one in the time domain and two on synthetic aperture acoustic (SAA) beamformed imagery. The idea is to exploit the varying acoustic frequency profile of a target due to its unique geometry and material composition with respect to different viewing angles. The first two features build their angle specific frequency information using a highly constrained subset of the signal data and the last feature builds its frequency profile using all available signal data for a given region of interest (centered on the candidate target location). Performance is assessed in the context of receiver operating characteristic (ROC) curves on cross-validation experiments for data collected at a U.S. Army test site on different days with multiple target types and clutter. Our preliminary results are encouraging and indicate that the top performing feature is the unrolled two dimensional discrete Fourier transform (DFT) of SAA beamformed imagery.


4/13/2016

Article: SPIE 2016: Curvelet filter based prescreener for explosive hazard detection in hand-held ground penetrating radar

Abstract: Explosive hazards, above and below ground, are a serious threat to civilians and soldiers. In an attempt to mitigate these threats, different forms of explosive hazard detection (EHD) exist, e.g., multi-sensor hand-held platforms, downward looking and forward looking vehicle mounted platforms, etc. Robust detection of these threats resides in the processing and fusion of different data from multiple sensing modalities, e.g., radar, infrared, electromagnetic induction (EMI), etc. Herein, we focus on a new energy-based prescreener for hand-held ground penetrating radar (GPR). First, we apply Curvelet filtering to the B-scan signal data using either Reverse-Reconstruction followed by Enhancement (RRE) or selectivity with respect to wedge information in the Curvelet transform. Next, we aggregate the result of a bank of matched filters and run a size contrast filter with the Bhattacharyya distance. Alarms are then combined using weighted mean shift clustering. Results are demonstrated in the context of receiver operating characteristic (ROC) curve performance on data from a U.S. Army test site that contains multiple target and clutter types, burial depths and times of day.


4/13/2016

Article: SPIE 2016: Background Adaptive Division Filtering for Hand-Held Ground Penetrating Radar

Abstract: The challenge in detecting explosive hazards is that there are multiple types of targets buried at different depths in a highly-cluttered environment. A wide array of target and clutter signatures exist, which makes detection algorithm design difficult. Such explosive hazards are typically deployed in past and present war zones and they pose a grave threat to the safety of civilians and soldiers alike. This paper focuses on a new image enhancement technique for hand-held ground penetrating radar (GPR). Advantages of the proposed technique are that it runs in real time and does not require the radar to remain at a constant distance from the ground. Furthermore, the algorithm produces B-Scan images that can be more easily read by a human operator or exploited by an automatic detection algorithm. Herein, we evaluate the performance of the proposed technique using data collected from a U.S. Army test site, which includes targets with varying amounts of metal content, placement depths, clutter and times of day. Receiver operating characteristic (ROC) curve-based results are presented for the detection of shallow, medium and deeply buried targets. Preliminary results are very encouraging and they demonstrate the usefulness of the proposed filtering technique.