Dynamic Feature Selection with Fuzzy-Rough Sets
Supervisor: Professor Qiang Shen (email@example.com)
Feature selection (FS) is a data processing technique that aims to discover a minimal feature subset of a problem domain, while retaining the semantics and a suitably high accuracy in representing the original data. When analysing data with a very large number of features, it is difficult to identify and extract patterns or rules, owing to the high inter-dependency amongst individual features and the behaviour of combined features. Once noisy, irrelevant, redundant or misleading features are removed, subsequently employed techniques for tasks such as object recognition, data classification, and system monitoring can benefit significantly from FS.
A number of techniques have been developed that can efficiently identify important features, including correlation-based FS, probabilistic consistency-based FS, and methods based on rough or fuzzy-rough set theory. Various search algorithms have also emerged for the purpose of discovering good-quality feature subsets, several of which are modifications of nature-inspired metaheuristics. Existing work typically assumes that the data being processed (and the underlying knowledge associated with it) is static, with all features and instances present a priori. In reality, however, data sources possess dynamic characteristics: the data volume may grow in terms of both attributes and objects, and given information may become invalid or irrelevant over time. In order to maintain the precision and effectiveness of the extracted knowledge, it is necessary to develop adaptive strategies to handle such dynamic data.
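To make the rough-set family of methods concrete, the sketch below illustrates a greedy, QuickReduct-style selection loop in plain Python. It is a minimal illustration, not an implementation from any of the cited works: the crisp dependency measure here (the fraction of objects whose equivalence class under a feature subset maps to a single decision label) is a simplification of the fuzzy-rough dependency used in the literature, and the function names are invented for this example.

```python
def dependency(data, labels, subset):
    """Crisp rough-set dependency: fraction of objects whose projection
    onto `subset` is consistent with exactly one decision label."""
    groups = {}
    for row, label in zip(data, labels):
        key = tuple(row[i] for i in subset)
        groups.setdefault(key, set()).add(label)
    consistent = sum(
        sum(1 for row in data if tuple(row[i] for i in subset) == key)
        for key, labs in groups.items() if len(labs) == 1
    )
    return consistent / len(data)

def quickreduct(data, labels):
    """Greedy forward search: repeatedly add the feature that most
    increases dependency, until the full-set dependency is reached."""
    n_features = len(data[0])
    full = dependency(data, labels, list(range(n_features)))
    subset, best = [], 0.0
    while best < full:
        score, f = max(
            (dependency(data, labels, subset + [f]), f)
            for f in range(n_features) if f not in subset
        )
        if score <= best:
            break  # no single feature improves the subset further
        subset.append(f)
        best = score
    return subset

# Toy example: the label is fully determined by feature 1 alone.
data = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
labels = [0, 1, 0, 1]
print(quickreduct(data, labels))  # selects [1]
```

Fuzzy-rough extensions replace the crisp equivalence classes above with fuzzy similarity relations, so that real-valued features can be handled without discretisation.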
The main aim of this PhD project is to identify and formulate potential approaches that can actively refine feature subsets in a dynamic environment, building upon existing methodologies derived on the basis of fuzzy-rough set theory. Such approaches should be flexible in dealing with arbitrary combinations of the possible scenarios, including the addition and removal of features or instances, as well as modifications to the class labels of the objects. A starting point is to investigate the existing techniques proposed in-house for dynamic FS, which extend conventional, static methods. Initial ideas are expected to be implemented and evaluated first against simulated benchmark data sets. The work will then be developed further and applied to a large-scale, practical problem (e.g. serious crime investigation, student performance analysis, or weather forecasting).
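One of the dynamic scenarios above, the arrival of new labelled instances, can be sketched as follows. This is a hypothetical, self-contained illustration (the class and function names are invented): it merely detects when a previously selected subset stops being consistent with the growing data, which is the trigger point at which a genuinely dynamic FS method would repair the subset incrementally rather than re-running selection from scratch.

```python
def is_consistent(data, labels, subset):
    """True if no two objects agree on `subset` but disagree on the label."""
    seen = {}
    for row, label in zip(data, labels):
        key = tuple(row[i] for i in subset)
        if seen.setdefault(key, label) != label:
            return False
    return True

class DynamicSelector:
    """Toy monitor for a feature subset under instance addition."""

    def __init__(self, subset):
        self.subset = list(subset)
        self.data, self.labels = [], []

    def add_instance(self, row, label):
        self.data.append(row)
        self.labels.append(label)
        # A dynamic FS method would repair self.subset incrementally here;
        # this sketch only reports whether the current subset still holds.
        return is_consistent(self.data, self.labels, self.subset)

sel = DynamicSelector(subset=[1])
print(sel.add_instance((0, 0, 0), 0))  # True: subset still consistent
print(sel.add_instance((1, 0, 1), 1))  # False: feature 1 no longer suffices
```

Handling feature removal or label modification fits the same pattern: each change event invalidates part of the previously computed dependency information, and the research question is how much of it can be reused.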
 R. Diao and Q. Shen, “Feature selection with harmony search,” IEEE Trans. Syst., Man, Cybern. B, vol. 42, no. 6, pp. 1509–1523, 2012.
 M. Hall, “Correlation-based feature subset selection for machine learning,” Ph.D. dissertation, University of Waikato, Hamilton, New Zealand, 1998.
 R. Jensen and Q. Shen, Computational Intelligence and Feature Selection: Rough and Fuzzy Approaches. Wiley-IEEE Press, 2008.
 ——, “New approaches to fuzzy-rough feature selection,” IEEE Trans. Fuzzy Syst., vol. 17, no. 4, pp. 824–838, Aug. 2009.
M. Kabir, M. Shahjahan, and K. Murase, “A new hybrid ant colony optimization algorithm for feature selection,” Expert Syst. Appl., vol. 39, no. 3, pp. 3747–3763, 2012.
 T. Kietzmann, S. Lange, and M. Riedmiller, “Incremental GRLVQ: Learning relevant features for 3D object recognition,” Neurocomput., vol. 71, no. 13-15, pp. 2868–2879, 2008.
 H. Liu and H. Motoda, Computational Methods of Feature Selection. Chapman & Hall/CRC, 2008.
C. Shang and D. Barnes, “Fuzzy-rough feature selection aided support vector machines for Mars image classification,” Computer Vision and Image Understanding, vol. 117, no. 3, pp. 202–213, 2013.
 Q. Shen and R. Jensen, “Selecting informative features with fuzzy-rough sets and its application for complex systems monitoring,” Pattern Recognition, vol. 37, no. 7, pp. 1351–1363, 2004.
 E. P. Xing, M. I. Jordan, and R. M. Karp, “Feature selection for high-dimensional genomic microarray data,” in Proceedings of the Eighteenth International Conference on Machine Learning. Morgan Kaufmann, 2001, pp. 601–608.