Active Learning
This toolbox facilitates the application of active learning to multimedia data. Active learning is a machine learning variant that aims to overcome the cost of gathering a training corpus, which stems from the need for manual annotation. To minimize the labeling effort, active learning trains an initial classifier on a very small set of labeled examples and then expands the training set by selectively sampling new examples from a much larger set of unlabeled examples (also known as the pool of candidates). These examples are selected based on their informativeness, i.e. how much they are expected to improve the classifier's performance. They are found in the uncertainty areas of the classifier and, in the typical case, are annotated upon request by an errorless oracle.
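As an illustration of the loop described above (not the toolbox's actual code), here is a minimal MATLAB sketch of pool-based active learning with uncertainty sampling. It assumes LIBSVM's MATLAB interface is on the path; oracle_label is a hypothetical stand-in for the human annotator.

    % Minimal sketch of pool-based active learning with uncertainty
    % sampling (illustrative only, not the toolbox's actual code).
    % X_lab/y_lab is the small labeled seed set, X_pool the unlabeled pool.
    nIter = 10;   % number of active learning iterations
    batch = 50;   % examples queried per iteration
    for it = 1:nIter
        % Train a probabilistic SVM on the current labeled set (LIBSVM)
        model = svmtrain(y_lab, X_lab, '-t 0 -b 1');
        % Score the pool (the label argument is a placeholder here);
        % probability columns follow the order of model.Label
        [~, ~, prob] = svmpredict(zeros(size(X_pool, 1), 1), X_pool, model, '-b 1');
        % Informativeness: proximity to the decision boundary (p close to 0.5)
        uncertainty = 1 - 2 * abs(prob(:, 1) - 0.5);
        [~, idx] = sort(uncertainty, 'descend');
        query = idx(1:batch);
        % The oracle annotates the queried examples on request
        % (oracle_label is a hypothetical stand-in for the annotator)
        y_new = oracle_label(X_pool(query, :));
        % Move the newly labeled examples from the pool to the training set
        X_lab = [X_lab; X_pool(query, :)];
        y_lab = [y_lab; y_new];
        X_pool(query, :) = [];
    end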
In the case of image classification, Flickr offers an abundant set of user-tagged images which, if used as the pool of candidates, can automate the active learning process. In this direction, we propose a method, SALIC (see the DOI below for details), that uses tagged images as the pool of candidates and selects the images that are both maximally informative and accurately predicted by their tags. This toolbox implements SALIC, in addition to the typical active learning method and a tag-based oracle.
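To make the selection idea concrete, the fragment below extends the previous sketch. tagScore is an assumed per-image confidence in [0, 1] that the image's tags match the target concept, and the multiplicative combination is an illustrative assumption; the actual criterion is defined in the paper (DOI below).

    % Conceptual sketch of SALIC-style selection, reusing 'prob' and
    % 'batch' from the sketch above. The combination below is an
    % assumption for illustration, not the paper's exact criterion.
    uncertainty = 1 - 2 * abs(prob(:, 1) - 0.5);   % informativeness
    combined = uncertainty .* tagScore;            % assumed combination
    [~, idx] = sort(combined, 'descend');
    selected = idx(1:batch);
    % No human annotator is needed: the labels of the selected images
    % are inferred from their tags (the tag-based oracle).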
Usage:
- Clone the project from GitHub [link].
- Install the requirements (details are included in the README file, also copied below)
- Download the datasets, modify Wrapper.m based on the comments in it (paths to datasets, etc.), and run Wrapper.m. This will execute SALIC by default (a hypothetical sketch of these edits follows this list)
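The variable names below are hypothetical; the real ones are documented in the comments inside Wrapper.m. The edits typically amount to pointing the script at your local copies of the datasets:

    % Hypothetical example of the path edits Wrapper.m asks for; the
    % real variable names are given in the file's own comments.
    dataset_path = '/data/mirflickr/';   % root folder of the pool dataset
    feature_path = '/data/features/';    % where computed features are saved
    % Then run the script from the MATLAB prompt:
    Wrapper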
Output:
- The toolbox will compute visual and textual features. The paths and files where the features will be saved are defined in Wrapper.m
- The average precision variable (AP) will be returned by Wrapper.m. It is a matrix of size #concepts x #iterations containing the average precision of the classifiers for each concept and iteration
- A plot of the mAP values over the iterations (a snippet to reproduce it from AP follows this list)
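For reference, the plotted values can be recomputed from the returned AP matrix, e.g.:

    % AP is a #concepts x #iterations matrix returned by Wrapper.m;
    % averaging over concepts (rows) gives the mAP of each iteration.
    mAP = mean(AP, 1);
    plot(1:numel(mAP), mAP, '-o');
    xlabel('Active learning iteration');
    ylabel('mAP');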
README:
- Clone the repository
- Download and compile LIBSVM for your architecture: https://www.csie.ntu.edu.tw/~cjlin/libsvm/
- Download and compile the ConvNet Feature Computation Package from http://www.robots.ox.ac.uk/~vgg/software/deep_eval/
- Change the paths to the dataset folders in Wrapper.m, create the required files (img_Files.mat and tag_files.mat for each dataset; a hedged sketch follows this list) and run Wrapper.m
- Only Linux is supported (the ConvNet Feature Computation Package is not compatible with Windows). If a different CNN feature extraction library that runs on Windows is used, the code should run on Windows as well (not tested)
- For MIRFLICKR (1M images as the pool dataset), 64 GB of RAM is the minimum
- The code was tested using MATLAB 2012a
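The internal format of img_Files.mat and tag_files.mat is not specified here, so the variable names and structure below are assumptions; check the comments in Wrapper.m for the actual format. A minimal sketch of creating such files:

    % Assumed structure: one cell entry per image, aligned across the
    % two files (variable names are guesses; verify against Wrapper.m).
    img_files = {'im0001.jpg'; 'im0002.jpg'};        % image file names
    tag_files = {{'dog', 'pet'}; {'beach', 'sea'}};  % user tags per image
    save('img_Files.mat', 'img_files');
    save('tag_files.mat', 'tag_files');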
DOI: 10.1109/TMM.2016.2565440
License