FGV Digital Repository
Transfer learning between texture classification tasks using convolutional neural networks

View/Open
000370730601129.pdf (1.155Mb)
Date
2015
Author
Hafemann, Luiz G.
Oliveira, Luiz S.
Cavalin, Paulo R.
Sabourin, Robert
Abstract
Convolutional Neural Networks (CNNs) have set the state of the art in many computer vision tasks in recent years. Models of this type commonly have millions of parameters to train and therefore require large datasets. We investigate a method to transfer learning across different texture classification problems using CNNs, in order to bring the advantages of this type of architecture to problems with smaller datasets. We use a Convolutional Neural Network trained on a source dataset (with abundant data) to project the data of a target dataset (with limited data) onto another feature space, and then train a classifier on top of this new representation. Our experiments show that this technique can achieve good results on tasks with small datasets by leveraging knowledge learned on tasks with larger datasets. Testing the method on the Brodatz-32 dataset, we achieved an accuracy of 97.04%, superior to models trained with popular texture descriptors such as Local Binary Patterns and Gabor Filters, and 6 percentage points higher than a CNN trained directly on Brodatz-32. We also present a visual analysis of the projected dataset, showing that the data is projected onto a space where samples from the same class cluster together, suggesting that the features learned by the CNN on the source task are relevant for the target task.
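The projection-then-classify pipeline the abstract describes can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's method: random filters stand in for the CNN trained on the source dataset, global average pooling stands in for the learned representation, and a nearest-centroid classifier stands in for the classifier trained on the projected features. The synthetic "textures", filter shapes, and function names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(images, filters):
    """Project images through one conv layer + ReLU + global average pooling.
    Stands in for the fixed, source-trained CNN used as a feature extractor."""
    n, h, w = images.shape
    k = filters.shape[-1]
    feats = []
    for img in images:
        per_filter = []
        for f in filters:
            # valid-mode 2D correlation (naive loops, for clarity)
            out = np.zeros((h - k + 1, w - k + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(img[i:i + k, j:j + k] * f)
            # ReLU, then global average pooling to one scalar per filter
            per_filter.append(np.maximum(out, 0).mean())
        feats.append(per_filter)
    return np.array(feats)

# "Source-trained" filters (random here; the paper trains them on a larger texture set)
filters = rng.standard_normal((8, 3, 3))

# Tiny synthetic target task with limited data: smooth vs. noisy textures
smooth = np.full((10, 12, 12), 0.5)
noisy = rng.standard_normal((10, 12, 12))
X = np.concatenate([smooth, noisy])
y = np.array([0] * 10 + [1] * 10)

# Project the target data into the feature space, then fit a simple classifier
F = cnn_features(X, filters)
centroids = np.stack([F[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((F[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
```

The key design point, matching the abstract, is that only the small classifier at the end is fit on the target data; the feature extractor is frozen, so the target task never has to train millions of convolutional parameters.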
URI
http://hdl.handle.net/10438/23559
Collections
  • Documentos Indexados pela Web of Science [875]
Knowledge Areas
Technology
Subject
Neural networks (Computer science)
Artificial intelligence
Keyword
Convolutional neural networks (CNNs)

DSpace software copyright © 2002-2016  DuraSpace
Contact Us | Send Feedback
Theme by 
@mire NV