Indexed in
  • Open J Gate
  • Genamics JournalSeek
  • Academic Keys
  • ResearchBible
  • Cosmos IF
  • Access to Global Online Research in Agriculture (AGORA)
  • Electronic Journals Library
  • RefSeek
  • Directory of Research Journal Indexing (DRJI)
  • Hamdard University
  • EBSCO A-Z
  • OCLC- WorldCat
  • Scholarsteer
  • SWB online catalogue
  • Virtual Library of Biology (vifabio)
  • Publons
  • Geneva Foundation for Medical Education and Research
  • Euro Pub
  • Google Scholar

Abstract

Explainable Convolutional Neural Network Based Tree Species Classification Using Multispectral Images from an Unmanned Aerial Vehicle

Ling-Wei Chen, Pin-Hui Lee, Yueh-Min Huang*

We seek to address labor shortages, in particular the aging workforce of rural areas, and thus facilitate agricultural management. The movement and operation of agricultural equipment in Taiwan is complicated by the fact that many commercial crops there are planted on hillsides. For mixed crops in such sloped farming areas, the identification of tree species aids agricultural management and reduces the labor needed for farming operations. General optical images collected by visible-light cameras are sufficient for recording but yield suboptimal results in tree species identification. Using a multispectral camera makes it possible to identify plants based on their spectral responses. We present a method for tree species classification using UAV visible-light and multispectral imagery. We leverage the differences in spectral reflectance values between tree species and use near-infrared band images to improve the model's classification performance. CNN-based deep neural models are widely used and yield high accuracies, but 100% correct results are difficult to achieve, and model complexity generally increases with performance. This leads to uncertainty about the system's final decisions. Interpretable AI extracts key information and interprets it to yield a better understanding of the model's conclusions or actions. We use visualization (four pixel-level attribution methods and one region-level attribution method) to interpret the model post hoc. Among the pixel-level attribution methods, Fuzzy IG best represents texture features, and region-level attribution represents leaf regions more effectively than pixel-level attribution, which aids human understanding.
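The pixel-level attribution family the abstract refers to (of which Integrated Gradients is the standard representative, and Fuzzy IG a variant) scores each input pixel by integrating the model's gradient along a path from a baseline image to the actual input. A minimal sketch of plain Integrated Gradients follows; it is not the authors' implementation. The toy `score` function stands in for a CNN class logit, and the analytic `grad` stands in for autodiff, both assumptions made here so the example stays self-contained.

```python
def score(x):
    # Toy differentiable "model": sum of squares stands in for a CNN class logit.
    return sum(v * v for v in x)

def grad(x):
    # Analytic gradient of score; a real CNN would obtain this via autodiff.
    return [2.0 * v for v in x]

def integrated_gradients(x, baseline, grad_fn, steps=200):
    """Midpoint Riemann-sum approximation of the IG path integral:
    IG_i = (x_i - b_i) * integral_0^1 dF/dx_i (b + a*(x - b)) da."""
    n = len(x)
    total = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_fn(point)
        for i in range(n):
            total[i] += g[i]
    return [(xi - b) * t / steps for xi, b, t in zip(x, baseline, total)]

# Three "pixels" and an all-zero baseline (a common choice for images).
x = [0.2, -0.5, 1.0]
baseline = [0.0, 0.0, 0.0]
attr = integrated_gradients(x, baseline, grad)

# Completeness axiom: attributions sum to score(x) - score(baseline).
assert abs(sum(attr) - (score(x) - score(baseline))) < 1e-6
```

Per-pixel attributions of this kind are what the abstract's visualizations render as heatmaps; region-level methods instead aggregate such scores (or perturb the model) over superpixels or segments.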

Disclaimer: This abstract was translated using artificial intelligence tools and has not yet been reviewed or verified.