Institutional Digital Repository
Shreenivas Deshpande Library, IIT (BHU), Varanasi

BIAS-3D: Brain inspired attentional search model fashioned after what and where/how pathways for target search in 3D environment

dc.contributor.author: Kumari, Sweta
dc.contributor.author: Shobha Amala, V.Y.
dc.contributor.author: Nivethithan, M.
dc.contributor.author: Chakravarthy, V. Srinivasa
dc.date.accessioned: 2023-04-18T05:20:10Z
dc.date.available: 2023-04-18T05:20:10Z
dc.date.issued: 2022-10
dc.description: This paper is submitted by an author from IIT (BHU), Varanasi, India
dc.description.abstract: We propose a brain-inspired attentional search model for target search in a 3D environment, which has two separate channels: one for object classification, analogous to the "what" pathway in the human visual system, and the other for prediction of the camera's next location, analogous to the "where" pathway. To evaluate the proposed model, we generated 3D Cluttered Cube datasets, in which each cube has an image on one vertical face and clutter or background images on the other faces. The camera goes around each cube on a circular orbit and determines the identity of the image pasted on the face. The images pasted on the cube faces were drawn from the MNIST handwritten digit, QuickDraw, and RGB MNIST handwritten digit datasets. The attentional input of three concentric cropped windows, resembling the high-resolution central fovea and low-resolution periphery of the retina, flows through a Classifier Network and a Camera Motion Network. The Classifier Network classifies the current view into one of the target classes or the clutter class. The Camera Motion Network predicts the camera's next position on the orbit (varying the azimuthal angle θ). Here the camera performs one of three actions: move right, move left, or do not move. The Camera-Position Network adds the camera's current position (θ) into the higher feature levels of the Classifier Network and the Camera Motion Network. The Camera Motion Network is trained using Q-learning, where the reward is 1 if the Classifier Network gives the correct classification and 0 otherwise. The total loss is computed by adding the mean-squared temporal-difference loss and the cross-entropy loss; the model is then trained end-to-end by backpropagating the total loss using the Adam optimizer.
Results on two grayscale image datasets and one RGB image dataset show that the proposed model successfully discovers the desired search pattern to find the target face on the cube, and also classifies the target face accurately.
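The training objective described in the abstract (reward of 1 for a correct classification, total loss = mean-squared temporal-difference loss + cross-entropy loss) can be illustrated with a minimal NumPy sketch. The class count, action count, discount factor, and function names below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

NUM_CLASSES = 11   # assumption: 10 target classes + 1 clutter class
NUM_ACTIONS = 3    # move right, move left, or do not move
GAMMA = 0.9        # assumed discount factor for the TD target

def softmax(x):
    # Numerically stable softmax over the class dimension
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def total_loss(class_logits, labels, q_values, actions, next_q_values):
    """Cross-entropy (Classifier Network) plus mean-squared
    TD error (Q-learning, Camera Motion Network), as in the abstract."""
    n = len(labels)
    probs = softmax(class_logits)
    ce = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    # Reward is 1 iff the classifier's prediction is correct, else 0
    reward = (class_logits.argmax(axis=1) == labels).astype(float)
    # One-step TD target: r + gamma * max_a' Q(s', a')
    td_target = reward + GAMMA * next_q_values.max(axis=1)
    td_error = td_target - q_values[np.arange(n), actions]
    return ce + np.mean(td_error ** 2)

# Tiny usage example with random tensors standing in for network outputs
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, NUM_CLASSES))
labels = rng.integers(0, NUM_CLASSES, size=4)
q = rng.normal(size=(4, NUM_ACTIONS))
actions = rng.integers(0, NUM_ACTIONS, size=4)
next_q = rng.normal(size=(4, NUM_ACTIONS))
loss = total_loss(logits, labels, q, actions, next_q)
```

In the paper's end-to-end setup this scalar loss would be backpropagated through both networks with the Adam optimizer; the sketch only shows how the two loss terms combine.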
dc.description.sponsorship: Pavan Holla and Vigneswaran
dc.identifier.issn: 1662-5188
dc.identifier.uri: https://idr-sdlib.iitbhu.ac.in/handle/123456789/2059
dc.language.iso: en_US
dc.publisher: Frontiers Media S.A.
dc.relation.ispartofseries: Frontiers in Computational Neuroscience; Volume 16
dc.subject: Attention
dc.subject: Convolutional neural network
dc.subject: Flip-flop neurons
dc.subject: Human visual system
dc.subject: Memory
dc.subject: Search in 3D
dc.subject: What and where pathway
dc.title: BIAS-3D: Brain inspired attentional search model fashioned after what and where/how pathways for target search in 3D environment
dc.type: Article

Files

Original bundle

Name: fncom-16-1012559.pdf
Size: 5.72 MB
Format: Adobe Portable Document Format
Description: Article - Green Open Access

License bundle

Name: license.txt
Size: 1.71 KB
Format:
Description: Item-specific license agreed upon to submission