Pattern Recognition, Volume 71, November 2017
- Sue Han Lee, Chee Seng Chan, Simon Mayo, Paolo Remagnino: How deep learning extracts and learns leaf features for plant classification. 1-13
- Yushan Zheng, Zhiguo Jiang, Fengying Xie, Haopeng Zhang, Yibing Ma, Huaqiang Shi, Yu Zhao: Feature extraction from histopathological images based on nucleus-guided convolutional neural network for breast lesion classification. 14-25
- Brijnesh J. Jain: Consistency of mean partitions in consensus clustering. 26-35
- Hong Cheng, Yaqi Liu, Wenhao Fu, Yanli Ji, Lu Yang, Yang Zhao, Jie Yang: Gazing point dependent eye gaze estimation. 36-44
- Miin-Shen Yang, Yessica Nataliani: Robust-learning fuzzy c-means clustering algorithm with unknown number of clusters. 45-59
- Xiao Ke, Ming-Ke Zhou, Yuzhen Niu, Wenzhong Guo: Data equilibrium based automatic image annotation by fusing deep model and semantic propagation. 60-77
- Ritesh Sarkhel, Nibaran Das, Aritra Das, Mahantapas Kundu, Mita Nasipuri: A multi-scale deep quad tree based feature extraction method for the recognition of isolated handwritten characters of popular indic scripts. 78-93
- Zhibin Liao, Gustavo Carneiro: A deep convolutional neural network module that promotes competition of multiple-size filters. 94-105
- Lazaros T. Tsochatzidis, Konstantinos Zagoris, Nikolaos Arikidis, Anna Karahaliou, Lena Costaridou, Ioannis Pratikakis: Computer-aided diagnosis of mammographic masses based on a supervised content-based image retrieval approach. 106-117
- Guo-Sen Xie, Xu-Yao Zhang, Wenhan Yang, Mingliang Xu, Shuicheng Yan, Cheng-Lin Liu: LG-CNN: From local parts to global discrimination for fine-grained recognition. 118-131
- Massimiliano Patacchiola, Angelo Cangelosi: Head pose estimation in the wild using Convolutional Neural Networks and adaptive gradient methods. 132-143
- Tiberio Uricchio, Lamberto Ballan, Lorenzo Seidenari, Alberto Del Bimbo: Automatic image annotation via label transfer in the semantic space. 144-157
- Loris Nanni, Stefano Ghidoni, Sheryl Brahnam: Handcrafted vs. non-handcrafted features for computer vision classification. 158-172
- Sergio Muñoz-Romero, Vanessa Gómez-Verdejo, Emilio Parrado-Hernández: A novel framework for parsimonious multivariate analysis. 173-186
- Frank-Michael Schleif, Peter Tiño: Indefinite Core Vector Machine. 187-195
- Jianshu Zhang, Jun Du, Shiliang Zhang, Dan Liu, Yulong Hu, Jin-Shui Hu, Si Wei, Li-Rong Dai: Watch, attend and parse: An end-to-end neural network based approach to handwritten mathematical expression recognition. 196-206
- Wujie Zhou, Lu Yu, Yang Zhou, Weiwei Qiu, Ming-Wei Wu, Ting Luo: Blind quality estimator for 3D images based on binocular combination and extreme learning machine. 207-217
- Cairong Zhao, Xuekuan Wang, Wai Keung Wong, Wei-Shi Zheng, Jian Yang, Duoqian Miao: Multiple metric learning based on bar-shape descriptor for person re-identification. 218-234
- Shanmukhappa A. Angadi, Vishwanath C. Kagawade: A robust face recognition approach through symbolic modeling of Polar FFT features. 235-248
- Saadoon A. M. Al-Sumaidaee, Mohammed A. M. Abdullah, Raid Rafi Omar Al-Nima, Satnam Singh Dlay, Jonathon A. Chambers: Multi-gradient features and elongated quinary pattern encoding for image-based facial expression recognition. 249-263
- Imad Batioua, Rachid Benouini, Khalid Zenkouar, Azeddine Zahi, Hakim el Fadili: 3D image analysis by separable discrete orthogonal moments based on Krawtchouk and Tchebichef polynomials. 264-277
- Jihao Yin, Hongmei Zhu, Ding Yuan, Tianfan Xue: Sparse representation over discriminative dictionary for stereo matching. 278-289
- Jicong Fan, Tommy W. S. Chow: Matrix completion by least-square, low-rank, and sparse self-representations. 290-305
- Bo Tang, Haibo He: GIR-based ensemble sampling approaches for imbalanced learning. 306-319
- Hao Lu, Zhiguo Cao, Yang Xiao, Yanjun Zhu: Two-dimensional subspace alignment for convolutional activations adaptation. 320-336
- Chin-Chun Chang, Bo-Han Liao: Active learning based on minimization of the expected path-length of random walks on the learned manifold structure. 337-348
- Wanjun Zhang, Huiqi Li: Automated segmentation of overlapped nuclei using concave point detection and segment grouping. 349-360
- Xiaoke Ma, Penggang Sun, Guimin Qin: Nonnegative matrix factorization algorithms for link prediction in temporal networks using graph communicability. 361-374
- Liang Bai, Xueqi Cheng, Jiye Liang, Huawei Shen, Yike Guo: Fast density clustering strategies based on the k-means algorithm. 375-386
- Jinxia Zhang, Krista A. Ehinger, Haikun Wei, Kanjian Zhang, Jingyu Yang: Erratum to: A novel graph-based optimization framework for salient object detection [Pattern Recognition 64C (2017) 39-50]. 387-388
- Fabrice Dieudonné Atrevi, Damien Vivet, Florent Duculty, Bruno Emile: A very simple framework for 3D human poses estimation using a single 2D image: Comparison of geometric moments descriptors. 389-401
- Kaichiro Nishi, Jun Miura: Generation of human depth images with body part labels for complex human pose recognition. 402-413
- Xupeng Wang, Ferdous Ahmed Sohel, Mohammed Bennamoun, Yulan Guo, Hang Lei: Scale space clustering evolution for salient region detection on 3D deformable shapes. 414-427
- Suryansh Kumar, Yuchao Dai, Hongdong Li: Spatio-temporal union of subspaces for multi-body non-rigid structure-from-motion. 428-443
- Jianxin Wu, Xiang Bai, Marco Loog, Fabio Roli, Zhi-Hua Zhou: Editorial of the Special Issue on Multi-instance Learning in Pattern Recognition and Vision. 444-445
- Peng Tang, Xinggang Wang, Zilong Huang, Xiang Bai, Wenyu Liu: Deep patch learning for weakly supervised object classification and discovery. 446-459
- Dongkuan Xu, Jia Wu, Dewei Li, Yingjie Tian, Xingquan Zhu, Xindong Wu: SALE: Self-adaptive LSH encoding for multi-instance learning. 460-482