M. ATTALLAH Bilal

Prof


Department

Department of Electronics

Research Interests

  • Image processing
  • Feature extraction and selection
  • Artificial intelligence

Contact Info

University of M'Sila, Algeria

On the Web:

  • Google Scholar: N/A
  • ResearchGate: N/A
  • ORCID: N/A
  • Scopus: N/A

Recent Publications

2025-11-25

WakeUp AI – Fatigue Detection System

Driver fatigue remains one of the most critical factors in preventable road deaths, yet conventional systems based on Convolutional Neural Networks (CNNs) often struggle to strike a vital balance between accuracy, speed, and practical usability under diverse conditions. This paper introduces WakeUp AI, an intelligent fatigue detection system explicitly designed to bridge that gap. The core framework leverages the advanced feature extraction capabilities of a Vision Transformer (ViT), combined with an optimized Support Vector Machine (SVM) classifier, resulting in an outstanding 99.82% test accuracy on the CEW dataset. This hybrid ViT-SVM approach achieves superior feature discrimination while maintaining computational efficiency suitable for edge deployment. For real-time use, WakeUp AI integrates MediaPipe FaceMesh with a streamlined ViT model, achieving inference latency of under 100 ms per frame. Crucially, a continuous temporal logic module constantly monitors the driver’s eye state, activating instant audio alerts only when fatigue patterns (such as prolonged eye closure duration) are robustly detected. Unlike conventional systems limited to simple, rigid thresholding, WakeUp AI intelligently adapts to diverse environments, making it exceptionally robust. By combining state-of-the-art deep learning with real-time responsiveness, WakeUp AI offers a scalable, high-performance solution for critical safety applications.
Citation

M. ATTALLAH Bilal, (2025-11-25), "WakeUp AI – Fatigue Detection System", [international] The 2nd International Workshop on Machine Learning and Deep Learning (WMLDL25), M'sila, Algeria
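
A minimal sketch of the ViT-feature plus SVM classification stage described in the abstract above, assuming a timm ViT backbone and scikit-learn; the backbone name, preprocessing, and data arrays are illustrative stand-ins, not the authors' exact configuration.

    # Sketch: ViT embeddings + SVM eye-state classifier. Assumes timm, torch and
    # scikit-learn; model choice and training data are hypothetical.
    import numpy as np
    import timm
    import torch
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
    vit.eval()  # num_classes=0 -> the model returns pooled features, not logits

    def embed(eye_crops: torch.Tensor) -> np.ndarray:
        """eye_crops: (N, 3, 224, 224), already normalized for the backbone."""
        with torch.no_grad():
            return vit(eye_crops).cpu().numpy()          # (N, 768) feature vectors

    # Hypothetical training data: X_* are eye-crop tensors, y_* are 0=open / 1=closed.
    # clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    # clf.fit(embed(X_train), y_train)
    # print("test accuracy:", clf.score(embed(X_test), y_test))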

2024-12-03

Automated brain tumor classification using fine-tuned EfficientNet model

Citation

M. ATTALLAH Bilal, (2024-12-03), "Automated brain tumor classification using fine-tuned EfficientNet model", [national] the first National Conference on Artificial Intelligence and its Applications (NCAIA2024), Mentouri Constantine

Enhanced COVID-19 detection in CT image using preprocessed CNN

Citation

M. ATTALLAH Bilal, (2024-12-03), "Enhanced COVID-19 detection in CT image using preprocessed CNN", [national] the first National Conference on Artificial Intelligence and its Applications (NCAIA2024) , Mentouri Constantine

2024-12-01

MRI-based brain tumor ensemble classification using two stage score level fusion and CNN models

This paper proposes a novel two-stage approach to improve brain tumor classification accuracy using the Br35H MRI Scan Dataset. The first stage employs advanced image enhancement algorithms, GFPGAN and Real-ESRGAN, to enhance the image dataset’s quality, sharpness, and resolution. Nine deep learning models are then trained and tested on the enhanced dataset, experimenting with five optimizers. In the second stage, ensemble learning algorithms like weighted sum, fuzzy rank, and majority vote are used to combine the scores from the trained models, enhancing prediction results. The top 2, 3, 4, and 5 classifiers are selected for ensemble learning at each rating level. The system’s performance is evaluated using accuracy, recall, precision, and F1-score. It achieves 100% accuracy when using the GFPGAN-enhanced dataset and combining the top 5 classifiers through ensemble learning, outperforming current methodologies in brain tumor classification. These compelling results underscore the potential of our approach in providing highly accurate and effective brain tumor classification.
Citation

M. ATTALLAH Bilal, (2024-12-01), "MRI-based brain tumor ensemble classification using two stage score level fusion and CNN models", [national] Egyptian Informatics Journal , ELSEVIER
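
A short sketch of the score-level fusion step described in the abstract above (weighted sum and majority vote over per-model softmax outputs); the number of models, weights, and array shapes are illustrative, not the paper's configuration.

    # Score-level fusion of per-model softmax outputs: weighted sum and majority vote.
    import numpy as np

    def weighted_sum_fusion(score_list, weights):
        """score_list: list of (N, C) softmax score arrays from individual CNNs."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        fused = sum(wi * s for wi, s in zip(w, score_list))
        return fused.argmax(axis=1)                      # fused class predictions

    def majority_vote_fusion(score_list):
        votes = np.stack([s.argmax(axis=1) for s in score_list], axis=1)   # (N, n_models)
        return np.apply_along_axis(lambda row: np.bincount(row).argmax(), 1, votes)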

2024-11-24

Multiple-Instance Palmprint Recognition System Using Deep Neural Networks

The increasing prevalence of crime, piracy, and security issues across various sectors underscores the necessity for dependable identity verification systems. Conventional security solutions, typically dependent on pre-existing data or token-based access, face challenges in differentiating between authorized individuals and impostors. This study investigates deep-learning-driven palmprint recognition systems as a superior option, utilizing the uniqueness, security, ease of use, and non-invasiveness of the palmprint modality. Our approach focuses on optimizing deep learning models for feature extraction, utilizing both single-instance and multiple-instance learning techniques. The system consists of two parts: initially, three convolutional neural network models—VGG16, VGG19, and MobileNetV2—are employed for feature extraction and classification. Preliminary results indicate the application of a feature fusion technique to combine features from both left and right palmprints, thereby establishing our multiple-instance system. Experimental evaluations on the IITD palmprint database illustrate the efficacy of this method, achieving a 98.50% accuracy using the multiple-instance strategy, highlighting its superiority compared to current techniques.
Index Terms—One instance, Multiple instances, Recognition system, Convolutional Neural Network, Palmprint.
Citation

M. ATTALLAH Bilal, (2024-11-24), "Multiple-Instance Palmprint Recognition System Using Deep Neural Networks", [national] 2nd National Conference on Electronics, Electrical Engineering Telecommunications, and Computer Vision (C3ETCV24), Mila, Algeria
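
A minimal sketch of the multiple-instance idea described above: fuse left- and right-palm CNN features before classification. Concatenation as the fusion rule and the classifier choice are assumptions for illustration, not the paper's exact method.

    # Multiple-instance fusion sketch: concatenate per-hand CNN features, then classify.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fuse_instances(left_feats: np.ndarray, right_feats: np.ndarray) -> np.ndarray:
        """left_feats, right_feats: (N, D) feature matrices from a CNN backbone."""
        return np.concatenate([left_feats, right_feats], axis=1)   # (N, 2D) fused vectors

    # X = fuse_instances(feats_left, feats_right)   # hypothetical per-subject features
    # clf = LogisticRegression(max_iter=1000).fit(X, labels)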

2024-11-17

A Robust Convolutional Neural Network for Iris Recognition System

Iris recognition, a biometric modality, has grown significantly in recent years. Traditional techniques relied on feature extraction methods and classical machine learning classifiers. However, deep learning models, particularly Convolutional Neural Networks (CNNs), have exhibited remarkable performance in learning discriminative features from iris images, making them robust to variations in imaging conditions and achieving state-of-the-art recognition accuracy. Nonetheless, injuries or occlusions affecting the iris can lead to increased error rates due to missing information. This study presents and evaluates CNN-based models for iris recognition. Various CNN architectures were employed for feature extraction, and the best-performing model was selected. Features from the left and right iris (one instance) were fused to create a multiple-instance system, enhancing recognition performance. The proposed approach was tested using the SDUMLA-HMT iris dataset. The results demonstrate that our system achieves an accuracy of 100%. Comparative analysis with existing methods indicates that our system outperforms the current state-of-the-art techniques for iris recognition.
Citation

M. ATTALLAH Bilal, (2024-11-17), "A Robust Convolutional Neural Network for Iris Recognition System", [national] The First National Conference for Applied Sciences and Engineering (NCASE-24) , ENSTA, Algérie

2024-09-01

Gabor, LBP, and BSIF features: Which is more appropriate for finger-knuckles-print recognition?

An accurate personal identification system helps control access to secure information and data. Biometric technology mainly focuses on the physiological or behavioural characteristics of the human body. This paper investigates a Finger Knuckle Print (FKP) biometric system based on the feature extraction technique used. The FKP authentication method includes all the essential processes, such as preprocessing, feature extraction and classification, and the features of the FKP application are investigated. Finally, this paper proposes selecting the best feature extraction method based on FKP recognition efficiency. The primary purpose of this paper is to apply Local Binary Patterns (LBP), Binarized Statistical Image Features (BSIF), and Gabor filters, and to determine which best improves the False Acceptance Rate (FAR) and Genuine Acceptance Rate (GAR). The selected feature extractor shows promising results in recognizing a person's finger-knuckle print.
Citation

M. ATTALLAH Bilal, (2024-09-01), "Gabor, LBP, and BSIF features: Which is more appropriate for finger-knuckles-print recognition?", [national] Przegląd Elektrotechniczny , Wydawnictwo SIGMA
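
A small sketch of the LBP descriptor stage compared above, using scikit-image; the parameters P and R are illustrative, not the paper's exact settings, and BSIF (which has no standard library implementation) is not shown.

    # Uniform LBP histogram for a grey-level finger-knuckle-print image (scikit-image).
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray_image: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
        codes = local_binary_pattern(gray_image, P, R, method="uniform")
        n_bins = P + 2                          # uniform patterns plus the non-uniform bin
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist                             # normalized descriptor for matching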

2024-08-14

A sequential combination of convolution neural network and machine learning for finger vein recognition system

Biometric systems play a crucial role in securely recognizing an individual’s identity based on physical and behavioral traits. Among these methods, finger vein recognition stands out due to its unique position beneath the skin, providing heightened security and individual distinctiveness that cannot be easily manipulated. In our study, we propose a robust biometric recognition system that combines a lightweight architecture with depth-wise separable convolutions and residual blocks, along with a machine-learning algorithm. This system employs two distinct learning strategies: single-instance and multi-instance. Using these strategies demonstrates the benefits of combining largely independent information. Initially, we address the problem of shading in finger vein images by applying the histogram equalization technique to enhance their quality. After that, we extract the features using a MobileNetV2 model that has been fine-tuned for this task. Finally, our system utilizes a support vector machine (SVM) to classify the finger vein features into their classes. Our experiments are conducted on two widely recognized datasets, SDUMLA and FV-USM, and the results are promising, showing excellent rank-one identification rates of 99.57% and 99.90%, respectively.
Citation

M. ATTALLAH Bilal, (2024-08-14), "A sequential combination of convolution neural network and machine learning for finger vein recognition system", [national] Signal, Image and Video Processing , SPRINGER
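
A hedged sketch of the pipeline described above (histogram equalization, CNN features, SVM). A stock Keras MobileNetV2 stands in for the paper's fine-tuned lightweight model; image size and kernel choice are illustrative.

    # Histogram equalization -> MobileNetV2 features -> SVM (illustrative stand-in).
    import cv2
    import numpy as np
    from tensorflow.keras.applications import MobileNetV2
    from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
    from sklearn.svm import SVC

    backbone = MobileNetV2(weights="imagenet", include_top=False, pooling="avg",
                           input_shape=(224, 224, 3))

    def finger_vein_features(gray_uint8: np.ndarray) -> np.ndarray:
        eq = cv2.equalizeHist(gray_uint8)                        # shading/contrast correction
        rgb = cv2.cvtColor(cv2.resize(eq, (224, 224)), cv2.COLOR_GRAY2RGB)
        x = preprocess_input(rgb.astype(np.float32)[None, ...])
        return backbone.predict(x, verbose=0)[0]                 # (1280,) feature vector

    # X = np.stack([finger_vein_features(img) for img in train_images])
    # clf = SVC(kernel="linear").fit(X, train_labels)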

2024-06-27

Two-stage deep learning classification for diabetic retinopathy using gradient weighted class activation mapping

The fundus images of patients with Diabetic Retinopathy (DR) often display numerous lesions scattered across the retina. Current methods typically utilize the entire image for network learning, which has limitations since DR abnormalities are usually localized. Training Convolutional Neural Networks (CNNs) on global images can be challenging due to excessive noise. Therefore, it's crucial to enhance the visibility of important regions and focus the recognition system on them to improve accuracy. This study investigates the task of classifying the severity of diabetic retinopathy in eye fundus images by employing appropriate preprocessing techniques to enhance image quality. We propose a novel two-branch attention-guided convolutional neural network (AG-CNN) with initial image preprocessing to address these issues. The AG-CNN initially establishes overall attention to the entire image with the global branch and then incorporates a local branch to compensate for any lost discriminative cues. We conduct extensive experiments using the APTOS 2019 DR dataset. Our baseline model, DenseNet-121, achieves average accuracy/AUC values of 0.9746/0.995, respectively. Upon integrating the local branch, the AG-CNN improves the average accuracy/AUC to 0.9848/0.998, representing a significant advancement in state-of-the-art performance within the field.
Citation

M. ATTALLAH Bilal, (2024-06-27), "Two-stage deep learning classification for diabetic retinopathy using gradient weighted class activation mapping", [national] Automatika: Journal for Control, Measurement, Electronics, Computing and Communications , Taylor & Francis
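
A minimal Grad-CAM sketch in PyTorch, showing the kind of heatmap an attention-guided pipeline can use to locate and crop discriminative retinal regions for a local branch; the model and target layer are placeholders, not the paper's AG-CNN.

    # Minimal Grad-CAM: class-conditional heatmap over the input image.
    import torch
    import torch.nn.functional as F

    def grad_cam(model, target_layer, image, class_idx):
        """image: (1, 3, H, W) tensor; returns a (1, 1, H, W) heatmap in [0, 1]."""
        feats, grads = {}, {}
        h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
        h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
        logits = model(image)
        model.zero_grad()
        logits[0, class_idx].backward()
        h1.remove(); h2.remove()
        weights = grads["a"].mean(dim=(2, 3), keepdim=True)            # channel importance
        cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))  # weighted activation map
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
        cam = cam.detach()
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)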

2023-12-03

Ensemble Learning VS Convolutional Neural Networks for Multiclass Brain Tumor Classification of MRI Images
Citation

M. ATTALLAH Bilal, (2023-12-03), "Ensemble Learning VS Convolutional Neural Networks for Multiclass Brain Tumor Classification of MRI Images", [international] ICSTEM, Istanbul

2023-11-19

Dispositif d'amélioration de la force de signal wifi

Citation

M. ATTALLAH Bilal, (2023-11-19), "Dispositif d'amélioration de la force de signal wifi", [national] M'sila University

Un bracelet électronique de surveillance pour les patient de maladie d'Alzheimer

Citation

M. ATTALLAH Bilal, (2023-11-19), "Un bracelet électronique de surveillance pour les patient de maladie d'Alzheimer", [national] M'sila University

2023-10-23

Deep learning for biometric and biomedical applications

Citation

M. ATTALLAH Bilal, (2023-10-23), "Deep learning for biometric and biomedical applications", [international] Deep learning for biometric and biomedical applications, Kuala Lumpur

2023-06-26

Dispositif de système de sécurité domestique

Citation

M. ATTALLAH Bilal, (2023-06-26), "Dispositif de système de sécurité domestique", [national] M'sila University

2023-06-18

Accès sécurisée : système d'authentification bimodale

Citation

M. ATTALLAH Bilal, (2023-06-18), "Accès sécurisée : système d'authentification bimodale", [national] M'sila University

2023-06-15

A multi-level fine-tuned deep learning based approach for binary classification of diabetic retinopathy

Diabetes mellitus is a leading cause of diabetic retinopathy (DR), which results in retinal lesions and vision impairment. Untreated DR can lead to blindness, highlighting the need for early diagnosis and treatment. Unfortunately, DR has no cure, and treatments only help to preserve vision. Traditional manual diagnosis of DR retina fundus images by ophthalmologists is time-consuming, costly, and prone to errors. Computer-aided diagnosis methods, such as deep learning, have emerged as popular methods for improving diagnosis and reducing errors. Over the past decade, Convolutional Neural Networks (CNNs) have been shown to perform very well in medical image analysis, including the processing of DR color fundus images, due to their high ability to extract local features. In this paper, we propose a multi-level fine-tuned deep learning based approach for the classification of diabetic retinopathy using three different pre-trained models: DenseNet121, MobileNetV2, and Xception. The results are reported as classification accuracy and loss metrics, and the performance is compared with state-of-the-art works. The results indicate that the proposed Xception network surpasses its peer models as well as state-of-the-art methods, achieving the highest accuracy of 97.95% in binary classification of DR images.
Citation

M. ATTALLAH Bilal, (2023-06-15), "A multi-level fine-tuned deep learning based approach for binary classification of diabetic retinopathy", [national] Chemometrics and Intelligent Laboratory Systems , sciencedirect
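
A sketch of staged ("multi-level") fine-tuning with a Keras Xception backbone: first train a new head on frozen features, then unfreeze the top of the backbone at a lower learning rate. Layer counts, learning rates, and epochs are illustrative, not the paper's schedule.

    # Two-stage fine-tuning of a pretrained Xception for binary DR classification.
    import tensorflow as tf

    base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                          pooling="avg", input_shape=(299, 299, 3))
    base.trainable = False

    inputs = tf.keras.Input(shape=(299, 299, 3))
    x = base(inputs, training=False)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)     # DR vs. no-DR
    model = tf.keras.Model(inputs, outputs)

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)         # stage 1: head only

    base.trainable = True
    for layer in base.layers[:-30]:                                 # stage 2: top block only
        layer.trainable = False
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)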

2022-11-26

Enhancement of diabetic retinopathy classification using attention guided convolution neural network

Damage to the retina from diabetes can lead to permanent vision loss due to a condition known as diabetic retinopathy. To avoid this, it is essential to diagnose the disease early. To address these problems, this paper proposes a two-branch Grad-CAM attention-guided convolution neural network (AG-CNN) with initial CLAHE image preprocessing. The AG-CNN first builds overall attention to the entire image with the global branch; then, to further concentrate attention on the localized problem areas, the system isolates the important regions (ROIs) of the global image and feeds them to a local branch. The experiments are based on the APTOS 2019 DR dataset. As a starting point, we provide a solid global baseline that, using DenseNet-121, produced average accuracy/AUC values of 0.9746/0.995, respectively. The average accuracy and AUC of the AG-CNN increase to 0.9848 and 0.998, respectively, after adding the local branch, which represents a new state of the art in the field.
Citation

M. ATTALLAH Bilal, (2022-11-26), "Enhancement of diabetic retinopathy classification using attention guided convolution neural network", [international] ICATEEE2022 , M'sila, Algeria

2022-07-24

Onduleur Monophasé Cascadé a Sept Niveaux

Realization of a seven-level cascaded single-phase inverter
Citation

M. ATTALLAH Bilal, (2022-07-24), "Onduleur Monophasé Cascadé a Sept Niveaux", [national] Université de M'sila

2022

Ear Recognition using Ensemble of Deep Features and Machine Learning Classifiers

Citation

M. ATTALLAH Bilal, (2022), "Ear Recognition using Ensemble of Deep Features and Machine Learning Classifiers", [international] ICCTA 2022 , Alexandria, Egypt

Transfer learning approach for Alzheimer's disease diagnosis using MRI images

Citation

M. ATTALLAH Bilal, rafik.zouaoui@univ-msila.dz, (2022), "Transfer learning approach for Alzheimer's disease diagnosis using MRI images", [international] ICATEEE2022, M'sila, Algeria

Brain tumor classification based deep transfer learning

Citation

M. ATTALLAH Bilal, oussama.bougeurra@univ-msail.dz, (2022), "Brain tumor classification based deep transfer learning", [international] ICATEEE2022, M'sila, Algeria

Transfer learning for diabetic retinopathy detection

Citation

M. ATTALLAH Bilal, (2022), "Transfer learning for diabetic retinopathy detection", [international] ICATEEE2022 , M'sila-Algeria

Finger vein based cnn for human recognition

Citation

M. ATTALLAH Bilal, (2022), "Finger vein based cnn for human recognition", [international] ICATEEE2022 , M'sila-Algeria

Deep learning based framework for automatic diabetic retinopathy detection

Citation

M. ATTALLAH Bilal, (2022), "Deep learning based framework for automatic diabetic retinopathy detection", [international] ICCTA 2022, Alexandria, Egypt

Ear recognition using ensemble of deep learning and machine learning

Citation

M. ATTALLAH Bilal, (2022), "Ear recognition using ensemble of deep learning and machine learning", [international] ICCTA2022, Alexandria, Egypt

Systems d’accès sécurisé fondé sur la reconnaissance bimodale

Citation

M. ATTALLAH Bilal, (2022), "Systems d’accès sécurisé fondé sur la reconnaissance bimodale", [national] M'sila

Onduleur monophasé cascadé à sept niveau

Citation

M. ATTALLAH Bilal, (2022), "Onduleur monophasé cascadé à sept niveau", [national] M'sila

Onduleur triphasé a partir d'un source monophasé

Citation

M. ATTALLAH Bilal, (2022), "Onduleur triphasé a partir d'un source monophasé", [national] M'sila

2021-12-17

An efficient prediction system for diabetes disease based on deep neural network

One of the main reasons for disability and premature mortality in the world is diabetes, which can cause different sorts of damage to organs such as the kidneys, eyes, and heart arteries. Deaths from diabetes are increasing each year, so the need to develop a system that can effectively diagnose diabetic patients becomes inevitable. In this work, an efficient medical decision system for diabetes prediction based on a Deep Neural Network (DNN) is presented. Such algorithms are state-of-the-art in computer vision, language processing, and image analysis, and when applied in healthcare for prediction and diagnosis purposes, they can produce highly accurate results. Moreover, they can be combined with medical knowledge to improve decision-making effectiveness, adaptability, and transparency. A performance comparison between the DNN algorithm and some well-known machine learning techniques as well as state-of-the-art methods is presented. The obtained results show that our proposed method based on the DNN technique provides promising performance, with an accuracy of 99.75% and an F1-score of 99.66%. This improvement can reduce time, effort, and labor in healthcare services while increasing the final decision accuracy.
Citation

M. ATTALLAH Bilal, (2021-12-17), "An efficient prediction system for diabetes disease based on deep neural network", [national] Complexity , Hindawi
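
A small feed-forward network sketch for tabular diabetes attributes of the kind described above; layer sizes, dropout, and training settings are illustrative, not the paper's architecture.

    # Small fully connected network for binary diabetes prediction on tabular features.
    import tensorflow as tf

    def build_dnn(n_features: int) -> tf.keras.Model:
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
            tf.keras.layers.Dropout(0.3),
            tf.keras.layers.Dense(32, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),   # presence/absence of diabetes
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    # model = build_dnn(X_train.shape[1])
    # model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.1)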

2021

Efficient heart disease diagnosis based on twin support vector machine

Heart disease is the leading cause of death in the world according to the World Health Organization (WHO). Researchers are increasingly interested in using machine learning techniques to help medical staff diagnose or detect heart disease early. In this paper, we propose an efficient medical decision support system based on twin support vector machines (Twin-SVM) for diagnosing heart disease with a binary target (i.e. presence or absence of disease). Unlike conventional support vector machines (SVM), which find a single optimal hyperplane separating the data points of the first class from those of the second class and can therefore produce inaccurate decisions, Twin-SVM finds two non-parallel hyperplanes such that each hyperplane is as close as possible to the samples of its own class and as far as possible from those of the other class. Our experiments are conducted on a real heart disease dataset, and several evaluation metrics have been considered to assess the performance of the proposed method. Furthermore, a comparison between the proposed method and several well-known classifiers as well as state-of-the-art methods has been performed. The obtained results show that our proposed method based on the Twin-SVM technique gives promising performance, better than the state of the art. This improvement can significantly reduce time, materials, and labor in healthcare services while increasing the final decision accuracy.
Citation

M. ATTALLAH Bilal, (2021), "Efficient heart disease diagnosis based on twin support vector machine", [national] DIAGNOSTYKA , PTDT
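
For reference, the textbook formulation of the linear Twin-SVM referred to above, written in LaTeX: two quadratic programs, one per class, where A and B stack the samples of the first and second class row-wise and e1, e2 are vectors of ones. This is the standard formulation, not a reproduction of the paper's derivation; a test point is then assigned to the class whose hyperplane it lies closer to.

    % Linear Twin-SVM: two non-parallel hyperplanes w_1^T x + b_1 = 0 and w_2^T x + b_2 = 0.
    \begin{align}
      \min_{w_1, b_1, \xi}\ & \tfrac{1}{2}\lVert A w_1 + e_1 b_1 \rVert^2 + c_1\, e_2^{\top}\xi
        \quad \text{s.t.}\quad -(B w_1 + e_2 b_1) + \xi \ge e_2,\ \xi \ge 0, \\
      \min_{w_2, b_2, \eta}\ & \tfrac{1}{2}\lVert B w_2 + e_2 b_2 \rVert^2 + c_2\, e_1^{\top}\eta
        \quad \text{s.t.}\quad (A w_2 + e_1 b_2) + \eta \ge e_1,\ \eta \ge 0.
    \end{align}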

2019

Heart disease prediction using neighborhood component analysis and support vector machines

In this paper, we propose a heart disease prediction system based on Neighborhood Component Analysis (NCA) and Support Vector Machine (SVM). In fact, NCA is used for selecting the most relevant parameters to make a good decision. This can seriously reduce the time, materials, and labor to get the final decision while increasing the prediction performance. Besides, the binary SVM is used for predicting the selected parameters in order to identify the presence/absence of heart disease. The conducted experiments on real heart disease dataset show that the proposed system achieved 85.43% of prediction accuracy. This performance is 1.99% higher than the accuracy obtained with the whole parameters. Also, the proposed system outperforms the state-of-the-art heart disease prediction.
Citation

M. ATTALLAH Bilal, Youssef Chahir, (2019), "Heart disease prediction using neighborhood component analysis and support vector machines", [international] The VIIIth International Workshop on Representation, analysis and recognition of shape and motion FroM Imaging data (RFMI 2019), HAL, Tunisia
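
A hedged scikit-learn sketch of the NCA + SVM pipeline described above. Note that scikit-learn's NeighborhoodComponentsAnalysis learns a supervised linear projection; it stands in here for the paper's NCA-based selection of the most relevant parameters, and the dimensions and kernel are illustrative.

    # NCA + SVM pipeline sketch (scikit-learn).
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import NeighborhoodComponentsAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    pipe = make_pipeline(
        StandardScaler(),
        NeighborhoodComponentsAnalysis(n_components=5, random_state=0),  # illustrative dimension
        SVC(kernel="rbf", C=1.0),
    )
    # X: clinical attribute matrix, y: presence/absence of heart disease (hypothetical)
    # print(cross_val_score(pipe, X, y, cv=5).mean())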

Feature extraction in palmprint recognition using spiral of moment skewness and kurtosis algorithm

Because of their high recognition rates, coding-based approaches that use multispectral palmprint images have become one of the most popular palmprint recognition methods. This paper describes a new multispectral palmprint recognition method that aims to further improve the performance of coding-based approaches by focusing on the local binary pattern (LBP) filters and spiral moments features. The final feature map is derived through a staged process of creating a composite of spiral and LBP features by fusing them together and passing the features through the minimum redundancy maximum relevance transformers. Using Hamming distances, the inter- and intra-similarities of the palmprint feature maps are determined. The experimental technique was evaluated using the available data on the IITD, MSPolyU and PolyU PPDB databases. The results indicate that the method achieved high levels of accuracy in the identification and verification modes. Furthermore, this method outperforms the existing advanced techniques.
Citation

M. ATTALLAH Bilal, Amina Serir, Youssef Chahir, (2019), "Feature extraction in palmprint recognition using spiral of moment skewness and kurtosis algorithm", [national] Pattern Analysis and Applications, Springer Nature

Finger-knuckle-print, Palmprint and Fingerprint for Multimodal Recognition System Based on mRMR features selection

A biometric identification system refers to the automatic recognition of individuals based on their characteristics. Biometric systems fall into two broad categories, namely unimodal and multimodal biometric systems. However, a reliable recognition system requires multiple resources [1]. Although multimodality improves the accuracy of the system, it occupies a large amount of memory and consumes more execution time, given the information collected from different resources. We therefore consider feature selection [2], that is, the selection of the best attributes, as a solution to enhance accuracy and reduce memory requirements. As a result, acceptable recognition performance with less risk of forgery and theft can be guaranteed. In this work, we propose an identification system using multimodal fusion of finger-knuckle-print, fingerprint and palmprint, adopting several feature-level techniques for multimodal fusion [3]. A feature-level fusion and selection scheme is proposed for the fusion of these three biological traits. The proposed system has been tested on the largest publicly available PolyU [4] and Delhi FKP [5] databases and has shown good performance.
Citation

M. ATTALLAH Bilal, Youssef Chahir, (2019), "Finger-knuckle-print, Palmprint and Fingerprint for Multimodal Recognition System Based on mRMR features selection", [international] IC2MAS19, Istanbul, Turkey

Fusing Palmprint, Finger-knuckle-print for Bi-modal Recognition System Based on LBP and BSIF

Multimodal biometrics is an evolving technology in the field of security. Biometric systems reduce the effort of remembering a password, and a multimodal biometric system uses two or more traits for efficient recognition. This paper presents a hand biometric system that fuses information from the palmprint and the finger knuckle. To this end, BSIF (Binarized Statistical Image Features) filters and LBP (Local Binary Patterns) coefficients are employed to describe the finger-knuckle-print and palmprint traits, and the feature vector is subsequently reduced by selecting the higher-ranked PCA (Principal Component Analysis) coefficients. To match the finger-knuckle or palmprint feature vector, the Extreme Learning Machine (ELM) is applied. According to the experimental outcomes, the proposed system not only has a significantly high recognition rate but also affords greater security compared to a single-biometric system.
Citation

M. ATTALLAH Bilal, (2019), "Fusing Palmprint, Finger-knuckle-print for Bi-modal Recognition System Based on LBP and BSIF", [international] International Conference on Image and Signal Processing and their Applications, Mostaganem, Algeria

Superpixel-based Zernike moments for palm-print recognition

In the contemporary period, significant attention has been focused on the prospects of innovative personal recognition methods based on palm-print biometrics. However, diminished local consistency and interference from noise are only some of the obstacles that hinder the most common methods of palm-print imaging, such as the grey texture and other low-level features of the palm. Nevertheless, high-level characteristic imaging for palm-print identification offers a potential solution to these obstacles. In this study, Zernike moments are used to acquire superpixel features from spiral-scanned images, which constitutes an innovative recognition method. Using the extreme learning machine, the inter- and intra-similarities of the palm-print feature maps are determined. Our experiments yield good results, with an accuracy rate of 97.52% and an equal error rate of 1.47% on the PolyU palm-print database.
Citation

M. ATTALLAH Bilal, (2019), "Superpixel-based Zernike moments for palm-print recognition", [international] International Journal of Electronic Security and Digital Forensics , INDERSCIENCE Publisher

Neighborhood Component Analysis and Support Vector Machines for Heart Disease Prediction

Nowadays, one of the main reasons for disability and premature mortality in the world is heart disease, which makes its prediction a critical challenge in the area of healthcare systems. In this paper, we propose a heart disease prediction system based on Neighborhood Component Analysis (NCA) and Support Vector Machine (SVM). In fact, NCA is used for selecting the most relevant parameters to make a good decision. This can seriously reduce the time, materials, and labor needed to reach the final decision while increasing prediction performance. Besides, the binary SVM is used for classifying the selected parameters in order to identify the presence/absence of heart disease. The experiments conducted on a real heart disease dataset show that the proposed system achieved 85.43% prediction accuracy. This performance is 1.99% higher than the accuracy obtained with the whole parameter set. Also, the proposed system outperforms the state-of-the-art heart disease prediction methods.
Citation

M. ATTALLAH Bilal, (2019), "Neighborhood Component Analysis and Support Vector Machines for Heart Disease Prediction", [international] Ingénierie des Systèmes d’Information (ISI) , International Information and Engineering Technology Association (IIETA)

2018

Improved Simultaneous Algebraic Reconstruction Technique Algorithm for Positron-Emission Tomography Image Reconstruction via Minimizing the Fast Total Variation

Background
There has been considerable progress in data-acquisition instrumentation and in the computational methods used to generate images from measured PET data. These computational methods were developed to solve the inverse problem, also known as the problem of "image reconstruction from projections".
Purpose
In this article, the authors propose a modified algorithm for the simultaneous algebraic reconstruction technique (SART) that improves the quality of the reconstructed image by incorporating total variation (TV) minimization into the iterative SART algorithm.
Methodology
The SART algorithm updates the image estimate by forward-projecting it into sinogram space. The difference between the estimated sinogram and the measured sinogram is then back-projected into the image domain, and this difference is subtracted from the current image to obtain a corrected image. Fast total variation (FTV) minimization is applied to the image obtained from the SART step, and the next SART step starts from the result of the preceding FTV update. The SART and FTV minimization steps are carried out iteratively, in alternation. Fifty iterations of the SART algorithm were used in each of the regularization-based methods. In addition to the conventional SART algorithm, spatial smoothing was used to improve image quality. All images were produced at 128 x 128 pixels.
Results
The proposed algorithm successfully preserved edges. Closer examination reveals differences between the reconstruction algorithms; for example, the SART algorithm and the proposed SART-FTV algorithm effectively preserved the hot edges of lesions, whereas artifacts and deviations were more likely to appear with the ART algorithm than with the other algorithms.
Conclusion
Compared with the standard SART algorithm, the proposed algorithm is better at removing background noise while preserving edges and suppressing existing artifacts. Quality metrics and visual inspection show a significant improvement in image quality compared with the traditional SART algorithm and the algebraic reconstruction technique (ART) algorithm.
Citation

M. ATTALLAH Bilal, Zoubeida Messali, Abderrahim Elmoataz, (2018), "Improved Simultaneous Algebraic Reconstruction Technique Algorithm for Positron-Emission Tomography Image Reconstruction via Minimizing the Fast Total Variation", [national] Journal of Medical Imaging and Radiation Sciences, ELSEVIER
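
A schematic of the alternating scheme described in the methodology above: a SART-style algebraic update followed by total-variation denoising of the intermediate image. The dense system matrix A, the relaxation factor, and scikit-image's Chambolle TV solver are stand-ins, not the paper's implementation.

    # Alternating SART-style update and TV denoising (schematic, dense system matrix).
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def sart_tv(A, b, shape, n_iter=50, relax=0.5, tv_weight=0.1):
        """A: (n_rays, n_pixels) projection matrix, b: measured sinogram, shape: image shape."""
        x = np.zeros(A.shape[1])
        ray_sums = A.sum(axis=1) + 1e-12        # SART normalization over pixels per ray
        pix_sums = A.sum(axis=0) + 1e-12        # SART normalization over rays per pixel
        for _ in range(n_iter):
            residual = (b - A @ x) / ray_sums                    # compare estimate to data
            x = x + relax * (A.T @ residual) / pix_sums          # back-project the correction
            x = denoise_tv_chambolle(x.reshape(shape), weight=tv_weight).ravel()  # TV step
        return x.reshape(shape)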

PET image reconstruction based on Bayesian inference regularised maximum likelihood expectation maximisation (MLEM) method

A better quality image can be achieved through iterative image reconstruction for positron emission tomography (PET), as it employs spatial regularisation that minimises the difference in image intensity among adjacent pixels. In this paper, the Bayesian inference rule is applied to devise a novel approach to the ill-posed inverse problem associated with the iterative maximum-likelihood expectation-maximisation (MLEM) algorithm, by proposing a regularised constraint probability model. The proposed algorithm is more robust than the standard MLEM at removing background noise while preserving edges and suppressing the out-of-focus slice blur that appears as an image artefact. The quality measurements and visual inspections show a significant improvement in image quality compared to conventional MLEM and state-of-the-art regularised algorithms.
Citation

M. ATTALLAH Bilal, Zoubeida Messali, (2018), "PET image reconstruction based on Bayesian inference regularised maximum likelihood expectation maximisation (MLEM) method", [national] International Journal of Biomedical Engineering and Technology, Inderscience
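
For context, the standard MLEM multiplicative update that the regularised method above builds on, in a few lines; A is the system matrix and b the measured counts, and the Bayesian prior term from the paper is not reproduced here.

    # Standard MLEM update (baseline only; no regularisation term).
    import numpy as np

    def mlem(A, b, n_iter=50):
        x = np.ones(A.shape[1])                       # strictly positive initial image
        sensitivity = A.T @ np.ones(A.shape[0]) + 1e-12
        for _ in range(n_iter):
            ratio = b / (A @ x + 1e-12)               # measured / expected counts per ray
            x = x * (A.T @ ratio) / sensitivity       # multiplicative EM update
        return x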

Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction

Palmprint recognition systems are dependent on feature extraction. A method of feature extraction using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. They are then evaluated using an extreme learning machine classifier before selecting a feature based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
Citation

M. ATTALLAH Bilal, (2018), "Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction", [national] Journal of Electronic Imaging , SPIE

Réalisation d’un système d’authentification automatique bimodal par l’iris et la paume de la main

Biometrics is a general technique aimed at automatically recognizing individuals from their physiological and/or behavioural characteristics.
In this thesis, we address several important aspects of bimodal biometrics. We set out to build an automatic authentication system based on the palm of the hand and the iris of the eye. The proposed system, composed of several modules, takes advantage of different techniques to reduce false acceptance and false rejection rates.
After first presenting a state of the art of the various unimodal biometric systems, we relate the existing databases, the selection of relevant, reduced-dimension features for identifying the iris or the palmprint, and their bimodal fusion.
In the first part, we present our contributions to palmprint analysis. Our approaches combine local texture descriptors (such as LBP, BSIF, LPQ and their fusion) with wavelet descriptors (Gabor, Haar) at different levels to extract the lines of the palm. In our study, we highlight the ability of these transforms to characterize oscillatory textures and curvatures in order to extract robust biometric signatures. We also propose a characterization method based on a spiral scan of statistical moments (mean, variance, skewness, kurtosis). In addition, we refine the characterization by fusing texture descriptors, followed by feature selection using the PCA and mRMR transforms.
In the second part, we address the analysis of the iris. We propose to combine Daugman's approach with a multi-scale (multi-resolution and multi-directional) analysis in order to better characterize the radial structures of the iris and to overcome problems inherent in acquisition (presence of lenses, partial closure of the eyelids, contrast changes) and matching.
In the last part, the fusion and selection of the biometric signatures from the two modalities (iris and palmprint), together with large-scale statistical analyses of the similarity scores produced by each modality, provide a thorough understanding of the scores produced by the biometric systems studied.
We used two classifiers (KNN, ELM), and their performance was compared with existing methods. The results show the effectiveness of the proposed algorithms in terms of accuracy.
Citation

M. ATTALLAH Bilal, (2018), "Réalisation d’un système d’authentification automatique bimodal par l’iris et la paume de la main", [national] USTHB

Geometrical Local Image Descriptors for Palmprint Recognition

A new palmprint recognition system is presented here. The method of extracting and evaluating textural feature vectors from palmprint images is tested on the PolyU database. Furthermore, this method is compared against other approaches described in the literature that are founded on binary pattern descriptors combined with spiral-feature extraction. This novel system of palmprint recognition was evaluated for its collision test performance, precision, recall, F-score and accuracy. The results indicate the method is sound and comparable to others already in use.
Citation

M. ATTALLAH Bilal, Youssef Chahir, Amina Serir, (2018), "Geometrical Local Image Descriptors for Palmprint Recognition", [international] ICISP 2018, Cherbourg, France
