M. DEBBI Hichem

Professor

Department

Informatics Department

Research Interests

  • Probabilistic Model Checking
  • Debugging and Causal Analysis of Safety-Critical Systems
  • Verification and Explanation for Machine Learning
  • Formal Analysis of IoT, Big Data and Cloud Computing Applications
  • Game Theory and Stochastic Games
  • Complex Event Processing
  • Deep Learning and Computer Vision
  • Explainable AI

Contact Info

University of M'Sila, Algeria

On the Web:

  • Google Scholar: N/A
  • ResearchGate: N/A
  • ORCID: N/A
  • Scopus: N/A

Recent Publications

2024-11-20

Causal Explanation of Graph Neural Networks

Graph Neural Networks (GNNs) are currently used in many real-world applications. With this notable spread, the development of sophisticated techniques for explaining their decisions becomes highly necessary. Although many works have been proposed with the aim of explaining their predictions along different aspects, such as nodes, edges, and features, they all tend to generate the explanations as subgraphs. In this paper, we show that relying only on explanatory subgraphs is not a sufficient explanation tool, especially since these subgraphs can range from small graphs to untraceable ones within the same model. In this regard, we propose a causal explanation framework based on the rigorous structural model of causality. We show that our framework does not compete with existing explanation frameworks for GNNs, but rather acts as a complementary approach.
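To illustrate the kind of counterfactual reasoning the abstract refers to, here is a minimal sketch (not the paper's framework): a toy one-layer GCN-style model whose edges are scored by removing them one at a time and measuring how much a target node's prediction changes. All names and numbers below are illustrative assumptions.

```python
# Hypothetical sketch: scoring edges of a tiny GNN by counterfactual removal.
# The model is a one-layer GCN-style propagation with random weights,
# NOT the framework from the paper; shapes and values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def gcn_forward(adj, feats, weights):
    """One round of mean-neighbourhood aggregation followed by a linear map."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    agg = (adj @ feats) / deg          # aggregate neighbour features
    return agg @ weights               # project to class scores

n_nodes, n_feats, n_classes = 5, 4, 3
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(n_nodes, n_feats))
weights = rng.normal(size=(n_feats, n_classes))

target_node = 0
base_scores = gcn_forward(adj, feats, weights)[target_node]

# Counterfactual importance: how much does the target node's prediction
# change when a single edge is removed from the graph?
for i, j in zip(*np.triu_indices(n_nodes, k=1)):
    if adj[i, j] == 0:
        continue
    adj_cf = adj.copy()
    adj_cf[i, j] = adj_cf[j, i] = 0     # remove edge (i, j) in both directions
    cf_scores = gcn_forward(adj_cf, feats, weights)[target_node]
    print(f"edge ({i},{j}): |delta score| = {np.abs(base_scores - cf_scores).sum():.4f}")
```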
Citation

M. DEBBI Hichem, (2024-11-20), "Causal Explanation of Graph Neural Networks", [international] IDEAL, Valencia, Spain

2023-12-01

Explaining Query Answers in Probabilistic Databases

Probabilistic databases have emerged as an extension of relational databases that can handle uncertain data under possible-worlds semantics. Although the problems of creating effective means of probabilistic data representation as well as probabilistic query evaluation have been widely addressed, little attention has been given to query result explanation. While query answer explanation in relational databases tends to answer the question "Why is this tuple in the query result?", in probabilistic databases we should ask an additional question: "Why does this tuple have such a probability?" Due to the huge number of possible worlds of a probabilistic database, query explanation in probabilistic databases is a challenging task. In this paper, we propose a causal explanation technique for conjunctive queries in probabilistic databases. Based on the notions of causality, responsibility and blame, we are able to address explanations for tuple and attribute uncertainties in a complementary way. Through an experiment on the real IMDB dataset, we show that this framework is helpful for explaining the results of complex queries. Compared to existing explanation methods, our method can also be considered an aided-diagnosis method through computing the blame, which helps to understand the impact of uncertain attributes.
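As a toy illustration of possible-worlds semantics and of measuring a tuple's influence on a query answer, here is a brute-force sketch; the relations, query and "influence" score are assumptions for illustration, not the causality/responsibility/blame machinery of the paper.

```python
# Illustrative sketch: a tuple-independent probabilistic database under
# possible-worlds semantics, with a crude influence score per tuple:
# P(query | tuple in) - P(query | tuple out).
from itertools import product

# Hypothetical relations: Movie(m) and Plays(actor, m), each tuple with a probability.
tuples = {
    "Movie(Inception)":           0.9,
    "Plays(DiCaprio, Inception)": 0.7,
    "Plays(Hardy, Inception)":    0.4,
}
names = list(tuples)

def query_holds(world):
    """Boolean conjunctive query: the movie exists and some actor plays in it."""
    return world["Movie(Inception)"] and (
        world["Plays(DiCaprio, Inception)"] or world["Plays(Hardy, Inception)"]
    )

def answer_probability(fixed=None):
    """Sum the probabilities of all possible worlds in which the query holds."""
    fixed = fixed or {}
    free = [n for n in names if n not in fixed]
    total = 0.0
    for bits in product([True, False], repeat=len(free)):
        world = dict(fixed)
        world.update(zip(free, bits))
        prob = 1.0
        for name in free:
            prob *= tuples[name] if world[name] else 1 - tuples[name]
        if query_holds(world):
            total += prob
    return total

print(f"P(query) = {answer_probability():.4f}")
for name in names:
    delta = answer_probability({name: True}) - answer_probability({name: False})
    print(f"  influence of {name}: {delta:.4f}")
```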
Citation

M. DEBBI Hichem, (2023-12-01), "Explaining Query Answers in Probabilistic Databases", [national] International Journal of Interactive Multimedia and Artificial Intelligence, IJIMAI

2023-09-21

An Efficient Hierarchical LSTM-based Framework for Intrusion Detection in Internet of Things (IoT) Systems

The rapid expansion of the Internet of Things (IoT) technology has enabled the interconnection of countless devices and systems worldwide. However, due to its lack of adequate cyber security measures, it has become a prime target for malicious hackers. To address this critical issue, a fully secure IoT network must be established, allowing for a safe and efficient deployment of this technology. This paper presents a sophisticated two-level deep learning-based intrusion detection model that uses long short-term memory (LSTM), incorporating binary and multi-level classification techniques together with time-based filtering and aggregation processes, all designed to optimize performance. The proposed model was evaluated using the UNSW-NB15 dataset, achieving highly satisfactory results, including reduced false alarm rates (false positives), and high accuracy and precision.
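A rough sketch of what a two-level LSTM pipeline of this kind could look like; the layer sizes, window length and feature count are assumptions for illustration, not the paper's configuration.

```python
# Hypothetical two-level LSTM intrusion-detection pipeline, assuming flow
# records grouped into fixed-length time windows.
import numpy as np
import tensorflow as tf

TIMESTEPS, FEATURES, N_ATTACK_CLASSES = 10, 42, 9

# Level 1: binary detector (normal vs. attack).
binary_model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
binary_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Level 2: multi-class classifier, applied only to windows flagged as attacks.
multi_model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_ATTACK_CLASSES, activation="softmax"),
])
multi_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

def classify(windows, threshold=0.5):
    """Run the hierarchy: level 1 filters, level 2 labels the suspected attacks."""
    p_attack = binary_model.predict(windows, verbose=0).ravel()
    labels = np.full(len(windows), -1)          # -1 means "normal traffic"
    suspects = p_attack >= threshold
    if suspects.any():
        labels[suspects] = multi_model.predict(windows[suspects], verbose=0).argmax(axis=1)
    return labels

# Example with random data standing in for preprocessed UNSW-NB15 windows.
print(classify(np.random.rand(4, TIMESTEPS, FEATURES)))
```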
Citation

M. DEBBI Hichem, (2023-09-21), "An Efficient Hierarchical LSTM-based Framework for Intrusion Detection in Internet of Things (IoT) Systems", [international] SOFTCOM, Croatia

2023-06-19

CauSim: A Causal Learning Framework for Fine-grained Image Similarity

Learning image similarity is useful in many computer vision applications. In fine-grained visual classification (FGVC), learning image similarity is more challenging due to the subtle inter-class differences. This paper proposes CauSim: a framework for deep learning of image similarity based on causality. CauSim applies counterfactual reasoning on Convolutional Neural Networks (CNNs) to identify significant filters responding to important regions, then it measures the similarity distance based on the counterfactual information learned with respect to each filter. We have verified the effectiveness of the method on the ImageNet dataset, in addition to four fine-grained datasets. Moreover, comprehensive experiments conducted on fine-grained datasets showed that CauSim can enhance the accuracy of existing FGVC architectures. The results can be reproduced using the code available in the GitHub repository https://github.com/HichemDebbi/CauSim.
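The counterfactual-filter idea can be sketched as follows on a minimal toy model; this is not the released CauSim code (see the GitHub repository above for the actual implementation), and the network, layer sizes and distance are illustrative assumptions.

```python
# Minimal sketch of counterfactual filter importance: zero one convolutional
# filter's activation map and record how much the predicted class score drops,
# then compare two images by their importance profiles.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyCNN(nn.Module):
    def __init__(self, n_filters=8, n_classes=5):
        super().__init__()
        self.conv = nn.Conv2d(3, n_filters, kernel_size=3, padding=1)
        self.head = nn.Linear(n_filters, n_classes)

    def forward(self, x, drop_filter=None):
        a = torch.relu(self.conv(x))            # (B, F, H, W) activation maps
        if drop_filter is not None:
            a = a.clone()
            a[:, drop_filter] = 0.0             # counterfactual: silence one filter
        pooled = a.mean(dim=(2, 3))             # global average pooling
        return self.head(pooled)

def filter_importance(model, image):
    """Importance of each filter = drop in the top-class score when it is removed."""
    with torch.no_grad():
        base = model(image)
        cls = base.argmax(dim=1)
        scores = []
        for f in range(model.conv.out_channels):
            cf = model(image, drop_filter=f)
            scores.append((base - cf).gather(1, cls.unsqueeze(1)).item())
    return torch.tensor(scores)

model = TinyCNN()
img_a, img_b = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
# A similarity distance based on the two images' counterfactual importance profiles.
dist = torch.norm(filter_importance(model, img_a) - filter_importance(model, img_b)).item()
print(f"counterfactual similarity distance: {dist:.4f}")
```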
Citation

M. DEBBI Hichem, (2023-06-19), "CauSim: A Causal Learning Framework for Fine-grained Image Similarity", [international] IWAN 2023 , Portugal

2023

Constraint-based debugging in probabilistic model checking

A counterexample in model checking is an error trace that represents a valuable tool for debugging. In Probabilistic Model Checking (PMC), counterexample generation has a quantitative aspect. A probabilistic counterexample is a set of diagnostic paths in which a path formula holds, and whose probability mass violates the probability threshold. Compared to conventional model checking, debugging and analyzing counterexamples in PMC is a very complex task. In this paper, we propose a debugging method for Markov models described in the probabilistic model checker PRISM by analyzing probabilistic counterexamples. The probabilistic counterexample generated is subjected to a set of assertions, which are employed for detecting incorrect behavior of the model, and thus locating the erroneous transitions contributing to the error. Our method has been implemented and tested on many PRISM models of different case studies. The method shows promising results in terms of execution time as well as its efficiency in locating the erroneous transitions.
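To make the notion of a probabilistic counterexample concrete, here is a small self-contained sketch on a toy DTMC: collect the most probable paths reaching a bad state until their mass exceeds the PCTL bound, then rank transitions by the mass they carry. This is illustrative only; the paper works on PRISM models and uses assertions rather than this simple ranking.

```python
# Toy probabilistic counterexample for P<=p [F bad] on a hand-written DTMC.
import heapq
from collections import defaultdict

# Toy DTMC: state -> list of (successor, probability). States and numbers are made up.
dtmc = {
    "init": [("try", 1.0)],
    "try":  [("ok", 0.9), ("fail", 0.1)],
    "fail": [("try", 0.5), ("bad", 0.5)],
    "ok":   [("ok", 1.0)],
    "bad":  [("bad", 1.0)],
}
BAD, THRESHOLD = "bad", 0.04     # property: P<=0.04 [F bad]

def counterexample(start="init", max_paths=1000):
    """Best-first search for the most probable paths reaching BAD."""
    heap = [(-1.0, (start,))]            # max-heap on path probability
    paths, mass = [], 0.0
    while heap and mass <= THRESHOLD and len(paths) < max_paths:
        neg_p, path = heapq.heappop(heap)
        state, p = path[-1], -neg_p
        if state == BAD:
            paths.append((path, p))
            mass += p
            continue
        if len(path) > 20:                # cut off long looping paths in this toy example
            continue
        for succ, q in dtmc[state]:
            heapq.heappush(heap, (-(p * q), path + (succ,)))
    return paths, mass

paths, mass = counterexample()
print(f"counterexample mass = {mass:.4f} (threshold {THRESHOLD})")

# Rank transitions by the probability mass of the diagnostic paths they appear on.
contribution = defaultdict(float)
for path, p in paths:
    for a, b in zip(path, path[1:]):
        contribution[(a, b)] += p
for (a, b), c in sorted(contribution.items(), key=lambda kv: -kv[1]):
    print(f"  {a} -> {b}: {c:.4f}")
```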
Citation

M. DEBBI Hichem, (2023), "Constraint-based debugging in probabilistic model checking", [national] Computing , Springer

2022

Residual Attention Network: A new baseline model for visual question answering

Answering questions over images is a challenging task that requires reasoning over both images and text. In this paper, we introduce the Residual Attention Network (RAN), a new visual question answering model, and compare it with baseline models such as the stacked attention model and the CNN-LSTM model. We find that our model performs better than these baseline models. In addition to our model, we also evaluate several holistic models and compare them with neural module network frameworks; the results show that neural module networks perform better at question reasoning. All the experiments have been done on the CLEVR dataset, which is a recent VQA dataset for evaluating multi-step reasoning VQA models.
Index Terms—Visual question answering, baseline, Holistic, Neural module network
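For readers unfamiliar with attention-based VQA baselines, here is a hypothetical single attention hop with a residual connection, loosely in the spirit of the model's name; the dimensions and structure are assumptions, not the architecture evaluated in the paper.

```python
# Hypothetical residual attention hop for VQA: attend over image regions
# conditioned on the question embedding, then add the question encoding back.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAttention(nn.Module):
    def __init__(self, img_dim=512, q_dim=256, hidden=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)
        self.q_proj = nn.Linear(q_dim, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, img_regions, question):
        # img_regions: (B, R, img_dim) region features; question: (B, q_dim)
        joint = torch.tanh(self.img_proj(img_regions) + self.q_proj(question).unsqueeze(1))
        attn = F.softmax(self.score(joint), dim=1)                  # (B, R, 1) weights over regions
        attended = (attn * self.img_proj(img_regions)).sum(dim=1)   # weighted image summary
        return attended + self.q_proj(question)                     # residual: add question encoding back

model = ResidualAttention()
out = model(torch.rand(2, 36, 512), torch.rand(2, 256))
print(out.shape)   # torch.Size([2, 256])
```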
Citation

M. DEBBI Hichem, (2022), "2022 5th International Symposium on Informatics and its Applications (ISIA)", [international] Residual Attention Network:A new baseline model for visual question answering , Mohamed Boudiaf University of M'sila , M’sila, Algeria

Machine Learning-based Intrusion Detection System Against Routing Attacks in the Internet of Things

Citation

M. DEBBI Hichem, (2022), "Machine Learning-based Intrusion Detection System Against Routing Attacks in the Internet of Things", [international] TACC , Algeria

Modeling and Analysis of Probabilistic Real-time Systems through Integrating Event-B and Probabilistic Model Checking

Event-B
Citation

M. DEBBI Hichem, (2022), "Modeling and Analysis of Probabilistic Real-time Systems through Integrating Event-B and Probabilistic Model Checking", [national] Computer Science , Poland

A Debugging Game for Probabilistic Models

One of the major advantages of model checking over other formal methods is its ability to generate a counterexample when a model does not satisfy its specification. A counterexample is an error trace that helps to locate the source of the error. Therefore, the counterexample represents a valuable tool for debugging. In Probabilistic Model Checking (PMC), the task of counterexample generation has a quantitative aspect. Unlike the previous methods proposed for conventional model checking, which generate the counterexample as a single path ending with a bad state representing the failure, the task in PMC is completely different. A counterexample in PMC is a set of evidences or diagnostic paths that satisfy a path formula, and whose probability mass violates the probability threshold.

Counterexample generation is not sufficient for finding the exact source of the error. Therefore, in conventional model checking, many debugging techniques have been proposed to act on the counterexamples generated to locate the source of the error. In PMC, debugging counterexamples is more challenging, since the probabilistic counterexample consists of multiple paths and it is probabilistic. In this article, we propose a debugging technique based on stochastic games to analyze probabilistic counterexamples generated for probabilistic models described as Markov chains in PRISM language. The technique is based mainly on the idea of considering the modules composing the system as players of a reachability game, whose actions contribute to the evolution of the game. Through many case studies, we will show that our technique is very effective for systems employing multiple components. The results are also validated by introducing a debugging tool called GEPCX (Game Explainer of Probabilistic Counterexamples).
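A toy illustration of the "modules as players" idea described above (not the GEPCX tool itself): credit each module with the probability mass of the diagnostic paths on which its actions appear. The modules, actions and probabilities below are made up.

```python
# Hypothetical counterexample analysis: each path is a sequence of
# (module, action) moves, and each module (player) is credited with the
# probability mass of the paths its moves appear on.
from collections import defaultdict

counterexample = [  # (path probability, list of (module, action) moves)
    (0.020, [("sender", "send"), ("channel", "lose"), ("sender", "retry"), ("channel", "lose")]),
    (0.015, [("sender", "send"), ("channel", "corrupt"), ("receiver", "reject")]),
    (0.010, [("sender", "send"), ("channel", "lose"), ("receiver", "timeout")]),
]

module_mass = defaultdict(float)
action_mass = defaultdict(float)
for prob, moves in counterexample:
    for module, action in set(moves):          # count each move once per path
        module_mass[module] += prob
        action_mass[(module, action)] += prob

print("probability mass per player (module):")
for module, mass in sorted(module_mass.items(), key=lambda kv: -kv[1]):
    print(f"  {module}: {mass:.3f}")
print("most suspicious moves:")
for (module, action), mass in sorted(action_mass.items(), key=lambda kv: -kv[1])[:3]:
    print(f"  {module}.{action}: {mass:.3f}")
```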
Citation

M. DEBBI Hichem, (2022), "A Debugging Game for Probabilistic Models", [national] Formal Aspects of Computing , ACM

Modeling and Performance Analysis of Resource Provisioning in Cloud Computing using Probabilistic Model Checking

Cloud comput
Citation

M. DEBBI Hichem, (2022), "Modeling and Performance Analysis of Resource Provisioning in Cloud Computing using Probabilistic Model Checking", [national] Informatica , Informatica

2020

Causal Explanation of Convolutional Neural Networks

In this paper we introduce an explanation technique for Convolutional Neural Networks (CNNs) based on the theory of causality by Halpern and Pearl [12]. The causal explanation technique (CexCNN) is based on measuring the filter importance to a CNN decision, which is measured through counterfactual reasoning. In addition, we employ extended definitions of causality, which are responsibility and blame, to weight the importance of such filters and project
Citation

M. DEBBI Hichem, (2020), "Causal Explanation of Convolutional Neural Networks", [national] ECML-PKDD , Spain

2018

Counterexamples in Model Checking - A survey

Model checking is a formal method used for the verification of finite-state systems. Given a system model and a specification, which is a set of formal properties, the model checker verifies whether or not the model meets the specification. One of the major advantages of model checking over other formal methods is its ability to generate a counterexample when the model falsifies the specification. Although the main purpose of the counterexample is to help the designer find the source of the error in complex systems design, the counterexample has also been used for many other purposes, either in the context of model checking itself or in other domains in which model checking is used. In this paper, we will survey algorithms for counterexample generation, from classical algorithms in graph theory to novel algorithms for producing small and indicative counterexamples. We will also show how counterexamples are useful for debugging, and how we can benefit from delivering counterexamples for other purposes.
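One of the classical graph-theoretic building blocks such surveys refer to can be sketched directly: the most probable path to a failure state in a DTMC is a shortest path once probabilities are turned into additive -log weights. The toy model and numbers below are illustrative.

```python
# Most probable path to a failure state via Dijkstra over -log(p) weights.
import heapq
import math

# Toy DTMC: state -> {successor: probability}. Numbers are illustrative only.
dtmc = {
    "s0": {"s1": 0.8, "s2": 0.2},
    "s1": {"s1": 0.5, "fail": 0.5},
    "s2": {"fail": 0.1, "s0": 0.9},
    "fail": {},
}

def most_probable_path(start, goal):
    """Dijkstra over -log(prob) weights; returns (path, probability)."""
    heap = [(0.0, start, [start])]
    best = {}
    while heap:
        cost, state, path = heapq.heappop(heap)
        if state == goal:
            return path, math.exp(-cost)
        if best.get(state, math.inf) < cost:
            continue
        for succ, p in dtmc[state].items():
            new_cost = cost - math.log(p)
            if new_cost < best.get(succ, math.inf):
                best[succ] = new_cost
                heapq.heappush(heap, (new_cost, succ, path + [succ]))
    return None, 0.0

path, prob = most_probable_path("s0", "fail")
print(" -> ".join(path), f"(probability {prob:.3f})")
```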
Citation

M. DEBBI Hichem, (2018), "Counterexamples in Model Checking - A survey", [national] Informatica , Informatica

2017

Debugging of probabilistic systems using structural equation modelling

The counterexample in probabilistic model checking (PMC) is a set of paths in which a path formula holds, and whose accumulated probability violates the probability bound. However, understanding the counterexample is not an easy task. In this paper we address the task complementary to counterexample generation, namely counterexample analysis. We propose an aided-diagnostic method for probabilistic counterexamples based on the notions of causality and regression analysis. Given a counterexample for a probabilistic CTL (PCTL)/continuous stochastic logic (CSL) formula that does not hold over probabilistic models [discrete time Markov chain (DTMC), Markov decision process (MDP) and continuous time Markov chain (CTMC)], this method generates the causes of the violation and describes their contribution to the error in the form of a regression model using structural equation modelling (SEM). The interpretation of the generated regression model helps the designer gain better insight into the behaviour of the model, and thus helps them understand how the error has emerged.
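As a rough, synthetic illustration of the regression idea (not the paper's SEM procedure), one can record which suspect transitions each sampled path takes and whether it reaches the error, and then fit a linear model whose coefficients hint at each cause's contribution. The model, cause names and effect sizes below are fabricated for illustration only.

```python
# Synthetic regression of failure on suspect transitions (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_paths, causes = 500, ["t_retry", "t_lose", "t_corrupt"]

# Synthetic data: each row says which suspect transitions a sampled path used.
X = rng.integers(0, 2, size=(n_paths, len(causes))).astype(float)
# Hypothetical ground truth: "t_lose" drives the error far more than the others.
p_error = 0.05 + 0.10 * X[:, 0] + 0.60 * X[:, 1] + 0.15 * X[:, 2]
y = (rng.random(n_paths) < p_error).astype(float)

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((n_paths, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"intercept: {coef[0]:.3f}")
for name, c in zip(causes, coef[1:]):
    print(f"  contribution of {name}: {c:.3f}")
```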
Citation

M. DEBBI Hichem, (2017), "Debugging of probabilistic systems using structural equation modelling", [national] International Journal of Critical Computer-Based Systems , INDERSCIENCE

Modeling and Formal Analysis of Probabilistic Complex Event Processing (CEP) Applications

Complex Event Processing (CEP) is a powerful technology used in complex and real-time environments. CEP is an Event Driven Architecture (EDA) style that consists of processing different events within the distributed enterprise system, attempting to discover interesting information from multiple streams of events in a timely manner. In the real world, the streams of events are uncertain, which means that it is not guaranteed that an event has actually occurred; this uncertainty is due mainly to imprecise content from the event sources (sensors, RFID, ...). As a result, probabilistic CEP has become an important issue in complex environments that require a real-time reaction to streams of probabilistic events.

It is evident that building probabilistic CEP applications is not a trivial task, which makes the description of these applications and the analysis of their behavior necessary. In this paper, we propose a formal verification approach for probabilistic CEP applications based on probabilistic model checking. To this end, we use Probabilistic Timed Automata (PTA) for describing probabilistic CEP applications, and the Probabilistic Timed CTL (PTCTL) logic for specifying probabilistic timed properties.
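A small sketch of the uncertainty problem itself (not the PTA/PTCTL models of the paper): given a stream of events that each occurred only with some probability, estimate the probability that the pattern "A followed by B within a time window" was matched. The events, window and independence assumption over candidate matches are all illustrative simplifications.

```python
# Probability that the pattern "A then B within W time units" was matched at
# least once, assuming events and candidate matches are independent.
events = [  # (type, timestamp, probability that the event really occurred)
    ("A", 1.0, 0.8),
    ("B", 2.5, 0.6),
    ("A", 4.0, 0.5),
    ("B", 9.0, 0.9),
]
WINDOW = 3.0

# Probability that no (A, B) pair within the window is "real".
p_no_match = 1.0
for i, (t1, ts1, p1) in enumerate(events):
    for t2, ts2, p2 in events[i + 1:]:
        if t1 == "A" and t2 == "B" and 0 < ts2 - ts1 <= WINDOW:
            p_no_match *= 1.0 - p1 * p2        # this candidate pair fails to match
print(f"P(pattern A->B within {WINDOW} time units) ~= {1.0 - p_no_match:.3f}")
```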
Citation

M. DEBBI Hichem, (2017), "Modeling and Formal Analysis of Probabilistic Complex Event Processing (CEP) Applications", [international] European Conference on Modelling Foundations and Applications , Marburg, Germany
