Publications (51)

Emina Tahirovic, Senka Krivic

Artificial Intelligence techniques are widely used for medical purposes nowadays, and cancer detection is one of the crucial applications. Due to the sensitivity of such applications, medical workers and patients interacting with the system must receive reliable, transparent, and explainable output. This paper therefore examines the interpretability and explainability of the Logistic Regression Model (LRM) for breast cancer detection. We analyze the accuracy and transparency of the LRM. Additionally, we propose an NLP-based interface with a model interpretability summary and a contrastive explanation for users. Together with the textual explanations, we provide a visual aid that helps medical practitioners better understand the decision-making process.
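
As an illustration only (the paper's own pipeline and data are not reproduced here), the following sketch trains a logistic regression classifier on scikit-learn's bundled Wisconsin breast cancer dataset, an assumed stand-in, and reads its coefficients as per-feature contributions, the kind of interpretability summary such an interface could present.

```python
# Illustrative sketch only, not the paper's implementation.
# Assumes scikit-learn's bundled Wisconsin breast cancer dataset as a stand-in.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)
model = LogisticRegression(max_iter=1000).fit(scaler.transform(X_train), y_train)
print("test accuracy:", model.score(scaler.transform(X_test), y_test))

# Interpretability summary for one patient: each standardized feature's
# contribution to the log-odds is coefficient * value.
x = scaler.transform(X_test[:1])[0]
contributions = model.coef_[0] * x
for i in np.argsort(np.abs(contributions))[::-1][:3]:
    print(f"{data.feature_names[i]}: {contributions[i]:+.2f}")
# A contrastive, user-facing explanation could then describe which of these
# features would need to change to flip the predicted class.
```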

Javeria Amin, M. A. Anjum, Senka Krivic, Muhammad Irfan Sharif

In acute lymphoblastic leukaemia (ALL), the bone marrow produces an excess of immature cells. More than 6,500 cases of ALL are diagnosed each year, and the trend continues upward. Technological advancements in AI and big data analytics help doctors and radiologists make accurate and efficient clinical decisions. The proposed method consists of two core steps: segmentation and classification based on quantum convolutional networks. A three-dimensional U-network with 70 layers is proposed and trained on optimal hyperparameters, achieving a Dice score of 0.98. A four-qubit quantum transfer learning model is proposed for classifying different types of blood cells. The accuracies achieved are 0.99 on blast cells, 0.99 on basophils, 0.98 on eosinophils, 0.97 on neutrophils, 0.99 on lymphocytes, and 0.96 on monocytes. The proposed classification model provides an average accuracy of 0.99.
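
For reference, the Dice score cited above measures overlap between predicted and ground-truth segmentation masks. The sketch below is an illustrative computation of that metric only, not the paper's 3D U-network or quantum classifier.

```python
# Illustrative only: the Dice overlap metric cited above, not the paper's
# 3D U-network or quantum transfer learning code.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |pred AND target| / (|pred| + |target|) for boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 3D volume standing in for a segmented cell mask.
rng = np.random.default_rng(0)
target = rng.random((16, 16, 16)) > 0.5
pred = target.copy()
pred[0, 0, :8] = ~pred[0, 0, :8]   # flip a few voxels to simulate errors
print(f"Dice: {dice_score(pred, target):.3f}")
```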

Gerard Canal, Senka Krivic, Paul Luff, A. Coles

For users to trust planning algorithms, they must be able to understand the planner's outputs and the reasons for each action selection. This output tends not to be user-friendly, often consisting of sequences of parametrised actions or task networks, which may not be practical for non-expert users who find natural language descriptions easier to read. In this paper, we propose PlanVerb, a domain- and planner-independent method for the verbalization of task plans, based on semantic tagging of actions and predicates. Our method can generate natural language descriptions of plans, including causal explanations. The verbalized plans can be summarized by compressing the actions that act on the same parameters. We further extend the concept of verbalization space, previously applied to robot navigation, and apply it to planning to generate different kinds of plan descriptions for different user requirements. Our method can deal with PDDL and RDDL domains, provided they are tagged accordingly. Our user survey evaluation shows that users can read our automatically generated plan descriptions and that the explanations help them answer questions about the plan.
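
As a hypothetical illustration of the summarization step (not the actual PlanVerb implementation; the plan and wording below are invented), the sketch compresses consecutive plan actions that act on the same parameter into a single verbalized sentence.

```python
# Hypothetical sketch, not the PlanVerb code: compress consecutive actions
# that act on the same parameters into one summarized sentence.
from itertools import groupby

plan = [
    ("move", ("robot1", "kitchen")),
    ("move", ("robot1", "hall")),
    ("move", ("robot1", "office")),
    ("pick", ("robot1", "cup")),
]

def summarize(plan):
    summary = []
    # Group consecutive steps with the same action name and the same actor.
    for (name, actor), steps in groupby(plan, key=lambda a: (a[0], a[1][0])):
        steps = list(steps)
        if len(steps) > 1:
            summary.append(f"{actor} {name}s {len(steps)} times, "
                           f"ending at {steps[-1][1][-1]}")
        else:
            summary.append(f"{actor} {name}s {' '.join(steps[0][1][1:])}")
    return summary

print(". ".join(summarize(plan)) + ".")
# -> "robot1 moves 3 times, ending at office. robot1 picks cup."
```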

Benjamin Krarup, F. Lindner, Senka Krivic, D. Long

The continued development of robots has enabled their wider use in human environments. Robots are increasingly trusted to make important decisions with potentially critical outcomes. Therefore, it is essential to consider the ethical principles under which robots operate. In this paper we examine how contrastive and non-contrastive explanations can be used in understanding the ethics of robot action plans. We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations. Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.

Amila Akagić, Senka Krivic, Harun Dizdar, J. Velagić

The scientific discipline of Computer Vision (CV) is a fast-developing branch of Machine Learning (ML). It addresses various tasks important for robotics, medicine, autonomous driving, surveillance, security, and scene understanding. The development of sensor technologies has enabled wide usage of 3D sensors and has therefore increased the interest of the CV research community in creating methods for 3D sensor data. This paper outlines seven CV tasks with 3D point cloud data, state-of-the-art techniques, and datasets. Additionally, we identify key challenges.

M. Brandão, Gerard Canal, Senka Krivic, P. Luff, A. Coles

Motion planning is a hard problem that can often overwhelm both users and designers, owing to the difficulty of understanding why a solution is optimal or why a planner fails to find any solution at all. Inspired by recent work in machine learning and task planning, in this paper we are guided by a vision of developing motion planners that can provide reasons for their output, thus potentially contributing to better user interfaces, debugging tools, and algorithm trustworthiness. Towards this end, we propose a preliminary taxonomy and a set of important considerations for the design of explainable motion planners, based on the analysis of a comprehensive user study of motion planning experts. We identify the kinds of things that need to be explained by motion planners ("explanation objects"), types of explanation, and several procedures required to arrive at explanations. We also elaborate on a set of qualifications and design considerations that should be taken into account when designing explainable methods. These insights contribute to bringing the vision of explainable motion planners closer to reality, and can serve as a resource for researchers and developers interested in designing such technology.

M. Brandão, Gerard Canal, Senka Krivic, D. Magazzeni

Recent research in AI ethics has put forth explainability as an essential principle for AI algorithms. However, it is still unclear how this is to be implemented in practice for specific classes of algorithms, such as motion planners. In this paper we unpack the concept of explanation in the context of motion planning, introducing a new taxonomy of kinds and purposes of explanations in this context. We focus not only on explanations of failure (previously addressed in the motion planning literature) but also on contrastive explanations, which explain why a trajectory A was returned by a planner instead of a different trajectory B expected by the user. We develop two explainable motion planners, one based on optimization and the other on sampling, which are capable of answering failure and contrastive questions. We use simulation experiments and a user study to motivate a technical and social research agenda.

Benjamin Krarup, Senka Krivic, D. Magazzeni, D. Long, Michael Cashmore, David E. Smith

In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user’s expectation. We frame Explainable AI Planning as an iterative plan exploration process, in which the user asks a succession of contrastive questions that lead to the generation and solution of hypothetical planning problems that are restrictions of the original problem. The object of the exploration is for the user to understand the constraints that govern the original plan and, ultimately, to arrive at a satisfactory plan. We present the results of a user study that demonstrates that when users ask questions about plans, those questions are usually contrastive, i.e. “why A rather than B?”. We use the data from this study to construct a taxonomy of user questions that often arise during plan exploration. Our approach to iterative plan exploration is a process of successive model restriction. Each contrastive user question imposes a set of constraints on the planning problem, leading to the construction of a new hypothetical planning problem as a restriction of the original. Solving this restricted problem results in a plan that can be compared with the original plan, admitting a contrastive explanation. We formally define model-based compilations in PDDL2.1 for each type of constraint derived from a contrastive user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity. The compilations were implemented as part of an explanation framework supporting iterative model restriction. We demonstrate its benefits in a second user study.
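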

Senka Krivic, Michael Cashmore, D. Magazzeni, S. Szedmák, J. Piater

We present a novel approach for decreasing state uncertainty in planning prior to solving the planning problem. This is done by making predictions about the state based on currently known information, using machine learning techniques. For domains where uncertainty is high, we define an active learning process for identifying which information, once sensed, will best improve the accuracy of predictions. We demonstrate that an agent can solve problems with state uncertainty with less planning effort than standard planning techniques require. Moreover, agents can solve problems for which they could not find valid plans without using predictions. Experimental results also demonstrate that using our active learning process to identify the information to be sensed leads to gathering information that improves the prediction process.
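
A minimal sketch of the general idea, with synthetic data and an assumed entropy-based uncertainty criterion (the paper's actual prediction and active-learning method may differ): unknown state facts are predicted from known ones, and the least certain prediction is selected for sensing.

```python
# Minimal sketch with synthetic data and an assumed entropy criterion; not the
# paper's exact prediction or active-learning procedure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_known = rng.random((50, 4))                       # facts already sensed
y_known = (X_known[:, 0] + X_known[:, 1] > 1.0).astype(int)
X_unknown = rng.random((10, 4))                     # facts still uncertain

clf = RandomForestClassifier(random_state=0).fit(X_known, y_known)
proba = clf.predict_proba(X_unknown)

# Predictive entropy per unknown fact; the most uncertain one is sensed next.
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
to_sense = int(np.argmax(entropy))
print(f"sense fact {to_sense} (entropy {entropy[to_sense]:.3f})")
# The remaining predictions can be injected into the planning state, letting the
# planner work with a less uncertain model before execution.
```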

Gerard Canal, R. Borgo, A. Coles, Archie Drake, D. Huynh, P. Keller, Senka Krivic, Paul Luff et al.
