
Publications (10)

Rialda Spahic, K. Poolla, V. Hepsø, M. Lundteigen

As one of the most important assets in the transportation of oil and gas products, subsea pipelines are susceptible to various environmental hazards, such as mechanical damage and corrosion, that can compromise their structural integrity and cause catastrophic environmental and financial damage. Autonomous underwater systems (AUS) are expected to assist offshore operations personnel and contribute to subsea pipeline inspection, maintenance, and damage detection tasks. Despite the promise of increased safety, AUS technology needs to mature, especially for image-based inspections with computer vision methods that analyze incoming images and detect potential pipeline damage through anomaly detection. Recent research addresses some of the most significant computer vision challenges for subsea environments, including visibility, color, and shape reconstruction. However, despite the high quality of subsea images, the lack of training data for reliable image analysis and the difficulty of incorporating risk-based knowledge into existing approaches continue to be significant obstacles. In this paper, we analyze industry-provided images of subsea pipelines and propose a methodology to address the challenges faced by popular computer vision methods. We focus on the difficulty posed by a lack of training data and the opportunities of creating synthetic data using risk analysis insights. We gather information on subsea pipeline anomalies, evaluate general computer vision approaches, and generate synthetic data to compensate for the lack of training data and the scarcity of pipeline damage evidence in the data, thereby increasing the likelihood of reliable AUS subsea pipeline inspection for damage detection.
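To illustrate the synthetic-data idea, the sketch below overlays randomly placed corrosion-like patches on clean pipeline images to produce labeled training pairs. The patch shapes, sizes, and intensities are illustrative assumptions; the paper's actual risk-informed generation procedure is not reproduced here.

```python
# Minimal sketch of synthetic-data generation for pipeline damage detection.
# Patch geometry and darkening factors are invented assumptions.
import numpy as np

rng = np.random.default_rng(42)

def add_synthetic_damage(image: np.ndarray, n_patches: int = 3) -> np.ndarray:
    """Overlay random dark circular patches that mimic corrosion spots."""
    damaged = image.copy()
    h, w = image.shape[:2]
    for _ in range(n_patches):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        radius = rng.integers(5, 20)
        yy, xx = np.ogrid[:h, :w]
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        damaged[mask] = damaged[mask] * rng.uniform(0.2, 0.5)  # darken patch
    return damaged

# Example: turn a batch of clean pipeline images into labeled training pairs.
clean = rng.uniform(0, 255, size=(8, 128, 128)).astype(np.float32)
pairs = [(img, add_synthetic_damage(img)) for img in clean]
```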

Rialda Spahic, V. Hepsø, M. Lundteigen

Cyber-physical systems are taking on a permanent role in industries such as oil and gas or mining. These systems are expected to perform increasingly autonomous tasks in complex settings, removing human operators from remote and potentially hazardous environments. High autonomy necessitates a more extensive use of artificial intelligence methods, such as anomaly detection, to identify unusual occurrences in the monitored environment. The absence of data characterizing potentially hazardous events leads to disruptive noise displayed as false alarms, a common anomaly detection issue for hazard identification applications. Conversely, disregarding the false alarms can have the opposite effect, causing loss of early indications of hazardous occurrences. Existing research addresses this either by simulating and extrapolating underrepresented data to expand the information on hazards and semi-supervise the methods, or by introducing thresholds and rule-based methods to balance noise against meaningful information; both approaches demand intensive computing resources. This research proposes a novel Warning Identification Framework that evaluates risk analysis objectives and applies them to discern between true and false warnings identified by anomaly detection. We demonstrate the results by analyzing three seismic hazard assessment methods for identifying seismic tremors and comparing the outcomes to anomalies found using an unsupervised anomaly detection method. The demonstrated approach shows great potential in enhancing the reliability and transparency of anomaly detection outcomes and, thus, supporting the operational decision-making process of a cyber-physical system.
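As a rough illustration of the framework's two stages, the sketch below flags anomalies in a synthetic signal with an unsupervised detector (Isolation Forest, used here as a stand-in since the paper does not name its detector in the abstract) and then screens them with a hypothetical risk-based amplitude threshold. The data and threshold value are assumptions.

```python
# Hedged sketch: unsupervised detection followed by risk-based screening
# to separate true warnings from false alarms.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, size=(500, 1))
signal[480:] += 6.0  # inject a tremor-like burst at the end of the series

detector = IsolationForest(contamination=0.05, random_state=0).fit(signal)
is_anomaly = detector.predict(signal) == -1  # -1 marks outliers

# Risk-based screening: keep only anomalies whose amplitude exceeds a
# hazard-relevant threshold; the threshold value is an assumption.
RISK_THRESHOLD = 3.0
true_warnings = is_anomaly & (np.abs(signal[:, 0]) > RISK_THRESHOLD)
print(f"raw alarms: {is_anomaly.sum()}, risk-screened warnings: {true_warnings.sum()}")
```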

Rialda Spahic, M. Lundteigen, V. Hepsø

This research examines the factors contributing to the exterior material degradation of subsea oil and gas pipelines monitored with autonomous underwater systems (AUS). The AUS gather image data that are further analyzed with artificial intelligence methods. Corrosion and potential ruptures on pipeline surfaces are complex processes involving several competing elements, such as the geographical properties, composition of soil, atmosphere, and marine life, whose effects can result in substantial environmental damage and financial loss. Despite extensive research, corrosion monitoring and prediction remain a persistent challenge in the industry. There is a lack of a knowledge map that can enable image analysis using an AUS to recognize ongoing degradation processes and potentially prevent substantial damage. The main contribution of this research is a knowledge map for increased context and risk awareness to improve the reliability of image-based monitoring and inspection by autonomous underwater systems in detecting hazards and early signs of material degradation on subsea pipeline surfaces.

Rialda Spahic, M. Lundteigen

The growing need for autonomous systems in offshore industries has contributed to the increased use of machine learning methods. These systems promise to improve safety in operations. However, the methods that enable autonomy are susceptible to various failures while interpreting data and making decisions. Several studies have highlighted the lack of research on the reliability and resilience of autonomous systems powered by these standard methods. Recent research provides sets of data interpretation methods; however, despite the popularity of machine learning, there is a significant knowledge gap concerning how and when these methods fail. Such failures can, in turn, lead autonomous systems to make wrong decisions. For autonomous systems, resilience and safety management should be an integrated functionality for recovery from risky situations and reporting of incidents. This research proposes an overview of machine learning methods for interpreting sensor data captured by drones operated manually and autonomously. We apply Isolation Forest for anomaly detection analysis and evaluate Decision Tree, Random Forest, kNN, Logistic Regression, SVM, and Naive Bayes for classification analysis. The methods are chosen based on their adequacy and prevalence in comparative research. Comparison between the two drone operation modes contributes to understanding the reliability level of autonomously collected data. The results provide an evaluation of the machine learning methods' performance across the sensor data.
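The abstract names the exact detector and classifiers, so a minimal evaluation setup can be sketched with scikit-learn. The drone sensor data is not public, so the features and labels below are random placeholders.

```python
# Sketch of the evaluation setup: Isolation Forest for anomaly detection,
# then the six classifiers named in the paper. Data is a placeholder.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 6))            # placeholder sensor features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder labels

# Unsupervised step: flag outliers in the raw features (-1 marks outliers).
anomalies = IsolationForest(random_state=1).fit_predict(X)

# Supervised step: evaluate each classifier on a held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=1),
    "Random Forest": RandomForestClassifier(random_state=1),
    "kNN": KNeighborsClassifier(),
    "Logistic Regression": LogisticRegression(),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
}
for name, clf in classifiers.items():
    print(f"{name}: {clf.fit(X_tr, y_tr).score(X_te, y_te):.3f}")
```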

Kanita Karađuzović-Hadžiabdić, Rialda Spahic, Emin Tahirović

Social media has opened the gates for collecting big data that can be used to monitor epidemic trends in real time. We evaluate whether the Watson NLP service can reliably predict outbreaks of infectious diseases such as influenza-like illness (ILI) using Twitter data during the main influenza season. Watson's performance is evaluated by computing the Pearson correlation between the number of tweets classified by Watson as ILI and the number of ILI occurrences recovered from the traditional epidemic surveillance system of the Centers for Disease Control and Prevention (CDC). The achieved correlation was 0.55. Furthermore, a 12-week discrepancy was found between peak occurrences of ILI predicted by Watson and CDC-reported data. Additionally, we developed a scoring method for ILI prediction from Twitter posts using a simple formula with the ability to predict ILI two weeks ahead of CDC-reported ILI data. The method uses Watson's sentiment and emotion scores together with identified ILI features to analyze influenza-related posts in real time. Due to Watson's high computational costs of sentiment and emotion analysis, we tested whether a machine learning approach could predict influenza using only identified ILI keywords as predictors. All three evaluated methods (Random Forest, Logistic Regression, k-NN) achieved an overall accuracy of ~68.2% when Watson was used as the medical expert and ~97.5% when the developed formula was used. The obtained results suggest that data found within social media can supplement the traditional surveillance of influenza outbreaks with the help of intelligent computations.
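A minimal sketch of the correlation step follows, with invented weekly counts standing in for the study's Twitter and CDC series; only the reported correlation value of 0.55 comes from the paper.

```python
# Pearson correlation between weekly Watson-labeled ILI tweet counts and
# CDC-reported ILI cases. The numbers below are invented placeholders.
from scipy.stats import pearsonr

watson_weekly_ili_tweets = [120, 150, 180, 260, 310, 280, 200, 160]
cdc_weekly_ili_cases     = [100, 130, 170, 240, 330, 300, 210, 150]

r, p_value = pearsonr(watson_weekly_ili_tweets, cdc_weekly_ili_cases)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")  # the paper reports r = 0.55
```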

Rialda Spahic, V. Hepsø, M. Lundteigen

In the offshore industry, unmanned autonomous systems are expected to have a permanent role in future operations. During offshore operations, the unmanned autonomous system needs definite instructions on evaluating the gathered data to make decisions and react in real time when the situation requires it. We rely on video surveillance and sensor measurements to recognize early warning signals of a failing asset during autonomous operation. Missing the warning signals can lead to a catastrophic impact on the environment and a significant financial loss. This research helps address the trustworthiness of the algorithms that enable autonomy by capturing the rising risks when machine learning unintentionally fails. Previous studies demonstrate that understanding machine learning algorithms, finding patterns in anomalies, and calibrating trust can promote the system's reliability. Existing approaches focus on improving the machine learning algorithms and understanding the shortcomings in the data collection. However, recollecting the data is often an expensive and extensive task. By transferring knowledge from multiple disciplines, we observe diverse approaches to capturing risk and calibrating trust in autonomous systems. This research proposes a conceptual framework that captures the known risks and creates a safety net around the autonomy-enabling algorithms to improve the reliability of autonomous operations.

Rialda Spahic, Dzana Basic, Emina Yaman

The idea of chatbots first appeared in the 1960s, but only after more than half a century did the world become ready for their real-life implementation, as a result of rapid progress in natural language processing, artificial intelligence, and the global presence of text-messaging applications. Today, specialized chatbots exist in different domains, helping organizations handle large amounts of inquiries. The idea of this project was to develop a friendly chatbot with whom you can talk about politics, movies, weather, sports, emotions, and similar everyday topics. The friendly chatbot, named Zeka, is a web-based chatbot developed with the help of the ChatterBot library. The chatbot relies on various natural language processing and machine learning algorithms, tuned by its developers to increase its performance.
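For readers unfamiliar with the library, a minimal ChatterBot setup along the lines of Zeka might look like the sketch below. It assumes the chatterbot package and its English training corpus are installed; Zeka's actual training data and developer customizations are not reproduced.

```python
# Minimal ChatterBot sketch: create a bot, train it on the stock English
# corpus (greetings, sports, emotions, etc.), and ask for a response.
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer

bot = ChatBot("Zeka")
trainer = ChatterBotCorpusTrainer(bot)
trainer.train("chatterbot.corpus.english")

print(bot.get_response("How is the weather today?"))
```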

Kanita Karađuzović-Hadžiabdić, Rialda Spahic

We examine a machine learning approach for detecting common class- and method-level code smells (Data Class and God Class; Feature Envy and Long Method). The focus of the work is the selection of a reduced set of features that achieves high classification accuracy. The proposed features may be used by developers to build better-quality software, since the selected features focus on the most critical parts of the code that are responsible for the creation of common code smells. We obtained high accuracy for all four code smells using the selected features: 98.57% for Data Class, 97.86% for God Class, 99.67% for Feature Envy, and 99.76% for Long Method.
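As an illustration of the general approach, the sketch below trains a classifier on a small, invented table of class-level metrics. The metric choices (lines of code, weighted methods per class, accesses to foreign data) are typical smell-detection features used here as assumptions, not the paper's selected feature set, and the classifier is a stand-in.

```python
# Illustrative code-smell classification on invented class-level metrics.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row: [lines_of_code, weighted_methods_per_class, accesses_to_foreign_data]
X = [[500, 45, 12], [80, 6, 1], [620, 50, 15], [120, 9, 0],
     [700, 60, 20], [60, 4, 2], [540, 40, 11], [90, 7, 1]]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = God Class smell present, 0 = clean

clf = RandomForestClassifier(random_state=0)
print(cross_val_score(clf, X, y, cv=2).mean())  # cross-validated accuracy
```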
