As one of the most important assets in the transportation of oil and gas products, subsea pipelines are susceptible to various environmental hazards, such as mechanical damage and corrosion, that can compromise their structural integrity and cause catastrophic environmental and financial damage. Autonomous underwater systems (AUS) are expected to assist offshore operations personnel and contribute to subsea pipeline inspection, maintenance, and damage detection tasks. Despite the promise of increased safety, AUS technology needs to mature, especially for image-based inspections with computer vision methods that analyze incoming images and detect potential pipeline damage through anomaly detection. Recent research addresses some of the most significant computer vision challenges for subsea environments, including visibility, color, and shape reconstruction. However, even with high-quality subsea images, the lack of training data for reliable image analysis and the difficulty of incorporating risk-based knowledge into existing approaches remain significant obstacles. In this paper, we analyze industry-provided images of subsea pipelines and propose a methodology to address the challenges faced by popular computer vision methods. We focus on the difficulty posed by a lack of training data and on the opportunities for creating synthetic data using risk analysis insights. We gather information on subsea pipeline anomalies, evaluate general computer vision approaches, and generate synthetic data to compensate for the lack of training data and of evidence of pipeline damage in the data, thereby increasing the likelihood of reliable AUS subsea pipeline inspection for damage detection.
In this study, we demonstrate the appropriate use of statistically based filtering methods for feature selection and describe their application to Heart Rate Variability (HRV) features used to distinguish between arrhythmia and normal sinus rhythm electrocardiogram (ECG) signals. The initial set of HRV features is evaluated using both correlation and statistical significance tests. The normality assumption is assessed for each feature in order to select appropriate correlation methods and significance tests. In addition, the impact of outliers on the statistical test results is illustrated by an explorative analysis of correlation before and after outlier removal. Finally, a reduced set of features is selected, and the decision process guided by the correlation and statistical significance test results is described and discussed.
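The filtering pipeline the abstract describes, normality check, then a matching significance test, then correlation-based redundancy removal, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the thresholds and the specific test choices (Shapiro-Wilk, t-test vs. Mann-Whitney U, Pearson vs. Spearman) are assumptions consistent with common practice.

```python
import numpy as np
from scipy import stats

def select_features(X, y, alpha=0.05, corr_threshold=0.9):
    """Filter-style feature selection sketch: keep features that differ
    significantly between the two classes, then drop one feature of every
    highly correlated pair."""
    keep = []
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        # Shapiro-Wilk decides between parametric and non-parametric tests.
        normal = stats.shapiro(a)[1] > alpha and stats.shapiro(b)[1] > alpha
        p = (stats.ttest_ind(a, b)[1] if normal
             else stats.mannwhitneyu(a, b, alternative="two-sided")[1])
        if p < alpha:
            keep.append(j)
    # Pearson assumes normality; Spearman is the rank-based fallback.
    selected = []
    for j in keep:
        redundant = False
        for k in selected:
            normal = (stats.shapiro(X[:, j])[1] > alpha
                      and stats.shapiro(X[:, k])[1] > alpha)
            r = (stats.pearsonr(X[:, j], X[:, k])[0] if normal
                 else stats.spearmanr(X[:, j], X[:, k])[0])
            if abs(r) > corr_threshold:
                redundant = True
                break
        if not redundant:
            selected.append(j)
    return selected
```

A discriminative feature survives, while a near-duplicate of it is removed by the correlation stage.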
In this paper, an error performance analysis for an M-ary phase shift keying (PSK) system in the inverse gamma two-ray with diffuse power (IG/TWDP) composite fading channel is presented. Using a Fourier series approach, the average symbol error probability (ASEP) expression is derived in terms of hypergeometric functions, which can be evaluated using standard software packages. The derived expression is used to investigate the degradation of error performance caused by shadowing, relative to the results obtained by considering only the TWDP multipath fading. All obtained results are verified by Monte Carlo simulation.
Implementation of credit scoring models is a demanding task and crucial for risk management. Wrong decisions can significantly affect revenue, increase costs, and even lead to bankruptcy. Together with the improvement of machine learning algorithms over time, credit models based on novel algorithms have also improved and evolved. In this work, novel deep neural architectures, Stacked LSTM and Stacked BiLSTM, combined with the SMOTE oversampling technique for the imbalanced dataset, were developed and analyzed. The reason for the lack of publications that utilize Stacked LSTM-based models in credit scoring lies in the fact that these deep learning algorithms are tailored to predict the next value of a time series, whereas credit scoring is a classification problem. The challenge and novelty of this approach involved the necessary adaptation of the credit scoring dataset to suit the time sequence nature of LSTM-based models. This was particularly crucial because, in practical credit scoring datasets, instances are neither correlated nor time-dependent. Moreover, the application of SMOTE to the newly constructed three-dimensional array served as an additional refinement step. The results show that the techniques and novel approaches used in this study improved the performance of credit score prediction.
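The two adaptations the abstract highlights, turning a flat feature table into the three-dimensional (samples, timesteps, features) array LSTM layers expect, and oversampling that array, can be sketched in plain numpy. This is an illustrative sketch only: the paper's exact windowing scheme and SMOTE configuration are not given, so the sliding window and a minimal SMOTE-style interpolation (flatten, interpolate with the nearest minority neighbour, reshape back) are assumptions.

```python
import numpy as np

def to_sequences(X, window=5):
    """Slide a fixed-length window over the feature rows to build the
    (samples, timesteps, features) array that LSTM layers expect."""
    return np.stack([X[i:i + window] for i in range(len(X) - window + 1)])

def smote_flat(X3d, n_synthetic, rng):
    """Minimal SMOTE-style oversampling of a 3-D minority array: flatten
    each sample, interpolate between it and its nearest minority
    neighbour, then reshape back to three dimensions."""
    n, t, f = X3d.shape
    flat = X3d.reshape(n, t * f)
    out = []
    for _ in range(n_synthetic):
        i = rng.integers(n)
        d = np.linalg.norm(flat - flat[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        j = int(np.argmin(d))
        lam = rng.random()                 # random point on the segment
        out.append(flat[i] + lam * (flat[j] - flat[i]))
    return np.array(out).reshape(n_synthetic, t, f)
```

Because the synthetic samples are convex combinations of real ones, they stay inside the value range of the original data.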
In this paper, the control of an electric vehicle with in-wheel motor drives is presented. Electric vehicle control is implemented through a drive motor control strategy based on the theory of discrete-time sliding mode. The speed controller is obtained as a combination of discrete-time first-order sliding mode control and a discrete-time realization of the super-twisting control algorithm commonly used in second-order sliding mode. The design of the proposed speed controller is performed using a discrete-time model of the electrical drive. Various tests were performed in Matlab/Simulink to validate the electronic differential system, the vehicle model, and the motor control algorithm for different types of vehicle movement.
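The discrete-time super-twisting law mentioned above can be sketched on a scalar sliding variable. This is not the paper's controller: the gains, sampling period, disturbance, and the first-order error dynamics below are all assumed for illustration of the general structure (a square-root term plus a discrete integral of the sign of the sliding variable).

```python
import numpy as np

def super_twisting_step(s, v, k1, k2, T):
    """One Euler step of a discrete-time super-twisting controller.
    s: sliding variable, v: integral state, T: sampling period."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v_next = v - k2 * np.sign(s) * T
    return u, v_next

def simulate(s0=1.0, T=1e-3, steps=5000, k1=4.0, k2=1.0):
    """Illustrative closed loop: speed-error dynamics ds/dt = u + d with a
    bounded, slowly varying disturbance d, driven toward s = 0."""
    s, v = s0, 0.0
    for k in range(steps):
        d = 0.1 * np.sin(2 * np.pi * 0.2 * k * T)  # bounded disturbance
        u, v = super_twisting_step(s, v, k1, k2, T)
        s = s + T * (u + d)                        # Euler plant integration
    return s
```

The integral state v continuously absorbs the disturbance, which is what lets the controller keep s near zero without the large chattering of first-order sliding mode.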
Due to increasingly widespread electoral corruption, citizens are slowly starting to lose trust in the fairness of democratic elections. The main objective of VoteChain is the elimination of the aspect of trust from the electoral process, in order to make voting more secure, transparent, and easily accessible. This paper proposes and implements a robust system that enhances voting efficiency by creating an electronic platform on top of a distributed Bitcoin Cash blockchain ledger. A blockchain is a time-stamped series of immutable data records shared across a distributed network. When utilized in the context of voting, it guarantees full anonymity, vote integrity, and a fair, incontrovertible ledger with election results verifiable by all voters. Moreover, the system offers the ability to vote via any Internet-enabled computer or smartphone, dramatically decreasing overall election organization costs. The system is envisioned as an application that connects to the Bitcoin Cash blockchain network via a custom feature-rich library. After discussing the system's characteristics, design, and underlying technology, this paper presents an example election scenario explaining in depth how VoteChain works. Finally, the system's possible shortcomings are outlined, along with its prospective evolution and potential improvements.
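VoteChain itself records votes on the Bitcoin Cash network, whose internals are not reproduced here. As a toy illustration of the tamper-evidence property the abstract relies on (a time-stamped series of immutable records), the sketch below hash-chains vote records with SHA-256; the block fields and verification logic are assumptions, not VoteChain's format.

```python
import hashlib
import json
import time

def _digest(block):
    """SHA-256 over the block's payload fields in a canonical order."""
    payload = {k: block[k] for k in ("time", "vote", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, vote):
    """Append-only block: changing any recorded vote changes its hash and
    breaks every later prev_hash link, making tampering evident."""
    block = {"time": time.time(), "vote": vote, "prev_hash": prev_hash}
    block["hash"] = _digest(block)
    return block

def verify_chain(chain):
    """Recompute every hash and check every prev_hash link."""
    for i, block in enumerate(chain):
        if block["hash"] != _digest(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

On a real blockchain the same linkage is enforced network-wide by consensus, which is what removes the need to trust any single election authority.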
Occupancy detection is one of the key elements in improving the energy performance of buildings. Due to their nature, occupancy detection models can be trained on data from existing buildings and adapted to new buildings for faster onboarding. We explore and analyse a transfer learning framework applied to occupancy detection. We use a combination of Long Short-Term Memory (LSTM) and convolutional neural network architectures and test the transfer learning framework on three datasets. The results show that the transferred models perform better than non-transferred models in almost all metric and dataset combinations.
Interest in the navigation problem for Unmanned Aerial Vehicles (UAVs) is on the rise. The aim of this task is to reach a goal position while avoiding obstacles along the way. In this paper, we propose a different approach to Deep Reinforcement Learning (DRL) of the navigation decision-making process by introducing a reward function based on Artificial Potential Fields (APF). The proposed approach is validated by comparison with a state-of-the-art approach. In terms of training performance, success rate, memory usage, and inference time, our approach, though sparser in terms of perceived information about the environment, yields better results.
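An APF-based reward of the kind described above can be sketched with the classical attractive/repulsive potentials: a quadratic pull toward the goal and a repulsive term that activates within an influence radius of each obstacle. The paper's exact reward shaping and gains are not given; the constants and the choice to use the negative total potential as the reward are illustrative assumptions.

```python
import numpy as np

def apf_reward(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Reward as the negative artificial potential: attractive quadratic
    term toward the goal, repulsive term active within distance d0 of an
    obstacle, so the reward falls sharply near obstacles and rises toward
    the goal."""
    u_att = 0.5 * k_att * np.sum((pos - goal) ** 2)
    u_rep = 0.0
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:
            u_rep += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return -(u_att + u_rep)
```

Because the reward is dense (defined at every state rather than only at the goal), it gives the DRL agent a learning signal even far from the goal, which is one motivation for APF-shaped rewards.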
In unstructured environments such as parking lots or construction sites, real-time planning is challenging due to the large search space and the kinodynamic constraints of the vehicle. Several state-of-the-art planners utilize heuristic search-based algorithms. However, they rely heavily on the quality of a single heuristic function used to guide the search. Therefore, they cannot achieve reasonable computational performance, resulting in unnecessary delays in the response of the vehicle. In this work, we adopt a Multi-Heuristic Search approach that enables the use of multiple heuristic functions and their individual advantages to capture different complexities of a given search space. To the best of our knowledge, this approach has not previously been used for this problem. For this purpose, multiple admissible and non-admissible heuristic functions are defined, the original Multi-Heuristic A* Search is extended for bidirectional use and for a hybrid continuous-discrete search space, and a mechanism for adapting the scale of motion primitives is introduced. To demonstrate the advantage, the Multi-Heuristic A* algorithm is benchmarked against a popular heuristic search-based algorithm, Hybrid A*. The Multi-Heuristic A* algorithm outperformed the baseline in both computational efficiency and motion plan (path) quality.
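The core multi-queue idea can be sketched on a plain 4-connected grid: one priority queue per heuristic, expanded round-robin, with shared g-values and a shared closed set. This is a simplified illustration, not the paper's bidirectional hybrid-space planner, and it omits the anchor-queue suboptimality-bound machinery of the original Multi-Heuristic A*; the grid, unit edge costs, and weight w are assumptions.

```python
import heapq

def mha_star(start, goal, obstacles, size, heuristics, w=1.5):
    """Simplified round-robin Multi-Heuristic A* on a 4-connected grid:
    one open list per heuristic, shared g-values and closed set."""
    g = {start: 0}
    opens = [[(w * h(start, goal), start)] for h in heuristics]
    closed, parent = set(), {}
    while any(opens):
        for open_i in opens:                       # round-robin over queues
            if not open_i:
                continue
            _, s = heapq.heappop(open_i)
            if s in closed:
                continue                           # stale duplicate entry
            closed.add(s)
            if s == goal:
                path = [s]
                while path[-1] != start:
                    path.append(parent[path[-1]])
                return path[::-1]
            x, y = s
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= n[0] < size and 0 <= n[1] < size
                        and n not in obstacles
                        and g[s] + 1 < g.get(n, float("inf"))):
                    g[n] = g[s] + 1
                    parent[n] = s
                    for j, hj in enumerate(heuristics):  # share n with all queues
                        heapq.heappush(opens[j], (g[n] + w * hj(n, goal), n))
    return None
```

Each queue orders the frontier by its own view of the remaining cost, so whichever heuristic best captures the local structure (e.g. near a wall of obstacles) drives expansion there, which is the advantage the abstract describes.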
The cloud has become an essential part of modern computing, and its popularity continues to rise. Cloud computing currently faces certain challenges that, due to increasing demands, are becoming urgent to address. One such challenge is load balancing, which involves the proper distribution of user requests across the cloud. This paper proposes a genetic algorithm for balancing the received requests across cloud resources. The algorithm processes individual requests immediately upon arrival. The conducted test simulations showed that the proposed approach achieves better response and processing times than the round robin, ESCE, and throttled load balancing algorithms. The algorithm also outperformed an existing genetic-algorithm-based load balancer, DTGA.
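A genetic algorithm for this kind of request-to-VM assignment can be sketched as follows. The paper's chromosome encoding, fitness function, and operators are not published here, so this sketch makes standard assumptions: a chromosome assigns each request to a VM, fitness is the makespan (finish time of the most loaded VM), and elitist truncation selection with single-point crossover and point mutation evolves the population.

```python
import random

def makespan(assignment, req_lengths, vm_speeds):
    """Fitness: finish time of the most loaded VM (lower is better)."""
    loads = [0.0] * len(vm_speeds)
    for req, vm in enumerate(assignment):
        loads[vm] += req_lengths[req] / vm_speeds[vm]
    return max(loads)

def ga_balance(req_lengths, vm_speeds, pop_size=30, generations=60, rng=None):
    rng = rng or random.Random(0)
    n, m = len(req_lengths), len(vm_speeds)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, req_lengths, vm_speeds))
        survivors = pop[:pop_size // 2]        # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)          # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:             # mutation: reassign one request
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, req_lengths, vm_speeds))
```

For per-request, on-arrival balancing as in the paper, the same fitness idea applies with the search restricted to placing the newly arrived request given the current VM loads.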
This paper presents a fine-tuned implementation of the quicksort algorithm for highly parallel multicore NVIDIA graphics processors. The described approach focuses on algorithmic and implementation-level improvements to achieve enhanced performance. Several fine-tuning techniques are explored to identify the best combination of improvements for the quicksort algorithm on GPUs. The results show that this approach leads to a significant reduction in execution time and in algorithmic operations, such as the number of iterations of the algorithm and the number of operations performed, compared to its predecessors. The experiments are conducted on an NVIDIA graphics card, taking into account several distributions of input data. The findings suggest that this fine-tuning approach can enable efficient and fast sorting on GPUs for a wide range of applications.
Time-aware recommender systems extend traditional recommendation methods by revealing user preferences over time or observing a specific temporal context. Among other features and advantages, they can be used to provide rating predictions based on changes in recurring time periods. Their underlying assumption is that users are similar if their behavior is similar in the same temporal context. Existing approaches usually consider separate temporal contexts and the user profiles generated from them. In this paper, we create user profiles based on multidimensional temporal contexts and use their combined representation in a user-based collaborative filtering method. The proposed model predicts user preferences at a future point in time that matches the temporal profiles. The experimental validation demonstrates that the proposed model outperforms standard collaborative filtering algorithms in prediction accuracy.
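A multidimensional temporal profile of the kind described above can be sketched by crossing two recurring temporal dimensions into combined contexts and comparing users over the combined representation. The specific dimensions (four day periods crossed with weekday/weekend), bin sizes, and the use of cosine similarity are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from datetime import datetime, timezone

def temporal_profile(ratings, n_items):
    """User profile over a combined temporal context: the cross product of
    day period (4 bins of 6 hours) and weekday/weekend (2 bins) yields 8
    contexts; each context row holds the user's mean rating per item.
    ratings: iterable of (item_index, rating, unix_timestamp)."""
    profile = np.zeros((8, n_items))
    counts = np.zeros((8, n_items))
    for item, rating, ts in ratings:
        t = datetime.fromtimestamp(ts, tz=timezone.utc)
        ctx = (t.hour // 6) * 2 + (1 if t.weekday() >= 5 else 0)
        profile[ctx, item] += rating
        counts[ctx, item] += 1
    return np.divide(profile, counts,
                     out=np.zeros_like(profile), where=counts > 0)

def profile_similarity(p, q):
    """Cosine similarity between flattened multidimensional profiles, for
    use as the user-user weight in user-based collaborative filtering."""
    a, b = p.ravel(), q.ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

Two users then count as neighbours only when they rate similar items similarly in the same combined context, which is exactly the similarity assumption stated above.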