The evolving landscape of 5G Standalone (SA) and beyond networks increasingly focuses on vertical industries. To unlock the full potential for verticals, it is important to tightly integrate Edge Network Applications (EdgeApps) tailored to vertical use cases with the 5G SA network, while also allowing them to interact with each other. Such interaction enables more transparency in expressing Quality of Service (QoS) demands from verticals in the form of intent, while hiding the network complexity from them. In this paper, we propose two EdgeApps which, by interacting with both User Equipments (UEs) and the 5G SA network, become aware of network quality (quality-awareness) and of the context around UEs (situational-awareness). Such awareness is also enabled by initiatives such as GSMA Open Gateway and CAMARA, where network and IT functionality is exposed to application developers through standardized Application Programming Interfaces (APIs) that abstract the underlying complexity of the telco and IT systems. We utilize the Nokia Network as Code (NaC) platform, which exposes the capabilities of the Telenet 5G SA network through CAMARA APIs, allowing our EdgeApps to create, dynamically and in real time, events that trigger changes in the QoS levels required by vertical applications. The paper showcases this concept through a case study in the Transport and Logistics (T&L) sector, focused on improving the safety and efficiency of remote vessel operation in busy port environments. The overall solution is deployed and tested on the Antwerp 5G SA testbed, which consists of the UEs (vehicle and vessel) and 5G network infrastructure in the Port of Antwerp-Bruges, as well as the 5G edge where the EdgeApps run. This research contributes to the broader objective of incorporating diverse industrial applications into the 5G and beyond ecosystem, showcasing tangible benefits for vertical industries.
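To make the intent-based QoS idea concrete, the sketch below shows how an EdgeApp might assemble a CAMARA Quality-on-Demand session request when a UE enters a zone that needs boosted connectivity. The base URL, path, and profile name are illustrative assumptions, not the actual NaC configuration; no network call is made.

```python
import json

# Hypothetical endpoint; the real NaC base URL and API version will differ.
QOD_BASE = "https://example.nac.invalid/quality-on-demand/v0"

def build_qod_session(device_ip: str, app_server_ip: str,
                      qos_profile: str = "QOS_L", duration_s: int = 600):
    """Assemble the URL and JSON body for a QoD session request."""
    payload = {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": qos_profile,   # e.g. boost on entering the teleoperation zone
        "duration": duration_s,      # seconds the boosted QoS should last
    }
    return f"{QOD_BASE}/sessions", payload

url, body = build_qod_session("198.51.100.10", "203.0.113.5")
print(json.dumps(body, indent=2))
```

An EdgeApp would POST this body when its situational-awareness logic fires (e.g. a geofence event) and delete the session when the vessel leaves the zone.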
In this paper, we introduce and demonstrate a novel Situational Awareness with Event-driven Network Programming Edge Network Application (EdgeApp), designed to optimize network resource utilization during vessel teleoperation in congested port areas. The demonstration is conducted on an open, real-life 5G Standalone (SA) and beyond EdgeApp testbed situated at the Port of Antwerp-Bruges. Through this showcase, we demonstrate how 5G and beyond services, running on an open 5G SA testbed, can enhance vessel teleoperation. The proposed solution dynamically adjusts network configurations, allowing lower-quality camera feeds while the vessel operates autonomously and higher-quality feeds when it enters the teleoperation zone. The practical application and benefits are illustrated through visual representations within the testbed environment.
Tools for predicting COVID-19 outcomes enable personalized healthcare, potentially easing the disease burden. This collaborative study by 15 institutions across Europe aimed to develop a machine learning model for predicting the risk of in-hospital mortality post-SARS-CoV-2 infection. Blood samples and clinical data from 1286 COVID-19 patients collected from 2020 to 2023 across four cohorts in Europe and Canada were analyzed, with 2906 long non-coding RNAs profiled using targeted sequencing. From a discovery cohort combining three European cohorts and 804 patients, age and the long non-coding RNA LEF1-AS1 were identified as predictive features, yielding an AUC of 0.83 (95% CI 0.82–0.84) and a balanced accuracy of 0.78 (95% CI 0.77–0.79) with a feedforward neural network classifier. Validation in an independent Canadian cohort of 482 patients showed consistent performance. Cox regression analysis indicated that higher levels of LEF1-AS1 correlated with reduced mortality risk (age-adjusted hazard ratio 0.54, 95% CI 0.40–0.74). Quantitative PCR confirmed that LEF1-AS1 can be readily measured in hospital settings. Here, we demonstrate a promising predictive model for enhancing COVID-19 patient management.
Many deep-learning computer vision (CV) systems analyse objects not previously observed by the system. Such tasks can, however, be simplified if the objects are marked beforehand. A straightforward marking method is to print 2D symbols and attach them to the objects. The choice of symbols can affect the performance of the CV system, as similar symbols may require extended training time and a larger training dataset. While it is easy to find symbols that a given neural network differentiates well, no effort has been made in the literature to generalise such findings, and it is not known whether symbols optimal for one network work just as well in another. We explored how transferable symbol selection is between networks. To this end, 30 sets of randomly selected and augmented symbols were classified by five neural networks. Each network was given the same training dataset and the same training time. Results were ranked and compared, which allowed the identification of networks that performed similarly, so that symbol selection can be generalised between them.
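One simple way to compare how similarly two networks rank a shared symbol set, in the spirit of the ranking comparison described above, is Spearman rank correlation. This is an illustrative sketch, not the paper's exact procedure; ties are not handled, for brevity.

```python
import numpy as np

def spearman_rho(scores_a, scores_b):
    """Spearman correlation between two per-symbol score vectors (no ties)."""
    ra = np.argsort(np.argsort(scores_a))  # rank of each symbol under network A
    rb = np.argsort(np.argsort(scores_b))  # rank under network B
    n = len(scores_a)
    d = ra - rb
    return 1.0 - 6.0 * np.sum(d * d) / (n * (n * n - 1))

# Two networks that agree on the symbol ordering give rho = 1.0.
net1 = np.array([0.9, 0.8, 0.7, 0.6, 0.5])   # per-symbol accuracies, network A
net2 = np.array([0.95, 0.85, 0.75, 0.65, 0.55])  # network B, same ordering
print(spearman_rho(net1, net2))  # → 1.0
```

A rho near 1 would suggest a symbol set chosen for one network transfers well to the other; a rho near 0 would suggest the selection must be repeated per network.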
Fully automated chatbots are increasingly being applied in the real-estate industry. Although they cannot completely replace interaction with real-estate agents, they can automate customer support, free human resources for qualitative tasks, accelerate operations, and improve business branding. In this paper, a chatbot for real estate is developed. The chatbot is able to engage clients in meaningful conversations in real time. It provides a 24/7 service and effectively reduces administrative costs. An overview of the architecture and infrastructure is presented, and the rule-matching algorithm is discussed in detail.
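A minimal sketch of one possible rule-matching step is shown below, assuming rules pair keyword sets with canned replies and the best-overlapping rule wins; the paper's actual rule format and scoring may differ, and the rules here are hypothetical.

```python
import re

# Hypothetical rules: keyword set -> canned reply.
RULES = [
    ({"price", "cost", "much"}, "Prices start at ..."),
    ({"viewing", "visit", "tour"}, "You can book a viewing ..."),
    ({"location", "where"}, "The property is located ..."),
]
FALLBACK = "Could you rephrase that? I can help with prices, viewings and locations."

def respond(message: str) -> str:
    tokens = set(re.findall(r"[a-z]+", message.lower()))
    # Pick the rule whose keyword set overlaps the message the most.
    best_reply, best_score = None, 0
    for keywords, reply in RULES:
        score = len(keywords & tokens)
        if score > best_score:
            best_reply, best_score = reply, score
    return best_reply if best_reply else FALLBACK

print(respond("How much does the apartment cost?"))
```

Production systems typically layer synonym expansion and intent classification on top of such a matcher, but the overlap-scoring core stays the same.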
In today’s technologically driven world, protecting ICTs (Information and Communication Technologies) is of great importance. Due to the amount of personal data they handle and the obligation of high transaction accuracy, financial institutions such as banks and insurance companies are even more sensitive to data protection. On the business side, ICT is fundamental for day-to-day operations, so investing in ICT is investing in business continuity, operations, and resilience. Integration of the ISO 27001:2013 and ISO 9001:2015 standards into an organization’s Information Security Management System (ISMS) and Quality Management System (QMS), respectively, further enhances the protection of ICT. It is also important for organizations to implement these standards as a useful baseline for further compliance efforts, such as the GDPR (General Data Protection Regulation). These standards provide a framework for continually improving management systems in critical areas, which is one more reason for their implementation.
ITIL is the most widely accepted framework for managing IT services. The Incident Management process, which is integrated into the ITIL framework, is responsible for logging, categorizing, prioritizing, and resolving all incidents. This paper describes in detail the implementation of the ITIL Incident Management process in a public institution in Bosnia and Herzegovina. After successful implementation, measurements were taken to check whether Incident Management helped the organization improve its performance. The paper concludes with the benefits of implementing ITIL Incident Management in any type of organization and outlines future work.
In the field of telecommunications and cloud communications, detecting accurately and in real time whether a human or an answering machine has answered an outbound call is of paramount importance. This problem is of particular significance during campaigns, as precise caller identification enhances service quality and efficiency and reduces costs. Despite its significance, the field remains inadequately explored in the existing literature. This paper presents an innovative approach to answering machine detection that leverages transfer learning through the YAMNet model for feature extraction. The YAMNet architecture facilitates the training of a recurrent-based classifier, enabling real-time processing of audio streams, as opposed to fixed-length recordings. The results demonstrate an accuracy of over 96% on the test set. Furthermore, we conduct an in-depth analysis of misclassified samples and reveal that an accuracy exceeding 98% can be achieved with the integration of a silence detection algorithm, such as the one provided by FFmpeg.
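The pipeline shape described above can be sketched as follows: stream audio in chunks, map each chunk to an embedding, and update a recurrent state that feeds a binary score. A random projection stands in for YAMNet's 1024-dimensional embeddings, and the recurrent weights here are untrained placeholders; the real system would call the pretrained model and a trained classifier.

```python
import numpy as np

SR = 16_000                      # YAMNet expects 16 kHz mono audio
FRAME = int(0.96 * SR)           # YAMNet's patch length, 0.96 s

rng = np.random.default_rng(0)
W_embed = rng.normal(size=(FRAME, 1024)) / np.sqrt(FRAME)  # stand-in extractor
W_h = rng.normal(size=(1024, 64)) * 0.01                   # untrained recurrent weights
U_h = rng.normal(size=(64, 64)) * 0.01
w_out = rng.normal(size=64) * 0.01

def stream_score(audio: np.ndarray) -> float:
    """Return a score in (0, 1); >0.5 would mean 'answering machine'."""
    h = np.zeros(64)
    for start in range(0, len(audio) - FRAME + 1, FRAME):
        emb = audio[start:start + FRAME] @ W_embed   # chunk -> embedding
        h = np.tanh(emb @ W_h + h @ U_h)             # simple recurrent update
    return 1.0 / (1.0 + np.exp(-h @ w_out))          # sigmoid readout

audio = rng.normal(size=3 * SR)  # 3 s of synthetic "audio"
print(stream_score(audio))
```

Because the state updates chunk by chunk, a decision can be emitted as soon as the score crosses a confidence threshold, rather than after a fixed-length recording ends.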
ITIL stands for Information Technology Infrastructure Library and is the best-known framework for managing IT services. ITIL provides guidelines for the implementation of Service Desk solutions in the business environment of any company. The original scientific and professional contribution of this work is proof that the ITIL framework, through the implementation of a software solution that automates its business processes, can help every company in its daily work and improve its core and supporting business processes. Measurements will be made in a real public company in Bosnia and Herzegovina. The software solution itself is independently developed, with original code written solely for the purposes of this research.
Audio fingerprinting techniques have seen great advances in recent years, enabling accurate and fast audio retrieval even when the queried audio sample is highly degraded or was recorded in noisy conditions. As expected, most of the existing work is centered around music, with popular music identification services such as Apple’s Shazam or Google’s Now Playing designed for individual audio recognition on mobile devices. However, the spectral content of speech differs from that of music, necessitating modifications to current audio fingerprinting approaches. This paper offers fresh insights into adapting existing techniques to address the specialized challenge of speech retrieval in telecommunications and cloud communications platforms. The focus is on achieving rapid and accurate audio retrieval in batch processing instead of facilitating single requests, typically on a centralized server. Moreover, the paper demonstrates how this approach can be utilized to support audio clustering based on speech transcripts without undergoing actual speech-to-text conversion. This optimization enables significantly faster processing without the need for GPU computing, a requirement for real-time operation that is typically associated with state-of-the-art speech-to-text tools.
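The classic constellation-style fingerprint that such systems build on can be sketched as follows: find spectrogram peaks, then hash anchor/target peak pairs into compact (frequency, frequency, time-delta) triples. The parameters are illustrative, and one peak per frame keeps the sketch simple; real systems use 2-D local maxima and adapt the bands to speech rather than music.

```python
import numpy as np

def spectrogram(x, n_fft=512, hop=256):
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def fingerprint(x, fan_out=3):
    S = spectrogram(x)
    peaks = [(t, int(np.argmax(S[t]))) for t in range(len(S))]  # one peak/frame
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            hashes.add((f1, f2, t2 - t1))   # (freq1, freq2, time delta)
    return hashes

rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000) + 0.01 * rng.normal(size=16000)
print(len(fingerprint(x)))
```

Batch retrieval then reduces to intersecting hash sets (or counting inverted-index hits) across the corpus, which is why no GPU is needed: the heavy lifting is set membership, not neural inference.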
Deep learning techniques in computer vision (CV) tasks such as object detection, classification, and tracking can be facilitated using predefined markers on those objects. The choice of markers can affect the performance of the algorithms used for tracking, as an algorithm might swap similar markers more frequently and therefore require more training data and training time. Still, the issue of marker selection has not been explored in the literature and seems to be glossed over throughout the process of designing CV solutions. This research considered the effects of symbol selection for 2D-printed markers on a neural network’s performance. The study assessed over 250 ALT code symbols readily available on most consumer PCs and provided a go-to selection for effectively tracking n objects. To this end, a neural network was trained to classify all the symbols and their augmentations, after which the confusion matrix was analysed to extract the symbols that the network distinguished the most. The results showed that symbols selected in this way performed better than a random selection and a selection of common symbols. Furthermore, the methodology presented in this paper can easily be applied to a different set of symbols and different neural network architectures.
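The confusion-matrix step can be sketched as a greedy selection: starting from the best-recognized symbol, repeatedly add the candidate least confused with the symbols chosen so far. This is an illustrative reading of the approach, not the paper's exact extraction procedure.

```python
import numpy as np

def select_symbols(confusion: np.ndarray, n: int):
    """confusion[i, j]: how often symbol i was predicted as symbol j."""
    C = confusion + confusion.T          # symmetric pairwise confusability
    np.fill_diagonal(C, 0)
    chosen = [int(np.argmax(np.diag(confusion)))]  # start from best-recognized
    while len(chosen) < n:
        # For each candidate, cost = total confusion with already-chosen symbols.
        costs = C[:, chosen].sum(axis=1).astype(float)
        costs[chosen] = np.inf           # never re-pick a chosen symbol
        chosen.append(int(np.argmin(costs)))
    return chosen

# Toy 4-symbol confusion matrix: symbols 0 and 1 are often swapped.
M = np.array([[90, 8, 1, 1],
              [9, 88, 2, 1],
              [1, 1, 95, 3],
              [0, 2, 3, 95]])
print(select_symbols(M, 2))  # → [2, 0]
```

Greedy selection is not globally optimal, but for confusion matrices of a few hundred symbols it is fast and, as the abstract notes, already beats random and common-symbol baselines.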
In this paper, the multilevel image thresholding methods based on the particle swarm optimization algorithm and different chaotic inertia weight strategies are considered. The performance of each chaotic inertia weight strategy is evaluated using a set of standard test images. Different numbers of image classes are considered. In addition, the paper also considers the multilevel thresholding performance based on commonly employed linear decreasing inertia weight and random inertia weight. All considered multilevel thresholding methods are based on Kapur’s entropy. The experimental results demonstrate that the particle swarm optimization with chaotic inertia weight can be successfully used for multilevel image thresholding.
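The two core ingredients can be sketched as follows: Kapur's entropy as the fitness of a threshold vector, and a PSO loop whose inertia weight is driven by a chaotic logistic map. The swarm size, acceleration coefficients, and the mapping of the chaotic variable into a [0.4, 0.9] inertia band are typical defaults, not the paper's exact settings; the paper evaluates several chaotic strategies.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    p = hist / hist.sum()
    bounds = [0, *sorted(int(t) for t in thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return -np.inf                  # empty class: invalid split
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -(q * np.log(q)).sum()     # entropy of this class
    return total

def pso_thresholds(hist, k, iters=100, swarm=20, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(1, 255, size=(swarm, k))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([kapur_entropy(hist, x) for x in pos])
    g = pbest[np.argmax(pbest_f)].copy()
    z = 0.7                                 # chaotic variable
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)             # logistic map
        w = 0.4 + 0.5 * z                   # chaotic inertia weight in [0.4, 0.9]
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (g - pos)
        pos = np.clip(pos + vel, 1, 255)
        f = np.array([kapur_entropy(hist, x) for x in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[np.argmax(pbest_f)].copy()
    return np.sort(g.astype(int))

# Bimodal synthetic histogram: a single threshold should land between the modes.
data_rng = np.random.default_rng(42)
vals = np.r_[data_rng.normal(60, 12, 500), data_rng.normal(190, 12, 500)]
hist = np.bincount(np.clip(vals, 0, 255).astype(int), minlength=256)
print(pso_thresholds(hist, k=1))
```

On real images the histogram comes from the grey-level counts, and k is the number of thresholds; the method in the paper additionally compares several chaotic maps against linear-decreasing and random inertia weights.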
In this paper, a multilevel thresholding method for image segmentation based on Otsu’s between-class variance and a multi-swarm particle swarm optimization algorithm with a dynamic learning strategy is presented. The considered multilevel image thresholding method is assessed on various standard test images and for different numbers of thresholds. For each test image and considered number of thresholds, the mean and the standard deviation of Otsu’s objective function over a number of independent runs are evaluated. The experimental results show that this method can be successfully employed in multilevel image thresholding.
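The fitness function at the heart of this method, Otsu's between-class variance for a vector of thresholds, can be sketched as below; the multi-swarm PSO machinery and the dynamic learning strategy are omitted for brevity.

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    p = hist / hist.sum()
    levels = np.arange(len(p))
    mu_total = (p * levels).sum()            # global mean grey level
    bounds = [0, *sorted(int(t) for t in thresholds), len(p)]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                   # class probability mass
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2  # weighted class-mean spread
    return var

# For a histogram with modes at 50 and 200, a threshold between the modes
# scores higher than one inside a mode.
hist = np.bincount(np.r_[np.full(400, 50), np.full(600, 200)], minlength=256)
print(otsu_between_class_variance(hist, [125]) >
      otsu_between_class_variance(hist, [40]))  # → True
```

Maximizing this quantity over the threshold vector is equivalent to minimizing the within-class variance, which is what makes it a natural drop-in fitness for swarm-based optimizers.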
This study scrutinizes five years of Sarajevo’s Air Quality Index (AQI) data using diverse machine learning models (Fourier autoregressive integrated moving average (Fourier ARIMA), Prophet, and long short-term memory (LSTM) networks) to forecast AQI levels. Focusing on various prediction horizons, we evaluate model performance and identify optimal strategies for different temporal granularities. Our research unveils subtle insights into each model’s efficacy, shedding light on their strengths and limitations in predicting AQI across varied timeframes. This research presents a robust framework for automatic optimization of AQI predictions, emphasizing the influence of temporal granularity on prediction accuracy and automatically selecting the most efficient models and parameters. These insights hold significant implications for data-driven decision-making in urban air quality control, paving the way for proactive and targeted interventions to improve air quality in Sarajevo and similar urban environments.
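The automatic selection logic can be sketched as walk-forward evaluation over held-out windows, picking whichever forecaster has the lowest mean absolute error for a given horizon. Simple persistence and seasonal-naive baselines stand in here for the study's ARIMA, Prophet, and LSTM models; the fold count and horizon are illustrative.

```python
import numpy as np

def persistence(history, horizon):                 # "last value repeats"
    return np.full(horizon, history[-1])

def seasonal_naive(history, horizon, period=24):   # "same hour yesterday"
    return np.array([history[-period + (h % period)] for h in range(horizon)])

MODELS = {"persistence": persistence, "seasonal_naive": seasonal_naive}

def select_model(series, horizon, n_folds=5):
    """Walk-forward evaluation: return the model name with lowest mean MAE."""
    errors = {name: [] for name in MODELS}
    for fold in range(n_folds):
        split = len(series) - (n_folds - fold) * horizon
        train, test = series[:split], series[split:split + horizon]
        for name, model in MODELS.items():
            errors[name].append(np.mean(np.abs(model(train, horizon) - test)))
    return min(errors, key=lambda name: np.mean(errors[name]))

# Hourly series with a strong daily cycle: the seasonal model should win.
t = np.arange(24 * 30)
series = 50 + 20 * np.sin(2 * np.pi * t / 24)
print(select_model(series, horizon=24))  # → seasonal_naive
```

Repeating this selection per horizon (hourly, daily, weekly) is what lets temporal granularity drive the choice of model, as the study emphasizes.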
Over the last few years, remarkable progress has been made in the field of congenital heart diseases. Improvements in diagnostic modalities, especially imaging, in surgical and interventional techniques, and in postoperative therapy and care have contributed to a significant reduction in mortality and morbidity. One of the most important applications of medical imaging techniques in children is the detection and treatment of congenital heart anomalies. The objective of this article is to show the importance of ultrasonography in the detection of congenital heart diseases in children. The descriptive study, on a representative sample, was conducted at the Pediatric Clinic, UKCS, on children with simple and complex congenital heart diseases. In our study, 166 children were observed: 148 children (77 boys, 71 girls) with simple congenital heart diseases and 18 children (8 boys, 10 girls) with complex congenital heart diseases. Of the total number of observed children, 115 had a surgical correction: 97 children with simple congenital heart diseases (45 boys, 52 girls) and 18 children with complex congenital heart diseases (8 boys, 10 girls). The number of children monitored through the Cardiac Counseling Center who did not undergo surgical correction was 51, all with simple congenital heart diseases. Of the observed children who came regularly for follow-ups, 28 had changes on the ECG and 138 had no changes; 93 were surgically treated and 73 were on conservative therapy. Based on the results of the research, we conclude that ultrasonography is an important method in the detection and treatment of congenital heart diseases.