The exponential growth of user-generated video content necessitates efficient summarization systems for improved accessibility, retrieval, and analysis. This study presents and benchmarks a multimodal video summarization framework that classifies segments as informative or non-informative using audio, visual, and fused features. Sixty hours of annotated video across ten diverse categories were analyzed. Audio features were extracted with pyAudioAnalysis, while visual features (colour histograms, optical flow, object detection, facial recognition) were derived using OpenCV. Six supervised classifiers—Naive Bayes, K-Nearest Neighbors, Logistic Regression, Decision Tree, Random Forest, and XGBoost—were evaluated, with hyperparameters optimized via grid search. Temporal coherence was enhanced using median filtering. Random Forest achieved the best performance, with 74% AUC on fused features and a 3% F1-score gain after post-processing. Spectral flux, grayscale histograms, and optical flow emerged as key discriminative features. The best model was deployed as a practical web service using TensorFlow and Flask, integrating informative segment detection with subtitle generation via beam search to ensure coherence and coverage. System-level evaluation demonstrated low latency and efficient resource utilization under load. Overall, the results confirm the strength of multimodal fusion and ensemble learning for video summarization and highlight their potential for real-world applications in surveillance, digital archiving, and online education.
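A minimal sketch of the decision stage described above is shown below: a grid-searched Random Forest on fused audio-visual feature vectors, followed by median filtering of the per-segment labels to enforce temporal coherence. The feature matrices, label array, and grid values are hypothetical placeholders, not the study's actual data or hyperparameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from scipy.signal import medfilt

# Hypothetical pre-extracted features: one row per video segment.
X_audio = np.random.rand(1000, 34)    # e.g. pyAudioAnalysis mid-term features
X_visual = np.random.rand(1000, 64)   # e.g. histogram and optical-flow statistics
y = np.random.randint(0, 2, 1000)     # 1 = informative, 0 = non-informative

# Early fusion: concatenate audio and visual feature vectors.
X_fused = np.hstack([X_audio, X_visual])
X_tr, X_te, y_tr, y_te = train_test_split(X_fused, y, test_size=0.3, shuffle=False)

# Grid search over a small Random Forest hyperparameter grid, scored by AUC.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    scoring="roc_auc", cv=5,
)
grid.fit(X_tr, y_tr)

# Median filtering smooths isolated label flips between neighbouring segments.
raw_pred = grid.predict(X_te)
smoothed = medfilt(raw_pred, kernel_size=5)
```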
This work examines the use of blockchain technology within the telecommunications sector, emphasizing enhancements in security, privacy, and efficiency of data management. The “TelecomSecurity” smart contract demonstrates blockchain’s features of decentralization and immutability, enabling robust user data protection, transparent identity management, and process automation. The paper focuses on protection mechanisms and resource optimization, showcasing detailed performance and gas-consumption metrics compared to traditional environments such as Python and Flask. Additionally, it includes an analysis of the study “Blockchain technology empowers telecom network operation” by Dinh C. Nguyen, Pubudu N. Pathirana, and Ming Ding, published in IEEE Communications Magazine in 2020, to discuss blockchain’s potential to enhance operations within telecom networks, especially when integrating 5G technologies. The research establishes parallels between theoretical insights and practical findings, underscoring blockchain’s relevance and use cases in real-world telecom scenarios. It also discusses potential applications in 5G networks and IoT devices, positioning blockchain as a transformative technology for the digital age, enhancing security, lowering costs, and improving operational efficiency. More specifically, this study explores how blockchain-based decentralized user management and smart contract automation can enhance telecom service agreements, reducing reliance on centralized authorities while improving transparency and operational efficiency.
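As a hedged illustration of how a contract of this kind might be exercised and its gas consumption measured from Python, the sketch below uses web3.py (v6 API) against a local development node; the node URL, contract address, ABI, and the registerUser function are assumptions standing in for the “TelecomSecurity” contract, whose real interface is not reproduced here.

```python
from web3 import Web3

# Connect to a local development node (e.g. Ganache or Hardhat); URL is an assumption.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
assert w3.is_connected()

# Hypothetical deployed contract; address and ABI stand in for "TelecomSecurity".
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"
CONTRACT_ABI = [{
    "name": "registerUser", "type": "function",
    "inputs": [{"name": "userId", "type": "string"}],
    "outputs": [], "stateMutability": "nonpayable",
}]
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

# Call a hypothetical identity-registration function and record the gas used.
tx_hash = contract.functions.registerUser("subscriber-42").transact(
    {"from": w3.eth.accounts[0]}
)
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("gas used:", receipt.gasUsed)
```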
Software developers often need guidance in choosing the most suitable frameworks and programming languages for their needs. In this study, the impact of the programming language on the performance of four popular backend frameworks (Spring Boot, ASP.NET Core, Express.js, and Django) is examined using tools such as Apache JMeter and Docker under uniform conditions. Using metrics such as latency, throughput, Docker build time, and deployment time, the experiments revealed that ASP.NET Core exhibited the lowest latency (1 ms for HTTP POST and GET), while Django achieved the shortest deployment time (0.31 seconds). Spring Boot and Express.js occupied the middle ground, balancing flexibility and performance. Besides valuable insights into the efficiency of each framework in real-world applications, this paper also includes a review of similar studies, complementing them with additional perspectives through concrete measurements and analyses under realistic conditions. This study contributes to a better understanding of architectural decisions and their relationship to performance, while paving the way for further research, such as analyzing more complex applications and energy efficiency.
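The latency and throughput runs were driven by Apache JMeter, but the build- and deployment-time measurements can be approximated with a short script such as the sketch below, which times docker build and docker run and probes a single endpoint; the image name, port, and endpoint are illustrative assumptions rather than the paper's exact configuration.

```python
import statistics
import subprocess
import time
import urllib.request

IMAGE = "bench/backend-app"                    # placeholder image name
ENDPOINT = "http://localhost:8080/api/items"   # placeholder endpoint

def timed(cmd):
    """Run a shell command and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

build_time = timed(["docker", "build", "-t", IMAGE, "."])
deploy_time = timed(["docker", "run", "-d", "-p", "8080:8080", IMAGE])

# Simple GET-latency probe; the full latency/throughput runs used JMeter.
latencies = []
for _ in range(100):
    start = time.perf_counter()
    urllib.request.urlopen(ENDPOINT).read()
    latencies.append((time.perf_counter() - start) * 1000)  # milliseconds

print(f"build: {build_time:.2f} s  deploy: {deploy_time:.2f} s  "
      f"median GET latency: {statistics.median(latencies):.2f} ms")
```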
Embedded systems, particularly when integrated into the Internet of Things (IoT) landscape, are critical for projects requiring robust, energy-efficient interfaces to collect real-time data from the environment. As these systems become more complex, the need for dynamic reconfiguration, improved availability, and stability becomes increasingly important. This paper presents the design of a framework architecture that supports dynamic reconfiguration and “on-the-fly” code execution in IoT-enabled embedded systems, including a virtual machine capable of hot reloads, ensuring system availability even during configuration updates. A “hardware-in-the-loop” workflow manages communication between the embedded components, while low-level coding constraints are abstracted away through an additional layer, with examples such as MicroPython or Lua. The results demonstrate the VM’s ability to handle serialization and deserialization with minimal impact on system performance, even under high workloads, with a median serialization time of 160 microseconds and a median deserialization time of 964 microseconds. Both processes were fast and resource-efficient under normal conditions, supporting real-time updates with only occasional outliers, which suggests room for optimization. The results also highlight the advantages of VM-based firmware update methods, which outperform traditional approaches such as Serial and OTA (Over-the-Air, the ability to update or configure firmware, software, or devices via a wireless connection) updates by achieving lower latency and greater consistency. Despite these promising results, challenges such as occasional deserialization-time outliers and the need for optimization in memory management and network protocols remain for future work. This study also provides a comparative analysis of currently available commercial solutions, highlighting their strengths and weaknesses.
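To illustrate how serialization round-trip timings of the kind reported above can be collected, the sketch below times encoding and decoding of a small configuration payload and reports median microseconds; the payload structure and the use of JSON are assumptions for illustration only and do not reflect the VM's actual wire format.

```python
import json
import statistics
import time

# Hypothetical configuration payload of the kind a reconfigurable node might receive.
payload = {"node_id": 7, "sample_rate_hz": 50, "sensors": ["temp", "humidity", "co2"]}

ser_times, de_times = [], []
for _ in range(10_000):
    t0 = time.perf_counter()
    blob = json.dumps(payload).encode("utf-8")   # serialization
    t1 = time.perf_counter()
    json.loads(blob.decode("utf-8"))             # deserialization
    t2 = time.perf_counter()
    ser_times.append((t1 - t0) * 1e6)   # microseconds
    de_times.append((t2 - t1) * 1e6)

print(f"serialization median:   {statistics.median(ser_times):.0f} us")
print(f"deserialization median: {statistics.median(de_times):.0f} us")
```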
In the era of exponentially expanding data, particularly driven by the growth of social media, effective data management and query processing have become critical challenges in application development. Graph databases, such as Neo4j, JanusGraph, ArangoDB, and OrientDB, offer significant advantages for applications requiring intensive processing of interconnected data, including social networks and recommendation systems. In this work, we focus on Neo4j as a representative of graph databases and MySQL as a representative of relational SQL databases for clarity and precision in data representation. We begin by introducing fundamental optimization techniques specific to each type of database. Subsequently, we concentrate on an experimental and investigative analysis of query performance on Neo4j and MySQL databases using original datasets and the structures under consideration. The findings reveal that SQL databases perform better on simpler queries, whereas graph databases excel at handling complex structures with multiple relationships. Moreover, the complexity of composing queries becomes apparent in scenarios requiring table joins (or node and relationship manipulation in graph databases). We also evaluate related research in this area, which further demonstrates that effectively integrating graph and relational databases can lead to optimal data management solutions, while utilizing both types of databases may offer combined advantages depending on the application requirements.
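A minimal sketch of this kind of head-to-head query timing is given below, assuming a hypothetical social-network schema (a friendships join table in MySQL, and (:User)-[:FRIEND]-(:User) relationships in Neo4j); connection details, credentials, and queries are illustrative placeholders, not the original datasets.

```python
import time
import mysql.connector
from neo4j import GraphDatabase

# Hypothetical "friends of friends" query on a social-network-style schema.
SQL = """
SELECT DISTINCT f2.friend_id
FROM friendships f1
JOIN friendships f2 ON f1.friend_id = f2.user_id
WHERE f1.user_id = %s
"""
CYPHER = "MATCH (u:User {id: $uid})-[:FRIEND]-()-[:FRIEND]-(fof) RETURN DISTINCT fof.id"

def time_mysql(uid):
    conn = mysql.connector.connect(host="localhost", user="bench",
                                   password="bench", database="social")
    cur = conn.cursor()
    start = time.perf_counter()
    cur.execute(SQL, (uid,))
    cur.fetchall()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

def time_neo4j(uid):
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "bench"))
    with driver.session() as session:
        start = time.perf_counter()
        session.run(CYPHER, uid=uid).consume()
        elapsed = time.perf_counter() - start
    driver.close()
    return elapsed

print("MySQL:", time_mysql(1), "s   Neo4j:", time_neo4j(1), "s")
```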
Environmental, Social, and Governance (ESG) criteria have emerged as pivotal benchmarks for assessing corporate sustainability and ethical business practices. This study investigates the transformative role of small and medium-sized enterprises (SMEs) in advancing ESG practices, with a particular emphasis on countries aspiring to European Union membership. Employing a quantitative methodology through a survey questionnaire, the research analyzes the challenges and opportunities associated with ESG implementation. Data collected from 51 SMEs across the Balkan region reveal substantial benefits of ESG integration, notably in enhancing operational efficiency and market reputation, with transparency and strategic planning identified as critical drivers. However, SMEs face significant obstacles such as complex regulatory frameworks, limited access to financing, and inadequate training resources. The article proposes targeted strategies to strengthen SME capacity, emphasizing investment in education, technological solutions, and partnerships with key stakeholders. By adopting ESG standards, SMEs not only contribute to sustainable development but also bolster their competitiveness and resilience in a rapidly evolving global market.
Efficient and sustainable electrical grids are crucial for energy management in modern society and industry. Governments recognize this and prioritize energy management in their plans, alongside significant progress made in theory and practice over the years. The complexity of power systems determines the unique nature of power communication networks, and most research has focused on the dynamic nature of voltage stability, which has led to the need for dynamic models of power systems. Control strategies based on stability assessments have become essential for managing grid stability, diverging from traditional methods and often leveraging advanced computational techniques based on deep learning algorithms and neural networks. This way, researchers can develop predictive models capable of forecasting voltage stability and detecting potential instability events in real time, while neural networks can also optimize control strategies based on wide-area information and grid response, enabling more effective stability control measures, as well as detecting and classifying disturbances or faults in the grid. This paper explores the use of predictive models to assess smart grid stability, examining the benefits and risks and comparing results to determine the most effective approach.
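As a hedged illustration of the kind of predictive model discussed, the sketch below trains a small feed-forward neural network to classify grid states as stable or unstable; the synthetic features and labels are placeholders for an actual smart-grid dataset.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for numeric grid measurements; a real study would use a
# measured or simulated smart-grid dataset instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))
y = (X[:, :4].sum(axis=1) + 0.5 * rng.normal(size=5000) > 0).astype(int)  # 1 = unstable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Small fully connected network used as a stability classifier.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

print(classification_report(y_te, model.predict(X_te)))
```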
Deep learning techniques in computer vision (CV) tasks such as object detection, classification, and tracking can be facilitated using predefined markers on those objects. Selecting markers is a choice that can affect the performance of the tracking algorithm, as an algorithm may swap similar markers more frequently and, therefore, require more training data and training time. Still, the issue of marker selection has not been explored in the literature and tends to be glossed over in the process of designing CV solutions. This research considered the effects of symbol selection for 2D-printed markers on a neural network’s performance. The study assessed over 250 ALT code symbols readily available on most consumer PCs and provides a go-to selection for effectively tracking n objects. To this end, a neural network was trained to classify all the symbols and their augmentations, after which the confusion matrix was analysed to extract the symbols that the network distinguished best. The results showed that selecting symbols in this way performed better than random selection or the selection of common symbols. Furthermore, the methodology presented in this paper can easily be applied to a different set of symbols and different neural network architectures.
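The selection step can be sketched as follows: given the confusion matrix of a network trained on all candidate symbols, greedily pick the n symbols with the least mutual confusion. The matrix below is random stand-in data, and the greedy criterion is one plausible reading of the approach rather than the authors' exact procedure.

```python
import numpy as np

def select_symbols(confusion: np.ndarray, n: int) -> list[int]:
    """Greedily pick n class indices whose pairwise confusion is lowest."""
    k = confusion.shape[0]
    # Symmetric pairwise confusion, ignoring the correct-classification diagonal.
    pairwise = confusion + confusion.T
    np.fill_diagonal(pairwise, 0)
    # Start from the class least confused with all others overall.
    chosen = [int(np.argmin(pairwise.sum(axis=1)))]
    while len(chosen) < n:
        remaining = [c for c in range(k) if c not in chosen]
        # Add the candidate that adds the least confusion w.r.t. symbols already chosen.
        best = min(remaining, key=lambda c: pairwise[c, chosen].sum())
        chosen.append(best)
    return chosen

# Random stand-in for a 250x250 confusion matrix over ALT-code symbol classes.
rng = np.random.default_rng(0)
cm = rng.integers(0, 20, size=(250, 250))
print(select_symbols(cm, n=10))
```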
The number of loan requests is rapidly growing worldwide, representing a multi-billion-dollar business in the credit approval industry. Large data volumes extracted from banking transactions that represent customers’ behavior are available, but processing loan applications is a complex and time-consuming task for banking institutions. In 2022, over 20 million Americans had open loans, totaling USD 178 billion in debt, and over 20% of loan applications were rejected. Numerous statistical methods have been deployed to estimate loan risk, raising the question of whether machine learning techniques can better predict the potential risks. To study the machine learning paradigm in this sector, a mental health dataset and a loan approval dataset, presenting survey results from 1991 individuals, are used as inputs to experiment with the credit risk prediction ability of the chosen machine learning algorithms. Providing a comprehensive comparative analysis, this paper shows how the chosen machine learning algorithms can distinguish between normal and risky loan customers who might never pay their debts back. The results show that XGBoost achieves the highest accuracy of 84% in the first dataset, surpassing gradient boost (83%) and KNN (83%). In the second dataset, random forest achieved the highest accuracy of 85%, followed by decision tree and KNN with 83%. Alongside accuracy, the precision, recall, and overall performance of the algorithms were tested, and a confusion matrix analysis was performed, producing numerical results that emphasize the superior performance of XGBoost and random forest in the classification tasks on the first dataset, and XGBoost and decision tree on the second dataset. Researchers and practitioners can rely on these findings to inform their model selection process and enhance the accuracy and precision of their classification models.
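A minimal sketch of the comparative setup is shown below, assuming a generic tabular dataset with a binary risk label; the CSV file name and the "risky" column are placeholders, not the study's mental health or loan approval datasets.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from xgboost import XGBClassifier

# Placeholder file and target column; the study's datasets are not reproduced here.
df = pd.read_csv("loan_data.csv")
X, y = df.drop(columns=["risky"]), df["risky"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "Gradient boost": GradientBoostingClassifier(),
    "Random forest": RandomForestClassifier(),
    "Decision tree": DecisionTreeClassifier(),
    "KNN": KNeighborsClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```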
Air pollution is a major problem in developing countries and around the world, causing lung diseases such as asthma, chronic bronchitis, emphysema, and chronic obstructive pulmonary disease. Therefore, innovative methods and systems for predicting air pollution are needed to reduce such risks. Several Internet of Things (IoT) technologies have been developed to assess and monitor various air quality parameters. In the context of IoT, artificial intelligence is one of the main segments of smart cities, enabling the collection of large amounts of data to make recommendations, predict future events, and help make decisions. Big data, as part of artificial intelligence, greatly contributes to making further decisions, determining the necessary resources, and identifying critical locations thanks to the large amount of data it collects. This paper proposes a solution, with the integration of the Internet of Things (IoT), to predict pollution for any given day. The aim is to show how sensor-derived data in smart air pollution monitoring solutions can be used for intelligent pollution management. Data from the air pollution sensor is sent to the server via a .NET 6 REST API endpoint and placed in a SQL Server database together with additional weather data collected from a REST API for that part of the day; a dataset is then created through an ETL process in a Jupyter notebook. Linear regression algorithms are used to make predictions. By detecting the largest sources of air pollution, artificial intelligence solutions can proactively reduce pollution and thus improve health conditions and reduce health costs.
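Once the ETL step has produced a merged sensor-plus-weather dataset, the prediction step reduces to a few lines; the sketch below assumes a CSV exported from that process, with placeholder feature and target column names.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder CSV and columns; in the described system the data comes from the
# .NET 6 REST API / SQL Server pipeline and is prepared in a Jupyter notebook.
df = pd.read_csv("air_quality_merged.csv")
features = ["temperature", "humidity", "wind_speed", "pressure", "hour_of_day"]
X, y = df[features], df["pm2_5"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)

print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
print("example prediction:", model.predict(X_te.head(1)))
```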
The design process of an IoT (Internet of Things) network requires adequate knowledge of the various communication technologies that make the connection of IoT modules possible. Many important factors such as scalability, bandwidth, data rate (speed), coverage, power consumption, and security support need to be considered to answer the needs of an IoT application with regard to the implemented radio communication technology. This paper studies three major LPWAN (Low-Power Wide-Area Network) technologies that are currently leading the market of radio communication technologies. Focusing on Sigfox, LoRaWAN (Long-Range Wide-Area Network), and NB-IoT (Narrow-Band Internet of Things), this work gives the respective pros and cons of the mentioned technologies and a clear view of recent trends and effective choices of radio communication technologies for major smart IoT applications.
With the global transition to IPv6 (Internet Protocol version 6), IP (Internet Protocol) validation efficiency and IPv6 support from the perspective of network programming are gaining importance. As global computer networks grow in the era of IoT (Internet of Things), IP address validation is an inevitable process for assuring strong network privacy and security. The complexity of IP validation has increased due to the rather drastic change in the memory architecture needed for storing IPv6 addresses. Low-level programming languages like C/C++ are a great choice for handling memory and working with simple devices connected in an IoT network. This paper analyzes some user-defined and open-source implementations of IP validation code in the Boost.Asio and POCO C++ networking libraries, as well as the IP security support provided for general networking purposes and IoT. Considering a couple of sample codes, the paper concludes whether these C++ implementations answer the needs for flexibility and security in the upcoming era of IPv6-addressed computers.
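The validation behaviour examined in the C++ libraries can be paraphrased in a few lines of Python using the standard ipaddress module; this is a behavioural illustration only, not the Boost.Asio or POCO code under study.

```python
import ipaddress

def classify_ip(text: str) -> str:
    """Return 'IPv4', 'IPv6', or 'invalid' for a candidate address string."""
    try:
        addr = ipaddress.ip_address(text)
    except ValueError:
        return "invalid"
    return "IPv6" if addr.version == 6 else "IPv4"

# A handful of illustrative inputs, including malformed and IPv6 forms.
for candidate in ["192.168.0.1", "2001:db8::8a2e:370:7334", "256.1.1.1", "::1", "foo"]:
    print(f"{candidate:30} -> {classify_ip(candidate)}")
```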
With the emerging Internet of Things (IoT) technologies, the smart city paradigm has become a reality. Low-power wide-area network (LPWAN) wireless communication technologies are widely used for device connection in smart homes, smart lighting, metering, and so on. This work suggests a new approach to a smart parking solution using the benefits of narrowband Internet of Things (NB-IoT) technology. NB-IoT is an LPWAN technology dedicated to sensor communication within 5G mobile networks. This paper proposes the integration of NB-IoT into the core IoT platform, enabling sensor data to travel directly to the IoT radio stations for processing, after which it is forwarded to the user application programming interface (API). Showcasing the results of our research and experiments, this work suggests the ability of NB-IoT technology to support geolocation and navigation services, as well as payment and reservation services for vehicle parking, to make smart parking solutions smarter.
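A hedged sketch of the user-facing API layer is shown below: a minimal Flask service that accepts occupancy reports forwarded from the NB-IoT platform and exposes a free-spot query; the routes, payload fields, and in-memory store are illustrative assumptions, not the deployed system.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
spots = {}  # in-memory store: spot_id -> occupied flag (a real system would use a DB)

@app.post("/parking/report")
def report():
    # Occupancy report forwarded from the NB-IoT platform; field names are assumptions.
    data = request.get_json()
    spots[data["spot_id"]] = bool(data["occupied"])
    return jsonify(status="ok")

@app.get("/parking/free")
def free_spots():
    # Free-spot listing consumed by the navigation / reservation front end.
    return jsonify(free=[sid for sid, occupied in spots.items() if not occupied])

if __name__ == "__main__":
    app.run(port=5000)
```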
Distributed Ledger Technologies are one of the pillars of future technologies, projected to have a great impact on many aspects of our lives, including social, economic, legal, security, and many others. Bitcoin is still the most popular blockchain currency, but the opportunities to use Distributed Ledger Technologies are much broader, reaching beyond the financial applications that remain the best known and most popular. Besides blockchains, there are also other architectures of Distributed Ledger Technologies. This paper observes and analyses hashgraphs, a very strong alternative to blockchains that promises to outperform them, as well as tangles. The basis of their architecture and functionality is explained, and directions and a prognosis for further development are given. The paper’s main contribution is a comparison of hashgraph technology to its rival architectures, i.e. blockchains and tangles, considering different segments and different properties that define the quality of Distributed Ledgers.
Connected IoT devices, as well as the smartwatch market, are becoming more popular every year. The main mode of communication in IoT is the easy-to-use MQTT protocol, suitable for devices with limited resources and battery power. Tizen is used on platforms such as mobile devices, smartwatches, TVs, and even Linux kernel-based IoT devices. In this paper, we explain how the MQTT protocol and the Tizen operating system and its architecture work, and suggest one possible implementation of the MQTT protocol for smartwatches based on the Tizen operating system. We list the types of Tizen applications, develop a native application, and suggest possible future upgrades and applications in IoT.
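Although the application described is a native Tizen one, the publish/subscribe flow it relies on can be sketched in Python with paho-mqtt; the broker host and topic below are placeholders, and the snippet only mirrors the subscribe-and-react pattern a smartwatch client would implement.

```python
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # placeholder broker host
TOPIC = "watch/notifications"   # placeholder topic

def on_connect(client, userdata, flags, reason_code, properties):
    # Subscribe once the connection is acknowledged.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # On a watch this would update the UI; here we just print the payload.
    print(f"{msg.topic}: {msg.payload.decode()}")

# Assumes paho-mqtt >= 2.0 (explicit callback API version).
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```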