Internet of Things (IoT) devices are increasingly being deployed in critical applications, such as eHealth systems, enabled by advancements in 5G technology, which offers more than 100 Mbps of throughput, less than 5 ms of latency, and 99.999% reliability. However, to overcome computing and security limitations, IoT devices rely on cloud-based solutions to outsource data processing. This dependency introduces significant security concerns, as sensitive data must be transmitted over the network and processed in external environments, increasing the risk of interception, unauthorized access, and data breaches. To mitigate these security risks, within the scope of the MOZAIK project, we deploy Network Slicing to ensure end-to-end inter-slice and intra-slice isolation across all network domains, i.e., 5G Core (5GC), Transport Network (TN), and Radio Access Network (RAN). This synergy across the entire network infrastructure isolates the IoT data flows from the moment the data is generated until it reaches the cloud, safeguarding sensitive data during transmission. The results of our real-life experiments demonstrate that our proof of concept provides robust isolation between slices, effectively addressing the security concerns of IoT devices and enhancing the reliability and security of IoT applications. Additionally, we cover aspects of secure data storage and secure data processing addressed in the MOZAIK project.
Accurate channel estimation is critical for high-performance Orthogonal Frequency-Division Multiplexing (OFDM) systems such as 5G New Radio, particularly under low signal-to-noise ratio and stringent latency constraints. This letter presents HELENA, a compact deep learning model that combines a lightweight convolutional backbone with two efficient attention mechanisms: patch-wise multi-head self-attention for capturing global dependencies and a squeeze-and-excitation block for local feature refinement. Compared to CEViT, a state-of-the-art vision-transformer-based estimator, HELENA reduces inference time by 45.0% (0.175 ms vs. 0.318 ms), achieves comparable accuracy (−16.78 dB vs. −17.30 dB), and requires 8× fewer parameters (0.11M vs. 0.88M), demonstrating its suitability for low-latency, real-time deployment.
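The squeeze-and-excitation block mentioned in this abstract is simple enough to sketch directly. The NumPy snippet below is an illustrative forward pass only, with randomly drawn weights and dimensions chosen for readability; it is not HELENA's actual architecture or parameters. It shows the three steps the mechanism is built from: squeeze (global average pooling per channel), excitation (a bottleneck MLP ending in a sigmoid gate), and channel-wise re-scaling.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-excitation over a (channels, length) feature map.

    feat: (C, L) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    bottleneck weights (reduction ratio r). Weights here are random
    stand-ins, not trained parameters.
    """
    # Squeeze: global average pooling per channel -> (C,)
    z = feat.mean(axis=1)
    # Excitation: bottleneck MLP (ReLU, then sigmoid gate) -> (C,) in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Re-scale: every channel is weighted by its learned gate
    return feat * s[:, None]

rng = np.random.default_rng(0)
C, L, r = 8, 16, 2
feat = rng.standard_normal((C, L))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = squeeze_excite(feat, w1, w2)
print(out.shape)  # (8, 16)
```

The gate values stay in (0, 1), so the block can only attenuate or pass channels, which is what makes it a cheap form of channel attention compared to full self-attention.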
Multiple visions of 6G networks position Artificial Intelligence (AI) as a central, native element. When 6G systems are deployed at a large scale, end-to-end AI-based solutions will necessarily have to encompass both the radio and the fiber-optic domain. This paper introduces the Decentralized Multi-Party, Multi-Network AI (DMMAI) framework for integrating AI into 6G networks deployed at scale. DMMAI harmonizes AI-driven controls across diverse network platforms and thus facilitates networks that autonomously configure, monitor, and repair themselves. This is particularly crucial at the network edge, where advanced applications meet heightened functionality and security demands. The radio/optical integration is vital due to the current compartmentalization of AI research within these domains, which lacks a comprehensive understanding of their interaction. Our approach explores multi-network orchestration and AI control integration, filling a critical gap in standardized frameworks for AI-driven coordination in 6G networks. The DMMAI framework is a step towards a global standard for AI in 6G, aiming to establish reference use cases, data and model management methods, and benchmarking platforms for future AI/ML solutions.
The transition from 5G to 6G networks will catalyze the development of advanced 6G Applications (6G Apps) with enhanced network programmability and intelligence, providing vertical industries and Communication Service Providers (CSPs) with new opportunities to optimize their operations. This article explores the future of 6G Apps tailored to verticals in the 6G era, highlighting their role as middleware that abstracts network complexities and exposes Application Programming Interfaces (APIs) to enable dynamic interaction and real-time adaptation. Key enablers such as network exposure, Artificial Intelligence (AI), and edge computing are studied in the context of optimizing operations across verticals, improving Quality of Service (QoS), and fostering innovation. A case study on teleoperated vehicles exemplifies the real-world applicability of these technological enablers for 6G Apps. Furthermore, this article offers insights and explores new research opportunities for 6G Apps tailored to verticals to evolve in the 6G era, while addressing key challenges in deploying these applications in real-world commercial networks as a service.
This letter proposes a multi-stream selection framework for Cell-Free Multiple-Input Multiple-Output (CF-MIMO) networks. Partially coherent transmission is considered by clustering Access Points (APs) into phase-aligned clusters to address the challenges of phase misalignment and inter-cluster interference. A novel stream selection algorithm is developed to dynamically allocate multiple streams to each multi-antenna User Equipment (UE), ensuring that the system optimizes the sum rate while minimizing inter-cluster and inter-stream interference. Numerical results validate the effectiveness of the proposed method in enhancing spectral efficiency and fairness in distributed CF-MIMO networks.
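To make the stream-selection idea concrete, the following NumPy sketch implements a generic greedy heuristic: streams are added one at a time as long as the sum rate, computed with mutual interference among the selected set, keeps improving. This is a textbook-style illustration under a simplified scalar-gain interference model, not the letter's actual algorithm, and the gain matrix below is synthetic.

```python
import numpy as np

def greedy_stream_selection(gains, noise=1.0):
    """Greedily select streams to maximize the sum rate.

    gains: (S, S) matrix; gains[i, i] is stream i's desired-signal power
    and gains[i, j] (j != i) the interference stream j causes to stream i.
    Returns the selected stream indices and the achieved sum rate (bit/s/Hz).
    """
    S = gains.shape[0]
    selected, best_rate = [], 0.0
    for _ in range(S):
        best_cand, best_cand_rate = None, best_rate
        for c in range(S):
            if c in selected:
                continue
            trial = selected + [c]
            # Sum rate of the trial set under mutual interference
            rate = 0.0
            for i in trial:
                interf = sum(gains[i, j] for j in trial if j != i)
                rate += np.log2(1.0 + gains[i, i] / (noise + interf))
            if rate > best_cand_rate:
                best_cand, best_cand_rate = c, rate
        if best_cand is None:  # no remaining stream improves the sum rate
            break
        selected.append(best_cand)
        best_rate = best_cand_rate
    return selected, best_rate

rng = np.random.default_rng(1)
g = rng.uniform(0.0, 0.5, size=(4, 4))          # weak cross-stream interference
np.fill_diagonal(g, [8.0, 6.0, 0.1, 5.0])        # per-stream signal powers
streams, rate = greedy_stream_selection(g)
print(streams, round(rate, 2))
```

Because each greedy step only accepts a stream that strictly improves the objective, the final sum rate is never worse than serving the single best stream alone, while weak or heavily interfered streams are naturally left out.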
5G Standalone (SA) networks introduce the concept of Network Slicing, which enables a range of new applications, such as enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communication (URLLC), and massive Machine-Type Communications (mMTC). However, despite the promising potential of 5G SA networks, real-world deployments have revealed significant limitations, particularly in terms of signal coverage, resulting in performance degradation for eMBB, URLLC, and mMTC services. To mitigate these challenges and reduce the costs associated with deploying new infrastructure, Network Sharing among multiple operators has emerged as a cost-effective solution. While the 3rd Generation Partnership Project (3GPP) introduced Network Sharing in 5G Release 15 and added an Indirect Network Sharing configuration in Release 19, real-life implementation remains limited due to immature mechanisms and the lack of automated systems that allow Neutral Host providers to easily onboard new operators and dynamically allocate network resources to meet specific network requirements. In this paper, we explore the application of Network Slicing as a mechanism to deploy Network Sharing among multiple operators, presenting a 5G SA Indirect Network Sharing architecture as a proof of concept (PoC). Through our experiment, performed on a real-world, open-source testbed based on O-RAN principles, we demonstrate how, by applying Network Slicing technology, Neutral Host providers can effectively deploy resource isolation and enable collaboration in a multi-operator environment while guaranteeing service quality to their users.
The real-world deployments of 5G SA networks have highlighted significant challenges, particularly related to signal coverage, leading to performance degradation for enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communication (URLLC), and massive Machine-Type Communications (mMTC). To address these challenges and minimize the costs of new infrastructure deployment, Network Sharing among multiple operators has become a viable, cost-effective solution. The 3rd Generation Partnership Project (3GPP) began exploring network sharing in 5G with Release 15, expanding it with an Indirect Network Sharing configuration in Release 19. In this work, we present an Indirect Network Sharing approach that utilizes Network Slicing to create multiple isolated virtual networks on a single physical infrastructure, ensuring resource isolation and efficient management in a multi-operator environment. Our demonstration illustrates how a third-party entity can effectively manage network resources, maintaining isolation and performance quality across different network domains operated by various providers.
The increasing demand for high-quality and efficient Channel Estimation (CE) in 5G New Radio (5G-NR) systems has prompted the exploration of advanced Deep Learning (DL) techniques. While traditional methods, such as Linear Interpolation (LI) and Least Squares (LS), provide reasonable accuracy and are practical for real-time physical layer processing, recent DL-based CE approaches have primarily focused on accuracy, often without evidence of real-time capabilities. In this paper, we present a comprehensive evaluation of DL-based Super-Resolution (SR) methods for CE, comparing models such as the Super-Resolution Convolutional Neural Network (SRCNN), ChannelNet, and Enhanced Deep Super-Resolution (EDSR) in both 1D and 2D convolutional architectures. We optimize these models using NVIDIA TensorRT to reduce computational complexity and latency. Our results show that the optimized 1D-EDSR model achieves the best performance with a Mean Squared Error (MSE) of 0.0126, outperforming all other models in terms of accuracy. However, the optimized 1D-EDSR model fails to meet real-time constraints due to additional computational overhead (0.6798 ms/sample). In contrast, the 1D-SRCNN model offers a balanced trade-off between MSE (0.01738) and inference time (0.0866 ms/sample), achieving a 40% lower MSE than LS (0.0288) while maintaining the best energy efficiency (1.48 mJ/sample).
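The SRCNN family compared above follows a fixed three-stage pattern: patch/feature extraction, non-linear mapping, and reconstruction, each a convolution. The NumPy sketch below shows a 1D forward pass of that pattern with random weights and arbitrary layer sizes (8 feature channels, kernel sizes 9/1/5); it is an illustration of the architecture's shape, not the trained models evaluated in the paper.

```python
import numpy as np

def conv1d(x, w, b):
    """'Same'-padded 1D convolution: x (C_in, L), w (C_out, C_in, K), b (C_out,)."""
    c_out, c_in, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    L = x.shape[1]
    out = np.zeros((c_out, L))
    for o in range(c_out):
        for i in range(c_in):
            for t in range(L):
                out[o, t] += np.dot(xp[i, t:t + k], w[o, i])
        out[o] += b[o]
    return out

def srcnn_1d(x, params):
    """SRCNN-style pipeline: feature extraction -> mapping -> reconstruction."""
    (w1, b1), (w2, b2), (w3, b3) = params
    h = np.maximum(conv1d(x, w1, b1), 0.0)   # patch/feature extraction + ReLU
    h = np.maximum(conv1d(h, w2, b2), 0.0)   # non-linear mapping + ReLU
    return conv1d(h, w3, b3)                 # reconstruction (linear)

rng = np.random.default_rng(0)
def w(shape):  # random stand-in weights, scaled small
    return rng.standard_normal(shape) * 0.1

params = [(w((8, 1, 9)), np.zeros(8)),   # 1 input channel -> 8 features, K=9
          (w((8, 8, 1)), np.zeros(8)),   # 1x1 mapping layer
          (w((1, 8, 5)), np.zeros(1))]   # back to 1 channel, K=5
x = rng.standard_normal((1, 64))          # e.g. an interpolated LS estimate
y = srcnn_1d(x, params)
print(y.shape)  # (1, 64)
```

The 'same' padding keeps the output the length of the input, which is why such models can refine a coarse LS/interpolated channel estimate in place; the 1D variant's small kernels are also what keeps its inference time and energy per sample low relative to 2D models.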
On the threshold of a new technological era, Sixth Generation (6G) networks promise to revolutionize global connectivity, bringing mobile communications to data speeds in the terabits-per-second range and ultra-low latency. These networks will enhance the user experience and enable a wide range of advanced applications and emerging services. Artificial Intelligence (AI)-powered network functions and services, also known as Network Intelligence Functions (NIFs) and Network Intelligence Services (NISs), are essential to achieving this vision. In this study, we present the design and development of an end-to-end framework for orchestrating AI-based functions. Utilizing Kubernetes (K8s) and Prefect, we showcase its implementation through an AI-driven Traffic Classification (TC) use case. Our results confirm the feasibility of the proposed framework, offering valuable insights into lifecycle management design, such as data collection and decision-making, and into critical performance metrics, including deployment time and model performance in terms of accuracy and inference time across three different Machine Learning (ML)-based TC models.
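The lifecycle stages this framework manages (data collection, inference/decision-making, and metric reporting) can be sketched as a plain-Python pipeline. Everything below is a hypothetical stand-in: the flow records, the rule-based "classifier", and the function names are invented for illustration and do not reflect the paper's actual TC models or its Prefect/Kubernetes deployment.

```python
import time

def collect_data():
    """Stand-in for the data-collection stage: synthetic flow records.

    Each record is ((packet_size_bytes, inter_arrival_ms), true_class).
    """
    return [((1400, 2.0), "video"), ((80, 0.5), "voip"), ((600, 10.0), "web")]

def classify(features):
    """Toy rule-based classifier standing in for an ML-based TC model."""
    size, gap = features
    if size > 1000:
        return "video"
    if gap < 1.0:
        return "voip"
    return "web"

def run_pipeline():
    """Chain the lifecycle stages and report the metrics the framework tracks."""
    t0 = time.perf_counter()
    data = collect_data()
    preds = [(classify(f), label) for f, label in data]
    accuracy = sum(p == t for p, t in preds) / len(preds)
    elapsed_ms = (time.perf_counter() - t0) * 1e3
    return accuracy, elapsed_ms

acc, ms = run_pipeline()
print(f"accuracy={acc:.2f}, pipeline time={ms:.3f} ms")
```

In the framework itself, each stage would be a separately deployed task (a Prefect task running on Kubernetes), which is what makes deployment time and per-model inference time measurable as first-class lifecycle metrics rather than by ad-hoc timing as done here.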
5G Standalone (SA) networks introduce a range of new applications, including enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communication (URLLC), and massive Machine-Type Communications (mMTC). Each of these applications has distinct network requirements, which current commercial network architectures, such as 4G and 5G Non-Standalone (NSA), struggle to meet simultaneously due to their one-size-fits-all design. The 5G SA architecture addresses this challenge through Network Slicing, creating multiple isolated virtual networks on top of a single physical infrastructure. Isolation between slices is crucial for performance, security, and reliability. Each slice owns virtual resources, based on the physical resources (e.g., CPU, memory, antennas, and network interfaces) shared by the overall infrastructure. To deploy Network Slicing, it is essential to understand the concept of isolation. The 3rd Generation Partnership Project (3GPP) is standardizing security for Network Slicing, focusing on authentication, authorization, and slice management. However, the standards do not clearly define the meaning of isolation and its implementation in the infrastructure layer. In this paper, we define and showcase a real-life Proof of Concept (PoC) that guarantees isolation between slices in 5G SA networks for each network domain, i.e., Radio Access Network (RAN), Transport Network (TN), and 5G Core (5GC) network. Furthermore, we describe the 5G SA architecture of the PoC, explaining the isolation concepts within the Network Slicing framework, how to implement isolation in each network domain, and how to evaluate it.
5G Standalone (SA) networks introduce a range of new applications, including enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communication (URLLC), and massive Machine-Type Communications (mMTC). Each of these applications has distinct network requirements, which current commercial network architectures, such as 4G and 5G Non-Standalone (NSA), struggle to meet simultaneously due to their one-size-fits-all design. The 5G SA architecture addresses this challenge through Network Slicing, creating multiple isolated virtual networks on top of a single physical infrastructure. Isolation between slices is crucial for performance, security, and reliability. Each slice owns virtual resources, based on the physical resources (e.g., CPU, memory, antennas, and network interfaces) shared by the overall infrastructure. In this demo, we define and showcase a real-life Proof of Concept (PoC) that enables Network Slicing while guaranteeing isolation between slices in 5G SA networks, for each network domain, i.e., Radio Access Network (RAN), Transport Network (TN), and 5G Core (5GC) network.
The integration of vehicular communications, 5G mobile networks, and edge computing represents a significant shift in intelligent transportation. Key components of Intelligent Transportation Systems, such as Vehicle-to-Vehicle and Vehicle-to-Infrastructure communications, are essential for this transformation. The introduction of 5G improves connectivity, while edge computing brings processing capabilities closer to data sources. This combination has the potential to dramatically enhance transportation efficiency and safety. We focus on developing a sustainable Vehicle-to-Everything (V2X) framework based on experimentation in the Smart Highway testbed, located in Antwerp, with an emphasis on protecting Vulnerable Road Users (VRUs). This study explores the interaction between vehicular communication and edge computing within a 5G network, focusing on the varying distances between On-Board Units (OBUs) and Roadside Units (RSUs). The framework applications involve the development of a VRU Safety Mobile Application (SMA) and a Collision Prediction Edge Application (CPEA). Additionally, the research addresses sustainability by analyzing energy consumption in the context of the Central Processing Unit (CPU) load at the RSU using detailed real-world experiments and simulations. The findings indicate that energy consumption remains stable at shorter distances but shows increased variability at longer ranges.
TrialsNet is a project dedicated to enhancing European urban ecosystems through a variety of innovative use cases in domains including Security and Safety, Infrastructure, and Transportation. These use cases are being implemented across different clusters in Italy, Spain, Greece, and Romania, involving real users. This paper provides an overview of the diverse use cases, and of the corresponding network solutions, which leverage advanced functionalities like dynamic slicing management, NFV, MEC, AI/ML, and more. The project aims to identify network limitations, optimize infrastructure, and define new requirements for next-generation mobile networks. Ultimately, TrialsNet seeks to improve urban livability by driving advancements across multiple domains.