Zhang Ping, Xu Xiaodong, Dong Chen, Han Shujun, Wang Bizhu
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2002
As one of the critical technologies for the 6th generation mobile communication system (6G), artificial intelligence (AI) will provide complete automation for connecting the virtual and physical worlds. To construct the future ubiquitous intelligent network, people are beginning to rethink how mobile communication systems transmit and exploit intelligent information. This paper proposes a new communication paradigm, called the Intellicise communication system: model-driven semantic communication. The Intellicise communication system is built on top of the traditional communication system and innovatively adds a new feature dimension to traditional source coding, which enables the communication system to evolve from the traditional transmission of bits to the transmission of "models". Like the semantic base (Seb) in semantic communication, the model is regarded as a new feature obtained from joint source-channel coding. The sink node can reconstruct the original signal based on the received model and the encoded sequence. In addition, the performance evaluation metrics and implementation details of the Intellicise communication system are discussed. Finally, preliminary results of model-driven image transmission in the Intellicise communication system are presented.
Zhang Zezhong, Chen Mingzhe, Xu Jie, Cui Shuguang
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2003
Recent breakthroughs in artificial intelligence (AI) give rise to a plethora of intelligent applications and services based on machine learning algorithms such as deep neural networks (DNNs). With the proliferation of the Internet of things (IoT) and mobile edge computing, these applications are being pushed to the network edge, enabling a new paradigm termed edge intelligence. This drives the demand for decentralized implementation of learning algorithms over edge networks to distill intelligence from distributed data, and also calls for new communication-efficient designs in air interfaces that improve privacy by avoiding raw data exchange. This paper provides a comprehensive overview of edge intelligence, focusing in particular on two paradigms, edge learning and edge inference, as well as the corresponding communication-efficient solutions for their implementation in wireless systems. Several insightful theoretical results and design guidelines are also provided.
Liu Guangyi, Deng Juan, Zheng Qingbi, Li Gang, Sun Xin, Huang Yuhong
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2004
The application of artificial intelligence (AI) technology in 5th generation mobile communication system (5G) networks promotes the development of the mobile communication network and its application in vertical industries. However, the "patching" and "plug-in" application models have limited the effectiveness of AI applications. Meanwhile, the application of AI in all walks of life imposes requirements for new capabilities in the future network, such as distributed training, real-time collaborative inference, and local data processing, which call for a "native intelligence design" in future networks. This paper discusses the requirements of native intelligence in 6th generation mobile communication system (6G) networks from the perspectives of 5G intelligent network challenges and the "ubiquitous intelligence" vision of 6G, and analyzes the technical challenges of AI workflows across their lifecycle and of AI as a service (AIaaS) in the cloud network. The progress and deficiencies of current research on AI functional architecture in various industry organizations are summarized. An end-to-end functional architecture for native AI in the 6G network is proposed, along with its three key technical characteristics: quality of AI services (QoAIS) based AI service orchestration over the full lifecycle, deep integration of native AI computing and communication, and integration of native AI and the digital twin network. Directions for future research are also prospected.
Yao Shengshi, Wang Sixian, Dai Jincheng, Niu Kai, Xu Wenjun, Zhang Ping
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2005
The industrial Internet of things (IIoT) aims at connecting everything, which poses severe challenges to existing wireless communication. To handle the demand for massive access in future industrial networks, semantic information processing is integrated into communication systems so as to improve the effectiveness and efficiency of data transmission. The semantic paradigm is particularly suitable for the purpose-oriented information exchange schemes of industrial networks. To illustrate its applicability, typical industrial data are investigated, i.e., time series and images. Simulation results demonstrate the superiority of semantic information processing, which achieves a better rate-utility tradeoff than conventional signal processing.
Mao Sun, Zhang Yan
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2006
In the 6th generation mobile communication system (6G) era, a large number of delay-sensitive and computation-intensive applications impose great pressure on resource-constrained Internet of things (IoT) devices. Aerial edge computing is envisioned as a promising and cost-effective solution, especially in hostile environments without terrestrial infrastructure. Therefore, this paper focuses on integrating aerial edge computing into 6G to provide ubiquitous computing services for IoT devices. This paper first presents the layered network architecture of aerial edge computing for 6G. The benefits, potential applications, and design challenges are also discussed in detail. Next, several key techniques, such as unmanned aerial vehicle (UAV) deployment, operation mode, offloading mode, caching policy, and resource management, are highlighted to show how to integrate aerial edge computing into 6G. Then, a joint UAV deployment optimization and computation offloading method is designed to minimize the computing delay for a typical aerial edge computing network. Numerical results reveal the significant delay reduction of the proposed method compared with other benchmark methods. Finally, several open issues for aerial edge computing in 6G are elaborated to provide guidance for future research.
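To give a feel for joint UAV placement and offloading, the following toy sketch (not the paper's method; all positions, task sizes, and rates are made-up numbers) grid-searches a UAV hover position while each device independently chooses the cheaper of local computing and offloading:

```python
import math

# Hypothetical toy setup: device positions, task sizes (bits), CPU rates.
devices = [(0, 0), (8, 2), (3, 9), (10, 10)]
task_bits = [2e6, 4e6, 1e6, 3e6]
local_rate = 1e6          # bits/s processed locally at a device
uav_rate = 8e6            # bits/s processed at the UAV edge server
bandwidth_rate = 5e6      # nominal link rate at unit distance

def offload_delay(dev, bits, uav_pos):
    d = math.dist(dev, uav_pos) + 1.0      # +1 avoids division by zero
    tx = bits / (bandwidth_rate / d)       # transmission slows with distance
    return tx + bits / uav_rate            # transmission + edge computing

def total_delay(uav_pos):
    # each device picks the cheaper of local computing vs offloading
    return sum(min(b / local_rate, offload_delay(dev, b, uav_pos))
               for dev, b in zip(devices, task_bits))

# exhaustive grid search over candidate UAV hover positions
grid = [(x, y) for x in range(0, 11) for y in range(0, 11)]
best = min(grid, key=total_delay)
print(best, round(total_delay(best), 3))
```

The real problem couples placement, offloading decisions, and resource allocation, so the paper's optimization is far more involved; this only illustrates the delay tradeoff being minimized.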
Sun Junshuai, Zhu Xinghui, Xiao Yeqiu, Cheng Ke, Zhao Shuangrui
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2007
The flexibility of the media access control (MAC) layer has always been an important concern in the existing communication architecture. To meet the more stringent requirements under large-scale connections, the MAC layer structure needs to be optimized carefully. This paper proposes a new architecture of the MAC layer to optimize the complex communication backhaul link structure, which will increase the flexibility of the system and decrease the transmission delay. Moreover, an adaptive transmission time interval (TTI) bundling with self-healing scheme is proposed to further decrease the transmission delay and improve the quality of service (QoS). The simulation results show that the average transmission delay is greatly reduced with our proposed scheme. The bit error rate (BER) and the block error rate are also improved even if the channel changes drastically.
Sun Heng, Qiao Xiuquan
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2008
The rapid development of building information modelling (BIM) and its enabling technologies has attracted extensive attention in the field of architecture, engineering and construction (AEC). By combining BIM models with the real world, the potential of BIM can be further exploited with the help of augmented reality (AR) technology. However, a BIM model usually involves a huge amount of data. Given the limited computing capability of current mobile devices, these applications therefore suffer from significant performance problems, especially in model loading and rendering. To this end, an AR-based multi-user BIM collaboration system is proposed, which realises on-demand dynamic loading of the BIM model through a block-wise loading strategy of model transformation, thus solving the problem of model loading delay. In addition, dynamic rendering technology is adopted to solve the problem of rendering lag. Experimental results show that realising virtual-reality fusion and interaction for the BIM model, together with remote multi-user collaboration, can effectively improve work efficiency and intelligence in the engineering field.
Zhu Xiaorong, Jing Chuanfang, Shi Jindou, Wang Yong, Ho Chifong
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2009
Mobile edge computing (MEC) networks can provide a variety of services for different applications. End-to-end performance analysis of these services serves as a benchmark for the efficient planning of network resource allocation and routing strategies. In this paper, a performance analysis framework is proposed for end-to-end data-flows in MEC networks based on stochastic network calculus (SNC). Due to the random nature of routing in MEC networks, probability parameters are introduced into the proposed analysis model to capture this randomness in the derived expressions. Taking actual communication scenarios into consideration, the end-to-end performance of three network data-flows is analyzed, namely, voice over Internet protocol (VoIP), video, and file transfer protocol (FTP), all of which adopt a preemptive priority scheduling scheme. Based on the arrival processes of the three data-flows, the effect of interference on their performance, and the service capacity of each node in the MEC network, closed-form expressions are derived to show the relationship among the delay and backlog upper bounds and the violation probability of the data-flows. Analytical and simulation results show that the delay and backlog performance of the data-flows is influenced by the number of hops in the network and the random probability parameters of the interference-flow (IF).
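For intuition on what such bounds look like, here is a deterministic network-calculus sketch (a textbook simplification, not the paper's stochastic analysis): a token-bucket flow with rate rho and burst b traversing rate-latency servers admits the classic end-to-end delay bound b/R + T, where R is the bottleneck rate and T the sum of per-hop latencies. All numbers are hypothetical.

```python
def delay_bound(rho, burst, nodes):
    """nodes: list of (service_rate, latency) pairs, one per hop."""
    # end-to-end service curve of concatenated rate-latency servers:
    # rate = bottleneck rate, latency = sum of per-hop latencies
    R = min(r for r, _ in nodes)
    T = sum(t for _, t in nodes)
    assert rho <= R, "flow rate must not exceed the bottleneck service rate"
    return burst / R + T          # classic delay bound: b/R + T

# hypothetical VoIP-like flow: 64 kbit/s, 1500-bit burst, over 3 hops
hops = [(1e6, 0.002), (2e6, 0.001), (1.5e6, 0.003)]
print(delay_bound(64e3, 1500.0, hops))
```

The paper's SNC framework additionally attaches violation probabilities to such bounds and accounts for random routing and preemptive priorities, which the deterministic version above cannot express.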
Guo Hui, Zhao Xuehui
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2010
This paper considers a wireless powered communication network (WPCN) based on non-orthogonal multiple access (NOMA) technology aided by an intelligent reflecting surface (IRS). The WPCN mainly involves downlink energy transfer (ET) and uplink information transmission (IT). In the ET phase, a dedicated multi-antenna power station (PS) supplies power to users with the assistance of the IRS; in the IT phase, the IRS adjusts its phase shifts to assist the users in applying NOMA technology to transmit information to the base station (BS), thus minimizing the impact of dynamic IRS on the system. Under this working mode, the maximization of the sum-throughput of the system is studied. Since the sum-throughput maximization problem is non-convex, block coordinate descent (BCD) is applied to alternately optimize each block of the system via semidefinite relaxation (SDR) and particle swarm optimization (PSO), respectively. Numerical results show that, compared with the baseline scheme, the proposed optimization scheme achieves a greater system sum-throughput.
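As a minimal illustration of block coordinate descent over IRS phase shifts (a generic toy example with random channels, not the paper's SDR/PSO formulation), each element's phase can be optimized in closed form while the others are held fixed, and sweeping over elements monotonically increases the effective channel gain:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                              # IRS elements (hypothetical)
h = rng.normal(size=N) + 1j * rng.normal(size=N)    # user -> IRS channel
g = rng.normal(size=N) + 1j * rng.normal(size=N)    # IRS -> BS channel

def gain(theta):
    # effective cascaded channel gain through the IRS for phase shifts theta
    return np.abs(np.sum(np.exp(1j * theta) * h * g)) ** 2

# block coordinate descent: optimize one phase at a time, others fixed
theta = np.zeros(N)
for _ in range(20):                                 # a few sweeps suffice here
    for n in range(N):
        rest = np.sum(np.exp(1j * np.delete(theta, n)) * np.delete(h * g, n))
        # per-element closed-form optimum: align element n with the rest
        theta[n] = np.angle(rest) - np.angle(h[n] * g[n])

print(round(gain(theta), 3))
```

The converged solution aligns all reflected paths, approaching the upper bound (sum of |h_n g_n|) squared; the paper's problem is harder because the phases couple with the ET/IT time allocation and NOMA decoding.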
Song Xin, Huang Xue, Gao Yiming, Qian Haijun
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2011
In this paper, an optimal user power allocation scheme is proposed to maximize the energy efficiency of downlink non-orthogonal multiple access (NOMA) heterogeneous networks (HetNets). Considering channel estimation errors and inter-user interference under imperfect channel state information (CSI), the energy efficiency optimization problem is formulated, which is non-deterministic polynomial (NP)-hard and non-convex. To cope with this intractable problem, the optimization problem is converted into a convex problem and addressed via the Lagrangian dual method. However, it is difficult to obtain closed-form solutions since the variables are coupled with each other. Therefore, a Lagrangian and sub-gradient based algorithm is proposed: in the inner loop, the optimal powers are derived by the sub-gradient method, and in the outer loop, the optimal Lagrangian dual variables are obtained. Simulation results show that the proposed algorithm significantly improves energy efficiency compared with traditional power allocation algorithms.
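The inner/outer-loop structure of such Lagrangian sub-gradient schemes can be sketched with a classic textbook example (not the paper's algorithm, and without its NOMA interference terms): a sum-rate problem with a total power budget, where the inner step yields closed-form water-filling powers and the outer step updates the dual variable by sub-gradient ascent. Channel gains and the budget are made-up numbers.

```python
import math

# toy channel gains and total power budget (hypothetical values)
gains = [2.0, 0.8, 0.5, 0.1]
P_total = 5.0

lam, step = 1.0, 0.05
for _ in range(2000):
    # inner loop: closed-form optimal powers for the current dual variable
    # (water-filling: p_k = [1/(lam*ln2) - 1/g_k]^+)
    powers = [max(1.0 / (lam * math.log(2)) - 1.0 / g, 0.0) for g in gains]
    # outer loop: sub-gradient update of the dual variable lambda
    lam = max(lam + step * (sum(powers) - P_total), 1e-9)

print([round(p, 3) for p in powers], round(sum(powers), 3))
```

At convergence the budget constraint is met and stronger channels receive more power, which is the qualitative behaviour the coupled inner and outer loops in the paper also exhibit.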
Li Hui, Li Shanshan, Zou Borong, Chen Yannan
The Journal of China Universities of Posts and Telecommunications, 2022, 29(1). doi: 10.19682/j.cnki.1005-8885.2022.2012
Deep learning has recently been progressively introduced into the field of modulation classification due to its wide application in image, vision, and other areas. Modulation classification is not only a priority of cognitive radio and spectrum sensing, but also a key step in signal demodulation. Combining the advantages of the convolutional neural network (CNN), long short-term memory (LSTM), and residual network (ResNet), a modulation classification method based on dual-channel CNN-LSTM and ResNet is proposed to classify modulated signals automatically and more accurately. Specifically, CNN and LSTM first form a dual-channel structure to effectively explore the spatial and temporal features of the original complex signal, which avoids focusing on only temporal or only spatial aspects and increases the diversity of features. Second, the features extracted by the CNN and LSTM are fused, making the extracted features richer and more conducive to signal classification. In addition, a convolutional layer is added within the residual unit to deepen the network. As a result, more representative features are extracted, improving classification performance. Finally, simulation results on the radio machine learning (RadioML) 2018.01A dataset show that the network's classification performance is superior to many classifiers in the literature.
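The dual-channel idea, separate spatial and temporal branches whose features are concatenated before classification, can be sketched in plain NumPy. This is only a structural illustration with untrained random weights: a single 1-D convolution stands in for the CNN branch, and a vanilla RNN stands in for the LSTM branch; all shapes and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
iq = rng.normal(size=(2, 128))        # toy I/Q signal: 2 channels x 128 samples

# "CNN" branch: one 1-D convolution + ReLU + global average pooling
kernels = rng.normal(size=(8, 2, 5))  # 8 filters, 2 input channels, width 5
conv = np.stack([
    sum(np.convolve(iq[c], k[c], mode="valid") for c in range(2))
    for k in kernels
])                                    # shape (8, 124)
cnn_feat = np.maximum(conv, 0).mean(axis=1)          # spatial features, (8,)

# "LSTM" branch sketched as a vanilla RNN over time steps
Wx = rng.normal(size=(8, 2)) * 0.1
Wh = rng.normal(size=(8, 8)) * 0.1
h = np.zeros(8)
for t in range(iq.shape[1]):
    h = np.tanh(Wx @ iq[:, t] + Wh @ h)              # temporal features
rnn_feat = h                                          # (8,)

# feature-level fusion: concatenate spatial and temporal features
fused = np.concatenate([cnn_feat, rnn_feat])          # (16,) -> classifier input
print(fused.shape)
```

In the actual method the fused vector would feed the ResNet-style classifier head; here the point is only that the two branches see the same signal but summarize it along different axes.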