To address the low identification rate of low probability of intercept (LPI) radar signal pulses under low signal-to-noise ratio (SNR) conditions, this paper investigates a deep learning method for efficiently recognizing the modulation types of LPI radar signals. A novel algorithm combining a dual efficient network (DEN) and non-local means (NLM) denoising was proposed for the identification and sorting of LPI radar signals. Time-domain signals of 12 radar modulation types are simulated, with Gaussian white noise added at various SNRs to replicate complex electronic countermeasure scenarios. The noisy radar signals then undergo a Choi-Williams distribution (CWD) time-frequency transformation, which converts them into two-dimensional (2D) time-frequency images (TFIs). The TFIs are denoised using the NLM algorithm. Finally, the denoised data are fed into the designed DEN for training and testing, with the recognition results output through a softmax classifier. Simulation results demonstrate that at an SNR of -8 dB the algorithm achieves a recognition accuracy of 97.22% for LPI radar signals, exhibiting excellent performance under low SNR conditions. Comparative experiments show that the DEN has good robustness and generalization under small-sample conditions. This research provides a novel and effective solution for further improving the accuracy of LPI radar signal identification and sorting.
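The noise-injection step described above can be illustrated with a minimal sketch. The chirp waveform, sample count, and seed below are assumptions for illustration, not the paper's simulation settings:

```python
import numpy as np

def add_awgn(signal: np.ndarray, snr_db: float, seed=None) -> np.ndarray:
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    rng = np.random.default_rng(seed)
    sig_power = np.mean(np.abs(signal) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: a hypothetical LFM-like chirp corrupted at -8 dB SNR
t = np.linspace(0, 1, 1000)
clean = np.cos(2 * np.pi * (5 * t + 10 * t ** 2))
noisy = add_awgn(clean, snr_db=-8, seed=0)
```

The same helper can sweep a range of SNRs to build a training set before the CWD transform is applied.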
In frequency division duplex (FDD) massive multiple-input multiple-output (MIMO) systems, a bidirectional positional attention network (BPANet) was proposed to address the high computational complexity and low accuracy of existing deep learning based channel state information (CSI) feedback methods. Specifically, a bidirectional position attention module (BPAM) was designed in BPANet to improve network performance. The BPAM captures the distribution characteristics of the CSI matrix by integrating channel- and spatial-dimension information, thereby enhancing the feature representation of the CSI matrix. Furthermore, channel attention is decomposed into two one-dimensional (1D) feature encoding processes, effectively reducing computational cost. Simulation results demonstrate that, compared with the representative existing method, the complex-input lightweight neural network (CLNet), BPANet reduces computational complexity by an average of 19.4% and improves accuracy by an average of 7.1%. It also performs better in terms of running delay and cosine similarity.
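The complexity saving from decomposing 2D attention into two 1D encodings can be sketched as follows. This is a toy numpy illustration of the general idea only; BPAM's actual layers (convolutions, normalization, learned weights) are not reproduced:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def positional_attention_1d(x: np.ndarray) -> np.ndarray:
    """Toy 1D-decomposed attention over a (C, H, W) feature map.

    Instead of forming one 2D attention map over H*W positions,
    attention is encoded separately along height and width,
    costing O(H + W) per channel rather than O(H * W).
    """
    attn_h = sigmoid(x.mean(axis=2))   # (C, H): pooled along width
    attn_w = sigmoid(x.mean(axis=1))   # (C, W): pooled along height
    return x * attn_h[:, :, None] * attn_w[:, None, :]

x = np.random.default_rng(0).normal(size=(4, 8, 8))
y = positional_attention_1d(x)        # same shape, gated per position
```

Because each sigmoid gate lies in (0, 1), the output is an element-wise attenuated copy of the input feature map.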
In convolutional neural networks (CNNs), the floating-point computation of the traditional convolutional layer is enormous, and the execution speed of the network is limited by this intensive computing, making it difficult to meet the real-time response requirements of complex applications. This work exploits the principle that convolution in the time domain equals point-wise multiplication in the frequency domain to reduce the amount of floating-point computation for convolution. The input feature map and the convolution kernel are converted to the frequency domain by the fast Fourier transform (FFT), point-wise multiplication is performed, and the frequency-domain result is converted back to the time domain to obtain the convolution output. In a typical CNN, the input feature map is much larger than the convolution kernel, resulting in many invalid operations; the overlap-add method is adopted to reduce these invalid calculations and further speed up network execution. A hardware accelerator for frequency-domain convolution is designed and its efficiency is verified on the Xilinx Zynq UltraScale+ MPSoC ZCU102 board. For visual geometry group 16 (VGG16) on the ImageNet dataset, the frequency-domain convolution accelerator is 8.5 times faster than traditional time-domain convolution.
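The two principles the abstract relies on, FFT-based convolution and overlap-add segmentation, can be verified in a few lines of 1D numpy (the hardware accelerator itself operates on 2D feature maps; this is only a functional sketch):

```python
import numpy as np

def conv_via_fft(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Linear convolution via frequency-domain point multiplication.

    Zero-padding to len(x)+len(k)-1 prevents circular wrap-around, so
    the result matches direct time-domain convolution.
    """
    n = len(x) + len(k) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

def conv_overlap_add(x: np.ndarray, k: np.ndarray, block: int = 8) -> np.ndarray:
    """Overlap-add: convolve a long input block-by-block with a short kernel,
    summing the overlapping tails of each block's partial result."""
    out = np.zeros(len(x) + len(k) - 1)
    for start in range(0, len(x), block):
        seg = conv_via_fft(x[start:start + block], k)
        out[start:start + len(seg)] += seg
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -1.0])
y = conv_via_fft(x, k)                 # equals np.convolve(x, k)
```

Overlap-add keeps each FFT sized to the block rather than the whole input, which is exactly why it avoids the invalid operations noted above when the feature map greatly exceeds the kernel.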
Naive-LSTM enabled service identification of edge computing in power Internet of things
Great challenges and demands are presented by the increasing edge computing services of the current power Internet of things (Power IoT), which must deal with the serious diversity and complexity of these services. To improve the matching between edge computing and complex services, a service identification function is necessary for the Power IoT. In this paper, a naive long short-term memory (Naive-LSTM) based service identification scheme for edge computing devices in the Power IoT was proposed, where the Naive-LSTM model adopts the most simplified structure and discretizes the long short-term memory (LSTM) model. Moreover, the Naive-LSTM based service identification scheme generates probability outputs to determine the task scheduling policy of the Power IoT. After training, the Naive-LSTM classification engine modules in the edge computing devices of the Power IoT can perform service identification by extracting key characteristics from various service traffic. Testing results show that the Naive-LSTM based service identification scheme is feasible and efficient in improving the edge computing capability of the Power IoT.
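The pipeline of an LSTM that consumes traffic features and emits class probabilities can be sketched in plain numpy. The weights are random and the dimensions (6 traffic features, 4 service classes) are assumptions; the paper's Naive-LSTM simplifications and discretization are not reproduced here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell; gates stacked as [i, f, o, g]."""
    n = len(h)
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_hid, n_cls = 6, 8, 4                    # hypothetical sizes
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
V = rng.normal(scale=0.1, size=(n_cls, n_hid))  # classifier head

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):           # 10 traffic-feature vectors
    h, c = lstm_step(x, h, c, W, U, b)
probs = softmax(V @ h)                          # probability over service classes
```

The probability vector is what a task scheduler downstream would consume when deciding how to dispatch a service.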
Due to the diversity of graph computing applications, the power-law distribution of graph data, and the high compute-to-memory ratio, traditional architectures face significant challenges of poor flexibility, imbalanced workload distribution, and inefficient memory access when executing graph computing tasks. A graph computing accelerator, GraphApp, based on a reconfigurable processing element (PE) array was proposed to address these challenges. GraphApp utilizes 16 reconfigurable PEs for parallel computation and employs tiled data: by reasonably dividing the data into tiles, load balancing is achieved and the overall efficiency of parallel computation is enhanced. Additionally, it preprocesses graph data using the compressed sparse columns independently (CSCI) data compression format to alleviate the low memory access efficiency caused by the high memory-access-to-computation ratio. Finally, GraphApp is evaluated using triangle counting (TC) and depth-first search (DFS) algorithms. Performance is analyzed by measuring the execution time of these algorithms on GraphApp against two representative graph frameworks, Ligra and GraphBIG, using six datasets from the Stanford Network Analysis Project (SNAP) database. The results show that GraphApp achieves a maximum performance improvement of 30.86% over Ligra and 20.43% over GraphBIG when processing the same datasets.
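The two ingredients named above, a compressed-sparse-columns layout and the TC workload, can be sketched in software. Note this uses plain CSC arrays, not the paper's CSCI variant, and the 4-vertex test graph is an assumption:

```python
import numpy as np

def to_csc(n, edges):
    """Build compressed-sparse-column arrays (indptr, indices) for an
    undirected graph; column j stores the sorted neighbors of vertex j."""
    cols = [[] for _ in range(n)]
    for u, v in edges:
        cols[u].append(v)
        cols[v].append(u)
    indptr, indices = [0], []
    for j in range(n):
        indices.extend(sorted(cols[j]))
        indptr.append(len(indices))
    return np.array(indptr), np.array(indices)

def triangle_count(indptr, indices):
    """Count triangles once each (u < v < w) by intersecting the
    neighbor sets of the two endpoints of every edge."""
    n = len(indptr) - 1
    neigh = [set(indices[indptr[j]:indptr[j + 1]]) for j in range(n)]
    count = 0
    for u in range(n):
        for v in neigh[u]:
            if u < v:
                count += sum(1 for w in neigh[u] & neigh[v] if w > v)
    return count

# A 4-clique contains exactly 4 triangles.
indptr, indices = to_csc(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
```

In an accelerator, each column slice `indices[indptr[j]:indptr[j+1]]` is a natural unit to assign to a PE, which is what makes tile-based load balancing possible.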
Millimeter-wave (mmWave) and massive multiple-input multiple-output (MIMO) are broadly recognized as key enabling technologies for fifth generation (5G) communication systems. In this paper, a low-complexity angle-delay parameters estimation (ADPE) algorithm was put forward for wideband mmWave systems with uniform planar arrays (UPAs). In particular, the ADPE algorithm effectively decouples the angle and delay parameters and converts the angle-delay estimation problem into three independent subproblems. Accordingly, an off-grid method based on the discrete Fourier transform (DFT) with a closed-form solution can be devised for angle-delay estimation and potential path number acquisition. In practice, only a limited number of potential paths are close to the true paths under the influence of noise. Consequently, a sparsity adaptive path gains estimation (APGE) algorithm was proposed to remove the noise paths and acquire the corresponding true path gains. Finally, simulation results substantiate the effectiveness of the ADPE and APGE algorithms.
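The core of DFT-based angle estimation can be illustrated in one dimension. This sketch assumes a single noiseless path and a half-wavelength uniform linear array; the paper's algorithm handles 2D UPAs, off-grid refinement with a closed-form correction, and multiple paths:

```python
import numpy as np

def dft_angle_estimate(y: np.ndarray, grid: int = 4096) -> float:
    """On-grid DFT angle estimate for a half-wavelength-spaced ULA:
    the FFT peak locates the spatial frequency sin(theta)/2."""
    spec = np.abs(np.fft.fft(y, grid))
    f = int(np.argmax(spec)) / grid
    if f > 0.5:
        f -= 1.0                        # map to [-0.5, 0.5)
    return float(np.degrees(np.arcsin(2 * f)))

N, theta = 32, 25.0                     # 32 elements, true angle 25 degrees
n = np.arange(N)
y = np.exp(1j * np.pi * n * np.sin(np.radians(theta)))  # steering vector
est = dft_angle_estimate(y)             # close to 25 within grid resolution
```

Zero-padding the 32-element snapshot to a 4096-point FFT is what makes the on-grid estimate fine enough to seed an off-grid refinement step.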
In this paper, a wideband high-gain millimeter-wave radar array antenna based on a wavy power divider was proposed. The radar antenna comprises a wavy power divider and a 10-element array antenna. By adjusting the wavy radius of the power divider, the surface current of the power divider is altered, resulting in better impedance matching with the antenna and ultimately a significant improvement in bandwidth. The 4 × 10 millimeter-wave radar antenna loaded with the wavy power divider exhibits a bandwidth enhancement of approximately 3 GHz compared with traditional microstrip power divider antennas, and an average gain increase of 2.42 dB within the vehicular millimeter-wave radar frequency band relative to the improved gradient power divider structure. The antenna thus possesses the characteristics of high gain and broad bandwidth.
A 20 GHz-24 GHz three-stage low noise amplifier (LNA) was implemented using a GaAs pseudomorphic high electron mobility transistor (PHEMT) process. The schematic design and optimization of the LNA were carried out using the advanced design system (ADS). A three-stage cascade structure is used to increase the gain of the amplifier. Additionally, a self-biasing network and a negative feedback circuit expand the bandwidth while improving the stability of the circuit and achieving better input matching and noise performance. Test results show that the gain in the 20 GHz-24 GHz band is greater than 20 dB, the noise figure (NF) is 2.1 dB, and the input and output reflection coefficients are less than -10 dB, meeting the design requirements. The amplifier serves a wide range of applications, including wireless communications, radar systems, satellite communications, and other areas that require high-frequency amplification to enhance system performance and sensitivity.
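Why the first stage dominates the noise figure of a multi-stage LNA follows from the Friis cascade formula; the per-stage NF and gain values below are hypothetical, not measurements from this design:

```python
import math

def friis_nf_db(nf_db, gain_db):
    """Total cascade noise figure (dB) from per-stage NF and gain in dB,
    via the Friis formula: F = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ..."""
    f_total, g_prod = 0.0, 1.0
    for i, (nf, g) in enumerate(zip(nf_db, gain_db)):
        f = 10 ** (nf / 10)
        f_total = f if i == 0 else f_total + (f - 1) / g_prod
        g_prod *= 10 ** (g / 10)
    return 10 * math.log10(f_total)

# Hypothetical three-stage line-up: a 1.8 dB first stage with 8 dB gain
# keeps the cascade NF close to the first stage despite noisier followers.
total = friis_nf_db([1.8, 3.0, 3.0], [8.0, 7.0, 7.0])
```

With these assumed numbers the cascade NF is about 2.3 dB, only ~0.5 dB above the first stage alone, which is why LNA design concentrates on the input stage.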
By leveraging the high maneuverability of the unmanned aerial vehicle (UAV) and the expansive coverage of the intelligent reflecting surface (IRS), a multi-IRS-assisted UAV communication system aimed at maximizing the sum data rate of all users was investigated in this paper. This is achieved through the joint optimization of the trajectory and transmit beamforming of the UAV, as well as the passive phase shifts of the IRSs. However, the resulting problem is highly non-convex, making it difficult for conventional mathematical optimization techniques to deliver fast, efficient, low-complexity solutions. To address this issue, a deep reinforcement learning (DRL) based enhanced cooperative reflection network (DCRN) was proposed. This scheme effectively identifies optimal strategies, with a long short-term memory (LSTM) network improving algorithm convergence by extracting hidden state information. Simulation results demonstrate that the proposed scheme outperforms the baseline scheme, achieving substantial enhancements in sum rate.
Since the release of the first version of the 5th generation (5G) mobile networks standard, Release-15 (Rel-15), the 3rd Generation Partnership Project (3GPP) has made significant efforts in the field of indoor and outdoor wireless positioning. Notably, Release-16 (Rel-16) augmented support for enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (uRLLC), particularly within complex indoor settings. To further meet the diverse application needs of positioning scenarios, the 3GPP standards Release-17 (Rel-17) and Release-18 (Rel-18) propose new enhancement measures to provide ever more accurate positioning services. In this paper, the scholarly discourse on 5G positioning was critically examined, providing a systematic review of the 5G positioning standards as delineated in 3GPP Rel-16 and Rel-17, and extending the discussion to the anticipated enhancements in Rel-18 along with their underlying motivations. Through these discussions, not only is a comprehensive perspective on the current development of 5G positioning technology provided, but forward-looking analysis and predictions for the evolution of positioning technology in the upcoming 3GPP Release-19 (Rel-19) are also offered. The paper additionally serves as a reference for researchers interested in the development of positioning within the 5G standards framework, which holds significant meaning for promoting the research and application of 5G positioning technology.
Aiming at the problems of poor initial population quality, slow convergence, and long running time of the optical microscope algorithm (OMA), a multiple-strategy improved OMA based on periodic mutation and an encircling mechanism, called MOMA, was proposed in this paper. Firstly, good point set population initialization is introduced to obtain a uniformly distributed initial population. Secondly, the periodic mutation and encircling mechanisms are successively applied to improve the convergence speed. Finally, MOMA's running time is optimized by introducing a conversion factor and a corresponding threshold, while balancing exploration and exploitation. Experimental comparisons are made with OMA and 7 other excellent optimizers on 21 benchmark functions. The results show that MOMA largely outperforms the original algorithm. Furthermore, in optimization experiments on the no-wait flow-shop scheduling problem (NWFSP), MOMA obtains the optimal completion time and the fastest convergence speed compared with a modified particle swarm optimization (PSO) using an adaptive strategy, the grey wolf optimizer (GWO), golden jackal optimization (GJO), and OMA.
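Good point set initialization spreads the initial population more evenly over the search space than uniform random sampling. The sketch below uses one common construction, fractional parts of k times square roots of primes; the exact good point set used by MOMA may differ:

```python
import numpy as np

def good_point_init(pop, dim, lb, ub):
    """Low-discrepancy initial population from fractional parts of
    k * sqrt(prime_i) (one common 'good point set' construction)."""
    primes, n = [], 2
    while len(primes) < dim:            # first `dim` primes by trial division
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    k = np.arange(1, pop + 1)[:, None]
    r = np.sqrt(np.array(primes, dtype=float))[None, :]
    u = np.mod(k * r, 1.0)              # evenly spread points in [0, 1)^dim
    return lb + u * (ub - lb)

# 30 candidate solutions in a 5-dimensional box [-10, 10]^5
X = good_point_init(pop=30, dim=5, lb=-10.0, ub=10.0)
```

Because the points are deterministic and evenly spread, no region of the box is left unsampled, which is the quality improvement over random initialization that the abstract refers to.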
Urban traffic flow forecasting is a fundamental component of intelligent transportation systems, yet existing models tend to overlook the spatio-temporal and long-term time-dependent patterns that characterize transportation networks. In particular, long sequence time-series forecasting (LSTF) models are susceptible to gradient disappearance, which can be attributed to a multitude of intricate factors. Accordingly, in this paper, multi-feature fusion was studied, and a traffic flow forecasting network model based on feature fusion and a spatio-temporal transformer (S-T transformer), called STFFN, was proposed. The model combines a predictive recurrent neural network (PredRNN) and the S-T transformer to dynamically capture the spatio-temporal dependence and long-term time dependence of traffic flow, thereby achieving a certain degree of interpretability. A novel gated residual network-2 (GRN-2) was proposed to investigate the potential relationship between multivariate features and target values. Furthermore, a hybrid quantile loss function was devised to effectively alleviate gradient disappearance in LSTF problems. Extensive experiments on real data demonstrate the rationality and effectiveness of each component of the model and verify its superior forecasting performance compared with existing benchmark models.
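The basic building block of any quantile loss is the pinball loss; the paper's hybrid variant presumably combines such terms, but only the standard component is sketched here, with made-up example values:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: penalizes under- and over-prediction
    asymmetrically, controlled by the target quantile q in (0, 1)."""
    e = y_true - y_pred
    return np.mean(np.maximum(q * e, (q - 1) * e))

y_true = np.array([10.0, 12.0, 9.0])
y_pred = np.array([11.0, 11.0, 9.0])
loss_med = pinball_loss(y_true, y_pred, 0.5)   # q=0.5: half the MAE
```

For q = 0.5 the loss is symmetric (half the mean absolute error); for q = 0.9 an under-prediction of 1 costs 0.9 while an over-prediction of 1 costs only 0.1, which is what pushes the model toward the upper quantile.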
The Nakagami-Gamma (NG) shadow fading model based on the moment-based method (MoM) produces a poor lower-tail approximation, which is inaccurate when the gamma random variables replace the lognormal random variables. In this paper, the channel parameters of composite NG shadow fading were estimated by minimizing the Kullback-Leibler (KL) divergence, and a closed-form expression for the system bit error rate (BER) was derived. Simulation results show that the KL-estimated parameters solve the lower-tail approximation problem, and the replacement of the lognormal function by the gamma function performs better than the MoM when the shadowing parameter is around the typical range of 4 dB-9 dB. Moreover, the KL method yields a lower mean square error (MSE) for the channel analysis.
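The idea of fitting a gamma distribution to a lognormal by minimizing KL divergence can be sketched numerically. This is a toy Monte Carlo grid search, not the paper's closed-form derivation, and the lognormal parameters (mu = 0, sigma = 0.5) are assumptions:

```python
import numpy as np
from math import lgamma

def gamma_logpdf(x, k, theta):
    return (k - 1) * np.log(x) - x / theta - lgamma(k) - k * np.log(theta)

def lognorm_logpdf(x, mu, sigma):
    return (-np.log(x * sigma * np.sqrt(2 * np.pi))
            - (np.log(x) - mu) ** 2 / (2 * sigma ** 2))

def fit_gamma_kl(mu, sigma, n=200_000, seed=0):
    """Grid-search the gamma shape k (with theta matched to the lognormal
    mean) that minimizes a Monte Carlo estimate of KL(lognormal || gamma)."""
    rng = np.random.default_rng(seed)
    x = rng.lognormal(mu, sigma, n)
    lp = lognorm_logpdf(x, mu, sigma)
    best = None
    for k in np.linspace(0.5, 10, 60):
        theta = np.exp(mu + sigma ** 2 / 2) / k   # match the mean
        kl = np.mean(lp - gamma_logpdf(x, k, theta))
        if best is None or kl < best[0]:
            best = (kl, float(k), float(theta))
    return best

kl, k, theta = fit_gamma_kl(mu=0.0, sigma=0.5)
```

Minimizing KL weights the fit by where the lognormal actually puts probability mass, including its lower tail, which is exactly the region where moment matching is known to fail.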