It is challenging and significant to explore the impact of non-real-time services on real-time services from the perspective of jitter. Most current research on jitter makes too many mathematical assumptions about networks and traffic. This paper puts forward a tandem queuing model to characterize the real communication scenario where heterogeneous services are served by IEEE 802.15.4 wireless sensor networks (WSNs), and the successfully served packets are then fed to Internet protocol (IP) networks. By analyzing the contention access processes in IEEE 802.15.4 WSNs, the authors derive the departure processes of the two types of services, i.e., the arrival processes at the IP networks. The IP network is modeled as a queuing system in which the real-time service is forwarded together with the non-real-time service. Investigating the jitter of real-time services is intractable; therefore, this paper abstracts the problem as a dynamic queuing system evolving over a dynamic time interval. Referring to the transient analysis method (TAM), this paper obtains the queue length in a random time interval which is scaled by the arrivals of real-time services. Queue length evolution is closely connected with the jitter. Benefiting from the derivation in the probability generating function domain, the jitter of real-time services is obtained.
This article investigates the performance of an orthogonal frequency division multiplexing (OFDM)-based dual-hop system in the presence of phase noise (PN). A scenario with Rayleigh fading statistics on both hops is assumed. The amplification factor of this amplify-and-forward (AF) relay system is considered under two conditions: average power scaling (APS) and instantaneous power scaling (IPS). Before deriving the signal-to-noise ratios (SNRs) under APS and IPS, the Gaussianity of the intercarrier interference (ICI) is first proved. Accurate closed-form expressions of the end-to-end SNR cumulative distribution function (CDF) and probability density function (PDF) for both cases are then obtained. With the help of moment generating functions (MGFs), closed-form asymptotic expressions of the bit error rate (BER) are derived, which show that, in the presence of PN, the BER converges to a fixed level even in the high-SNR regime. Finally, simulations verify the accuracy of the results. The analysis provides useful guidance for future applications of the system.
We propose a novel progressive framework to optimize deep neural networks. The idea is to combine the stability of linear methods with the ability of deep learning methods to learn complex and abstract internal representations. We insert a linear loss layer between the input layer and the first hidden non-linear layer of a traditional deep model. The loss objective for optimization is a weighted sum of the linear loss of the added layer and the non-linear loss of the last output layer. We modify the model structure of deep canonical correlation analysis (DCCA), i.e., adding a third semantic view to regularize text and image pairs and embedding the structure into our framework, for cross-modal retrieval tasks such as text-to-image search and image-to-text search. The experimental results show that the modified model outperforms similar state-of-the-art approaches on the NUS-WIDE dataset from the National University of Singapore. To validate the generalization ability of our framework, we apply it to RankNet, a ranking model optimized by stochastic gradient descent. Our method outperforms RankNet and converges more quickly, which indicates that our progressive framework can provide a better and faster solution for deep neural networks.
This paper studies a multiple-input multiple-output (MIMO) simultaneous wireless information and power transfer (SWIPT) relay system, in which the source node (SN) sends information and energy simultaneously to the relay node (RN), and the RN forwards the received signal to the destination node (DN) powered by the harvested energy. In particular, we consider two SWIPT receiver designs in the relay system, namely power splitting (PS) and antenna switching (AS). For each design, iterative algorithms based on convex optimization techniques are proposed to maximize the system rate. Furthermore, in order to strike a balance between computational complexity and system performance, based on the AS scheme, we propose a low-complexity optimization method for the PS scheme in which a suboptimal PS ratio is given. Numerical results are provided to evaluate the performance of the proposed algorithms for MIMO SWIPT relay systems. It is shown that the performance of the proposed suboptimal method approaches that of the optimal PS scheme.
Speech emotion recognition (SER) in noisy environments is a vital issue in artificial intelligence (AI). In this paper, speech samples are reconstructed to remove the added noise. Acoustic features extracted from the reconstructed samples are selected to build an optimal feature subset with better emotional recognizability. A multiple-kernel (MK) support vector machine (SVM) classifier solved by semi-definite programming (SDP) is adopted in the SER procedure. The proposed method is demonstrated on the Berlin Database of Emotional Speech. Recognition accuracies of the original, noisy, and reconstructed samples classified by both single-kernel (SK) and MK classifiers are compared and analyzed. The experimental results show that the proposed method is effective and robust when noise exists.
In this paper, an iterative carrier recovery algorithm for burst-mode Gaussian-filtered minimum shift keying (GMSK) is designed. The traditional demodulation method achieves poor data utilization and poor precision when recovering the carrier signal from a burst data packet of limited length. To solve this problem, this paper proposes an iterative carrier recovery algorithm. The algorithm improves the estimation precision of carrier recovery and the data utilization of the burst data packet to a large extent by performing multiple forward and backward iterations, and it can be implemented in the Simulink environment. Since the automatic identification system (AIS) communicates in bursts, the algorithm is especially suitable for satellite-based AIS.
An efficient solution for locating a target using time difference of arrival (TDOA) measurements in the presence of random sensor position errors is proposed to increase the estimation accuracy. The cause of position estimation errors in the two-stage weighted least squares (TSWLS) method is analyzed to develop a simple and effective method for improving the localization performance. Specifically, the reference sensor is re-selected and the coordinate system is rotated according to the preliminary target position estimated by the TSWLS method, and the final position estimate of the target is obtained by weighted least squares (WLS). The proposed approach has a closed form and is as efficient as the TSWLS method. Simulation results show that the proposed approach yields low estimation bias and improved robustness with increasing sensor position errors; it can thus attain the Cramer-Rao lower bound (CRLB) and effectively improve the localization accuracy.
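For readers unfamiliar with the refinement stage, the following minimal sketch (Python with NumPy) shows the generic weighted least squares step that TSWLS-style estimators rely on; the matrix A, vector b and weighting matrix W are hypothetical placeholders, not the paper's actual TDOA equations.

```python
import numpy as np

def wls_estimate(A, b, W):
    """Generic weighted least squares step used in TDOA localization:
    solve min_x (A x - b)^T W (A x - b), giving x = (A^T W A)^{-1} A^T W b."""
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A, AtW @ b)

# Toy usage with hypothetical dimensions (3 unknowns, 6 linearized equations).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
W = np.eye(6)                  # weighting matrix, e.g., inverse noise covariance
print(wls_estimate(A, b, W))
```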
To understand website complexity in depth, a web page complexity measurement system is developed. The system measures the complexity of a web page at two levels, transport level and content level, using a packet-trace-based approach rather than server or client logs. Packet traces surpass other sources in the amount of information they contain. Quantitative analyses show that different categories of web pages have different complexity characteristics. Experimental results show that a news web page usually loads many more elements, at more access levels, from many more web servers across diverse administrative domains, over many more concurrent transmission control protocol (TCP) flows. More than half of the education pages each involve only a few logical servers, and most elements of such a page are fetched from only one or two logical servers. The number of content types for web game traffic after login is usually the smallest. The system can help web page designers to design more efficient web pages, and help researchers or Internet users to understand communication details.
The unforeseen mobile data explosion as well as the scarcity of spectrum resources pose a major challenge to the performance of today's cellular networks, which are in urgent need of novel solutions to handle such voluminous mobile data. Long term evolution-unlicensed (LTE-U), which extends the LTE standard to operate on the unlicensed band, has been proposed to improve system throughput. In an LTE-U system, arriving users contend for the unlicensed spectrum with wireless fidelity (WiFi) users to transmit their data. Nevertheless, there is no clear consensus as to the benefits of transmission on unlicensed bands for LTE users. To this end, this paper presents an analytical model based on a queuing system to understand the performance achieved by an unlicensed-band LTE system, taking quality of service (QoS) and LTE-U users' behaviors into account. A matrix-geometric method is used to obtain the steady-state solution of the queuing system. Then, the average delay and the utilization of the unlicensed band for the LTE-U users are derived from the queuing model. The performance of LTE-U coexisting with WiFi is evaluated using the proposed model, providing some initial insights into the advantages of LTE-U in practice.
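As a rough illustration of how a matrix-geometric method yields the steady-state solution of such a queuing model, the sketch below computes the rate matrix R of a generic quasi-birth-death (QBD) process via the classical fixed-point iteration; the block matrices A0, A1 and A2 are assumed inputs and are not the specific blocks derived in the paper.

```python
import numpy as np

def qbd_rate_matrix(A0, A1, A2, tol=1e-10, max_iter=10000):
    """Minimal nonnegative solution R of A0 + R A1 + R^2 A2 = 0
    via the classical fixed-point iteration R <- -(A0 + R^2 A2) A1^{-1},
    starting from R = 0 (A1 is assumed invertible)."""
    A1_inv = np.linalg.inv(A1)
    R = np.zeros_like(A0)
    for _ in range(max_iter):
        R_next = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    return R
```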
A joint channel selection and power control scheme is developed for video streaming in device-to-device (D2D) communication based cognitive radio networks. In particular, physical queue and virtual queue models are built by applying the 'M/G/1 queue' and 'M/G/1 queue with vacations' theories, respectively, to evaluate the delays experienced by various video traffic flows. Such delays play a vital role in calculating the packet loss rate for video streaming, which reflects the video distortion. Based on the distortion model, a video distortion minimization problem is formulated, subject to the rate constraint, the maximum power constraint, the primary users' tolerable interference constraint, and the secondary users' minimum data rate requirement. The optimization problem turns out to be a mixed integer nonlinear program (MINLP), which is NP-hard in general. A Lagrange dual method is thus employed to reformulate the video distortion minimization problem, based on which the sub-gradient algorithm is used to determine a relaxed solution. Thereafter, applying iterative user removal yields the optimal joint channel selection and power control solution to the original MINLP problem. Extensive simulations validate our proposed scheme and demonstrate that it significantly increases the peak signal-to-noise ratio (PSNR) compared with existing schemes.
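To illustrate the sub-gradient step used on the Lagrange dual, here is a minimal, generic sketch of projected sub-gradient ascent on nonnegative multipliers; the constraint-violation oracle and the step-size rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def subgradient_dual_ascent(constraint_violation, dim, num_iters=200, step0=1.0):
    """Projected sub-gradient ascent on nonnegative Lagrange multipliers:
    lam <- max(0, lam + step_t * g(lam)), where g(lam) is the constraint
    violation evaluated at the primal solution for the current multipliers."""
    lam = np.zeros(dim)
    for t in range(1, num_iters + 1):
        g = constraint_violation(lam)                  # sub-gradient of the dual
        lam = np.maximum(0.0, lam + (step0 / t) * g)   # diminishing step size
    return lam

# Toy usage: a single constraint with g(lam) = 1 - lam (fixed point at lam = 1).
print(subgradient_dual_ascent(lambda lam: 1.0 - lam, dim=1))
```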
As a key technology of the fifth generation (5G), 3-dimensional (3D) massive multiple-input multiple-output (MIMO) is expected to be widely used in small cell networks (SCNs). In this paper, in order to investigate the tradeoff between the limited size in SCNs and the capacity gain from increasing antenna elements, the spatial performance of 3D massive MIMO is studied based on MIMO channel measurements at 6 GHz in an urban microcell (UMi) scenario. A large number of channel impulse responses (CIRs) are collected and reconstructed, which enables us to present comparative results on the capacity and the eigenvalue spread (ES). Furthermore, the impacts of antenna element number and spacing on system performance are investigated, i.e., 32, 64 and 128 elements are selected from the 512-element transmitter (Tx) array with elevation spacings of 0.5, 1 and 2 wavelengths for each. Interestingly, an obvious capacity gap is observed between the 1-wavelength and 2-wavelength antenna spacing cases, which implies that correlation cannot be ignored even when the antenna spacing is larger than 1 wavelength and massive antennas are equipped. The comparative results show that the capacity grows with the number of antenna elements, and larger antenna spacing leads to higher channel capacity, as expected. However, the capacity gain brought by increasing the antenna spacing diminishes to a certain degree as the antenna number increases. Collectively, these results provide further insights into 3D massive MIMO utilization.
Similar to the analysis of Turbo codes, the parallel concatenated systematic polar code (PCSPC) can also be analyzed by the extrinsic information transfer (EXIT) chart. The convergence of the iterative decoding of PCSPC based on soft cancellation (SCAN) and belief propagation (BP) is analyzed by the EXIT chart. The analysis shows that the SCAN decoder is more appropriate than the BP decoder for this iterative decoding structure in terms of complexity. In addition, the weight coefficients of the iterative decoding structure are optimized by the simulated EXIT (S-EXIT) chart, which improves the performance of PCSPC.
Traditional virtual private networks (VPNs) provide only conditional security. In order to ensure the security and confidentiality of user data transmission, a quantum VPN model based on the Internet protocol security (IPSec) protocol is proposed. By using quantum keys for key distribution and entangled particles for identity authentication in the network, a secure quantum VPN is realized. The important parameters affecting the performance of the VPN were analyzed. The quantitative relationship between the secure key generation rate, the quantum bit error rate (QBER) and the transmission distance was obtained. The factors that affect the system throughput were also analyzed and simulated. Finally, the influence of the quantum noise channel on entanglement swapping was analyzed. Theoretical analysis and simulation results show that, under a limited number of decoy states, as the transmission distance increases from 0 to 112.5 km, the secure key generation rate decreases from 5.63×10⁻³ to 1.22×10⁻⁵. When the number of decoy states is fixed, the QBER increases dramatically with the transmission distance, reaching a maximum of 0.393. The analysis shows that various factors in the communication have a significant impact on the system throughput, and the generation rate of effective entangled photon pairs has a decisive effect on it. Therefore, in the process of quantum VPN communication, the parameters of the system should be properly adjusted so that communication takes place within a safe transmission distance, which can effectively improve the reliability of the quantum communication system.
A system model consisting of macro and micro base stations (BSs) is introduced to solve the problem of power allocation in heterogeneous dense networks. In this hierarchical framework, the power allocation problem is modeled as a Stackelberg game. Based on this model, a two-stage pricing algorithm is proposed to allocate power resources to each BS. In this algorithm, a power price is assigned to each micro-BS by the macro-BS, and each micro-BS then calculates its optimal transmit power based on this price to maximize its individual utility. A grid-based scenario is then introduced to verify the proposed theory. Theoretical analysis and simulation results both validate that the proposed scheme improves spectral and power efficiency. Most importantly, the computational complexity of the proposed scheme is greatly reduced, especially in dense deployments.
The primary screening for pulmonary tuberculosis mainly relies on X-ray imaging all over the world. In recent years, the incidence of pulmonary tuberculosis has rebounded. This paper proposes a convolutional neural network (CNN) based model for tuberculosis detection in chest X-ray images, which is used for the automatic screening of pulmonary tuberculosis. Compared with a conventional CNN, this model can detect image details and diseased areas quickly and accurately. The method improves learning speed and accuracy, so it can better perform anomaly detection and provide more effective auxiliary decision information for practitioners.
Aiming at the problems of low recognition rate and poor security in palmprint identity authentication, a cancelable palmprint template generation algorithm is proposed, which is based on the local Gabor directional pattern with adaptive threshold by mean (mLGDP), the difference local Gabor directional pattern with adaptive threshold by mean (mDLGDP), and their feature fusion. In this method, the feature code of the image is segmented, and the feature vectors are extracted and binarized. Then the Bloom filter is used to achieve many-to-one mapping and location scrambling of the palmprint image. Finally, the scrambled result matrix and the user key are irreversibly transformed by a convolution operation to obtain a revocable template of the palmprint image. Both theoretical analysis and experimental results show that, even in the case of key loss, the feature fusion method effectively enhances the diversity of the original palmprint template, improves the recognition rate, and provides better security.
A lower-than-character feature embedding called radical embedding is proposed and applied to a long short-term memory (LSTM) model for sentence segmentation of pre-modern Chinese texts. The dataset includes over 150 classical Chinese books from 3 different dynasties and contains different literary styles. The LSTM-conditional random field (LSTM-CRF) model is a state-of-the-art method for the sequence labeling problem. This model is augmented with a radical embedding component, which leads to improved performance. Experimental results based on the aforementioned Chinese books demonstrate better sentence segmentation accuracy than earlier methods, especially on Tang epitaph texts (achieving an F1-score of 81.34%).
Caching popular files in small-cell base stations (SBSs) is considered a promising technique to meet the demand of ever-growing mobile data traffic in ultra-dense networks (UDNs). Considering the limited cache capacity and dense deployment of SBSs, how to support uninterrupted and successful cache downloading for moving users is still a challenging problem. In this paper, a graph-coloring-based caching (GCC) algorithm is proposed for moving users in UDNs under limited SBS storage capacity. Firstly, considering that downloading may be interrupted or even fail due to the random movement of users and the small coverage of SBSs, a graph coloring algorithm (GCA) is employed to group the SBSs so that they cache fragments of several files. Then, the caching placement problem on SBSs is formulated with the aim of maximizing the amount of data downloaded from the SBSs. Finally, an efficient heuristic solution is proposed to solve the problem. Simulation results show that the algorithm performs better than other caching strategies in prior work, in terms of reducing both backhaul traffic and user download delay.
Human activity recognition (HAR) with dense prediction is proven to perform well, but it relies on labeling every point in the time series, which is costly. In addition, the performance of a HAR model degrades significantly when it is tested on sensor data whose distribution differs from the training data, as the training and test data are usually collected from different sensor locations or sensor users. Therefore, an adaptive transfer learning framework for dense prediction in HAR is introduced to implement cross-domain transfer, where the proposed multi-level unsupervised domain adaptation (MLUDA) approach combines global domain adaptation and specific task adaptation to align the source and target domains at multiple levels. The multi-connected global domain adaptation architecture is proposed for the first time; it adapts the output layers of the encoder and the decoder in the dense prediction model. After this, the specific task adaptation is proposed to align each class centroid in the source and target domains by introducing a cosine distance loss and a moving average method. Experiments on three public human activity recognition datasets demonstrate that the proposed MLUDA improves the prediction accuracy on target data by 20% compared with the source-domain pre-trained model and is more effective than three other deep transfer learning methods, with an improvement of 10% to 18% in accuracy.
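A minimal sketch of the specific task adaptation idea (aligning class centroids across domains with a cosine distance loss and a moving average update) is given below in PyTorch; the function names and the use of batch-wise centroids are assumptions for illustration, not the exact MLUDA implementation.

```python
import torch
import torch.nn.functional as F

def batch_centroids(features, labels, num_classes):
    """Per-class mean feature vectors for one mini-batch (zero if class absent)."""
    dim = features.size(1)
    centroids = torch.zeros(num_classes, dim, device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids[c] = features[mask].mean(dim=0)
    return centroids

def centroid_alignment_loss(src_centroids_ema, tgt_centroids):
    """Cosine-distance loss between source-domain centroids (moving average)
    and target-domain centroids estimated from pseudo-labels."""
    cos = F.cosine_similarity(src_centroids_ema, tgt_centroids, dim=1)
    return (1.0 - cos).mean()

def ema_update(ema, new, momentum=0.9):
    """Moving-average update to stabilize centroid estimates across batches."""
    return momentum * ema + (1.0 - momentum) * new
```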
Concurrent dual-band transceiver systems are widely used where transceivers are required to work in different bands at the same time. In order to miniaturize the concurrent dual-band transceiver system and reduce the number of components in a transceiver, a novel receiver mixing structure with one frequency-divided local oscillator is proposed. Compared with the traditional mixing architecture with two local oscillators, the proposed structure eliminates one local oscillator and one bandpass filter. In addition, the output signal of the proposed mixing architecture has better error vector magnitude (EVM) performance. The proposed mixing architecture is described in detail; the method of local oscillator frequency selection and the applicable conditions of the proposed structure are derived mathematically; and the conclusions are verified by experimental results.
Computer Applied Technology
Lithium-ion batteries are the main power supply equipment in many fields due to their advantages of no memory effect, high energy density, long cycle life and no pollution to the environment. Accurate prediction of the remaining useful life (RUL) of lithium-ion batteries can avoid serious economic and safety problems such as spontaneous combustion. At present, most RUL prediction studies ignore the capacity recovery phenomenon caused by the rest time between charge and discharge cycles. In this paper, a fusion method based on the Wasserstein generative adversarial network (GAN) is proposed. This method achieves a more reliable and accurate RUL prediction of lithium-ion batteries by combining an artificial neural network (ANN) model, which takes the rest time between battery charging cycles into account, with empirical degradation models, which provide the correct degradation trend. The weight of each model is calculated by the discriminator in the Wasserstein GAN model. Four lithium-ion battery data sets provided by the National Aeronautics and Space Administration (NASA) Ames Research Center are used to demonstrate the feasibility and accuracy of the proposed method.
Narrowband Internet of things (NB-IoT) and enhanced machine-type communications (eMTC) are two new IoT-oriented solutions introduced by the 3rd generation partnership project (3GPP) in Rel-13. In order to meet the new requirements (such as long battery life, low device cost, low deployment cost, extended coverage and support for a massive number of devices) of machine-to-machine (M2M) communication, these two technologies introduce several improvements to the random access (RA) mechanism compared with traditional long term evolution (LTE). For example, repetition of preamble transmission and coverage enhancement (CE) levels have been proposed to offer communication services over a wider area. In addition, NB-IoT adopts a new spectrum allocation method and a new type of preamble structure to meet the requirement of a massive number of connections. We summarize the details and differences of the RA process in LTE, eMTC and NB-IoT. Afterwards, as an improvement, we propose an enhanced access protocol for NB-IoT. Finally, performance analysis and comparison are presented in terms of access success probability, average access delay, access spectrum efficiency and average number of RA attempts.
The bionics-based swarm intelligence optimization algorithm is a typical nature-inspired heuristic algorithm whose goal is to find the global optimal solution of an optimization problem. It simulates the group behavior of various animals and uses information exchange and cooperation between individuals to achieve the optimization goal through simple and effective interactions with experienced and intelligent individuals. This paper first introduces the principles of various swarm intelligence optimization algorithms. Then, typical applications of these algorithms in various fields are listed. After that, the advantages and shortcomings of the algorithms are summarized. Next, improvement strategies for various swarm intelligence optimization algorithms are explained. Finally, the future development of swarm intelligence optimization algorithms is discussed.
Sparse code multiple access-based uplink grant-free transmission (SCMA-UGFT) has been proposed to realize ultra-reliable and low-latency communication (URLLC) in the fifth generation (5G) system. Without the process of resource request and grant, users may collide on the same resource. To compensate for the potential decline in user performance, resource scheduling becomes a tough issue in the SCMA-UGFT system. This article proposes a duplicated transmission-based resource scheduling (DTBRS) scheme for the SCMA-UGFT system considering the URLLC scenario. Different from existing schemes, more than one shared basic transmission unit (BTU) is allocated to a user equipment (UE) in the proposed DTBRS scheme for initial transmission, which realizes duplicated transmission and guarantees transmission reliability. Besides, according to the proposed DTBRS scheme, one or two exclusive BTUs are assigned to a UE for retransmission to avoid re-collision. Finally, each packet is given a lifetime to limit the transmission latency so as to meet the URLLC latency requirement. The simulation demonstrates that the DTBRS scheme achieves better performance than the existing state-of-the-art scheme in terms of average packet drop rate.
In the matrix factorization (MF) based collaborative filtering recommendation method, the most critical part is handling the interaction between user and item features. The mainstream approach is to use the inner product in MF to describe the user-item relationship. However, as a shallow model, MF has limitations in describing relationships in the data. In addition, when the data size is large, the performance of MF is often poor due to data sparsity and noise. This paper presents a model called PIDC, short for potential interaction data clustering based deep learning recommendation. First, it uses classifiers to filter and cluster recommended items to address the problem of sparse training data. Second, it combines MF and a multi-layer perceptron (MLP) to optimize the prediction, eliminating the limitation that the inner product places on the model's expressive ability. The proposed PIDC model is tested on two datasets. The experimental results show that, compared with existing benchmark algorithms, the model improves the recommendation performance.
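As an illustration of combining an MF branch with an MLP branch over user and item embeddings, the following PyTorch sketch shows one common way to fuse the two; the layer sizes and fusion head are assumptions and are not claimed to be the exact PIDC architecture.

```python
import torch
import torch.nn as nn

class MFMLPRecommender(nn.Module):
    """Toy fusion of a matrix-factorization branch (element-wise product of
    embeddings) and an MLP branch (concatenated embeddings), as a sketch of
    the MF + multi-layer perceptron combination described above."""
    def __init__(self, num_users, num_items, dim=16):
        super().__init__()
        self.user_mf = nn.Embedding(num_users, dim)
        self.item_mf = nn.Embedding(num_items, dim)
        self.user_mlp = nn.Embedding(num_users, dim)
        self.item_mlp = nn.Embedding(num_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim // 2), nn.ReLU())
        self.out = nn.Linear(dim + dim // 2, 1)   # fuses both branches

    def forward(self, user_ids, item_ids):
        mf_vec = self.user_mf(user_ids) * self.item_mf(item_ids)
        mlp_vec = self.mlp(torch.cat([self.user_mlp(user_ids),
                                      self.item_mlp(item_ids)], dim=-1))
        return torch.sigmoid(self.out(torch.cat([mf_vec, mlp_vec], dim=-1)))
```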
Aiming at the hysteresis problem of human motion intention recognition algorithms based on kinematic sensors, a real-time prediction method for human lower-limb motion tendency is proposed. It can be used to control exoskeleton robots, intelligent prostheses and other equipment in advance, so as to eliminate the hysteresis of equipment movement. Firstly, the angle signals of the ankle, knee and hip are segmented by their extreme points. Secondly, a multi-dimensional temporal association rules algorithm is used to analyze the angle signals and find the relationships between signal patterns in adjacent time segments. Finally, the signal patterns at the next moment are predicted through the association rules, so as to predict the motion tendency of the human lower limbs. Experimental results show that the proposed scheme achieves an average prediction accuracy of 78.3% for each signal segment, and can predict the subsequent motion of the human lower limbs 92.24 ms in advance on average.
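The core of the temporal association step can be illustrated by counting how often one signal pattern is followed by another in adjacent segments; the sketch below is a simplified, single-dimensional version with hypothetical gait-pattern labels, not the paper's multi-dimensional rule miner.

```python
from collections import Counter

def mine_transition_rules(pattern_sequence, min_confidence=0.6):
    """Count how often pattern A in one segment is followed by pattern B in the
    next segment, and keep rules A -> B whose confidence exceeds a threshold."""
    pair_counts = Counter(zip(pattern_sequence, pattern_sequence[1:]))
    antecedent_counts = Counter(pattern_sequence[:-1])
    rules = {}
    for (a, b), n in pair_counts.items():
        conf = n / antecedent_counts[a]
        if conf >= min_confidence:
            rules[a] = max(rules.get(a, (None, 0.0)), (b, conf), key=lambda x: x[1])
    return rules  # maps current pattern to (most likely next pattern, confidence)

# Hypothetical gait-pattern labels per segment.
print(mine_transition_rules(["swing", "stance", "swing", "stance", "swing", "push"]))
```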
Channel state information (CSI) is essential for downlink transmission in millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems. Multi-panel antenna arrays are exploited in mmWave MIMO systems due to their superior performance. Two channel estimation algorithms, named generalized joint orthogonal matching pursuit (G-JOMP) and optimized joint orthogonal matching pursuit (O-JOMP), are proposed in this paper for multi-panel mmWave MIMO systems based on compressed sensing (CS) theory. G-JOMP exploits the common sparsity structure of the channel responses between the antenna panels of the base station (BS) and the users to reduce the computational complexity of channel estimation. The O-JOMP algorithm is then developed to further improve the estimation accuracy through optimal panel selection based on the power of the received signal. Simulation results show that the proposed algorithms outperform the conventional orthogonal matching pursuit (OMP) based algorithm in multi-panel mmWave MIMO systems.
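For reference, the greedy support-selection procedure that the proposed J-OMP variants build upon is plain orthogonal matching pursuit; a generic NumPy sketch follows, with the sensing matrix and sparsity level as assumed inputs.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Plain orthogonal matching pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit the coefficients by least squares."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        correlations = np.abs(Phi.conj().T @ residual)
        support.append(int(np.argmax(correlations)))
        Phi_s = Phi[:, support]
        coef, *_ = np.linalg.lstsq(Phi_s, y, rcond=None)
        residual = y - Phi_s @ coef
    x_hat = np.zeros(Phi.shape[1], dtype=complex)
    x_hat[support] = coef
    return x_hat
```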
Proactive dialogue generates dialogue utterances based on a conversation goal and a given knowledge graph (KG). Existing methods combine the knowledge of each dialogue turn with knowledge triples through hidden variables, resulting in relatively poor interpretability of the generated results. An interpretable knowledge-aware path (KAP) model is proposed for knowledge reasoning in proactive dialogue generation. The KAP model can transform the explicit and implicit knowledge of each dialogue turn into a corresponding dialogue state matrix, thus forming the KAP for the dialogue history. Based on the KAP, the dialogue state vector of the next turn can be inferred from both the topology and the semantics of the KG. This vector indicates the knowledge distribution of the next sentence, so it enhances the accuracy and interpretability of dialogue generation. Experiments show that the KAP model's dialogue generation is closer to actual conversation than other state-of-the-art proactive dialogue models.
Through-silicon via (TSV) is a key enabling technology for the emerging 3-dimension (3D) integrated circuits
(ICs). However, the crosstalk between the neighboring TSVs is one of the important sources of the soft faults. To
suppress the crosstalk, the Fibonacci-numeral-system-based crosstalk avoidance code (FNS-CAC) is an effective scheme. Meanwhile, self-repair schemes are often used to deal with hard faults, but the repair results may change the mapping of signals to TSVs and thus may reduce the crosstalk suppression ability of FNS-CAC.
A TSV self-repair technique with an improved FNS-CAC codec is proposed in this work. The codec is designed
based on the improved Fibonacci numeral system (FNS) adders, which are adaptive to the health states of TSVs.
The proposed self-repair technique is able to suppress the crosstalk and repair the faulty TSVs simultaneously. The
simulation and analysis results show that the proposed scheme keeps the crosstalk suppression ability of the original
FNS-CAC, and it has higher reparability than the local self-repair schemes, such as the signal-switching-based and
the signal-shifting-based counterparts.
In order to meet the emerging requirements of high computational complexity, low delay and low energy consumption in 5th generation wireless systems (5G) networks, ultra-dense networks (UDNs) combined with multi-access edge computing (MEC) can further improve network capacity and computing capability. In addition, the integration of green energy can effectively reduce the on-grid energy consumption of the system and realize green computation. This paper studies the joint optimization of user association (UA) and resource allocation (RA) in MEC-enabled UDNs under a green energy supply pattern, in which users need to perceive the green energy status of base stations (BSs) and associate with the one that has abundant resources. To minimize the computation cost of all users, the optimization problem is formulated as a mixed integer nonlinear program (MINLP), which is NP-hard. To solve the problem, a deep reinforcement learning (DRL)-based association and optimized allocation (DAOA) scheme is designed, which solves it in two stages. The simulation results show that the proposed scheme performs well in terms of computation cost and timeout ratio, and can also potentially achieve load balancing.
Facial recognition has become one of the most common identity authentication technologies. However, problems such as uneven lighting and occluded faces have increased the difficulty of liveness detection, and there is little research on face liveness detection under occlusion conditions. This paper designs a face recognition technique suitable for different degrees of facial occlusion, which employs facial datasets of near-infrared (NIR) images and visible (VIS) light images to examine the single-modality detection accuracy (the experimental control group) and the corresponding high-dimensional features through a residual network (ResNet). Based on the idea of data fusion, we propose two feature fusion methods, which extract and fuse the data of one and two convolutional layers, respectively, from the two single-modality detectors. A new ResNet is then applied to the fused high-dimensional features to obtain the dual-modality detection accuracy. The experimental results show that the dual-modality face liveness detection model improves detection accuracy and robustness compared with the single-modality models. Fusing two-layer features from the single-modality detectors in the above dual-modality detector can also improve detection accuracy, and it does not increase the algorithm's complexity.
At present, there is an urgent need for blockchain interoperability technology to realize interconnection between various blockchains, data communication and value transfer between blockchains, so as to break the 'value silo' phenomenon of individual blockchains. Firstly, this paper lists the existing understandings of the concept of interoperability. Secondly, it presents the key technical issues of cross-chain interaction, including the cross-chain mechanism, interoperability, eventual consistency, and universality. Then, the implementation of each key cross-chain technology is analyzed, including hash-locking, two-way pegs, notary schemes, relay chain schemes, cross-chain protocols, and global identity systems. After that, five typical cross-chain systems are introduced and compared. In addition, two examples of cross-chain programmability and their analysis are given. Finally, the current state of cross-chain technology is summarized from two aspects: key technology implementation and cross-chain application enforcement. Cross-chain technology as a whole has formed a centralized, fixed mechanism as well as a trend toward modular design; solutions for some mature applications have been established in the relevant standards organizations, and the cross-chain technology architecture tends toward unification, which is expected to accelerate the evolution of an open cross-chain network that supports the real needs of interconnecting all chains.
Special Topic: Data Security and Privacy Preservation in Cloud/Fog/Edge-Enabled Internet of Things
The Internet of things (IoT) can provide product traceability functions for industrial systems. Emerging blockchain technology can solve the problem that current industrial Internet of things (IIoT) systems lack unified product data sharing services. Blockchain technology based on the directed acyclic graph (DAG) structure is more suitable for high-concurrency environments. However, owing to its distributed architecture, directly storing product data causes authentication problems in data management. In response, an IIoT architecture based on a DAG blockchain is proposed in this paper, which can provide efficient data management for product data stored on the DAG blockchain, and an authentication scheme suitable for this structure is given. The security of the scheme rests on a discrete-logarithm-based assumption put forth by Lysyanskaya, Rivest, Sahai and Wolf (LRSW), who also showed that it holds for generic groups. The sequential aggregate signature scheme is more secure and efficient, and the new scheme is secure in theory and more efficient in engineering practice.
Special Topic: Cultural Computing
In order to study the role of the new technological concept of shared experiences in the digital interactive experience of cultural heritage, and to apply it to solving current problems in this field, this work starts from the mixed reality (MR) technology on which shared experiences rely: proper software and hardware platforms were investigated and selected, a universal shared experiences solution was designed, and an experimental project based on the proposed solution was built to verify its feasibility. In the end, a proven and workable shared experiences solution was obtained. This solution includes a proposed MR spatial alignment method and integrates the existing MR content production process and standard network synchronization functions. Furthermore, it is concluded that the introduction and reasonable use of new technologies can help the development of the digital interactive experience of cultural heritage. The shared experiences solution for the digital interactive experience of cultural heritage balances investment in the exhibition, display effect, and user experience. It can speed up the promotion of cultural heritage and bring the vitality of MR technology to relevant projects.
In this paper, the performance of you only look once (YOLO) series detectors on Chinese license plate recognition (LPR) in a real intelligent transportation system (ITS) monitoring scene is investigated. Specifically, a precise and efficient automatic license plate recognition (ALPR) system based on the YOLOv4 detector is proposed. The proposed ALPR system contains three stages: vehicle detection, license plate detection (LPD) and LPR. In the vehicle detection stage, the YOLOv4 detector is directly applied. In the LPD stage, the YOLOv4-tiny detector is exploited. In the last stage, a YOLOv4-tiny detector with an attention mechanism is proposed for LPR. In addition, a large Chinese license plate dataset containing 10 500 images collected from all 31 provinces in the Chinese mainland is created. This dataset is named Hefei University of Technology license plate version 1 (HFUT-LP v1) and is collected in the real ITS monitoring scene. In order to compare the performance of different object detection algorithms for ALPR, a variety of object detection algorithms are used in a comprehensive performance evaluation. Experimental results show that the proposed ALPR system achieves very high accuracy and very fast processing speed, making it suitable for real-time LPR.
Special Topic: Artificial Intelligence of Things
To tackle the challenge of applying convolutional neural networks (CNNs) on field-programmable gate arrays (FPGAs) due to their computational complexity, a high-performance CNN hardware accelerator based on the Verilog hardware description language was designed, which utilizes a pipeline architecture with three parallel dimensions: input channels, output channels, and convolution kernels. Firstly, two multiply-and-accumulate (MAC) operations were packed into one digital signal processing (DSP) block of the FPGA to double the computation rate of the CNN accelerator. Secondly, strategies of feature map block partitioning and special memory arrangement were proposed to optimize the total amount of off-chip memory access and reduce the pressure on FPGA bandwidth. Finally, an efficient computational array combining a multiply-add tree and the Winograd fast convolution algorithm was designed to balance hardware resource consumption and computational performance. The highly parallel CNN accelerator was deployed on an Alinx ZU3EG board, using the YOLOv3-tiny algorithm as the test object. The average computing performance of the CNN accelerator is 127.5 giga operations per second (GOPS). The experimental results show that the hardware architecture effectively improves the computational power of CNNs and provides better performance than other existing schemes in terms of power consumption and the efficiency of DSPs and block random access memories (BRAMs).
For classification problems, the traditional least squares twin support vector machine (LSTSVM) generates two nonparallel hyperplanes directly by solving two systems of linear equations instead of a pair of quadratic programming problems (QPPs), which makes LSTSVM much faster than the original twin support vector machine (TSVM). However, the standard LSTSVM, which adopts a quadratic loss measured by the minimal distance, is sensitive to noise and unstable under re-sampling. To overcome this problem, the expectile distance is used to measure the margin between classes, and an LSTSVM with an asymmetric squared loss (aLSTSVM) is proposed. Compared with the original LSTSVM with the quadratic loss, the proposed aLSTSVM not only has comparable computational accuracy, but also exhibits good properties such as noise insensitivity, scatter minimization and re-sampling stability. Numerical experiments on synthetic datasets, normally distributed clustered (NDC) datasets and University of California, Irvine (UCI) datasets with different noises confirm the good performance and validity of the proposed algorithm.
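The asymmetric (expectile-style) squared loss underlying aLSTSVM can be sketched as follows; the parameter name p and the usage example are assumptions for illustration.

```python
import numpy as np

def asymmetric_squared_loss(residuals, p=0.7):
    """Expectile-style asymmetric squared loss: residuals on one side of zero
    are weighted by p, the other side by (1 - p); p = 0.5 recovers the usual
    symmetric quadratic loss."""
    r = np.asarray(residuals, dtype=float)
    weights = np.where(r >= 0, p, 1.0 - p)
    return np.sum(weights * r ** 2)

print(asymmetric_squared_loss([1.0, -1.0], p=0.5))  # symmetric case: 1.0
```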
When the power of the mainlobe interference received by the receiver is at the same level as that of the sidelobe interference, the traditional eigen-projection interference suppression method suffers from severe beam deformation and peak shift. Aiming at these problems, a beam pattern optimization method (BPOM) is proposed, which can suppress the interference well even when the mainlobe interference power is approximately equal to the sidelobe interference power. In this method, the mainlobe interference eigenvectors are first determined according to a correlation criterion. Then, by eigenvalue comparison, the sidelobe interference eigenvectors whose eigenvalues are approximately equal to the mainlobe interference eigenvalues are identified. After that, a projection matrix is constructed to filter out the mainlobe and sidelobe interference. Finally, the covariance matrix is reconstructed and the weight vector for beamforming is obtained. Simulations show that BPOM has better output performance than existing algorithms when the power of the mainlobe interference is close to that of the sidelobe interference.
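A minimal NumPy sketch of the eigen-projection idea (select the dominant interference eigenvectors of the sample covariance matrix, build a blocking projection, and project the covariance before beamforming) is given below; the interference-subspace dimension is assumed known here, whereas the paper determines it via correlation and eigenvalue comparison.

```python
import numpy as np

def blocking_projection(R, num_interference):
    """Project out the dominant interference subspace of a sample covariance
    matrix R: P = I - U U^H, with U the eigenvectors of the largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    U = eigvecs[:, -num_interference:]        # dominant (interference) subspace
    P = np.eye(R.shape[0]) - U @ U.conj().T
    return P, P @ R @ P.conj().T              # projected covariance for beamforming
```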
The extraction and description of image features are very important for visual simultaneous localization and mapping (V-SLAM). A rotated boosted efficient binary local image descriptor (BEBLID) SLAM (RB-SLAM) algorithm based on an improved oriented FAST and rotated BRIEF (ORB) feature description is proposed in this paper, which addresses the low localization accuracy and time efficiency of the current ORB-SLAM3 algorithm. Firstly, it uses the BEBLID to replace the feature point description algorithm of the original ORB to enhance the expressiveness and description efficiency of the image. Secondly, it adds rotational invariance to the BEBLID using the orientation information of the feature points, and selects the rotationally stable bits in the BEBLID to further enhance its rotational invariance. Finally, it retrains the binary visual dictionary based on the BEBLID to reduce the cumulative error of V-SLAM and improve the loading speed of the visual dictionary. Experiments show that the dictionary loading efficiency is improved by more than 10 times. The RB-SLAM algorithm improves the trajectory accuracy by 24.75% on the TUM dataset and by 26.25% on the EuRoC dataset compared with the ORB-SLAM3 algorithm.
The atmospheric duct is a vital radio wave environment. Conventional methods of forecasting the atmospheric duct
mainly include statistical analysis based on sounding observation data and mesoscale numerical model-based
prediction. The former can provide accurate duct information but is highly dependent on the acquisition of data
sets. The latter is more practical but still lacks accuracy. This paper introduces machine learning to establish a
novel meteorological parameter correction model for atmospheric duct prediction. In detail, using weather research and forecasting (WRF) model data and spatiotemporal characteristics as input, sounding data as labels, and the extreme gradient boosting (XGBoost) model for training, the best meteorological parameter correction effect is obtained, i.e., the accuracy of the forecast meteorological parameters is improved by about 65.4%. Combining the mapping relationship between meteorological parameters and the corrected atmospheric refractive index (CARI), and the transition mechanism from CARI to duct parameters, a new duct forecasting mechanism is proposed. Owing to the high efficiency of the numerical model and the accuracy of the sounding data, the new duct forecasting mechanism has excellent
performance. By comparing the duct forecasting results, the forecasting accuracy of the new duct forecasting model
is significantly higher than that of the mesoscale model.
Special Topic: Digital Human
A multi-layer dictionary learning algorithm that jointly uses global constraints and Fisher discrimination (JGCFD-MDL) for image classification tasks is proposed. The algorithm reveals the manifold structure of the data by learning the global constraint dictionary and introduces the Fisher discriminative constraint dictionary to minimize the intra-class dispersion of samples and increase the inter-class dispersion. To further quantify the abstract features that characterize the data, a multi-layer dictionary learning framework is constructed to obtain high-level complex semantic structures and improve image classification performance. Finally, the algorithm is verified on a multi-label dataset of court costumes of the Ming and Qing Dynasties, and better performance is obtained. Experiments show that, compared with the local similarity algorithm, the average precision is improved by 3.34%. Compared with the single-layer dictionary learning algorithm, the one-error is improved by 1.00% and the average precision is improved by 0.54%. Experiments also show that it performs better on general datasets.
With the development of deep learning (DL), joint source-channel coding (JSCC) solutions for end-to-end transmission have gained a lot of attention. Adaptive deep JSCC schemes support dynamically adjusting the rate according to different channel conditions during transmission, enhancing robustness in dynamic wireless environments. However, most existing adaptive JSCC schemes only consider different channel conditions, ignoring the different importance of features in image processing and transmission. Uniform compression of different features in an image may compromise critical image details, particularly in low signal-to-noise ratio (SNR) scenarios. To address these issues, this paper introduces a dual attention mechanism and proposes an SNR-adaptive deep JSCC mechanism with a convolutional block attention module (CBAM), in which matrix operations are applied to features in the spatial and channel dimensions, respectively. The proposed solution concatenates the pooled features with the SNR level and passes them sequentially through the channel attention network and the spatial attention network to obtain the importance evaluation result. Experiments show that the proposed solution outperforms other baseline schemes in terms of peak SNR (PSNR) and structural similarity (SSIM), particularly in low-SNR scenarios or when dealing with complex image content.
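A compact PyTorch sketch of a CBAM-style block with the SNR level concatenated to the pooled channel features is shown below to make the mechanism concrete; the layer sizes, the kernel size and the way the SNR is injected are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SNRAwareCBAM(nn.Module):
    """CBAM-style attention: channel attention from pooled features concatenated
    with the SNR level, followed by spatial attention from channel-wise pooling."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels + 1, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x, snr_db):
        b, c, _, _ = x.shape
        snr = snr_db.view(b, 1)
        avg = x.mean(dim=(2, 3))                       # (b, c) average-pooled features
        mx = x.amax(dim=(2, 3))                        # (b, c) max-pooled features
        ca = torch.sigmoid(self.channel_mlp(torch.cat([avg, snr], dim=1)) +
                           self.channel_mlp(torch.cat([mx, snr], dim=1)))
        x = x * ca.view(b, c, 1, 1)                    # channel re-weighting
        sa = torch.sigmoid(self.spatial_conv(torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sa                                   # spatial re-weighting
```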
The graph conjoint attention (CAT) network is one of the best graph convolutional network (GCN) frameworks,
which uses a weighting mechanism to identify important neighbor nodes. However, this weighting mechanism is
learned based on static information, which means it is susceptible to noisy nodes and edges, resulting in significant
limitations. In this paper, a method is proposed to obtain context dynamically based on random walk, which allows
the context-based weighting mechanism to better avoid noise interference. Furthermore, the proposed context-based
weighting mechanism is combined with the node content-based weighting mechanism of the graph attention (GAT)
network to form a model based on a mixed weighting mechanism. The model is named as the context-based and
content-based graph convolutional network (CCGCN). CCGCN can better discover important neighbors, eliminate
noise edges, and learn node embedding by message passing. Experiments show that CCGCN achieves state-of-the-
art performance on node classification tasks in multiple datasets.
Membership inference attacks on machine learning models have drawn significant attention. While current research primarily utilizes shadow modeling techniques, which require knowledge of the target model and its training data, practical scenarios involve black-box access to the target model with no such information available. Limited training data further complicates the implementation of these attacks. In this paper, we experimentally compare common data enhancement schemes and propose a data synthesis framework based on the variational autoencoder generative adversarial network (VAE-GAN) to extend the training data for shadow models. Meanwhile, this paper proposes a shadow model training algorithm based on adversarial training to improve the shadow model's ability to mimic the predicted behavior of the target model when the target model's information is unknown. By conducting attack experiments on different models under the black-box access setting, this paper verifies the effectiveness of the VAE-GAN-based data synthesis framework in improving the accuracy of membership inference attacks. Furthermore, we verify that the shadow model trained with the adversarial training approach mimics the predicted behavior of the target model more closely. Compared with existing methods, the method proposed in this paper achieves a 2% improvement in attack accuracy and delivers better attack performance.
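For context, the final step shared by most shadow-model membership inference attacks can be sketched as training a binary attack classifier on prediction confidence vectors; the sketch below (using scikit-learn) covers only this generic step and omits the paper's VAE-GAN data synthesis and adversarial shadow training.

```python
from sklearn.linear_model import LogisticRegression

def train_attack_model(shadow_confidences, membership_labels):
    """Binary attack classifier: the input is a shadow model's softmax confidence
    vector for a sample, the label is 1 if that sample was in the shadow model's
    training set and 0 otherwise."""
    return LogisticRegression(max_iter=1000).fit(shadow_confidences, membership_labels)

def infer_membership(attack_model, target_confidences):
    """Predicted probability that each queried sample was a training member
    of the target model."""
    return attack_model.predict_proba(target_confidences)[:, 1]
```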
In frequency division duplex (FDD) massive multiple-input multiple-output (MIMO) systems, a bidirectional positional attention network (BPANet) was proposed to address the high computational complexity and low accuracy of existing deep learning-based channel state information (CSI) feedback methods. Specifically, a bidirectional position attention module (BPAM) was designed in the BPANet to improve the network performance. The BPAM captures the distribution characteristics of the CSI matrix by integrating channel and spatial dimension information, thereby enhancing the feature representation of the CSI matrix. Furthermore, channel attention is decomposed into two one-dimensional (1D) feature encoding processes, effectively reducing computational costs. Simulation results demonstrate that, compared with the existing representative method complex input lightweight neural network (CLNet), BPANet reduces computational complexity by an average of 19.4% and improves accuracy by an average of 7.1%. Additionally, it performs better in terms of running time delay and cosine similarity.
As one of the critical technologies for 6th generation (6G) mobile communication systems, artificial intelligence (AI) technology will provide complete automation for connecting the
virtual and physical worlds. In order to construct the future ubiquitous intelligent network, people are beginning to
rethink how mobile communication systems transmit and exploit intelligent information. This paper proposes a new
communication paradigm, called the Intellicise communication system: model-driven semantic communication.
Intellicise communication system is built on top of the traditional communication system and innovatively adds a new
feature dimension on top of the traditional source coding, which enables the communication system to evolve from the traditional transmission of bits to the transmission of “models”. Like the semantic base (Seb) for semantic communication, the model is considered as a new feature obtained from joint source-channel coding. The sink node can reconstruct the original signal based on the received model and the encoded sequence. In addition, the
performance evaluation metrics and the implementation details of the Intellicise communication system are discussed
in this paper. Finally, preliminary results of model-driven image transmission in the Intellicise communication
system are presented.
As the traditional semiconductor complementary metal oxide semiconductor (CMOS) integrated circuit technology gradually approaches the limit of Moore's law, quantum computing, as a new computing technology with the potential for higher computing speed and lower power consumption, is attracting more and more attention from governments and research institutions around the world. For instance, the United States (US) government is adopting a series of bills to deploy new quantum information processing technology. The European Union's “quantum declaration” plans to realize customized quantum computers with more than 100 qubits in the next 10 years. The Chinese government also provides strong support for quantum computer research through the National Science and Technology Major Projects. In the business field, major companies such as International Business Machines (IBM) Corporation, Intel, Google, Alibaba, Huawei and Baidu have joined the “quantum supremacy” competition in order to seize the initiative in the future information field. Facing the rapid development of quantum computing, we believe that we should learn from the early classical computer industry, form an industrial system, and develop the quantum computing industry. We should also use scientific systems engineering methods to carry out the research and development of quantum computers and establish the ecological environment of the quantum computer industry with demand as the driving force, so as to better serve the development of the national economy.
Special Topic: Optical Communication and Artificial Intelligence
Optical networks play an important role in telecommunication networks, supporting high-capacity and long-distance transmission of Internet traffic. However, as optical networks scale and evolve, they face great challenges in terms of network operation, optimization and maintenance. Artificial intelligence (AI) has proved superior in addressing complex problems by mimicking the cognitive skills of the human mind. In this paper, we provide a comprehensive investigation of AI applications in optical transport networks. First, we give a general AI-based control architecture for optical transport networks. Then, we discuss several typical applications of AI models and algorithms in optical networks. Different use cases are considered, including network planning, quality of transmission (QoT) estimation, network reconfiguration, traffic prediction, failure management and so on. In addition, we also present some potential technical challenges for AI application in optical networks in the coming years.
Multi-agent System Cooperative Control
This paper studies the dynamic event-triggered leader-follower consensus of nonlinear multi-agent systems (MASs) under a directed weighted graph containing a directed spanning tree, and also considers the effects of disturbances and a leader with non-zero control input. Firstly, a novel distributed control protocol is designed for uncertain disturbances and the leader's non-zero control input in MASs. Secondly, a novel dynamic event-triggered control (DETC) strategy is proposed, which eliminates the need for continuous communication between agents and reduces the communication resources required. By introducing dynamic thresholds, the complexity of excluding Zeno behavior within the system is reduced. Finally, the effectiveness of the proposed theory is validated through numerical simulation.
In this paper, a novel superjunction 4H-silicon carbide (4H-SiC) trench-gate insulated-gate bipolar transistor (IGBT) featuring an integrated clamping PN diode between the P-shield and the emitter (TSD-IGBT) is designed and theoretically studied. The heavily doped superjunction layer contributes to a low specific on-resistance, an excellent electric field distribution, and a quasi-unipolar drift current. The anode of the clamping diode is in floating contact with the P-shield. In the on-state, the potential of the P-shield is raised to the turn-on voltage of the clamping diode, which prevents hole extraction below the N-type carrier storage layer (NCSL). Additionally, during the turn-off transient, once the clamping diode is turned on, it provides an additional hole extraction path. Furthermore, the potential drop across the semiconductor near the trench-gate oxide is effectively reduced in the off-state.
For time-frequency overlapped signals, a low-complexity single-channel blind source separation (SBSS) algorithm is proposed in this paper. The algorithm not only introduces Gibbs sampling theory to separate the mixed signals, but also adopts orthogonal triangle decomposition-M (QRD-M) to reduce the computational complexity. Analysis and simulation results demonstrate that the separation performance of the proposed algorithm is similar to that of the per-survivor processing (PSP) algorithm, while its computational complexity is sharply reduced.
To achieve higher energy utilization and lower generation cost for renewable sources (e.g., wind and solar energy), much work has focused on demand response in the smart grid (SG). Nonetheless, most existing studies consider energy trading with the utility company, which results in high energy loss during the conversion from high voltage to low voltage as well as privacy leakage. Besides, relatively few studies are devoted to electricity scheduling and price optimization among households without a third party. To cope with these issues, a novel deep deterministic policy gradient (DDPG)-based energy trading method with consortium blockchain (DETCB) is introduced. Firstly, in order to enhance system security, energy transactions among households are executed on a consortium blockchain, which provides not only anonymous trading but also a public ledger. Moreover, since the primary system-level objective is to maximize social welfare, non-profit controllers exploit an iterative decision-making method combined with the DDPG algorithm to obtain optimal trading prices and carry out optimal electricity allocation. Security analysis demonstrates that DETCB contributes to creating a secure, stable and trustworthy environment. Furthermore, numerical results show excellent performance in terms of social welfare, algorithm efficiency, and total transaction energy.
Cloud computing emerges as a new computing paradigm that can provide elastic services for users around the world. It provides good opportunities to solve large-scale scientific problems with less effort. Application deployment remains an important issue in clouds. Appropriate scheduling mechanisms can shorten the total completion time of an application and therefore improve the quality of service (QoS) for cloud users. Unlike current scheduling algorithms which mostly focus on single task allocation, we propose a deadline-based scheduling approach for data-intensive applications in clouds. It does not simply treat the total completion time of an application as the sum of all its subtasks' completion times. Not only the computation capacity of the virtual machine (VM) but also the communication delay and data access latencies are taken into account. Simulations show that our proposed approach has a decided advantage over two other algorithms.
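As a concrete illustration of this scheduling idea, the following sketch estimates a subtask's finish time on a candidate VM by combining data access latency, communication delay, and computation time rather than summing subtask times alone, and checks the result against a deadline. All attribute names and units are illustrative assumptions, not the paper's notation.

```python
# Sketch: estimate a subtask's finish time on a candidate VM by combining
# compute time with communication delay and data access latency.
# length/mips/data/bw/delay are illustrative attributes, not the paper's.

def finish_time(ready_time, task_length, vm_mips, data_size, bandwidth, link_delay):
    transfer = link_delay + data_size / bandwidth   # data access latency
    compute = task_length / vm_mips                 # execution on the VM
    return ready_time + transfer + compute

def pick_vm(task, vms, deadline):
    """Choose the VM that finishes the task earliest; report deadline misses."""
    best = min(vms, key=lambda vm: finish_time(vm["ready"], task["length"],
                                               vm["mips"], task["data"],
                                               vm["bw"], vm["delay"]))
    t = finish_time(best["ready"], task["length"], best["mips"],
                    task["data"], best["bw"], best["delay"])
    return best, t, t <= deadline

vms = [{"ready": 0.0, "mips": 2000, "bw": 100.0, "delay": 0.05},
       {"ready": 1.0, "mips": 4000, "bw": 50.0,  "delay": 0.20}]
task = {"length": 8000, "data": 500.0}   # MI and MB, illustrative units
print(pick_vm(task, vms, deadline=6.0))
```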
Aiming at making full use of the analog-to-digital converter (ADC) digitizing bits without oversaturation while keeping the peak-to-average ratio (PAR) stable, this paper puts forward a new segmented full-digital (SFD) automatic gain control (AGC) algorithm for a long term evolution (LTE) communication system. A segmented digital gain control strategy is adopted to adjust the gain by only one step based on the detected power status. Whether the gain needs to be adjusted is determined by the current signal state, derived from the change between adjacent root mean square (RMS) values of the input signal, rather than by the difference between the power level of the current signal and the target signal. Software simulation and hardware implementation were conducted with an LTE frequency division duplex (FDD) uplink signal, and the results indicate that the proposed AGC algorithm can judge the power status accurately and hence adjust the gain precisely in one step with a short delay; furthermore, it makes full use of the ADC digitizing bits without oversaturation while keeping the PAR stable. In addition, the mean error vector magnitude (EVM) was confined to less than 1.6%, meeting the 3rd generation partnership project (3GPP) standard well.
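The one-step gain decision can be illustrated by the simplified sketch below; the step size, RMS-change threshold, and target back-off are illustrative assumptions rather than the algorithm's actual values. It only shows that adjustment is triggered by the change between adjacent RMS values rather than by the distance to a target power.

```python
import numpy as np

# Sketch of a segmented, one-step gain decision driven by the change between
# adjacent RMS values. STEP_DB and CHANGE_THRESH_DB are illustrative.
STEP_DB = 6.0            # single gain step applied per adjustment
CHANGE_THRESH_DB = 3.0   # RMS change that signals a new power state

def agc_step(block, prev_rms_db, gain_db, target_backoff_db=-12.0):
    rms_db = 20 * np.log10(np.sqrt(np.mean(block ** 2)) + 1e-12)
    if abs(rms_db - prev_rms_db) > CHANGE_THRESH_DB:      # power state changed
        gain_db += STEP_DB if rms_db + gain_db < target_backoff_db else -STEP_DB
    return gain_db, rms_db

gain, prev = 0.0, -120.0
for amp in (0.01, 0.01, 0.2, 0.2):                        # two power states
    block = amp * np.random.randn(1024)
    gain, prev = agc_step(block, prev, gain)
    print(round(gain, 1))
```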
With the intensive deployment of users and the drastic increase of traffic load, millimeter wave (mmWave) backhaul networks have been widely investigated. A typical mmWave backhaul network consists of a macro base station (MBS) and small base stations (SBSs). How to efficiently associate users with the MBS and the SBSs for load balancing is a key issue in such networks. By adding a virtual power bias to the SBSs, more users can access the SBSs to share the load of the MBS. The bias values must be set reasonably to guarantee backhaul efficiency and quality of service (QoS). An improved Q-learning algorithm is proposed to effectively adjust the bias value for each SBS. In the proposed algorithm, each SBS acts as an independent learning agent and obtains the best behavior, namely the optimal bias value, through a series of training episodes. Besides, an improved behavior selection mechanism is adopted to improve the learning efficiency and accelerate the convergence of the algorithm. Finally, simulations conducted in the 60 GHz band demonstrate the superior performance of the proposed algorithm in backhaul efficiency and user outage probability.
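A minimal per-SBS Q-learning sketch is given below; the discrete bias set, state encoding, reward shape, and plain epsilon-greedy selection are simplifying assumptions (the paper's improved behavior selection mechanism is not reproduced here).

```python
import random

# Sketch of per-SBS Q-learning over a discrete set of bias values.
BIASES = [0, 3, 6, 9, 12]          # candidate bias values in dB (illustrative)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

class SBSAgent:
    def __init__(self, n_states):
        self.q = [[0.0] * len(BIASES) for _ in range(n_states)]

    def act(self, state):
        if random.random() < EPS:              # explore
            return random.randrange(len(BIASES))
        row = self.q[state]                    # exploit the best-known bias
        return row.index(max(row))

    def learn(self, s, a, reward, s_next):
        best_next = max(self.q[s_next])
        self.q[s][a] += ALPHA * (reward + GAMMA * best_next - self.q[s][a])

# Illustrative reward: backhaul efficiency minus an outage penalty (assumed).
def reward(backhaul_eff, outage_prob):
    return backhaul_eff - 10.0 * outage_prob

agent = SBSAgent(n_states=4)
a = agent.act(0)
agent.learn(0, a, reward(0.8, 0.05), 1)
```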
A novel adaptive iterative list decoding (ILD) approach for Reed-Solomon (RS) codes is investigated. The proposed scheme reduces the complexity of the RS Chase algorithm (CA) via an iterative decoding attempt mode. In each decoding attempt, a test pattern is generated by flipping the bits at the least reliable positions (LRPs) within the received hard-decision (HD) vector. The ILD algorithm continues until a test pattern is successfully decoded by the underlying Berlekamp-Massey algorithm (BMA) of RS codes. Since the same bits are flipped, the ILD algorithm provides the same test pattern set as the conventional RS CA, so there is no degradation in error-rate performance. By not decoding all test patterns, the ILD algorithm simplifies the decoding complexity through early termination. Simulation results show that the average complexity of the ILD algorithm is much lower than that of the conventional RS CA (and is similar to that of BMA decoding) in the high signal-to-noise ratio (SNR) region, with no loss in error-rate performance relative to RS Chase decoding.
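The early-terminating attempt loop can be sketched as follows; `bm_decode` is a placeholder for the underlying Berlekamp-Massey based hard-decision decoder (an assumption, not a real library call), and the flip set matches the usual Chase-2 pattern over the eta least reliable positions.

```python
from itertools import product

# Sketch of an early-terminating Chase-style attempt loop: test patterns flip
# subsets of the least reliable positions and are tried one at a time until the
# hard-decision decoder succeeds.
def ild_decode(hard_bits, reliabilities, eta, bm_decode):
    lrps = sorted(range(len(hard_bits)), key=lambda i: reliabilities[i])[:eta]
    for flips in product([0, 1], repeat=eta):      # same test pattern set as Chase
        test = list(hard_bits)
        for pos, f in zip(lrps, flips):
            test[pos] ^= f
        codeword = bm_decode(test)                 # returns None if decoding fails
        if codeword is not None:
            return codeword                        # early termination on success
    return None                                    # all attempts failed
```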
The ubiquity of wireless communication systems has resulted in extensive concern regarding their security issues. A practical secure transmission scheme based on under-sampling of spectrum-sparse signals is designed. Secure transmission is initialized as the intended receiver transmits a pilot signal to the transmitter to perform channel sounding and clock synchronization according to the channel reciprocity principle. Then, sampling clock offset compensation and a precoding method based on channel state information (CSI) can be conducted at the transmitter's end. Because the intended receiver adopts an under-sampling method based on active aliasing, which depends very sensitively on an accurate sampling clock and CSI, eavesdroppers can hardly intercept the signal. Closed-form expressions of the symbol error rate (SER) at the intended receiver's end and the secrecy capacity in the fading wiretap channel are derived. Numerical evaluation and simulations are carried out to validate the effectiveness of the proposed strategy.
It is becoming increasingly easy to obtain more abundant supplies of hyperspectral images (HSIs). Despite this, achieving high resolution remains critical. In this paper, a method named hyperspectral image super-resolution generative adversarial network (HSI-RGAN) is proposed to enhance the spatial resolution of HSI without decreasing its spectral resolution. Different from existing methods with the same purpose, which are based on convolutional neural networks (CNNs) and driven by a pixel-level loss function, the new generative adversarial network (GAN) has a redesigned framework and a targeted loss function. Specifically, the discriminator uses the structure of the relativistic discriminator, which provides feedback on how much the generated HSI looks like the ground truth. The generator achieves more authentic details and textures by removing the pooling and batch normalization layers and adopting smaller filter sizes and two-step upsampling layers. Furthermore, the loss function is improved to take spectral distinctions into account in order to avoid artifacts and minimize the potential spectral distortion that may be introduced by neural networks. In addition, pre-training with the visual geometry group (VGG) network helps the entire model initialize more easily. Benefiting from these changes, the proposed method obtains significant advantages compared to the original GAN. Experimental results also reveal that the proposed method performs better than several state-of-the-art methods.
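For reference, the relativistic (average) discriminator loss that scores generated samples against real ones can be written as the small sketch below; the critic scores are placeholders and the network itself is not shown, so this is only an illustration of the relativistic formulation, not the paper's full objective.

```python
import numpy as np

# Sketch of the relativistic average discriminator loss: the discriminator is
# asked how much more realistic real samples look than generated ones.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_d_loss(c_real, c_fake, eps=1e-12):
    d_real = sigmoid(c_real - c_fake.mean())   # real should beat fake on average
    d_fake = sigmoid(c_fake - c_real.mean())   # fake should not beat real
    return -(np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean())

print(relativistic_d_loss(np.array([2.0, 1.5]), np.array([-1.0, 0.2])))
```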
A low-complexity punctured belief propagation (BP) detection utilizing channel puncturing for multi-user multiple-input multiple-output (MU-MIMO) systems is proposed in this paper. This paper constructs a cycle-free factor graph by puncturing certain non-zero entries in a transformed channel matrix, and proposes an adjusted BP algorithm with a more exact a posteriori message updating equation. The proposed algorithm converges rapidly within several iterations due to the cycle-free structure of the factor graph. Nevertheless, puncturing introduces distorted noise and thus leads to performance degradation. To tackle this issue, this article further designs a layered detection with the help of a maximum likelihood detector (MLD). Simulations demonstrate that the proposed detection algorithm achieves performance identical to that of the MLD with much lower complexity.
The manifold matrix of the received signals can be distorted when the array has gain and phase errors, which degrades the performance of traditional direction of arrival (DOA) estimation approaches. In this paper, a novel active array calibration method for gain and phase errors based on a cascaded neural network (GPECNN) is proposed. The cascaded neural network contains two parts: a signal-to-noise ratio (SNR) classification network and two sets of error estimation subnetworks. The error calibration subnetworks are activated according to the output of the SNR classification network, and each of them consists of a gain error estimation network (GEEN) and a phase error estimation network (PEEN). The drawback that the neural network topology has to change when the number of array elements varies is addressed by the proposed group calibration strategy. Moreover, due to the data characteristics of the input vector, the cascaded neural network can be applied to arrays with arbitrary geometry without repetitive training. Simulation results demonstrate that the GPECNN not only achieves a better balance between calibration performance and calibration complexity than other methods but also can be applied to arrays with different numbers of sensors or different shapes without repetitive training.
Echo cancellation plays an important role in current Internet protocol (IP) based voice interactive systems. Voice state detection is an essential part of echo cancellation. It mainly comprises two parts: double talk detection (DTD) and voice activity detection (VAD). DTD is used to detect double talk and prevent filter divergence in the presence of near-end speech, and VAD is used to determine near-end voice activity and output a silence indicator when the near end is silent. However, a straightforwardly implemented DTD may mistakenly declare double talk when both ends are silent, coefficient updates during far-end silence may lead to filter divergence, and current VAD algorithms may misjudge residual echo from the near end as far-end voice. Therefore, a voice detection algorithm combining DTD and far-end VAD is proposed. DTD is performed only when the VAD declares far-end speech, filtering and coefficient updates are halted when the VAD declares far-end silence, and the adopted far-end VAD is a multi-feature VAD based on short-time energy and correlation. The new algorithm improves the accuracy of DTD, prevents filter divergence, and excludes the case in which the far-end signal only contains residual echo from the near end. Actual test results show that the voice state decisions of the new algorithm are accurate, and the performance of echo cancellation is improved.
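A minimal sketch of the gating logic is shown below, assuming an energy threshold, a lag-1 autocorrelation threshold, and a frame length that are purely illustrative: the far-end VAD decides whether DTD and filter adaptation run at all.

```python
import numpy as np

# Sketch of a multi-feature far-end VAD that gates DTD and filter adaptation.
FRAME = 160                 # 20 ms at 8 kHz (illustrative)
E_THRESH = 1e-4             # short-time energy threshold (illustrative)
R_THRESH = 0.6              # normalized lag-1 autocorrelation threshold

def far_end_vad(frame):
    energy = np.mean(frame ** 2)
    r1 = np.dot(frame[:-1], frame[1:]) / (np.dot(frame, frame) + 1e-12)
    return energy > E_THRESH and r1 > R_THRESH   # True: far-end speech present

def process(far_frame, run_dtd, adapt_filter):
    if far_end_vad(far_frame):
        if not run_dtd():        # adapt only when DTD finds no near-end speech
            adapt_filter()
    # far-end silence: filtering and coefficient updates are halted, so the
    # adaptive filter cannot diverge on residual echo or noise
```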
Breast cancer is the most common cancer among women worldwide. Ultrasound is widely used as a harmless test for early breast cancer screening. The ultrasound network (USNet) model is presented. It is an improved object detection model specifically for breast nodule detection in ultrasound images. USNet improves the backbone network, optimizes the generation of feature maps, and adjusts the loss function. Finally, USNet is trained with real clinical data. The evaluation results show that the trained model has strong nodule detection ability. The mean average precision (mAP) value reaches 0.734 9. The nodule detection rate is 95.11%, and the in situ cancer detection rate is 79.65%. At the same time, the detection speed reaches 27.3 frames per second (FPS), so video data can be processed in real time.
To solve polynomial systems, Harrow, Hassidim, and Lloyd proposed a quantum algorithm called the HHL algorithm. Based on the HHL algorithm, Chen et al. presented an algorithm for solving the Boolean solutions of polynomial systems (PoSSoB). Furthermore, Ding et al. introduced the Boolean Macaulay matrix and analyzed the lower bound on the condition number. Inspired by Ding et al.'s research, several related algorithms are proposed in this paper. First, the improved PoSSoB algorithm using the Boolean Macaulay matrix is proved to have lower complexity. Second, for solving equations with errors, a quantum algorithm for the max-polynomial system solving (Max-PoSSo) problem is proposed based on the improved PoSSoB algorithm. Besides, the Max-PoSSo algorithm is extended to the learning with errors (LWE) problem and its special case, the learning parity with noise (LPN) problem, providing a quantitative criterion, the condition number, for the security of these basic problems.
In this paper, the performance of frequency synchronization in a multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system is analyzed for the purpose of carrier frequency offset (CFO) estimation and compensation. Specifically, a joint transmit antenna selection (ST) and receive maximum ratio combining (MRC) (ST/MRC) method is adopted, that is, only one transmit antenna with the highest channel power is selected while MRC is used at the receiver to maximize the sum of the frequency synchronization metric. The mean square error (MSE) closed-form expressions of CFO estimation are derived for several antenna configurations. Simulations in both flat and multipath fading channels validate the theoretical analysis.
To achieve lateral control of an intelligent vehicle, a bi-cognitive model based on the cloud model and cloud reasoning is used to solve the qualitative and quantitative decision problem of lateral control. A number of experimental data are obtained by driving a vehicle; the data are classified according to their concepts to fix the input and output variables of the cloud controller; the control rules of the cloud controller of the intelligent vehicle are designed; and the parameters of the cloud controller, i.e., expectation, entropy and hyper-entropy, are determined. To verify the effectiveness of the cloud controller, a joint simulation platform based on Matlab/Simulink/CarSim is established. Experimental analysis shows that the driver's lateral controller based on the cloud model is able to track the desired angle and achieves a good control effect; it also verifies that a series of mental activities such as feeling, cognition, calculation and decision are fuzzy and uncertain.
An isotropic electromagnetic (EM) lens based on Huygens' metasurface is proposed for 28.0 GHz lens antenna design. The lens consists of a series of non-resonant, subwavelength metallic patterns etched on both sides of an ultrathin dielectric substrate. Both electric and magnetic responses are introduced to realize the desired abrupt phase change and high-efficiency transmission for the secondary wavelets in the incident wavefront. Then, a substrate-integrated waveguide (SIW) fed patch antenna is combined with the lens as the primary feed to form a low-profile lens antenna system. The simulated and measured results coincide with each other and demonstrate that the prototype realizes a gain increment of 8.8 dB to 12.6 dB and low side-lobe levels over the bandwidth of 26.7 GHz to 30.0 GHz. The novel design leads to a low-profile, lightweight, and low-cost antenna solution for wireless communication systems.
A segmented multi-dimensional crest factor reduction (SMD-CFR) algorithm is proposed for multi-service digital radio over fibre (DRoF) systems. Benefiting from a segmented dynamic clipping threshold and a clipping factor derived from the characteristics of all service bands, SMD-CFR achieves better peak clipping and peak-to-average power ratio (PAPR) reduction for multi-band combined signals. Simulation results show that SMD-CFR achieves a PAPR reduction of more than 2.3 for a two-band combined long term evolution (LTE) signal, which is much better than the traditional one-dimensional (1D)-CFR. Meanwhile, for a 64-quadrature amplitude modulation (QAM) modulation and demodulation link, it has very little effect on the bit error rate (BER) and error vector magnitude (EVM), which are kept below 0.1% and 0.2%, respectively. In hardware experiments, SMD-CFR obtains a 4.5% increase in drain efficiency and about a 4 dBc increase in adjacent channel leakage ratio (ACLR). These results are very significant for the wideband power amplifier (PA) in multi-service DRoF systems.
Computer Applied Technology
In traditional methods, software quality is measured by various metrics of the software, such as the decoupling level (DL), which can be used to predict software defects. However, DL treats all files equally and does not take file importance into consideration. Therefore, a novel software quality metric based on file importance, named improved decoupling level (IDL), is proposed. First, the PageRank algorithm is used to calculate the importance of files to obtain the weights of the dependencies, and then defect prediction models are established by combining the software scale, dependencies, scores, and software defects to assess software quality. Compared with most existing module-based software quality evaluation methods, IDL has similar or even superior performance in predicting software quality. The results indicate that IDL measures the importance of each file in the software more accurately by combining the PageRank algorithm with DL, which indirectly reflects the quality of the software by predicting the bug information in the software and improves the accuracy of software bug prediction.
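A small sketch of the weighting step is shown below, assuming a toy file dependency graph and a plain power-iteration PageRank; how the resulting scores are folded into the DL computation is not reproduced here.

```python
# Sketch: weight file dependencies by PageRank importance over a dependency graph.
def pagerank(graph, d=0.85, iters=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / len(graph[m]) for m in nodes if n in graph[m])
            new[n] = (1 - d) / len(nodes) + d * incoming
        rank = new
    return rank

# Toy graph: each file points to the files it depends on (illustrative).
deps = {"a.c": {"b.c", "c.c"}, "b.c": {"c.c"}, "c.c": set(), "d.c": {"c.c"}}
# Give sink files a self-loop so every node has outgoing links.
scores = pagerank({f: (t if t else {f}) for f, t in deps.items()})
weighted_deps = {(f, t): scores[t] for f, ts in deps.items() for t in ts}
print(scores, weighted_deps)
```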
Ultra-dense network (UDN) deployment of small cells introduces novel technical challenges, one of which is that the interference level increases considerably with the network density. This paper proposes an interference suppression scheme based on the compressive sensing (CS) framework for UDN. Firstly, the measurement matrix is designed by exploiting the sparsity of millimeter wave channels, and the CS technique is employed to transform the high-dimensional sparse signal into a low-dimensional signal. Then, the interference is canceled in the compressed domain. Finally, the stagewise weak orthogonal matching pursuit (SWOMP) algorithm is used to reconstruct the useful signal after interference suppression. Simulation results demonstrate that the proposed interference suppression in the compressive domain yields performance gains compared with other classical interference suppression schemes, while reducing the computational complexity of interference suppression.
Industrial big data is usually multi-source, heterogeneous, and deeply intertwined. It has a wide range of data sources, high data dimensions, and strong data correlation. In order to effectively analyze and process the streaming industrial big data generated by edge computing, it is very important to provide an effective real-time incremental processing method. However, in the process of incremental processing, industrial big data incremental computing faces the challenges of the curse of dimensionality, repeated calculations, and the explosion of intermediate results. Therefore, to solve the above problems effectively, a QR-based tensor-train (TT) decomposition (TTD) method and a QR-based incremental TTD (QRITTD) method are proposed. The algorithm combines an incremental QR-based decomposition algorithm with an approximate singular value decomposition (SVD) algorithm and has good scalability. In addition, the computational complexity, space complexity, and approximation error are analyzed in detail. The effectiveness of the three algorithms, QRITTD, non-incremental TTD (NITTD), and TT rank-1 (TTr1) SVD (TTr1SVD), is verified by comparison. Experimental results show that the QRITTD method has better performance under the premise of ensuring the same tensor size.
Special Topic: Artificial Intelligence of Things
To solve the problem that the feature data types collected for the vehicle-following task in marine scenes are not rich enough, which results in a long model convergence time and high training difficulty, a two-stage vehicle-following system is proposed. Firstly, a semantic segmentation model predicts the number of pixels of the followed target, and the number of pixels is then mapped to a position feature. Secondly, a deep reinforcement learning algorithm enables the control equipment to make decision actions, ensuring that the two moving objects remain within a safe distance. The experimental results show that the two-stage vehicle-following system converges 40% faster than the model without the position feature, and the following stability is significantly improved by adding the position feature.
As a kind of generative adversarial network (GAN), Cycle-GAN shows an apparent superiority in image style translation. However, its complicated architecture, with a large number of parameters and huge computational complexity, poses a big challenge for deployment on resource-constrained platforms. To make full use of the parallelism of hardware under guaranteed image quality, this paper improves the generator network with a hardware-friendly Inception module. The optimized framework, named simplified Cycle-GAN (S-CycleGAN), greatly reduces the number of convolution parameters while avoiding the image quality degradation caused by structural compression. Testing with the apple2orange and horse2zebra datasets, the experimental results show that the images generated by S-CycleGAN outperform the baseline and other models. The number of parameters is reduced by 19.54%, memory usage is cut by 9.11%, the theoretical amount of multiply-adds (MAdds) decreases by 17.96%, and floating-point operations per second (FLOPS) diminish by 18.91%. Finally, S-CycleGAN was mapped onto the dynamic programmable reconfigurable array processor (DPRAP), which calculates the convolution and deconvolution in a unified architecture and supports flexible runtime switching. The prototype systems are implemented on a Xilinx field programmable gate array (FPGA) XC6VLX550T-FF1759. The synthesis results show that, at 150 MHz, the hardware resource consumption is reduced by 52% compared to a recent FPGA scheme.
In this paper, a novel trench gate gallium nitride insulated gate bipolar transistor (GaN IGBT), in which the collector is divided into multiple regions to control the hole injection efficiency, is designed and theoretically studied. The incorporation of a P+/P- multi-region alternating structure in the collector region mitigates hole injection within the collector region. When the device is in forward conduction, the conductivity modulation effect results in a reduced storage of carriers in the drift region. As a result, the number of carriers requiring extraction during device turn-off is minimized, leading to faster turn-off speed. The results illustrate that the GaN IGBT with controlled hole injection efficiency (CEH GaN IGBT) exhibits markedly enhanced performance compared to conventional GaN IGBT, showing a remarkable 42.2% reduction in turn-off time and a notable 28.5% decrease in turn-off loss.
Sparse code multiple access (SCMA) is a competitive non-orthogonal access scheme for next-generation mobile communications. As a multi-user sharing system, SCMA adopts the message passing algorithm (MPA) as the decoding scheme at the receiver, but its iterative nature leads to high computational complexity. Therefore, a serial message passing algorithm based on variable nodes (VMPA) is proposed in this paper. By making subtle alterations to the message update of the original MPA, VMPA greatly reduces the overall computational complexity of decoding. Furthermore, considering that the serial structure may increase the decoding delay, a novel grouping scheme based on the sparse matrix is applied to VMPA. Simulation results verify that the new algorithm, termed grouping VMPA (G-VMPA), can achieve a better tradeoff between bit error rate (BER) and computational complexity than MPA.
To improve evolutionary algorithm performance, especially in convergence speed and global optimization ability, a self-adaptive mechanism is designed for both the conventional genetic algorithm (CGA) and the quantum-inspired genetic algorithm (QIGA). In the self-adaptive mechanism, each individual is assigned a suitable evolutionary parameter according to its current evolutionary state, so that each individual can evolve toward the current best solution. Moreover, to reduce the running time of the proposed self-adaptive mechanism based QIGA (SAM-QIGA), a multi-universe parallel structure is employed in the paper. Simulation results show that the proposed SAM-QIGA has better performance in both convergence and global optimization ability.
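The self-adaptive idea for the CGA side can be illustrated with the sketch below, where each individual receives its own mutation probability based on its fitness relative to the population; the linear schedule and its bounds are illustrative assumptions, not the paper's exact rule.

```python
import random

# Sketch: per-individual mutation probability assigned from the current
# evolutionary state, so better individuals are perturbed less.
P_MIN, P_MAX = 0.01, 0.2     # illustrative bounds

def adaptive_pm(fitness, f_best, f_avg):
    if fitness <= f_avg or f_best == f_avg:
        return P_MAX                                   # below average: explore
    return P_MAX - (P_MAX - P_MIN) * (fitness - f_avg) / (f_best - f_avg)

def mutate(bits, pm):
    return [b ^ 1 if random.random() < pm else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(8)]
fit = [sum(ind) for ind in pop]                        # toy fitness: count of ones
f_best, f_avg = max(fit), sum(fit) / len(fit)
pop = [mutate(ind, adaptive_pm(f, f_best, f_avg)) for ind, f in zip(pop, fit)]
```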
Hoh Xil is a national nature reserve in China, and the Tibetan antelope is a research hotspot of wildlife protection in this area. In order to track the population and activity of Tibetan antelopes in Hoh Xil, a quantum wireless sensor monitoring network (QWSMN) based on quantum satellite wide-area communication networks is proposed. This network consists of quantum wireless sensors installed on the Tibetan antelopes, small quantum base stations, quantum satellite signal transmitting stations, quantum satellites, and quantum satellite signal receiving stations. The simulation results show that, under the interference of a sandstorm, a quantum satellite signal transmitting station can cover a monitoring area of 20 106 km2 and the network throughput reaches 40 KB/s. This network can realize large-scale monitoring of Tibetan antelopes in Hoh Xil and provide a theoretical basis for the construction of a global wildlife monitoring network.
Unmanned aerial vehicle base stations (UAV-BSs) can provide a fast network deployment scheme for heterogeneous networks. However, a single unmanned aerial vehicle (UAV) has limited capability to assist the base stations (BSs), and cluster deployment relies on a leading UAV. The dispersive deployment of multiple UAVs (multi-UAVs) needs a macro base station (MBS) to determine their positions to prevent collisions or interference. Therefore, a distributed cooperative deployment scheme is proposed for UAVs to solve this problem. The scheme can increase the ability of UAVs to assist users and reduce the pressure on BSs to deploy UAVs. Firstly, the randomly distributed users are pre-clustered. Then the placement problem is modeled as a circle expansion problem, and a pre-clustering radius expansion algorithm is proposed. Under the constraint of user data rates, it provides services for more users. Finally, the proposed algorithm is compared with the density-aware placement algorithm. The simulation results show that the proposed algorithm can serve more users and improve the user coverage rate while guaranteeing the data rates.
Special Topic: Digital Human
How to protect cultural relics is of great significance to the transmission and dissemination of history and culture. Digital 3-dimensional (3D) modeling of cultural relics is an effective way to preserve them. The efficiency and complexity of cultural relic model reconstruction algorithms are significant challenges due to redundant data. To tackle this issue, a 3D reconstruction algorithm named COLMAP + LSH is proposed for movable cultural relics based on salient region optimization. The COLMAP + LSH algorithm introduces saliency region detection and locality-sensitive hashing (LSH) to achieve efficient, accurate, and robust digital 3D modeling of cultural relics. Specifically, data for 400 cultural relic models were collected offline and online. The COLMAP + LSH algorithm detects the salient region interactively and reduces the number of images in the salient region by feature diffusion. Additionally, the COLMAP + LSH algorithm utilizes LSH to calculate image selection scores and employs these scores to remove redundant images. The experiments on the self-constructed cultural relics dataset show that the COLMAP + LSH algorithm can efficiently achieve image feature diffusion and ensure the quality of artifact reconstruction while filtering out most of the redundant image data.
Multi-agent System Cooperative Control
This paper is devoted to investigating the consensus problems for multi-agent systems with Lurie nonlinear dynamics under directed topology. Under some assumptions, sufficient conditions for the systems to reach leaderless consensus and tracking consensus are established by using contraction analysis theory. Compared with existing results, there is no need to formulate the multi-agent networks in compact form. These conditions only involve the individual agent in the lower-dimensional case and the communication topology of the network. Additionally, a generalized nonlinear function is introduced. Finally, three numerical examples are provided to illustrate the effectiveness of the theoretical results.
Aiming at enhancing the privacy protection of location-based services (LBS) in the mobile Internet environment, an improved privacy scheme with high service quality based on bilinear pairing theory and k-anonymity is proposed. Within a circular region defined by Euclidean distance, the mobile terminal evenly generates candidate false locations, from which the optimal half are screened out according to position entropy, location, and map background information. Anonymity is thus effectively guaranteed, realizing privacy protection. Security analyses prove that the scheme not only realizes security features such as privacy, anonymity, and non-forgeability, but also resists query tracing attacks. Simulation results show that the scheme not only has better evenness in selecting false locations, but also improves the efficiency of generating and selecting false nodes.
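The screening step can be sketched as below, assuming a hypothetical background query-probability map and candidate counts: candidates are drawn uniformly in the circle and the half whose position entropy is largest is kept greedily.

```python
import math, random

# Sketch: generate candidate false locations in a circle, then greedily keep
# the subset whose background query probabilities are most uniform (highest
# position entropy). The probability map and counts are illustrative.
def entropy(cells, prob_map):
    ps = [prob_map.get(c, 1e-6) for c in cells]
    s = sum(ps)
    return -sum(p / s * math.log(p / s) for p in ps)

def pick_dummies(center, radius, prob_map, n_candidates=20, n_keep=10):
    cands = []
    for _ in range(n_candidates):
        r, a = radius * math.sqrt(random.random()), 2 * math.pi * random.random()
        cands.append((round(center[0] + r * math.cos(a)),
                      round(center[1] + r * math.sin(a))))
    chosen = []
    while len(chosen) < n_keep and cands:
        best = max(cands, key=lambda c: entropy(chosen + [c], prob_map))
        chosen.append(best)
        cands.remove(best)
    return chosen

prob_map = {(x, y): 0.01 + 0.001 * ((x + y) % 5)
            for x in range(-15, 16) for y in range(-15, 16)}
print(pick_dummies((0, 0), radius=10, prob_map=prob_map))
```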
This letter proposes a low-complexity ‘harvest-and-forward’ relay strategy for simultaneous wireless information and power transfer (SWIPT) relay channels. In the first phase of relay transmission, the relay's antennas are divided into two subsets: the signals received by the antennas in one subset are converted to energy, and the signals received by the antennas in the other subset are combined. In the second phase, the relay forwards the combined signal using all antennas with the harvested energy. A low-complexity antenna selection (AS) algorithm is given to maximize the achievable rate over fading channels. The simulation results show that the achievable rate of this strategy is close to that of the two-stage strategy, in which a two-stage procedure determines the optimal split of received signal power for energy harvesting and the optimized antenna set engaged in information forwarding. The proposed strategy performs better than the two-stage strategy when the relay is equipped with a medium-scale antenna array, and the performance gap between the two strategies grows as the number of relay antennas increases. The computational complexity of the proposed strategy is O(N²), where N is the number of relay antennas, which is obviously lower than that of the two-stage strategy, O(3N³).
The two-party certificateless authenticated key agreement (CL-AKA) protocol is a hot topic in the field of wireless communication security. An improved two-party CL-AKA protocol with enhanced security is proposed, which is provably secure and unforgeable in the extended Canetti-Krawczyk (eCK) security model based on the hardness assumption of the computational Diffie-Hellman (CDH) problem. Compared with other similar protocols, it is more efficient and satisfies security properties such as freedom from centralized certificate and key management, freedom from bilinear pairings, two-party authentication, and resistance to unknown key-share attacks, key compromise impersonation attacks, and man-in-the-middle attacks by the key generation center (KGC). These properties give the proposed protocol better performance and adaptability for military communication.
Unmanned aerial vehicles (UAVs) are increasingly applied in various mission scenarios for their versatility, scalability and cost-effectiveness. In UAV mission planning systems (UMPSs), an efficient mission planning strategy is essential to meet the requirements of UAV missions. However, rapidly changing environments and unforeseen threats pose challenges to UMPSs, making efficient mission planning difficult. To address these challenges, knowledge graph technology can be utilized to manage the complex relations and constraints among UAVs, missions, and environments. This paper investigates knowledge graph application in UMPSs, exploring its modeling, representation, and storage concepts and methodologies. Subsequently, the construction of a specialized knowledge graph for UMPS is detailed. Furthermore, the paper delves into knowledge reasoning within UMPSs, emphasizing its significance in timely updates in the dynamic environment. A graph neural network (GNN)-based approach is proposed for knowledge reasoning, leveraging GNNs to capture structural information and accurately predict missing entities or relations in the knowledge graph. For relation reasoning, path information is also incorporated to improve the accuracy of inference. To account for the temporal dynamics of the environment in UMPS, the influence of timestamps is captured through the attention mechanism. The effectiveness and applicability of the proposed knowledge reasoning method are verified via simulations.
Addressing the issue of low pulse identification rates for low probability of intercept (LPI) radar signals under low signal-to-noise ratio (SNR) conditions, this paper investigates a new deep learning method to efficiently recognize the modulation types of LPI radar signals. A novel algorithm combining a dual efficient network (DEN) and non-local means (NLM) denoising is proposed for the identification and selection of LPI radar signals. Time-domain signals for 12 radar modulation types are simulated, with Gaussian white noise added at various SNRs to replicate complex electronic countermeasure scenarios. On this basis, the noisy radar signals undergo Choi-Williams distribution (CWD) time-frequency transformation, converting the signals into two-dimensional (2D) time-frequency images (TFIs). The TFIs are then denoised using the NLM algorithm. Finally, the denoised data is fed into the designed DEN for training and testing, with the selection results output through a softmax classifier. Simulation results demonstrate that at an SNR of -8 dB, the algorithm achieves a recognition accuracy of 97.22% for LPI radar signals, exhibiting excellent performance under low SNR conditions. Comparative experiments show that the DEN has good robustness and generalization performance with small sample sizes. This research provides a novel and effective solution for further improving the accuracy of identification and selection of LPI radar signals.
An improved multi-task learning recommendation algorithm, the fast two-stage multi-task recommendation model with boosted feature selection (Fast TMRM), is proposed based on auto-encoders in this paper. Compared with previous work, Fast TMRM improves the convergence speed and accuracy of training. In addition, Fast TMRM builds on previous work by introducing an auto-encoder to encode the important feature combination vector so that it can be used for multi-task learning training, which helps improve the training efficiency of the model by nearly 67%. Finally, nearest neighbor search is used to restore the important feature expression.
Predicting future user states and rendering visual feedback accordingly can effectively reduce the experienced visual delay in the tactile Internet (TI). However, most works omit the fact that different parts of an image may have distinct prediction requirements, so different prediction models can be used in the prediction process, which can further improve prediction quality, especially in resource-limited environments. In this paper, a hybrid prediction scheme is proposed for the visual feedback in a typical TI scenario with mixed visuo-haptic interactions, in which the haptic traffic needs sufficient wireless resources to meet its stringent communication requirements, leaving fewer radio resources for the visual feedback. First, the minimum required number of radio resources for the haptic traffic is derived based on the haptic communication requirements, and wireless resources are then allocated to the haptic and visual traffic. Then, a grouping strategy is designed based on a deep neural network (DNN) to allocate different parts of an image feedback into two groups that use different prediction models, jointly considering the prediction deviation thresholds, the latency and reliability requirements, and the bit sizes of the different image parts. Simulations show that the hybrid prediction scheme can further reduce the experienced visual delay under haptic traffic requirements compared with existing strategies.
Special Topic: Data Security and Privacy Preservation in Cloud/Fog/Edge-Enabled Internet of Things
The traditional ciphertext policy attribute-based encryption (CP-ABE) has two problems: one is that the access policy must be embedded in the ciphertext and sent, which leads to the disclosure of the user's privacy information; the other is that it does not support collaborative decryption, which cannot meet the actual demand for conditional collaborative decryption among multiple users. To deal with the above two problems at the same time, a fine-grained cooperative access control scheme with hidden policies (FCAC-HP) is proposed based on existing CP-ABE schemes combined with blockchain technology. In the FCAC-HP scheme, users are grouped by a group identifier so that only users within the same group can cooperate. In the data encryption stage, the access policy is encrypted and then embedded in the ciphertext to protect the privacy information of the access policy. In the data access stage, anonymous attribute matching technology is introduced so that only matched users can decrypt the ciphertext data, which improves the efficiency of the system. In this process, a smart contract is used to execute the verification algorithm to ensure the credibility of the results. In terms of security, the FCAC-HP scheme is based on the prime subgroup discriminative assumption and is proved to be indistinguishable under chosen plaintext attack (CPA) by dual system encryption technology. Experimental verification and analysis show that the FCAC-HP scheme improves computational efficiency while implementing complex functions.
Recent breakthroughs in artificial intelligence (AI) give rise to a plethora of intelligent applications and services
based on machine learning algorithms such as deep neural networks (DNNs). With the proliferation of Internet of
things (IoT) and mobile edge computing, these applications are being pushed to the network edge, thus enabling a
new paradigm termed edge intelligence. This provokes the demand for decentralized implementation of learning algorithms over edge networks to distill intelligence from distributed data, and also calls for new communication-efficient designs in air interfaces that improve privacy by avoiding raw data exchange. This paper provides a
comprehensive overview on edge intelligence, by particularly focusing on two paradigms named edge learning and
edge inference, as well as the corresponding communication-efficient solutions for their implementations in wireless
systems. Several insightful theoretical results and design guidelines are also provided.
Special Topic: Cultural Computing
Over a long history of more than 1 500 years, Dunhuang murals have suffered from various deteriorations causing irreversible damage, such as falling off, fading, and so on. However, the existing Dunhuang mural restoration methods are time-consuming and not feasible for facilitating cultural dissemination and permanent inheritance. Inspired by cultural computing using artificial intelligence, a gated-convolution-based dehaze net (GD-Net) is proposed for Dunhuang mural refurbishment and comprehensive protection. First, a neural network with gated convolution is applied to restore the fallen-off areas of the murals to ensure the integrity of the mural content. Second, a dehaze network is applied to enhance image quality to cope with the fading of the murals. Besides, a Dunhuang mural dataset is presented to meet the needs of the deep learning approach, containing 1 180 images from Cave 290 and Cave 112 of the Mogao Grottoes. The experimental results demonstrate the effectiveness and superiority of GD-Net.
Layer 2 network technology is extending beyond its traditional local area implementation and finding wider acceptance in providers' metropolitan area networks and large-scale cloud data center networks. This is mainly due to its plug-and-play capability and native mobility support. Much effort has been put into increasing the bisection bandwidth in layer 2 networks, which has been constrained by the spanning tree protocol (STP) that layer 2 networks use to prevent loops. The recent trend is to incorporate layer 3's routing approach into layer 2 networks so that multiple paths can be used for forwarding traffic between any source-destination (S-D) node pair. Equal cost multipath (ECMP) is one such example. However, ECMP may still be limited in generating multiple paths due to its shortest-path (lowest-cost) requirement. In this paper, we consider a non-shortest-path routing approach based on ordered semigroup theory, called equal preference multipath (EPMP), which can generate more paths than ECMP. In EPMP routing, paths with different traditionally defined costs, such as hops and bandwidth, are treated equally and thus all become candidate paths. Comparative tests with ECMP show that EPMP routing not only generates more paths and provides 15% higher bisection bandwidth, but also identifies bottleneck links in a hierarchical network when different traffic patterns are applied. EPMP is also more flexible in controlling the number and length of the generated multipaths. Simulation results indicate the effectiveness of the proposed algorithm, providing a good reference for non-blocking operation of large data center networks.
Complementary metal oxide semiconductor (CMOS) aging mechanisms, including bias temperature instability (BTI), pose growing concerns about circuit reliability. BTI results in threshold voltage increases on CMOS
transistors, causing delay shifts and timing violations on logic circuits. The amount of degradation is dependent on
the circuit workload, which increases the challenge for accurate BTI aging prediction at the design time. In this
paper, a BTI prediction method for logic circuits based on statistical static timing analysis (SSTA) is proposed,
especially considering the correlation between circuit workload and BTI degradation. It consists of a training phase,
to discover the relationship between circuit scale and the required workload samples, and a prediction phase, to
present the degradations under different workloads in Gaussian probability distributions. This method can predict
the distribution of degradations with negligible errors, and identify 50% more BTI-critical paths in an affordable
time, compared with conventional methods.
Traditional simultaneous localization and mapping (SLAM) mostly operates under the assumption of an ideal static environment, which is not suitable for dynamic environments in the real world. Dynamic real-time object-aware SLAM (DRO-SLAM) is proposed in this paper, a visual SLAM that can simultaneously perform localization, mapping, and tracking of moving objects both indoors and outdoors. It uses target recognition, oriented FAST and rotated BRIEF (ORB) feature points, and optical flow assistance to track multiple dynamic objects and remove them during dense point cloud reconstruction while estimating their poses. Verification on public datasets and comparison with other methods show that the proposed algorithm offers guarantees in real-time performance and accuracy while providing more functions. DRO-SLAM provides a solution for automatic navigation that allows lightweight deployment and supplies richer information about vehicles, pedestrians, and other environmental elements for navigation.
Opportunistic routing (OR) can adapt to dynamic wireless sensor networks (WSNs) because of its inherent broadcast nature. Most of the existing OR protocols focus on variations of the propagation environment caused by channel fading, while few deal with dynamic scenarios involving mobile nodes. In this paper, a mobile node (MN) aware OR (MN-OR) is proposed and applied to a WSN in the high-speed railway scenario, where the destination node is deployed inside a high-speed moving train. MN-OR not only considers the mobility of nodes but also utilizes the candidate waiting time induced by the timer-based coordination scheme. Specifically, to reduce the number of duplicate transmissions and mitigate the delay of information transmission, a new selection strategy for the candidate forwarders is presented. In addition, two priority assignment methods for the candidate forwarders are proposed for the general relay nodes (GRNs) and the rail-side nodes (RSNs) according to their different routing requirements. Extensive simulation results demonstrate that the proposed MN-OR protocol achieves better network performance than some existing routing schemes such as the well-known ad-hoc on-demand distance vector (AODV) and extremely opportunistic routing (ExOR) protocols.
With the rapid growth of satellite traffic, the ability to forecast traffic loads becomes vital for improving data transmission efficiency and resource management in satellite networks. To precisely forecast the short-term traffic loads in satellite networks, a forecasting algorithm based on principal component analysis and a generalized regression neural network (PCA-GRNN) is proposed. The PCA-GRNN algorithm exploits the hidden regularity of satellite networks and fully considers both the temporal and spatial correlations of satellite traffic. Specifically, it selects optimal time series of spatio-temporally correlated historical traffic from satellites as forecasting inputs and applies principal component analysis to reduce the input dimensions while preserving the main features of the data. Then, a generalized regression neural network is utilized to perform the final short-term load forecasting based on the obtained principal components. The PCA-GRNN algorithm is evaluated based on real-world traffic traces, and the results show that the PCA-GRNN method achieves a higher forecasting accuracy, has a shorter training time and is more robust than other state-of-the-art algorithms, even for incomplete traffic datasets. Therefore, the PCA-GRNN algorithm can be regarded as a preferred solution for use in real-time traffic forecasting for realistic satellite networks.
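A compact sketch of the PCA-then-GRNN pipeline is shown below; the data shapes, the number of principal components, and the GRNN smoothing factor are illustrative assumptions, with the GRNN written directly in its Nadaraya-Watson form since no standard library implementation is assumed.

```python
import numpy as np

# Sketch: project historical traffic windows onto principal components, then
# forecast the next load with a generalized regression neural network (GRNN).
def pca_fit(X, k):
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k].T                          # mean and projection matrix

def grnn_predict(X_train, y_train, x, sigma=1.0):
    d2 = np.sum((X_train - x) ** 2, axis=1)      # distances to training samples
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian kernel weights
    return float(np.dot(w, y_train) / (np.sum(w) + 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                   # 12-dim historical traffic windows
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)
mu, P = pca_fit(X, k=4)                          # reduce input dimensions
Z = (X - mu) @ P
x_new = rng.normal(size=12)
print(grnn_predict(Z, y, (x_new - mu) @ P))      # forecast for a new window
```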
Mobility prediction is one of the promising technologies for improving quality of service (QoS) and network resource utilization. In future heterogeneous networks (HetNets), the network topology will become extremely complicated due to the widespread deployment of different types of small-cell base stations (SBSs). For this complex network topology, traditional mobility prediction methods may incur unacceptable overhead to maintain high prediction accuracy. This problem is studied in this paper, and a hierarchical mobility prediction scheme (HMPS) is proposed for future HetNets. By dividing the entire process into two prediction stages with different granularity, the tradeoff between prediction accuracy and computation complexity is investigated. Before performing prediction of user mobility, some frequently visited locations are identified from the user's trajectory, and each location represents an important geographic area (IGA). In the coarse-grained prediction phase, the next most likely location to be visited is predicted at the level of the possible geographic areas by using a second-order Markov chain with fallback. Then, the fine-grained prediction of user position is performed based on a hidden Markov model (HMM) from the temporal and spatial dimensions. Simulation results demonstrate that, compared with the existing prediction methods, the proposed HMPS can achieve a good compromise between prediction accuracy and complexity.
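The coarse-grained stage can be sketched as a second-order Markov predictor with fallback, as below; the toy trajectory and area labels are illustrative, and the fine-grained HMM stage is not reproduced.

```python
from collections import Counter, defaultdict

# Sketch: second-order Markov chain over visited areas, falling back to
# first-order statistics and then to the overall most frequent area when the
# second-order context has never been seen.
def train(trajectory):
    second = defaultdict(Counter)
    first = defaultdict(Counter)
    for a, b, c in zip(trajectory, trajectory[1:], trajectory[2:]):
        second[(a, b)][c] += 1
        first[b][c] += 1
    return second, first, Counter(trajectory)

def predict(second, first, overall, prev2, prev1):
    if second[(prev2, prev1)]:
        return second[(prev2, prev1)].most_common(1)[0][0]
    if first[prev1]:                        # fallback to first-order
        return first[prev1].most_common(1)[0][0]
    return overall.most_common(1)[0][0]     # fallback to global frequency

traj = ["home", "office", "gym", "home", "office", "gym", "home", "office"]
print(predict(*train(traj), "home", "office"))
```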
Fine-grained few-shot learning is a difficult task in image classification. The reason is that the discriminative features of fine-grained images are often located in local areas of the image, while most existing few-shot image classification methods only use top-level features and adopt a single measure, so the local features of the samples cannot be learned well. In response to this problem, an ensemble relation network with multi-level measure (ERN-MM) is proposed in this paper. It adds relation modules in the shallow feature spaces to compare the similarity between samples in terms of local features, and finally integrates the similarity scores from the different feature spaces to assign the labels of the query samples. Thus, the proposed ERN-MM can exploit both local details and global information at different granularities. Experimental results on different fine-grained datasets show that the proposed method achieves good classification performance and also verify its rationality.
In classification problems, the deep kernel extreme learning machine (DKELM) has the characteristics of efficient processing and superior performance, but its parameter optimization is difficult. To improve the classification accuracy of DKELM, a DKELM algorithm optimized by the improved sparrow search algorithm (ISSA), named ISSA-DKELM, is proposed in this paper. Aiming at the parameter selection problem of DKELM, the DKELM classifier is constructed by using the optimal parameters obtained by ISSA optimization. In order to make up for the shortcomings of the basic sparrow search algorithm (SSA), a chaotic transformation is first applied to initialize the sparrow positions. Then, the positions of the discoverer sparrow population are dynamically adjusted. A learning operator from the teaching-learning-based algorithm is fused to improve the position update operation of the joiners. Finally, a Gaussian mutation strategy is added in the later iterations of the algorithm to make the sparrows jump out of local optima. The experimental results show that the proposed DKELM classifier is feasible and effective, and compared with other classification algorithms, the proposed DKELM algorithm achieves better test accuracy.
Rough set theory is an important tool for solving uncertain problems. Attribute reduction, as one of the core issues of rough set theory, has been proven to be an effective method for knowledge acquisition. Most heuristic attribute reduction algorithms keep the positive region of a target set unchanged and ignore the information in the boundary region. Therefore, how to acquire knowledge from the boundary region of a target set in a multi-granulation space is an interesting issue. In this paper, a new concept, the fuzziness of an approximation set of a rough set, is first put forward. Then the change rules of fuzziness in changing granularity spaces are analyzed. Finally, a new algorithm for attribute reduction based on the fuzziness of the 0.5-approximation set is presented. Several experimental results show that attribute reduction by the proposed method has relatively better classification characteristics compared with various classification algorithms.
The high efficiency video coding (HEVC) transform algorithm for residual coding uses 2-dimensional (2D) 4×4 transforms with higher precision than H.264's 4×4 transforms, resulting in increased hardware complexity. In this paper, we present a shared architecture that can compute the 4×4 forward discrete cosine transform (DCT) and inverse discrete cosine transform (IDCT) of HEVC using a new mapping scheme in the video processor array structure. The architecture is implemented with only adders and shifts for an area-efficient design. The proposed architecture is synthesized using ISE 14.7 and implemented using the BEE4 platform with the Virtex-6 FF1759 LX550T field programmable gate array (FPGA). The results show that the video processor array structure achieves a maximum operating frequency of 165.2 MHz. The architecture and its implementation are presented to demonstrate its programmability and high performance.
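For reference, both directions of the 4×4 core transform use the same HEVC integer matrix (entries 64, 83, and 36, all realizable with adds and shifts, e.g. 83 = 64 + 16 + 2 + 1), which is what makes a shared datapath feasible; the sketch below omits the standard's intermediate scaling and rounding shifts.

```python
import numpy as np

# Sketch: shared 4x4 forward/inverse core transform built on the HEVC integer
# matrix; the forward uses M, the inverse uses its transpose.
M = np.array([[64,  64,  64,  64],
              [83,  36, -36, -83],
              [64, -64, -64,  64],
              [36, -83,  83, -36]])

def dct4x4(block):      # forward: Y = M X M^T
    return M @ block @ M.T

def idct4x4(coeffs):    # inverse: X = M^T Y M (up to scaling)
    return M.T @ coeffs @ M

x = np.arange(16, dtype=float).reshape(4, 4)
y = dct4x4(x)
x_rec = idct4x4(y) / 16384.0 ** 2     # undo the two 16384-scaled passes
# Small residual remains because the integer matrix is only approximately
# orthogonal, as intended by the standard's integer design.
print(np.abs(x_rec - x).max())
```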
As diversified communication scenarios in the fifth generation (5G) will introduce different requirements on mobility support, on-demand mobility management has been put forward to simplify the signaling process, reduce terminal power consumption, improve network efficiency, and so on. In order to enable on-demand mobility management in 5G networks, a mobility-driven network slicing (MDNS) framework is proposed, which takes individual mobility support requirements into account while customizing networks for different mobile services. Within the MDNS framework, the actual levels of required mobility support are determined by a mobility description system, and network slice templates with the corresponding mobility management schemes are defined by a network slice description function. By instantiating the network slices, each mobile terminal can be directed to the network slice with the most appropriate mobility management scheme. Based on this, a prototype is implemented to validate the feasibility of the MDNS framework, i.e., creating multiple network slices with different mobility management schemes. In addition, the average cost of processing a mobility event is evaluated for the proposed MDNS framework and the long term evolution (LTE) system, and the operating benefits, including efficiency and scalability, are analyzed.
To achieve secure communication in wireless sensor networks (WSNs), where sensor nodes with limited computation capability are randomly scattered over a hostile territory, various key pre-distribution schemes (KPSs) have been proposed. In this paper, a new KPS is proposed based on symplectic geometry over finite fields. A fixed-dimensional subspace in a symplectic space represents a node, all 1-dimensional subspaces represent keys, and every pair of nodes has shared keys. However, this naive mapping does not guarantee good network resiliency. Therefore, an enhanced KPS is proposed in which two nodes compute a pairwise key only if they share at least q common keys. This approach enhances the resilience against node capture attacks. Compared with existing solutions, the results show that the new approach enhances the network scalability considerably and achieves good connectivity and good overall performance.
To address the problems of present terahertz medium access control (MAC) protocols, such as not updating the numbers of time slot requests in time, unreasonable superframe structures, and not merging time slot requests from the same pair of nodes, a novel MAC protocol for terahertz ultra-high data-rate wireless networks, called high throughput low delay medium access control (HLMAC), is proposed. It greatly reduces the data access delay with a new superframe structure, from which nodes can get time slot allocation information immediately. The network throughput is also improved by updating the numbers of time slot requests and merging time slot requests from the same pair of nodes. The theoretical analysis verifies the effectiveness of HLMAC, and the simulation results show that HLMAC improves the network throughput by 65.7% and decreases the access delay by 30% compared with energy and spectrum-aware medium access control (ES-MAC).
In orthogonal frequency division multiplexing/offset quadrature amplitude modulation (OFDM/OQAM) systems, the relationship between the input of the synthesis filter bank (SFB) and the output of the analysis filter bank (AFB) is much more complicated than in OFDM because of the special prototype filter. By analyzing the trans-multiplexer response characteristics, an equivalent trans-multiplexer matrix is proposed to describe the relationship between the input and the output. With the equivalent matrix, the output can be easily computed by multiplying the matrix with the input. Moreover, with the inverse of the equivalent trans-multiplexer matrix, the imaginary interference can be eliminated using a precoding method. The simulation results show the correctness of the equivalent trans-multiplexer matrix.
A new semi-serial fusion method for multiple features based on the learning using privileged information (LUPI) model was put forward. Exploiting the LUPI paradigm improves the learning accuracy and its stability through additional information and computations using optimization methods. The execution time is also reduced, owing to the sparsity and reduced dimension of the testing features. The improvement obtained by using multiple feature types for emotion recognition (speech expression recognition) is particularly applicable when only one modality is available but the recognition still needs to be improved. The results show that LUPI is effective in the unimodal case when the feature size is considerable. In comparison with other methods that use one type of features or combine them in a concatenated way, the new method outperforms the others in recognition accuracy, execution time reduction, and stability.
With the advantages of low cost and high mobility, unmanned aerial vehicles (UAVs) have played an important
role in wireless communication. We investigate a UAV-enabled mobile relaying system, where a UAV is used as a
mobile relay to assist in the communication from a ground source to a ground destination since their direct link is
blocked. Unlike the conventional mobile relay, in which the UAV adopts a straight-line or circular trajectory, we consider a more general elliptical trajectory and use realistic channel models. By assuming that the UAV employs time-division duplexing (TDD) based decode-and-forward (DF) relaying, we formulate a spectrum efficiency (SE) maximization problem and jointly optimize the time allocations and the UAV trajectory. The simulation results show that the designed trajectory achieves a higher SE than the optimal circular trajectory and static relaying.
Traditional fault diagnosis methods for industrial equipment have low accuracy and poor applicability. This paper proposes an equipment fault diagnosis method based on random stochastic adaptive particle swarm optimization (RSAPSO). The entire model is validated using the bearing data collected by Case Western Reserve University. Different gradient descent algorithms and the standard particle swarm optimization (PSO) algorithm in a back propagation (BP) network are compared experimentally. The results show that the RSAPSO algorithm updates the weights and thresholds more accurately than the gradient descent algorithms and does not easily fall into a local optimum. Compared with PSO, it has a faster optimization speed and higher accuracy. Finally, the RSAPSO algorithm is validated with the bearing data collected from the laboratory rotating machinery test bench and the motor data collected from the tower reflux pump. The average recognition rate on the four kinds of bearing data is 97.5%, and the average recognition rate on the two kinds of motor data reaches 100%, which proves the universality of the method.
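For orientation, the following is a minimal sketch of how a particle swarm can search the weights of a small BP-style network by minimizing its mean squared error. It uses the plain PSO update; RSAPSO's random adaptive modifications (not detailed in the abstract) would additionally adapt the inertia and acceleration coefficients during the run. All sizes, data, and hyperparameters here are illustrative.

```python
import numpy as np

def mse_of_weights(w, X, y, n_hidden):
    # Unpack a flat weight vector into a one-hidden-layer network and return its MSE.
    n_in = X.shape[1]
    w1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = w[n_in * n_hidden:n_in * n_hidden + n_hidden]
    w2 = w[n_in * n_hidden + n_hidden:-1]
    b2 = w[-1]
    hidden = np.tanh(X @ w1 + b1)
    pred = hidden @ w2 + b2
    return np.mean((pred - y) ** 2)

def pso(fitness, dim, n_particles=30, iters=200, inertia=0.7, c1=1.5, c2=1.5):
    # Plain PSO; RSAPSO's "random adaptive" rules would vary inertia/c1/c2 on the fly.
    pos = np.random.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

X = np.random.randn(200, 4)
y = np.sin(X).sum(axis=1)                       # stand-in for fault-feature data
n_hidden = 8
dim = 4 * n_hidden + n_hidden + n_hidden + 1    # sizes of w1, b1, w2, b2
best_w, best_err = pso(lambda w: mse_of_weights(w, X, y, n_hidden), dim)
print(best_err)
```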
With the rapid development of wireless communication technology and the explosive growth of mobile data traffic, more and more users are eager to get faster and better Internet access. In order to meet the needs of users, energy and spectrum utilization are becoming more and more important as new challenges in wireless communication networks. In recent years, reconfigurable intelligent surface (RIS) technology has been proposed as a programmable and intelligent way to improve the performance and quality of wireless communication systems. In addition, the RIS performs better in terms of energy efficiency than other technologies. Therefore, the RIS has rapidly become a research hotspot because of its unique wireless communication capability. This paper aims to review the RIS, including the channel model, the design of the transmitter and receiver, information theory, and the latest development of RIS-assisted multiple-input multiple-output (MIMO) systems. The applications of RISs in physical layer security, device-to-device (D2D) communication, and cell coverage extension are also introduced in detail. In addition, we discuss major research challenges related to the RIS. Finally, potential research directions are proposed.
The knowledge tracing (KT) algorithm, which can model the cognitive level of learners, is a fundamental artificial intelligence approach to the personalized learning problem in the field of education. The recently presented separated self-attentive neural knowledge tracing (SAINT) algorithm has achieved a great improvement in predicting the accuracy of students' answers in comparison with other existing methods. However, there is still potential to enhance its performance, since it fails to effectively utilize temporal features. In this paper, an optimization algorithm for SAINT based on Ebbinghaus' law of forgetting is proposed, which takes temporal features into account. The proposed algorithm uses forgetting law-based data binning to discretize the time information sequences, so as to obtain temporal features in accordance with people's forgetting pattern. Then the temporal features are used as input to the decoder of the SAINT model to improve its accuracy. Ablation experiments and comparison experiments were performed on the EdNet dataset to verify the effectiveness of the proposed model. As seen in the experimental results, it achieves higher area under curve (AUC) values than the other representative knowledge tracing algorithms. This demonstrates that temporal features are valuable for KT algorithms if they are properly handled.
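The forgetting law-based binning can be pictured as follows: map each elapsed time to an Ebbinghaus retention value R(t) = exp(-t/S) and discretize the retention axis, so that recent interactions receive finer time resolution than old ones. The bin count and memory-strength parameter below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def forgetting_bins(elapsed_seconds, n_bins=16, strength=86400.0):
    # Discretize elapsed time via the Ebbinghaus curve R(t) = exp(-t / strength).
    # Equal-width bins on the retention axis give fine time resolution for recent
    # interactions and coarse resolution for old ones; n_bins and strength are
    # illustrative choices, not values from the paper.
    t = np.asarray(elapsed_seconds, dtype=float)
    retention = np.exp(-t / strength)                 # in (0, 1]
    bins = np.floor((1.0 - retention) * n_bins).astype(int)
    return np.clip(bins, 0, n_bins - 1)

# Example: map gaps of 1 minute, 1 hour, 1 day, and 1 week to bin indices.
print(forgetting_bins([60, 3600, 86400, 7 * 86400]))
```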
In this work, β-Ga2O3 thin films were grown on a SiO2 substrate by atomic layer deposition (ALD) and annealed in N2 atmosphere to enhance the crystallization quality of the thin films, which was verified by X-ray diffraction (XRD). Based on the grown β-Ga2O3 thin films, vertical metal-semiconductor-metal (MSM) interdigital photodetectors (PDs) were fabricated and investigated. The PDs have an ultralow dark current of 1.92 pA, an ultra-high photo-to-dark current ratio (PDCR) of 1.7×10^6, and an ultra-high detectivity of 4.25×10^14 Jones at a bias voltage of 10 V under 254 nm deep ultraviolet (DUV) illumination. Compared with horizontal MSM PDs fabricated with the same process, the PDCR and detectivity of the fabricated vertical PDs are increased by 1 000 times and 100 times, respectively. In addition, the vertical PDs possess a high responsivity of 34.24 A/W and an external quantum efficiency of 1.67×10^4%, and also exhibit robustness and repeatability, which indicate excellent performance. The effects of electrode size and external irradiation conditions on the performance of the vertical PDs were further investigated.
To solve the privacy leakage problem of truck trajectories in intelligent logistics, this paper proposes a Quadtree-based Personalized Joint location Perturbation (QPJLP) algorithm using location generalization and local differential privacy techniques. Firstly, a flexible position encoding mechanism based on the spatial quadtree indexing is designed, and the length of the encoding can be adjusted freely according to data availability. Secondly, to meet the privacy needs of different locations of users, location categories are introduced to classify locations as sensitive and ordinary locations. Finally, the truck invokes the corresponding mechanism in the QPJLP algorithm to locally perturb the code according to the location category, allowing the protection of non-sensitive locations to be reduced without weakening the protection of sensitive locations, thereby improving data availability. Simulation experiments demonstrate that the proposed algorithm effectively meets the personalized trajectory privacy requirements while also exhibiting good performance in trajectory proportion estimation and Top-K classification.
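A minimal sketch of the two ingredients named above, assuming a simple bit-wise mechanism: a quadtree cell code whose length controls granularity, and binary randomized response (a standard local differential privacy primitive) applied to the code, with a smaller budget usable for sensitive locations. The exact QPJLP perturbation and its budget allocation across bits are not specified in the abstract; the coordinates, depth, and epsilon below are purely illustrative.

```python
import math, random

def quadtree_code(lat, lon, depth):
    # Encode a location as a quadtree cell code of `depth` levels (2 bits per level).
    # A longer code means a smaller cell, i.e., finer granularity.
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    code = []
    for _ in range(depth):
        lat_mid, lon_mid = (lat_lo + lat_hi) / 2, (lon_lo + lon_hi) / 2
        bit_lat = int(lat >= lat_mid)
        bit_lon = int(lon >= lon_mid)
        code.extend([bit_lat, bit_lon])
        lat_lo, lat_hi = (lat_mid, lat_hi) if bit_lat else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if bit_lon else (lon_lo, lon_mid)
    return code

def perturb_code(code, epsilon):
    # Bit-wise randomized response: keep each bit with probability
    # p = e^eps / (e^eps + 1) and flip it otherwise (eps-LDP per bit;
    # the total budget composes over bits). A sensitive location could
    # use a smaller epsilon, i.e., stronger perturbation.
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return [b if random.random() < p_keep else 1 - b for b in code]

true_code = quadtree_code(39.9042, 116.4074, depth=8)   # illustrative coordinates
print(perturb_code(true_code, epsilon=1.0))
```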
In millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems, traditional fully digital beamforming (DBF) cannot be easily implemented because of its high hardware cost and high power consumption. Meanwhile, analog beamforming implemented with phase shifters has high availability but suffers from poor performance. Considering the advantages of the two, a potential solution is to design an appropriate hybrid analog and digital beamforming structure, where available iterative optimization algorithms can achieve performance close to fully digital processing, but solving this sparse optimization problem entails high computational complexity. The key challenge in seeking the hybrid beamforming (HBF) matrices lies in leveraging the trade-off between spectral efficiency performance and computational complexity. In this paper, we propose an asymptotically unitary hybrid precoding (AUHP) algorithm based on antenna array response (AAR) properties to solve the HBF optimization problem. First, we obtain the optimal orthogonal analog and digital beamforming matrices relying on the channel's path gains in absolute value by taking into account that the AAR matrices are asymptotically unitary. Then, an improved simultaneous orthogonal matching pursuit (SOMP) algorithm based on recursion is adopted to refine the hybrid combining. Numerical results demonstrate that our proposed AUHP algorithm enables a lower computational complexity with negligible spectral efficiency performance degradation.
This paper provides a formalized definition of the application problem of compound condition query (CCQ) and a formal method of application requirements elicitation based on the trace information space derived from trace algebra. Following the formalized process of solving the application problem of CCQ, the formal requirements specification of a CCQ application is given, a formalized and automatic mapping of the requirements elicitation results onto the formal requirements specification is performed, and the software system model and the application code are developed. Through a sample application of comprehensive query on housing information, the feasibility of formalized and automatic software development for the application problem of CCQ is demonstrated. The result has important implications for other problems of formalized and automatic software development.
Threshold proxy re-encryption (PRE) delegates the data access right of the data subject to multiple proxies, who in turn authorize the right to the delegatee, accomplishing an end-to-end data encryption process from storage to authorization. Based on the threshold PRE algorithm, in order to build a complete trusted data storage and authorization system, four protocols are fully defined: the data access protocol, the authorization proxy protocol, the authorization proxy cancellation protocol, and the data reading authorization protocol. On that basis, an efficient data searching method is constructed by specifying the data delegatee. Then, to ensure the data subject's right to know, the audit log is processed with trusted data right confirmation based on distributed ledger technology. Meanwhile, a parallel data right confirmation processing method is defined based on a hierarchical derivation algorithm for public and private keys. Finally, the performance evaluation and analysis of the protocols are given. The trusted data access and authorization protocol makes it convenient to build a complete data processing system on the premise of protecting data privacy, based on a public cloud storage system or a distributed storage system.
Special Topic: Optical Communication and Artificial Intelligence
As the core technology of optical network virtualization, virtual optical network embedding (VONE) enables multiple virtual network requests to share substrate elastic optical network (EON) resources simultaneously and hence has been applied in edge computing scenarios. In this paper, we propose a reinforced virtual optical network embedding (R-VONE) algorithm based on deep reinforcement learning (DRL) to optimize network embedding policies automatically. The network resource attributes are extracted as the environment state for model training, based on which the DRL agent can deduce the node embedding probability. Experimental results indicate that R-VONE presents a significant advantage with lower blocking probability and higher resource utilization.
This paper investigates the performance of extra-large scale massive multiple-input multiple-output (XL-MIMO) systems with residual hardware impairments. The closed-form expression of the achievable rate under the matched filter (MF) receiving strategy was derived, and the influence of spatial non-stationarity and residual hardware impairments on the system performance was investigated. In order to maximize the signal-to-interference-plus-noise ratio (SINR) of the systems in the presence of hardware impairments, a hardware impairments-aware minimum mean squared error (HIA-MMSE) receiver was proposed. Furthermore, the stair Neumann series approximation was used to reduce the computational complexity of the HIA-MMSE receiver, avoiding matrix inversion. Simulation results demonstrate the tightness of the derived analytical expressions and the effectiveness of the low complexity HIA-MMSE (LC-HIA-MMSE) receiver.
Spectrum sensing is an essential ability for detecting spectral holes in cognitive radio (CR) networks. The critical challenge for spectrum sensing in the wideband frequency range is how to sense quickly and accurately. Compressive sensing (CS) theory can be employed to detect signals from a small set of non-adaptive, linear measurements without fully recovering the signal. However, the existing compressive detectors can only detect certain known deterministic signals and are not suitable for time-varying amplitude signals, such as spectrum sensing signals in CR networks. First, a signal detection model is proposed that utilizes compressive sampling without signal recovery, and then the generalized likelihood ratio test (GLRT) detection algorithm for time-varying amplitude signals is derived in detail. Finally, the theoretical detection performance bound and the computational complexity are analyzed. The comparison between the theoretical and simulated signal detection performance over Rayleigh and Rician channels demonstrates the validity of the performance bound. Compared with the reconstruction-based spectrum sensing detection algorithm, the proposed algorithm greatly reduces the data volume and algorithm complexity for signals with random amplitudes.
In the research on green communication, considering base station (BS) power allocation from the perspective of energy efficiency (EE) is meaningful for the optimization of heterogeneous cellular networks (HCNs). The EE of two-tier HCNs was analyzed and a new method for network EE optimization was proposed by adjusting the small BS transmit power. First, the HCNs were modeled by homogeneous Poisson point processes (PPPs), and the coverage probability of the BSs in each tier was derived. Second, according to the definition of EE, the closed-form expression of EE was given by deriving the total power consumption and the total throughput of the HCNs, respectively. Finally, the dependence of the EE of HCNs on the small BS transmit power was analyzed, and a small BS power optimization algorithm was proposed to maximize the EE. Simulation results show that the transmit power of the small BS has a significant impact on the EE of HCNs. Furthermore, by optimizing the transmit power of the small BS, the EE of HCNs can be improved effectively.
As a kind of cryptocurrency, bitcoin has attracted much attention with its decentralization. However, there are two problems in bitcoin transactions: account security and transaction privacy. In view of these problems, a new partially blind threshold signature scheme is proposed, which can both enhance the security of bitcoin accounts and preserve the privacy of transactions. First, the transaction amounts are encrypted by employing the homomorphic Paillier cryptosystem, and the output address is disguised by using a one-time public key. Then the encrypted or disguised transaction information is signed by multiple participants who are authorized by using threshold secret sharing. Compared with the partially blind fuzzy signature scheme, the proposed scheme can fully preserve the transaction privacy. Furthermore, performance analysis shows that the proposed scheme is secure and effective in practical applications.
This paper puts forward a user clustering and power allocation algorithm for a non-orthogonal multiple access (NOMA) based device-to-device (D2D) cellular system. First, an optimization problem aimed at maximizing the sum-rate of the system is constructed. Since the optimization problem is a mixed-integer non-convex optimization, it is decomposed into two subproblems, namely the user clustering subproblem and the power allocation subproblem. In the user clustering subproblem, clustering algorithms for cellular users and D2D pairs are proposed, respectively. In the power allocation subproblem, the gradient assisted binary search (GABS) algorithm and the logarithmic approximation in successive convex approximation (SCA) are used to optimize the subchannel (SC) power and the D2D transmit power, respectively. Finally, an efficient joint iterative algorithm is proposed for the original mixed-integer non-convex non-deterministic polynomial (NP)-hard problem. The simulation results show that the proposed algorithm can effectively improve the total system rate, and the larger the ratio of cellular users (CUs) to total users, the larger the total system rate.
A downlink covert communication model that consists of a base station and two legitimate users was considered. In addition to the general signals shared by the two users, the base station sends covert signals to only one user at certain times, without wanting the other user to detect this covert communication behavior. In order to achieve covert communication, two information transmission schemes are designed based on transmission antenna selection (TAS), with the help of artificial noise (AN) transmitted by the user receiving the covert signals, denoted as TAS-I and TAS-II, respectively. Considering the best detection performance of the user that only receives the general signals, the detection error probabilities and their average values, the connection probabilities, and the system covert throughputs under the two schemes are calculated separately. In addition, on the premise of meeting the system's covert conditions, an optimization scheme is proposed to maximize the system covert throughput. Finally, the simulation results show that the proposed system can realize covert communication successfully, and that the covert performance under TAS-I is better than that under TAS-II.
Driving safely and efficiently in complex traffic is a difficult task for an autonomous vehicle because of the stochastic behavior of the human drivers involved. Deep reinforcement learning (DRL), which combines the abstract representation capability of deep learning (DL) and the optimal decision making and control capability of reinforcement learning (RL), is a good approach to address this problem. The traffic environment is built by combining the intelligent driver model (IDM) and a lane-change model as the behavioral model for vehicles. To increase the stochasticity of the established traffic environment, measures such as defining a speed distribution with a cutoff for traffic cars and using various politeness factors to represent distinct lane-change styles are taken. To train an artificial agent to achieve successful strategies that lead to the greatest long-term rewards and sophisticated maneuvers, the deep deterministic policy gradient (DDPG) algorithm is deployed for learning. The reward function is designed to obtain a trade-off between vehicle speed, stability, and driving safety. Results show that the proposed approach can achieve good autonomous maneuvering in a scenario of complex traffic behavior through interaction with the environment.
With the development and application of information technology, the problem of personal privacy leakage is becoming more and more serious. Most attribute-based broadcast encryption (ABBE) schemes focus on data security, while ignoring the protection of the personal privacy of users in access structure and identity. To address this problem, a privacy preserving ABBE scheme is proposed, which ensures the data confidentiality and protects personal privacy as well. In addition, the authenticity of encrypted data can be verified. It is proved that the proposed scheme achieves full security by dual system encryption.
Network traffic classification, which matches network traffic to a specific class at different granularities, plays a vital role in the domains of network administration and cyber security. With the rapid development of network communication techniques, more and more network applications adopt encryption during communication, which brings significant challenges to traditional network traffic classification methods. On the one hand, traditional methods mainly depend on matching features on the application layer of the ISO/OSI reference model, which leads to the failure of classifying encrypted traffic. On the other hand, machine learning-based methods require features handcrafted from network traffic data by human experts, which makes it difficult for them to deal with complex network protocols. In this paper, the convolution attention network (CAT) is proposed to overcome these difficulties. As an end-to-end model, CAT takes raw data as input and returns classification results automatically, without feature engineering by human experts. In CAT, the importance of different bytes in the network traffic is first obtained with an attention mechanism. Then, a convolution neural network (CNN) is used to learn features automatically, and the output is fed into a softmax function to obtain the classification results. This enables CAT to learn enough information from network traffic data and ensures the classification accuracy. Extensive experiments on the public encrypted network traffic dataset ISCX2016 demonstrate the effectiveness of the proposed model.
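A minimal PyTorch sketch of the CAT idea, byte embeddings re-weighted by an attention score and then passed through a convolutional feature extractor and a final classifier, is given below; the layer sizes and the exact attention form are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class CAT(nn.Module):
    # Sketch of a convolution-attention classifier over raw packet bytes.
    def __init__(self, n_classes, emb_dim=64, conv_dim=128):
        super().__init__()
        self.embed = nn.Embedding(256, emb_dim)          # one embedding per byte value
        self.att_score = nn.Linear(emb_dim, 1)           # importance of each byte
        self.conv = nn.Sequential(
            nn.Conv1d(emb_dim, conv_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.fc = nn.Linear(conv_dim, n_classes)

    def forward(self, bytes_batch):                      # (B, L) integers in [0, 255]
        x = self.embed(bytes_batch)                      # (B, L, E)
        w = torch.softmax(self.att_score(x), dim=1)      # (B, L, 1) byte weights
        x = x * w                                        # re-weight bytes by importance
        x = self.conv(x.transpose(1, 2)).squeeze(-1)     # (B, conv_dim)
        return self.fc(x)                                # logits; softmax/CE applied outside

model = CAT(n_classes=12)
logits = model(torch.randint(0, 256, (8, 784)))          # 8 flows, 784 bytes each
print(logits.shape)                                      # torch.Size([8, 12])
```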
It is a critical challenge for quantum machine learning to classify datasets accurately. This article develops a quantum classifier based on the isolated quantum system (QC-IQS) to classify nonlinear and multidimensional datasets. First, a model of QC-IQS is presented by creating parameterized quantum circuits (PQCs) based on the decomposition of unitary operators with the Hamiltonian of the isolated quantum system. Then, a parameterized quantum classification algorithm (QCA) is designed to calculate the classification results by updating the loss function until it converges. Finally, experiments on nonlinear random number datasets and the Iris dataset are designed to demonstrate that the QC-IQS model can handle different kinds of datasets and generate accurate classification results. The experimental results reveal that QC-IQS is adaptive and learnable in handling different types of data. Moreover, QC-IQS mitigates the issue that the accuracy of previous quantum classifiers declines when dealing with diverse datasets. It advances novel data processing with quantum machine learning and has the potential for broader applications in the future.
Special Topic: Artificial Intelligence of Things
Pedestrian attribute recognition is often considered as a multi-label image classification task. In order to make full use of attribute-related location information, a saliency guided self-attention network (SGSA-Net) was proposed to weakly supervise attribute localization without annotations of attribute-related regions. Saliency priors were integrated into the spatial attention module (SAM). Meanwhile, channel-wise attention and spatial attention were introduced into the network. Moreover, a weighted binary cross-entropy loss (WCEL) function was employed to handle the imbalance of the training data. Extensive experiments on the richly annotated pedestrian (RAP) and pedestrian attribute (PETA) datasets demonstrated that SGSA-Net outperforms other state-of-the-art methods.
Aiming at the failure of traditional medium- and long-term traffic flow forecasting methods to effectively process long time-series data and extract the spatio-temporal characteristics between road nodes, a combined prediction model based on a convolutional neural network (CNN), attention, and a transformer network (ACNN-Trans) is proposed. The spatial features between nodes on complex roads are extracted by the CNN. The embedded attention mechanism adaptively attends to the results of feature extraction, so that the spatial features of the node traffic flow are mined by the embedded attention mechanism. The traffic flow sequence is processed by the transformer network, and the temporal correlation of long-term series data is captured by the relative position weights between the data. Finally, 30-min and 60-min traffic flow predictions are modeled respectively from the extracted temporal and spatial features. The results show that the prediction results of the ACNN-Trans combined model are better than those of other models on two different real datasets. Compared with the baseline models, the root mean square error (RMSE) of the prediction results is reduced by an average of 34.58%, which verifies the effectiveness of the ACNN-Trans model.
Cloud computing makes it possible for users to share computing power. The framework of multiple data centers has gained greater popularity in modern cloud computing. Due to the uncertainty of user requests, the central processing unit (CPU) loads of different data centers differ. A high CPU utilization rate of a data center affects the service provided to users, while a low CPU utilization rate causes high energy consumption. Therefore, it is important to balance the CPU resource across data centers in the modern cloud computing framework. A virtual machine (VM) migration algorithm is proposed to balance the CPU resource across data centers. The simulation results suggest that the proposed algorithm performs well in balancing the CPU resource across data centers and reducing energy consumption.
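The abstract does not specify the migration policy, so the sketch below shows one simple greedy interpretation: repeatedly move a VM from the most-loaded data center to the least-loaded one while the move narrows their utilization gap. Names and numbers are hypothetical.

```python
def balance_cpu(datacenters):
    # datacenters: {name: list of VM CPU demands, as fractions of DC capacity}.
    # Greedy sketch: migrate one VM at a time from the most-loaded to the
    # least-loaded data center while the move strictly narrows their load gap.
    migrations = []
    while True:
        loads = {dc: sum(vms) for dc, vms in datacenters.items()}
        hot = max(loads, key=loads.get)
        cold = min(loads, key=loads.get)
        gap = loads[hot] - loads[cold]
        candidates = [v for v in datacenters[hot] if 2 * v < gap]
        if not candidates:
            break
        vm = max(candidates)                   # biggest VM that still reduces the gap
        datacenters[hot].remove(vm)
        datacenters[cold].append(vm)
        migrations.append((vm, hot, cold))
    return migrations

dcs = {"dc1": [0.30, 0.25, 0.20], "dc2": [0.10], "dc3": [0.15, 0.05]}
print(balance_cpu(dcs))
print({dc: round(sum(vms), 2) for dc, vms in dcs.items()})
```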
Complex Network Identification and Control
This paper proposes a novel method for the parameter optimization of complex networks established through coarse graining and phase space reconstruction. First, we identify the change-points of the time series based on the cumulative sum (CUSUM) control chart method. Then, we optimize the coarse-graining parameter and the phase space embedding dimension based on the evolution analysis of a global topology index (betweenness) at the change-points. Finally, we conduct a simulation analysis based on real-time data of Chinese copper spot prices. The results show that the delay of the copper spot prices in the Chinese spot market is 1 day, and the optimal embedding dimension of the phase space reconstruction is 3. The acceptance level of investors towards small fluctuations in copper spot prices is 0.2 times the average level of price fluctuations, i.e., 0.2 times the average price fluctuation is the optimal coarse-graining parameter.
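For reference, a minimal two-sided CUSUM detector on a standardized price series might look as follows; the reference value k and decision interval h are the usual textbook parameters and are not the values used for the copper-price data, and standardizing with the whole-series mean is a simplification of the in-control baseline.

```python
import numpy as np

def cusum_change_points(series, k=0.5, h=5.0):
    # Two-sided CUSUM control chart on a standardized series.
    # k is the reference value (allowance) and h the decision interval,
    # both in units of the series' standard deviation.
    x = np.asarray(series, dtype=float)
    z = (x - x.mean()) / x.std()
    s_pos, s_neg = 0.0, 0.0
    change_points = []
    for i, zi in enumerate(z):
        s_pos = max(0.0, s_pos + zi - k)
        s_neg = max(0.0, s_neg - zi - k)
        if s_pos > h or s_neg > h:
            change_points.append(i)
            s_pos, s_neg = 0.0, 0.0          # restart after an alarm
    return change_points

prices = np.concatenate([np.random.normal(50, 1, 200), np.random.normal(55, 1, 200)])
print(cusum_change_points(prices))
```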
Ciphertext-policy attribute-based searchable encryption (CP-ABSE) can achieve fine-grained access control for data sharing and retrieval, and secure deduplication can save storage space by eliminating duplicate copies. However, few schemes support both searchable encryption and secure deduplication. In this paper, a large universe CP-ABSE scheme supporting secure block-level deduplication is proposed under a hybrid cloud mechanism. In the proposed scheme, after the ciphertext is inserted into a bloom filter tree (BFT), the private cloud can perform fine-grained deduplication efficiently by matching tags, and the public cloud can search efficiently using a homomorphic searchable method and keyword matching. Finally, the proposed scheme achieves privacy under chosen distribution attacks block-level (PRV-CDA-B) secure deduplication and match-concealing (MC) searchable security. Compared with existing schemes, the proposed scheme has the advantage of simultaneously supporting fine-grained access control, block-level deduplication, and efficient search.
Achterbahn-128 is a stream cipher proposed by Gammel et al. and submitted to the eSTREAM project. Although many attacks have been published, no key recovery attack better than Naya-Plasencia's results under the 2^56-bit keystream limitation has been reported. A similar approach is taken here, and a specific parity check and decimation are found. Then an improved distinguisher is constructed for Achterbahn-128 to recover the key with only O(2^55) keystream bits and O(2^102) time complexity. Furthermore, this result is much more effective than the former.
In this paper, deep learning technology is utilized to solve the railway track recognition problem in intrusion detection. Railway track recognition can be viewed as a semantic segmentation task, which extends image processing to pixel-level prediction. An encoder-decoder architecture, the DeepLabv3+ model, is applied in this work due to its good performance in semantic segmentation tasks. Since images of the railway track collected from the video surveillance of the train cab are used as the experimental dataset, the following improvements are made to the model. The first aspect deals with the over-fitting problem caused by the limited amount of training data: data augmentation and transfer learning are applied to enrich the diversity of the data and enhance model robustness during training. Second, different gradient descent methods are compared to obtain the optimal optimizer for training the model parameters. The third problem relates to data sample imbalance: the cross entropy (CE) loss is replaced by the focal loss (FL) to address the serious imbalance between positive and negative samples. The effectiveness of the improved DeepLabv3+ model with the above solutions is demonstrated by experimental results with different system parameters.
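The focal loss mentioned above down-weights easy examples via FL(p_t) = -α_t(1 - p_t)^γ log(p_t); a short PyTorch sketch for a binary segmentation map is shown below, with the common default values α = 0.25 and γ = 2, which are not necessarily those used in the paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    # Down-weights easy (well-classified) pixels so the rare positive
    # (track) pixels dominate the gradient.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Example with per-pixel logits of a segmentation map (batch of 2, 64x64 pixels).
logits = torch.randn(2, 1, 64, 64)
targets = (torch.rand(2, 1, 64, 64) > 0.95).float()   # sparse positive class
print(focal_loss(logits, targets).item())
```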
By leveraging the high maneuverability of the unmanned aerial vehicle (UAV) and the expansive coverage of the intelligent reflecting surface (IRS), a multi-IRS-assisted UAV communication system aimed at maximizing the sum data rate of all users was investigated in this paper. This is achieved through the joint optimization of the trajectory and transmit beamforming of the UAV, as well as the passive phase shift of the IRS. Nevertheless, the initial problem exhibits a high degree of non-convexity, posing challenges for conventional mathematical optimization techniques in delivering solutions that are both quick and efficient while maintaining low complexity. To address this issue, a novel scheme called the deep reinforcement learning (DRL)-based enhanced cooperative reflection network (DCRN) was proposed. This scheme effectively identifies optimal strategies, with the long short-term memory (LSTM) network improving algorithm convergence by extracting hidden state information. Simulation results demonstrate that the proposed scheme outperforms the baseline scheme, manifesting substantial enhancements in sum rate and superior performance.
The rapid development of location-based social networks (LBSNs) has provided an unprecedented opportunity for better location-based services through point-of-interest (POI) recommendation. POI recommendation is personalized, location-aware, and context-dependent. However, the extreme sparsity of the user-POI matrix creates a severe challenge. In this paper we propose a textual-geographical-social aware probabilistic matrix factorization method, called TGS-PMF, for POI recommendation, which exploits textual information, geographical information, and social information, and incorporates these factors effectively. First, we exploit an aggregated latent Dirichlet allocation (LDA) model to learn the interest topics of users, infer the interest POIs by mining the textual information associated with POIs, and generate an interest relevance score. Second, we propose a kernel estimation method with an adaptive bandwidth to model the geographical correlations and generate a geographical relevance score. Third, we build social relevance through the power-law distribution of user social relations to generate a social relevance score. Then, we exploit a probabilistic matrix factorization (PMF) model to integrate the interest, geographical, and social relevance scores for POI recommendation. Finally, we conduct experiments on a real LBSN check-in dataset. Experimental results show that TGS-PMF achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques.
User interactive behaviors play a dual role during hypertext transfer protocol (HTTP) video service: reflection and influence. However, they are seldom taken into account in practice. To this end, this paper introduces user interactive behaviors, as subjective viewer-level factors of quality of experience (QoE), to structure a comprehensive multilayer evaluation model based on classic network quality of service (QoS) and application QoS. First, the dual roles of user behaviors are studied and the characteristics for which user experience is correlated with user interactive behaviors are extracted. Furthermore, we categorize the QoE factors into three dimensions and build the metric system. Then we perform subjective tests and investigate the relationships among network path quality, user behaviors, and QoE. Ultimately, we employ the back propagation neural network (BPNN) to validate our analysis and model. Through mathematical and BPNN simulation experiments, the dual effects of user interactive behaviors on the reflection and influence of QoE in the video stream are analyzed, and the QoE metric system and evaluation model are established.
To understand the usability of TV white spaces (TVWS), a comprehensive overview of outdoor and indoor network design over TVWS is given. The related challenges are analyzed, the potential approaches to overcoming these challenges are discussed, and the open research issues are investigated. The results show that, in the indoor scenario, the white space ratio is on average 18.4% higher than that in the outdoor scenario, which corresponds to 7.7 vacant TV channels. Both network designs include seven key components: TV spectrum identification, access point (AP) discovery, AP association, spectrum allocation, bandwidth adaptation, interface control, and disruption handling. Due to building penetration loss, TVWS identification and AP placement should be carefully considered in the indoor scenario.
Wireless ultra-dense network (UDN) is one of the important technologies for meeting the surging throughput demand in the forthcoming fifth generation (5G) cellular networks. Reusing spectrum resources for the backhaul of small base stations (SBSs) has become a research hotspot in recent years because of its lower cost and rapid deployment alongside macro base stations (MBSs). In heterogeneous UDN, the problem of spectrum allocation for wireless backhaul is investigated. In particular, two different spectrum resource reusing strategies for wireless backhaul are proposed for heterogeneous UDN under a limited bandwidth condition. Using a stochastic geometry-based heterogeneous UDN model, the success probabilities that mobile users communicate with SBSs or MBSs are derived under the two spectrum resource reusing strategies. In addition, analytical expressions for the network throughput and the optimal ratio of spectrum allocation are derived. Numerical results are provided to evaluate the throughput performance of the proposed strategies. Thus, the effectiveness of the strategy in which mobile users can only communicate with SBSs is validated.
To improve the anti-noise ability of fuzzy local information C-means clustering, a robust entropy-like distance driven fuzzy clustering with local information is proposed. This paper first uses the Jensen-Shannon divergence to induce a symmetric entropy-like divergence. Then the square root of the entropy-like divergence is proved to be a distance measure, and it is applied to the existing fuzzy C-means (FCM) clustering to obtain a new entropy-like divergence driven fuzzy clustering, whose convergence is strictly proved by the Zangwill theorem. In the end, a robust fuzzy clustering combining local information with the entropy-like distance is constructed to segment images with noise. Experimental results show that the proposed algorithm has better segmentation accuracy and robustness against noise than existing state-of-the-art fuzzy clustering-related segmentation algorithms in the presence of noise.
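As a rough illustration of the distance construction, the sketch below computes the Jensen-Shannon divergence between two histograms and takes its square root, which is known to satisfy the metric properties; the paper's own induced entropy-like divergence and its integration with local spatial information are not reproduced here.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two discrete distributions (natural log).
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def entropy_like_distance(p, q):
    # Square root of the JS divergence, a quantity known to be a metric,
    # used here as a stand-in for the entropy-like distance in the FCM objective.
    return np.sqrt(js_divergence(p, q))

# Distance between two gray-level histograms of image patches (illustrative).
h1 = np.histogram(np.random.normal(100, 10, 1000), bins=32, range=(0, 255))[0]
h2 = np.histogram(np.random.normal(120, 10, 1000), bins=32, range=(0, 255))[0]
print(entropy_like_distance(h1, h2))
```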
To meet the demands of large-scale user access with computation-intensive and delay-sensitive applications, combining ultra-dense networks (UDNs) and mobile edge computing (MEC) is considered an important solution. In MEC-enabled UDNs, one of the most important issues is computation offloading. Although a number of works have addressed this issue, the problem of dynamic computation offloading in a time-varying environment, especially the multi-user dynamic computation offloading problem, has not been fully considered. Therefore, to fill this gap, the multi-user dynamic computation offloading problem in a time-varying environment is considered in this paper. By considering the dynamic changes of the channel states and the users' queue states, the multi-user dynamic computation offloading problem is formulated as a stochastic game, which aims to optimize the delay and packet loss rate of users. To find the optimal solution of the formulated optimization problem, a Nash Q-learning (NQLN) algorithm is proposed, which converges quickly to a Nash equilibrium solution. Finally, extensive simulation results are presented to demonstrate the superiority of the NQLN algorithm. It is shown that the NQLN algorithm achieves better optimization performance than the benchmark schemes.
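A highly simplified, per-user sketch of the learning loop is given below; full Nash Q-learning would bootstrap with the value of a stage-game Nash equilibrium over all users' joint actions, whereas this sketch bootstraps each agent with its own max-Q, and the state and reward construction are purely illustrative.

```python
import random
import numpy as np

class OffloadingAgent:
    # Tabular Q-learning for one user's offloading decision
    # (action 0 = local execution, action 1 = offload to the MEC server).
    def __init__(self, n_states, n_actions=2, lr=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.randrange(self.q.shape[1])
        return int(np.argmax(self.q[state]))

    def update(self, s, a, reward, s_next):
        # The reward would encode (negative) delay and packet loss, per the
        # paper's objective; Nash Q-learning would replace max-Q below with
        # the equilibrium value of the multi-user stage game.
        target = reward + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.lr * (target - self.q[s, a])

agent = OffloadingAgent(n_states=16)
s = 0
for _ in range(1000):
    a = agent.act(s)
    s_next, r = random.randrange(16), -random.random()   # stand-in for channel/queue dynamics
    agent.update(s, a, r, s_next)
    s = s_next
```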
Special Topic: Cultural Computing
In the context of interdisciplinary research, using computer technology to further mine keywords in cultural texts and carry out semantic analysis can deepen the understanding of the texts and provide quantitative support and evidence for humanistic studies. Based on the novel A Dream of Red Mansions, the automatic extraction and classification of the sentiment terms in it were realized, and a detailed analysis of large-scale sentiment terms was carried out. The bidirectional encoder representation from transformers (BERT) pretraining and fine-tuning model was used to construct the sentiment classifier of A Dream of Red Mansions. The sentiment terms of A Dream of Red Mansions are divided into eight sentiment categories, and the relevant people in sentences are extracted according to specific rules. The paper also visually displays the sentimental interactions between the Twelve Girls of Jinling and Jia Baoyu along with the development of the plot. The overall F1 score of the BERT-based sentiment classifier reached 84.89%, and the best score for a single sentiment category reached 91.15%. Experimental results show that the classifier can satisfactorily classify the text of A Dream of Red Mansions, and that the text classification and interaction analysis results can be mutually verified with the interpretation of A Dream of Red Mansions by literature experts.
With the rapid development of location-based networks, point-of-interest (POI) recommendation has become an important means to help people discover interesting and attractive locations, especially when users travel out of town. However, users' check-in interactions are highly sparse, which creates a big challenge for POI recommendation. To tackle this challenge, we propose a joint probabilistic generative model called geographical temporal social content popularity (GTSCP) to imitate user check-in activities as a decision-making process, which effectively integrates the geographical influence, temporal effect, social correlation, content information, and popularity impact factors to overcome the data sparsity, especially for out-of-town users. The proposed GTSCP supports two recommendation scenarios in a joint model, i.e., home-town recommendation and out-of-town recommendation. Experimental results show that GTSCP achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques.
Low earth orbit (LEO) satellite networks provide global coverage and support a wide range of services. However, due to the rapid changes and limited energy of satellites, how to meet the quality of service (QoS) demands of ground traffic and prolong the lifetime of the LEO satellite network is a key research focus. Hence, a routing algorithm that takes into account the multi-QoS requirements and satellite energy consumption (QER) of the LEO satellite network is proposed. First, the satellite intimacy degree (SID) and the path health degree (PHD) are introduced to obtain the path evaluation function according to the energy consumption and queue state of the satellites. Then, the distributed routing algorithm QER is established through the path evaluation function and the idea of the genetic algorithm (GA), which enables each satellite to adjust its traffic and realizes network load balancing. Simulation results show that QER performs well in terms of end-to-end delay, delay jitter, and system throughput.
Aiming at the coexistence of cellular network and wireless fidelity (WiFi) network, a coalitional game-based
WiFi offloading algorithm in heterogeneous networks is proposed. Firstly, this paper defines the user fairness utility
function that comprehensively considers the user communication rate, cost and delay. Then, the coalitional game
model including two types of coalitions is constructed. To control the transfer of users among coalitions, a coalition
transfer criterion that simultaneously improves the user's individual utility and the total system utility is proposed.
In addition, this paper presents a channel allocation scheme that ensures full utilization of system resources to
maximize the total utility of the system. The simulation results show that the proposed offloading algorithm can
reasonably utilize the resources of cellular network and WiFi network, and improve the utility of users and system.
The task of multimodal sentiment classification aims to associate multimodal information, such as images and texts, with appropriate sentiment polarities. There are various levels of features that can affect human sentiment in the visual and textual modalities. However, most existing methods treat the various levels of features independently, without an effective method for feature fusion. In this paper, we propose a multi-level fusion classification (MFC) model to predict the sentiment polarity based on fused features from different levels by exploiting the dependency among them. The proposed architecture leverages convolutional neural networks (CNNs) with multiple layers to extract levels of features in the image and text modalities. Considering the dependencies within the low-level and high-level features, a bi-directional (Bi) recurrent neural network (RNN) is adopted to integrate the learned features from different layers in the CNNs. In addition, a conflict detection module is incorporated to address the conflict between modalities. Experiments on the Flickr dataset demonstrate that the MFC method achieves comparable performance with strong baseline methods.
Short-term load forecasting (STLF) plays a crucial role in the smart grid. However, it is challenging to capture the long-time dependence and the nonlinear relationship due to the comprehensive fluctuations of the electrical load. In this paper, an STLF model based on gated recurrent unit and multi-head attention (GRU-MA) is proposed to address the aforementioned problems. The proposed model accommodates the time series and nonlinear relationship of load data through the gated recurrent unit (GRU) and exploits multi-head attention (MA) to learn the decisive features and long-term dependencies. Additionally, the proposed model is compared with the support vector regression (SVR) model, the recurrent neural network and multi-head attention (RNN-MA) model, the long short-term memory and multi-head attention (LSTM-MA) model, the GRU model, and the temporal convolutional network (TCN) model using the public dataset of the Global Energy Forecasting Competition 2014 (GEFCOM2014). The results demonstrate that the GRU-MA model has the best prediction accuracy.
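A compact PyTorch sketch of the GRU-plus-multi-head-attention idea is shown below: a GRU encodes the load sequence, self-attention re-weights the hidden states across time, and the last step is projected to the forecast. The hidden size, head count, and forecast horizon are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class GRUMA(nn.Module):
    def __init__(self, n_features, hidden=64, heads=4, horizon=1):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.out = nn.Linear(hidden, horizon)

    def forward(self, x):                    # x: (batch, seq_len, n_features)
        h, _ = self.gru(x)                   # (batch, seq_len, hidden)
        a, _ = self.attn(h, h, h)            # self-attention over time steps
        return self.out(a[:, -1, :])         # forecast from the last time step

model = GRUMA(n_features=5)
y_hat = model(torch.randn(32, 168, 5))       # e.g., one week of hourly inputs
print(y_hat.shape)                           # torch.Size([32, 1])
```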
This paper investigates the performance of the method used to reduce the decoding complexity of rateless codes through the deletion of the received symbols with low reliability. In the decoder, the received symbols whose absolute value of logarithm likelihood ratio (LLR) is lower than the threshold are removed, together with their corresponding edges, and thus not involved in the decoding process. The relationship between the deletion probability and the likelihood ratio deletion threshold is derived. The average mutual information per received symbol is analyzed in the case of deletion. The required number of symbols for the decoder to keep the same performance as regular decoding decreases since the average mutual information per symbol increases with the deletion, thus reducing the decoding complexity. This paper analyzes the reduction of decoding computations and the consequent transmission efficiency loss from the perspective of mutual information. The simulation results of decoding performance are consistent with those of the theoretical analysis, which show that the method can effectively reduce the decoding complexity at the cost of a slight loss of transmission efficiency.
With the popularity of a variety of delay-sensitive services, how to guarantee the delay requirements of mobile users (MUs) is a great challenge for downlink beamformer design in green cloud radio access networks (C-RANs). In this paper, we consider the problem of delay-aware downlink beamforming with discrete rate adaptation to minimize the power consumption of C-RANs. We formulate the problem as a mixed integer nonlinear program (MINLP), and then reformulate the MINLP problem as a mixed integer second-order cone program (MI-SOCP), which is a convex program when the integer variables are relaxed to continuous ones. Based on this formulation, a deflation algorithm with polynomial computational complexity is proposed to derive a suboptimal solution. The simulation results are presented to validate the effectiveness of the proposed algorithm.
Computer Applied Technology
In recent years, with the development of smart devices, mobile users can use them to sense the environment. In order to improve data quality and achieve maximum profits, incentive mechanisms are needed to motivate users to participate. In this paper, the reputation mechanism, participant selection, task allocation, and joint pricing in a mobile crowdsourcing system are studied. A user reputation evaluation method is proposed, and a participant selection algorithm (PSA) based on user reputation is presented. Besides, a social welfare maximization algorithm (SWMA) is proposed, which achieves task pricing while maximizing the interests of all parties, including both task publishers and mobile users. The social welfare maximization problem is divided into local optimization sub-problems that can be solved by double decomposition. It is proved that the algorithm converges to the optimal solution. Simulation results verify that the PSA and SWMA algorithms are effective.
In the mobile crowd sensing (MCS) network environment, it is very important to establish an evolutionary process that can dynamically depict the trust degree of task participants. To address this issue, this paper proposes a dynamic trust evaluation model for task participants. First, according to the security requirements and trust strategy of the sensing tasks, the attribute reduction algorithm (ARA) based on rough set is used to obtain the multi-attribute indexes that affect the participants' trust information. Removing the redundant attributes avoids the lag of trust evaluation and reduces the time cost. Second, the grey correlation analysis method is used to solve the correlation degree between the target sequence and the comparison sequences on the trust attributes by integrating the multi-attribute decision-making method, which avoids the distortion of the trust evaluation caused by subjective human factors and improves the quality of the sensed data. Finally, a dynamic trust evaluation model for participants in a complex sensing network environment is established. The simulation results show that the proposed model can not only dynamically depict the trust degree of participants in real time, but also achieve higher accuracy and lower time cost.
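For the grey correlation step, a minimal sketch of the classical grey relational analysis is given below, with the customary distinguishing coefficient ρ = 0.5; the reference sequence, attribute normalization, and any attribute weighting used in the actual model are assumptions here.

```python
import numpy as np

def grey_relational_grades(reference, comparisons, rho=0.5):
    # Grey relational analysis: for each attribute the coefficient is
    # (d_min + rho * d_max) / (d_ik + rho * d_max), where d_ik is the absolute
    # deviation between normalized reference and comparison sequences;
    # the grade is the mean coefficient per comparison sequence.
    x0 = np.asarray(reference, float)
    xi = np.asarray(comparisons, float)           # shape (n_participants, n_attributes)
    allx = np.vstack([x0, xi])
    lo, hi = allx.min(axis=0), allx.max(axis=0)
    norm = lambda a: (a - lo) / np.where(hi > lo, hi - lo, 1.0)
    d = np.abs(norm(xi) - norm(x0))               # deviation sequences
    d_min, d_max = d.min(), d.max()
    coeff = (d_min + rho * d_max) / (d + rho * d_max)
    return coeff.mean(axis=1)                     # one grade per participant

# Trust-attribute vectors: an ideal reference vs. three candidate participants (illustrative).
print(grey_relational_grades([1.0, 1.0, 1.0],
                             [[0.9, 0.8, 1.0], [0.5, 0.6, 0.4], [0.2, 0.9, 0.7]]))
```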
In industrial fields, mechanical equipment inevitably wears out during operation. With the accumulation of such losses, the probability of equipment failure increases. Therefore, if the remaining useful life (RUL) of the equipment can be accurately predicted, the equipment can be maintained in time to avoid downtime caused by failures, greatly improving the production efficiency of enterprises. This paper uses the independently recurrent neural network (IndRNN) to learn the health degradation of a turbofan engine and make accurate predictions of its RUL, which not only effectively solves the problem of gradient explosion and vanishing, but also increases the interpretability of the neural network. IndRNN can process longer time series, which matches scenarios with high-frequency sampling sensors in practical industrial applications. The results demonstrate that IndRNN for RUL estimation significantly outperforms traditional approaches, as well as the convolutional neural network (CNN) and long short-term memory network (LSTM) for RUL estimation.
Special Topic: Data Security and Privacy Preservation in Cloud/Fog/Edge-Enabled Internet of Things
In order to perform multi-dimensional data aggregation operations efficiently in edge computing-based Internet of things (IoT) systems, a new efficient privacy-preserving multi-dimensional data aggregation (EPMDA) scheme is proposed in this paper. The EPMDA scheme is characterized by employing the homomorphic Paillier encryption and the SM9 signature algorithm. To improve the computation efficiency of the Paillier encryption operation, the EPMDA scheme generates a pre-computed modular exponentiation table for each dimension of the data, so that the Paillier encryption operation can be implemented using only a few modular multiplications. For the multi-dimensional data, the scheme concatenates zeros between two adjacent dimensions to avoid data overflow in the sum operation over ciphertexts. To enhance security, the EPMDA scheme places a random number in the high-order part of the exponent. Moreover, the scheme utilizes the SM9 signature scheme to guarantee device authentication and data integrity. The performance evaluation and comparison show that the EPMDA scheme is more efficient than existing multi-dimensional data aggregation schemes.
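The zero-concatenation trick can be illustrated with plain integers as below: each report packs its dimensions into one number with zero gaps wide enough that summing many reports never carries across dimension boundaries; in EPMDA the packed value would be Paillier-encrypted so the aggregator adds ciphertexts instead of plaintexts. The bit widths and sample values are illustrative.

```python
def pack(dims, data_bits=16, gap_bits=16):
    # Concatenate the dimensions of one report into a single integer, leaving
    # gap_bits of zeros between adjacent dimensions so that summing many packed
    # reports never carries across dimension boundaries.
    slot = data_bits + gap_bits
    value = 0
    for i, d in enumerate(dims):
        assert 0 <= d < (1 << data_bits)
        value |= d << (i * slot)
    return value

def unpack(value, n_dims, data_bits=16, gap_bits=16):
    slot = data_bits + gap_bits
    mask = (1 << slot) - 1
    return [(value >> (i * slot)) & mask for i in range(n_dims)]

# Three devices each report 4 dimensions; the aggregator only handles the sums.
reports = [[12, 7, 300, 4], [5, 1, 280, 9], [8, 0, 310, 2]]
aggregate = sum(pack(r) for r in reports)          # Paillier would do this on ciphertexts
print(unpack(aggregate, n_dims=4))                 # [25, 8, 890, 15]
```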
Joint sparse recovery (JSR) in compressed sensing (CS) is to simultaneously recover multiple jointly sparse vectors from their incomplete measurements that are conducted based on a common sensing matrix. In this study, the focus is placed on the rank defective case where the number of measurements is limited or the signals are significantly correlated with each other. First, an iterative atom refinement process is adopted to estimate part of the atoms of the support set. Subsequently, the above atoms along with the measurements are used to estimate the remaining atoms. The estimation criteria for atoms are based on the principle of minimum subspace distance. Extensive numerical experiments were performed in noiseless and noisy scenarios, and results reveal that iterative subspace matching pursuit (ISMP) outperforms other existing algorithms for JSR.
This paper proposes a multi-access and multi-user semantic communication scheme based on semantic matching and intent deviation to address the increasing demand for wireless users and data. The scheme enables flexible management of long frames, allowing each unit of bandwidth to support a higher number of users. By leveraging semantic classification, different users can independently access the network through the transmission of long concatenated sequences without modifying the existing wireless communication architecture. To overcome the potential disadvantage of incomplete semantic database matching leading to semantic intent misunderstanding, the scheme proposes using intent deviation as an advantage. This allows different receivers to interpret the same semantic information differently, enabling multiplexing where one piece of information can serve multiple users with distinct purposes. Simulation results show that at a bit error rate (BER) of 0.1, it is possible to reduce the transmission by approximately 20 semantic basic units.
With the boom of wireless devices, the number of wireless users in wireless local area networks (WLANs) has increased dramatically. However, the standard backoff mechanism in IEEE 802.11 adopts a fixed initial contention window (CW) size without considering changes in the network load, which leads to a high collision probability and low channel utilization under bursty arrivals. In this paper, a novel CW dynamic adjustment scheme is proposed to achieve high throughput performance in dense user environments. In the proposed scheme, the initial CW size is dynamically adjusted to the optimum according to the measured packet collision probability. Simulation results show that the proposed scheme can significantly improve the throughput performance.
The delegated proof-of-stake (DPOS) consensus mechanism is widely adopted in blockchain platforms, but problems exist in its current applications. In order to explore the security risks of voting attacks on the DPOS consensus mechanism, an extensive game model between nodes was constructed, and it was concluded that the DPOS consensus mechanism relies too heavily on tokens and that the possibility of node attacks is very high. In order to solve the problems of frequent node changes, inactive node voting, excessive reliance on tokens, and malicious nodes in the DPOS consensus mechanism, a dynamic, credible, and attack-evading DPOS consensus mechanism was proposed. In addition, Python simulation results show that the improved Bayesian voting algorithm is effective in calculating node scores.
The application of artificial intelligence (AI) technology in the 5th generation mobile communication system (5G) networks promotes the development of the mobile communication network and its application in vertical industries; however, the "patching" and "plug-in" application models have hindered the effect of AI applications. Meanwhile, the application of AI in all walks of life places new capability requirements on the future network, such as distributed training, real-time collaborative inference, and local data processing, which call for a "native intelligence design" in future networks. This paper discusses the requirements of native intelligence in the 6th generation mobile communication system (6G) networks from the perspectives of the challenges of 5G intelligent networks and the "ubiquitous intelligence" vision of 6G, and analyzes the technical challenges of AI workflows across their lifecycle and of AI as a service (AIaaS) in cloud networks. The progress and deficiencies of current research on AI functional architecture in various industry organizations are summarized. An end-to-end functional architecture for native AI in 6G networks and its three key technical characteristics are proposed: quality of AI services (QoAIS)-based AI service orchestration across the full lifecycle, deep integration of native AI computing and communication, and integration of native AI and the digital twin network. Directions for future research are also discussed.
With the development of wireless network and electronic technologies, the wireless sensor network (WSN) has been widely used in many applications. One of the most important applications is the wireless medical sensor network (WMSN), which makes modern health care more accessible. However, most of the sensor data transmitted in the WMSN is patient-related information, which is important and should be kept confidential. In addition, attackers may maliciously modify these sensor data. Therefore, both security and privacy are critical issues in the WMSN. A user authentication protocol and data security transmission mechanism based on bilinear pairing is proposed to protect data security and privacy. The proposed protocol enables medical staff to monitor patients' health status and provide timely and comprehensive health-care information. Finally, through security and performance analysis, it is found that the proposed authentication and key agreement protocol can resist common attacks such as the impersonation attack, replay attack, online or offline password guessing attack, and stolen verifier attack. At the same time, the protocol meets the performance requirements of the WMSN application environment.
This study proposes a hybrid speech recognition parallel algorithm based on the hidden Markov model (HMM) and artificial neural network (ANN). First, the algorithm uses the HMM for time-series modeling of speech signals and calculates the output probability score of the speech against the HMM. Second, with the probability score as input to the neural network, the algorithm obtains information for classification and recognition and makes a decision based on the hybrid model. Finally, Matlab software is used to train and test sample data. Simulation results show that, by combining the strong time-series modeling ability of the HMM with the classification ability of the neural network, the proposed algorithm possesses stronger noise immunity than the traditional HMM. Moreover, the hybrid model compensates for the individual flaws of the HMM and the neural network and greatly improves the speed and performance of speech recognition.
A joint user pairing and power allocation approach is investigated to simultaneously meet the rate requirement of the enhanced mobile broadband (eMBB) slice and the delay constraint of the ultra-reliable low-latency communication (URLLC) slice in the downlink non-orthogonal multiple access (NOMA) system. To maximize the proportional fairness among mobile terminals, a two-step algorithm is proposed. For given user sets, the optimal user pairing sets and the power allocation factor within a group are obtained to ensure the quality of service (QoS) and the isolation between different types of slices. Simulation results show that the proposed joint algorithm provides better throughput than orthogonal multiple access (OMA).
Facial expression recognition (FER) is a vital application of image processing technology. In this paper, a FER model based on the residual network is proposed. The proposed model introduces the idea of DenseNet: the outputs of the residual blocks are not simply added but are concatenated along the channel dimension. In addition, transfer learning is used to reduce training costs and accelerate training. The accuracy and robustness of the proposed FER model were tested by K-fold cross-validation. Experimental results show that the proposed method achieves competitive performance on FER2013, FER plus (FERPlus), and the real-world affective faces database (RAF-DB).
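A minimal sketch, assuming PyTorch, of the concatenation idea described above: the block's shortcut is joined to its output along the channel dimension (DenseNet-style) instead of being added. The specific layer layout (3x3 convolution, batch normalization, ReLU, a "growth" number of new channels) is an illustrative assumption, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConcatResidualBlock(nn.Module):
    """Residual-style block whose shortcut is concatenated, not added."""
    def __init__(self, in_channels, growth):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, growth, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(growth),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Output has in_channels + growth channels; spatial size is unchanged.
        return torch.cat([x, self.body(x)], dim=1)

# Usage: a 64-channel feature map grows to 96 channels after the block.
x = torch.randn(1, 64, 48, 48)
y = ConcatResidualBlock(64, 32)(x)
print(y.shape)  # torch.Size([1, 96, 48, 48])
```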
This study uses atomic layer deposition (ALD) to grow Ga2O3 films on SiO2 substrates and investigates the influence of film thickness and annealing temperature on film quality. Schottky diode devices are fabricated based on the grown Ga2O3 films, and the effects of annealing temperature, electrode size, and electrode spacing on the electrical characteristics of the devices are studied. The results show that as the film thickness increases, the breakdown voltage of the fabricated devices also increases. A Schottky diode with a thickness of 240 nm can achieve a reverse breakdown voltage of 300 V. The film quality significantly improves as the annealing temperature of the film increases. At a voltage of 5 V, the current of the film annealed at 900°C is 64 times that of the film annealed at 700°C. The optimum annealing temperature for Ohmic contact electrodes is 450°C. At 550°C, the Ohmic contact metal tends to burn, and the performance of the device is reduced. Reducing the electrode spacing increases the forward current of the device but decreases the reverse breakdown voltage. Increasing the Schottky contact electrode size increases the forward current, but the change is not significant, and there is no significant change in the reverse breakdown voltage. The device also performs well at high temperatures, with a reverse breakdown voltage of 220 V at 125°C.
This article puts forward a resource allocation scheme aiming at maximizing system throughput for device-to-device (D2D) communications underlying cellular networks. Firstly, user closeness is defined and calculated from social information, including friendship, interest similarity, and communication strength, to represent the willingness of a user to share spectrum resources with others. Then, a social-aware resource allocation problem is formulated to maximize the system throughput while guaranteeing the quality of service (QoS) requirements of both the admissible D2D pairs and the cellular users (CUs), and the power of both CUs and D2D pairs is efficiently allocated. Finally, CUs and D2D pairs are matched to reuse the spectrum resources in consideration of both user closeness and physical conditions. Simulation results certify the effectiveness of the proposed scheme, which significantly enhances the system throughput compared with existing algorithms.
With the rapid development of vehicle-based applications, entertainment videos have gained popularity for passengers on public vehicles. Therefore, how to provide high quality video service for passengers in typical public transportation scenarios is an essential problem. This paper proposes a quality of experience (QoE)-based video segments caching (QoE-VSC) strategy to guarantee the smooth watching experience of passengers. Consequently, this paper considers a jointly caching scenario where the bus provides the beginning segments of a video, and the road side unit (RSU) offers the remaining for passengers. To evaluate the effectiveness, QoE hit ratio is defined to represent the probability that the bus and RSUs jointly provide passengers with desirable video segments successfully. Furthermore, since passenger volume change will lead to different video preferences, a deep reinforcement learning (DRL) network is trained to generate the segment replacing policy on the video segments cached by the bus server. And the training target of DRL is to maximize the QoE hit ratio, thus enabling more passengers to get the required video. The simulation results prove that the proposed method has a better performance than baseline methods in terms of QoE hit ratio and cache costs.
In gesture recognition, static gestures, dynamic gestures, and trajectory gestures are collectively known as multi-modal gestures. To solve the problem that different modal gestures require different recognition methods, a unified recognition algorithm is proposed. The angle change data of the finger joints and the movement of the centroid of the hand were acquired by a data glove and Kinect, respectively. Through preprocessing of the multi-source heterogeneous data, all hand gestures were treated as curves while hand shaking was compensated, and a uniform hand gesture recognition algorithm was established that calculates the Pearson correlation coefficient between hand gestures for recognition. In this way, complex gesture recognition was transformed into a simple comparison of curve similarities. The main innovations are: 1) a unified recognition model and a new algorithm are proposed for multi-modal gesture recognition; 2) the Pearson correlation coefficient is used for the first time to construct a gesture similarity operator. In tests on 50 kinds of gestures, the experimental results showed that the presented method could cope with intricate gesture interaction with a recognition rate of 97.7%.
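A hedged sketch of the curve-comparison idea: gestures are assumed here to have been pre-processed into one-dimensional curves (for example, one joint-angle channel over time), resampled to a common length, and then compared by the Pearson correlation coefficient. The resampling length and the single-channel simplification are assumptions; the paper's actual similarity operator over multi-channel data may differ.

```python
import numpy as np

def gesture_similarity(curve_a, curve_b, n=100):
    """Pearson correlation between two gesture curves resampled to equal length."""
    t = np.linspace(0.0, 1.0, n)
    a = np.interp(t, np.linspace(0.0, 1.0, len(curve_a)), curve_a)
    b = np.interp(t, np.linspace(0.0, 1.0, len(curve_b)), curve_b)
    return float(np.corrcoef(a, b)[0, 1])

def recognize(query_curve, templates):
    """Return the template gesture whose curve correlates most with the query.
    templates: {gesture_name: reference_curve}."""
    return max(templates, key=lambda name: gesture_similarity(query_curve, templates[name]))
```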
In order to improve the efficiency of task processing and reduce the energy consumption of new energy vehicles (NEVs), an adaptive dual task offloading decision-making scheme for the Internet of vehicles is proposed based on the information-assisted service of road side units (RSUs) and task offloading theory. Taking the roadside parking space recommendation service as the specific application scenario, the task offloading model is built and a hierarchical self-organizing network model is constructed, which utilizes the computing power shared among nodes, RSUs, and mobile edge computing (MEC) servers. Task scheduling is performed through the adaptive task offloading decision algorithm, which helps to realize an available parking space recommendation service that is energy-saving and environmentally friendly. Compared with traditional task offloading decisions, the proposed scheme takes less time and consumes less energy over the whole task process. Simulation results verify the effectiveness of the proposed scheme.
Aiming at the accuracy and error correction of cloud security situation prediction, a cloud security situation prediction method based on grey wolf optimization (GWO) and the back propagation (BP) neural network is proposed. Firstly, an adaptive disturbance convergence factor is used to improve the GWO algorithm, so as to improve its convergence speed and accuracy. The Chebyshev chaotic mapping is introduced into the position update formula of the GWO algorithm, which is used to select the features of the cloud security situation prediction data and to optimize the parameters of the BP neural network prediction model so as to minimize the prediction output error. Then, the initial weights and thresholds of the BP neural network are modified by the improved GWO algorithm to increase the learning efficiency and accuracy of the BP neural network. Finally, real data sets of the Tencent cloud platform are predicted. The simulation results show that the proposed method has lower mean square error (MSE) and mean absolute error (MAE) than the BP neural network, the BP neural network based on the genetic algorithm (GA-BP), the BP neural network based on particle swarm optimization (PSO-BP), and the BP neural network based on the GWO algorithm (GWO-BP). The proposed method has better stability, robustness, and prediction accuracy.
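A purely illustrative sketch of two ingredients named above. The Chebyshev chaotic map is standard; how it enters the GWO position update, and the exact form of the adaptive disturbance convergence factor, are not given in the abstract, so the second function is only a hedged stand-in for a nonlinear decay.

```python
import numpy as np

def chebyshev_sequence(x0=0.7, order=4, length=100):
    """Chebyshev chaotic map x_{k+1} = cos(order * arccos(x_k)), values in [-1, 1]."""
    seq, x = [], x0
    for _ in range(length):
        x = np.cos(order * np.arccos(x))
        seq.append(x)
    return np.array(seq)

def adaptive_convergence_factor(t, t_max):
    """Hypothetical nonlinear decay of the GWO factor 'a' from 2 to 0;
    the paper's actual adaptive disturbance term is not specified here."""
    return 2.0 * (1.0 - (t / t_max) ** 2)
```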
Special Topic: Artificial Intelligence of Things
In order to improve the service quality of radio frequency identification (RFID) systems, multiple objectives should be considered comprehensively. An improved brain storm optimization algorithm, GABSO, which incorporates an adaptive learning operator and a golden sine operator into the original brain storm optimization (BSO) algorithm, was proposed to solve the RFID network planning (RNP) problem. The GABSO algorithm introduces the learning operator and the golden sine operator to achieve a balance between exploration and exploitation. Based on the GABSO algorithm, an optimization model is established to optimize the positions of the readers. The GABSO algorithm was tested on the RFID model and dataset and compared with other methods. The tag coverage of the GABSO algorithm was increased by 9.62% over the cuckoo search (CS) algorithm and by 7.70% over BSO. The results show that the GABSO algorithm can be successfully applied to the RNP problem.
Special Topic: Optical Communication and Artificial Intelligence
Polyhedron protection realizes link protection by constructing a pre-assigned structure and allocates backup resources on a fixed polyhedron structure based on the maximum number of working resources. Taking into account both protection success rate and resource redundancy, this paper dynamically combines different polyhedron structures to allocate backup resources according to the link load, and proposes a genetic algorithm based dynamic combination of polyhedron structures (GA-DCPS) to reduce the resource consumption in the network while ensuring the protection success rate. GA-DCPS aims to minimize the consumption of wavelength resources, and uses the genetic strategy to find the polyhedron combination with the least redundancy to allocate backup resources while ensuring the success rate of service protection. Compared to using the fixed polyhedron structure with 1:m backup resource allocation, GA-DCPS can reduce resource redundancy by about 15% while ensuring complete protection against double-link failures.
Complex Network Identification and Control
Traditional loss-based transports cannot meet the strict requirements of low latency and high throughput in data center networks (DCNs). Thus data center transmission control protocol (DCTCP) is proposed to better manage the congestion control in DCNs. To provide insight into improving the stability of the DCN, this paper focuses on the Hopf bifurcation analysis of a fluid model of DCTCP, and investigates the stability of the network. The round-trip time (RTT), being an effective congestion signal, is selected as the bifurcation parameter. And the network turns unstable and generates periodic solutions when the parameter is larger than the given critical value, which is given by explicit algorithms. The analytical results reveal the existence of Hopf bifurcation. Numerical simulations are performed to make a comparative analysis between the fluid model and the simplified model of DCTCP. The influence of other parameters on the DCN stability is also discussed.
The continuous phase modulation (CPM) technique is widely used in range telemetry due to its high spectral efficiency and power efficiency. However, the demodulation performance of the traditional maximum likelihood sequence detection (MLSD) algorithm deteriorates significantly under non-ideal synchronization or fading channels. To address this issue, this work proposes a convolutional neural network (CNN) called the cascade parallel crossing network (CPCNet) to enhance the robustness of CPM signal demodulation. The CPCNet model employs a multiple parallel structure and feature fusion to extract richer features from CPM signals. This approach constructs feature maps at different levels, resulting in more comprehensive training of the model and improved demodulation performance. Simulation results show that under the Gaussian channel, the proposed CPCNet achieves the same bit error rate (BER) performance as the MLSD method when there is no timing error, while with a 1/4 symbol period timing error, the proposed method provides a 2 dB demodulation gain compared with the CNN and the convolutional long short-term memory deep neural network (CLDNN). In addition, under the Rayleigh channel, the BER of the proposed method is reduced by 5%-87% compared to that of MLSD over a wide signal-to-noise ratio (SNR) region.
The diversity of phone placements in different mobile users' daily lives increases the difficulty of recognizing human activities from mobile phone accelerometer data. To solve this problem, a human activity recognition method based on compressed sensing theory that utilizes both raw mobile phone accelerometer data and phone placement information is proposed. First, an over-complete dictionary matrix is constructed from sufficient raw tri-axis acceleration data labeled with phone placement information. Then, the sparse coefficient is evaluated for the samples to be tested by solving an L1 minimization problem. Finally, residual values are calculated and the class with the minimum residual is selected to obtain the recognition result. Experimental results show that this method achieves a recognition accuracy of 89.86%, which is higher than that of a recognition method that does not use phone placement information, demonstrating that the proposed method is effective and satisfactory.
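A minimal sketch of the final, residual-based classification step only. It assumes the sparse code has already been obtained from an L1 solver (not shown), and that each dictionary column carries a class label; the reconstruction residual is computed class by class and the smallest residual wins. Variable names and the labeling format are assumptions.

```python
import numpy as np

def classify_by_residual(dictionary, column_labels, sparse_code, test_sample):
    """Pick the activity class whose dictionary atoms best reconstruct the sample.

    dictionary:    (m, n) matrix whose columns are labeled training samples
    column_labels: length-n list of class labels, one per column
    sparse_code:   length-n sparse coefficient vector from an L1 solver
    test_sample:   length-m measurement vector to classify
    """
    residuals = {}
    labels = np.asarray(column_labels)
    for cls in set(column_labels):
        # Keep only the coefficients belonging to this class.
        code_cls = np.where(labels == cls, sparse_code, 0.0)
        residuals[cls] = np.linalg.norm(test_sample - dictionary @ code_cls)
    return min(residuals, key=residuals.get)
```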
Hybrid beamforming (HBF) has become one of the key technologies in millimeter wave (mmWave) mobile backhaul systems because of its lower complexity and power consumption compared to full digital beamforming (DBF). Two HBF structures exist in the mmWave mobile backhaul system, namely the fully connected structure (FCS) and the partially connected structure (PCS). However, existing methods cannot be applied to both structures. Moreover, some current HBF methods assume an ideal phase shifter, which is not realistic. In this paper, an HBF algorithm based on discrete phase shifters that applies to both structures is proposed for mmWave mobile backhaul systems. Using the principle of alternating minimization, the HBF optimization problem is decomposed into a DBF optimization problem and an analog beamforming (ABF) optimization problem. The least squares (LS) method is then employed to solve the DBF optimization model. In addition, a closed-form expression of the achievable data rate for both structures is derived, which converts the optimization model into a single-stream beamforming optimization model with a per-antenna power constraint, so that the ABF is easily solved. Simulation results show that the performance of the proposed HBF method can approach that of full DBF using lower-resolution phase shifters.
Pedestrian trajectory prediction plays an important role in both advanced driving assistance systems (ADAS) and autonomous vehicles. An algorithm for pedestrian trajectory prediction in crossing scenarios is proposed. To obtain features of pedestrian motion, we develop a method for data labelling and pedestrian body orientation regression. Using the hierarchical features as the domain of discourse, fuzzy logic rules are built to describe the transitions between different pedestrian states and motion models. With the derived probability of each type of motion model, we further predict the pedestrian trajectory over the next 1.5 s using a switching Kalman filter (KF). The proposed algorithm is verified on our dataset, and the results indicate that it successfully predicts the pedestrian's crossing behavior 0.4 s before the pedestrian moves. Meanwhile, the precision of the predicted trajectory surpasses that of other methods, including the interacting multi-model KF and the dynamic Bayesian network (DBN).
In order to improve the learning speed and reduce the computational complexity of the twin support vector hypersphere (TSVH), this paper presents a smoothed twin support vector hypersphere (STSVH) based on the smoothing technique. STSVH generates two hyperspheres, each covering as many samples as possible from its own class. Additionally, STSVH only solves a pair of unconstrained differentiable quadratic programming problems (QPPs) rather than a pair of constrained dual QPPs, which makes STSVH faster than TSVH. Exploiting the differentiability of STSVH, a fast Newton-Armijo algorithm is used to solve it. Numerical experiments on normally distributed clustered (NDC) datasets as well as University of California Irvine (UCI) data sets indicate the significant advantages of the proposed STSVH in terms of efficiency and generalization performance.
The concept of dense small cells has recently emerged as a promising architecture that can significantly improve spectrum efficiency and system capacity. However, it brings frequent handovers for user equipment (UE), which in turn imposes a great deal of signaling overhead on the core network. Virtualization technology has received widespread attention for solving this problem; its essence is to form virtual cells by clustering various terminals properly. The recently proposed local mobility management is based on this virtualization technology, so the formation of virtual cells is the basis for research on local mobility management. A clustering scheme for dense small cell networks is therefore studied in this paper, and a maximum-benefit merging algorithm based on an undirected weighted graph is proposed. X2 interfaces exist between the cluster head and each cluster member within the same cluster, and the cluster heads, acting as local anchors, manage handovers among the cluster members. The proposed clustering scheme is useful for local mobility management. Simulation results show that the proposed clustering algorithm can decrease the signaling overhead by more than 70% and 20% compared with the non-clustering algorithm and other clustering algorithms, respectively.
Aiming at the problem that increasingly popular network applications and more complicated network traffic pose a big challenge to current Trojan detection techniques, the communication behavior of remote access Trojans (RATs) is analyzed, the different performance of traffic features in different communication sub-periods is discussed, and an integrated Trojan detection model based on period feature statistics is presented. Feature statistics based on the sub-periods and the whole session (WS) respectively can increase the separability and classification ability of the traffic features. The weighted integrated classifier can make full use of each base classifier's advantages and compensate for their weaknesses, thereby strengthening the system's detection and generalization capability. Experimental results show that this system can effectively recognize Trojan traffic among many kinds of normal traffic.
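A hedged sketch of the integration step only: each base classifier (trained on features from one sub-period or the whole session) outputs a Trojan probability, and the final decision is a weighted vote. The weights and the 0.5 decision threshold are illustrative assumptions, not the paper's weighting rule.

```python
def weighted_integrated_decision(base_scores, weights, threshold=0.5):
    """Combine per-sub-period classifier scores into one Trojan/normal decision.

    base_scores: list of probabilities in [0, 1], one per base classifier
    weights:     non-negative weights, one per base classifier
    """
    total = sum(weights)
    fused = sum(w * s for s, w in zip(base_scores, weights)) / total
    return "trojan" if fused >= threshold else "normal"
```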
Soft margin support vector machine (SVM) with hinge loss function is an important classification algorithm, which has been widely used in image recognition, text classification and so on. However, solving soft margin SVM with hinge loss function generally entails the sub-gradient projection algorithm, which is very time-consuming when processing big training data set. To achieve it, an efficient quantum algorithm is proposed. Specifically, this algorithm implements the key task of the sub-gradient projection algorithm to obtain the classical sub-gradients in each iteration, which is mainly based on quantum amplitude estimation and amplification algorithm and the controlled rotation operator. Compared with its classical counterpart, this algorithm has a quadratic speedup on the number of training data points. It is worth emphasizing that the optimal model parameters obtained by this algorithm are in the classical form rather than in the quantum state form. This enables the algorithm to classify new data at little cost when the optimal model parameters are determined.
The difficulty of bumblebee data collecting and the laborious nature of bumblebee data annotation sometimes result in a lack of training data, which impairs the effectiveness of deep learning based counting methods. Given that it is challenging to produce the detailed background information in the generated bumblebee images using current data augmentation methods, in this paper, a joint multi-scale convolutional neural network and multi-channel attention based generative adversarial network (MMGAN) is proposed. MMGAN generates the bumblebee image in accordance with the corresponding density map marking the bumblebee positions. Specifically, the multi-scale convolutional neural network ( CNN) module utilizes multiple convolution kernels to completely extract features of different scales from the input bumblebee image and density map. To generate various targets in the generated image, the multi-channel attention module builds numerous intermediate generation layers and attention maps. These targets are then stacked to produce a bumblebee image with a specific number of bumblebees. The proposed model obtains the greatest performance in bumblebee image generating tasks, and such generated bumblebee images considerably improve the efficiency of deep learning based counting methods in bumblebee counting applications.
Special Topic: Digital Human
Few-shot named entity recognition (NER) aims to identify named entities in new domains using a limited amount of annotated data. Previous methods divided this task into entity span detection and entity classification, achieving good results. However, these methods are limited by the imbalance between entity and non-entity categories due to the use of sequence labeling for entity span detection. To this end, a point-proto network (PPN) combining pointer and prototypical networks was proposed. Specifically, the pointer network generates the positions of entities in sentences in the entity span detection stage. The prototypical network builds semantic prototypes of entity types and classifies entities based on their distance from these prototypes in the entity classification stage. Moreover, the low-rank adaptation (LoRA) fine-tuning method, which freezes the pre-trained weights and injects a trainable decomposition matrix, reduces the parameters that need to be trained and saved. Extensive experiments on the few-shot NER dataset (Few-NERD) and Cross-Dataset demonstrate the superiority of PPN in this domain.
To improve the accuracy of network security situation prediction, an automatic security situation prediction model based on accumulative data preprocessing and a support vector machine (SVM) optimized by the covariance matrix adaptation evolution strategy (CMA-ES) is proposed. The proposed model adopts the SVM, which has strong nonlinear modeling ability, and the hyperparameters of the SVM are optimized through CMA-ES, which performs well at automatic optimization. Considering the irregularity of the network security situation values, the original sequence is accumulated so that the internal regularities of the discrete data can be revealed and the data become easier to model. Simulation experiments show that the proposed model has faster convergence speed and higher prediction accuracy than other existing prediction models.
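A minimal sketch of the accumulative preprocessing step, assuming it is the usual accumulated generating operation (running partial sums) with the inverse applied to map predictions back to raw situation values; the paper may use a different accumulation variant.

```python
import numpy as np

def accumulate(raw_series):
    """Accumulated series: partial sums of the raw situation values,
    which smooths irregular data before fitting the SVM."""
    return np.cumsum(np.asarray(raw_series, dtype=float))

def restore(predicted_accumulated, previous_accumulated):
    """Recover a raw prediction by differencing consecutive accumulated values."""
    return predicted_accumulated - previous_accumulated

# Usage: accumulate, fit a model on the smoothed series, then difference back.
acc = accumulate([0.31, 0.12, 0.55, 0.40])
print(acc)                      # [0.31 0.43 0.98 1.38]
print(restore(acc[-1], acc[-2]))  # 0.40
```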
In order to overcome the poor generalization ability and low accuracy of traditional network traffic prediction methods, a prediction method based on an error minimized extreme learning machine (EM-ELM) optimized by an improved artificial bee colony (ABC) algorithm is proposed. EM-ELM has good generalization ability, but many useless neurons in EM-ELM have little influence on the final network output and reduce the efficiency of the algorithm. Based on EM-ELM, an improved ABC algorithm is introduced to optimize the parameters of the hidden layer nodes and decrease the number of useless neurons, which reduces the network complexity and improves the efficiency of the algorithm. The stability and convergence of the proposed prediction method are proved. The proposed method is then applied to network traffic prediction, with actually collected network traffic used as the research object in the simulation. Compared with other prediction methods, the simulation results show that the proposed method reduces the training time of the prediction model, decreases the number of hidden layer nodes, and achieves higher prediction accuracy and more reliable performance.
Paxos is a well-known distributed algorithm that provides strong consistency. However, the original Paxos has several shortcomings, including slow elections, redundant communications, and excessive traffic at the coordinator node. In order to tackle these deficiencies, an advanced edition of Paxos (Adv Paxos) was designed, a new distributed consensus algorithm derived from Basic Paxos. This paper analyzes the behavior of each role of the original algorithm during each of its phases. By optimizing the behavior of the proposer and acceptor, a series of behavioral optimization measures was proposed, including distance-related waiting mechanisms, optimization of the number of proposals, self-learning, and a reduction in broadcast communications. Through theoretical analysis and experimentation, it is demonstrated that the new algorithm has a lower probability of livelock without a reduction in reliability, reaches agreement faster, incurs lower communication costs among server clusters, and achieves a higher percentage of successful proposals.
Special Topic: Cultural Computing
In the past decade, many approaches to rough line drawing simplification have been proposed, but they have not been well summarized, especially from the perspective of Chinese cultural computing. In this paper, a comprehensive review of existing line drawing simplification methods is presented, including their algorithms, advantages/disadvantages, inputs/outputs, datasets, and source codes. For raster line drawings, related simplification work is discussed according to four main categories: fitting-based methods, tracing-based methods, field-based methods, and learning-based methods. For vector line drawings, a deep investigation is provided for the two major steps of simplification: stroke grouping and stroke merging. Finally, conclusions are given, and the key challenges and future directions of line drawing simplification for Chinese traditional art are thoroughly discussed.
In order to solve the problem of the low accuracy of traditional fixed-window-width kernel density estimation (KDE) in radar cross section (RCS) statistical characteristics analysis, an improved Epanechnikov KDE (K-KDE) algorithm was proposed to analyze the statistical characteristics of the engine's backward RCS. Firstly, the K-nearest neighbor method was used to calculate the dynamic window width of the K-KDE: the Euclidean distance between adjacent samples was used to judge the local density of each sample, and the window width of the kernel function was then adjusted by the distance between the sample point and its nearest neighbors to complete the KDE. Secondly, based on the K-KDE and the traditional KDE algorithm, the cumulative probability density function (CPDF) of four RCS random distribution sample sets with fixed parameters was calculated. The results showed that the root mean square error of the K-KDE was reduced by 31.2%, 38.8%, 38.1%, and 31.9%, respectively, compared with the KDE. Finally, the K-KDE combined with second-generation statistical analysis models was used to analyze the statistical characteristics of the engine's backward RCS.
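A minimal one-dimensional sketch of the adaptive-bandwidth idea: each sample gets its own window width equal to its distance to the k-th nearest neighbour, and the Epanechnikov kernel is evaluated with these per-sample widths. The choice of k and the exact window-width rule are assumptions; the paper's formulation may differ.

```python
import numpy as np

def knn_bandwidths(samples, k=10):
    """Per-sample bandwidth = distance to the k-th nearest neighbour (1-D)."""
    s = np.asarray(samples, dtype=float)
    dist = np.abs(s[:, None] - s[None, :])
    # Column 0 of the sorted distances is the sample itself (distance 0).
    return np.sort(dist, axis=1)[:, k]

def k_kde(x, samples, k=10):
    """Epanechnikov KDE at point x with k-nearest-neighbour adaptive bandwidths."""
    s = np.asarray(samples, dtype=float)
    h = knn_bandwidths(s, k)
    u = (x - s) / h
    kernel = 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1.0)
    return float(np.mean(kernel / h))

# Usage: estimate the density of a synthetic RCS-like sample at one point.
rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.5, size=500)
print(k_kde(1.0, data, k=15))
```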
In heterogeneous networks (HetNets), it is desirable to offload users from macro cells to small cells to achieve load balancing. However, the offloaded users suffer strong inter-tier interference, so the interference from macro cells should be carefully managed to guarantee their performance. In this paper, we jointly optimize load balancing and interference coordination in multi-antenna HetNets. Different from previous works that use almost blank subframes (ABS), on which the macro cells waste time resources, the macro cells suppress the interference to the offloaded users by zero-forcing beamforming (ZFBF) on interference nulling subframes (INS). Considering that user association cannot be conducted frequently, we derive the long-term throughput of users over Rayleigh fading channels, whereas previous works focused on the instantaneous rate. From the perspective of spectrum efficiency and user fairness, we formulate a long-term network-wide utility maximization problem. By decomposing the problem into two subproblems, we propose an efficient joint load balancing and interference coordination strategy. Simulation results show that our proposal achieves good system performance gains over its counterparts in terms of network utility, cell-edge throughput, and average throughput.
Information fusion is a key step in multimodal biometric systems. Feature-level fusion is more effective than score-level and decision-level methods because the original feature set contains richer information about the biometric data. In this paper, we present a multiset generalized canonical discriminant projection (MGCDP) method for feature-level multimodal biometric information fusion, which maximizes the correlation of intra-class features while minimizing the correlation of between-class features. In addition, serial MGCDP (S-MGCDP) and parallel MGCDP (P-MGCDP) strategies are also proposed, which can fuse more than two kinds of biometric information so as to achieve a better identification effect. Experiments performed on various biometric databases show that the MGCDP method outperforms other state-of-the-art feature-level information fusion approaches.
Attacks on web servers are among the most serious threats in network security. Analyzing the logs of web attacks is an effective approach to identifying malicious behavior. Traditionally, machine learning models based on labeled data are popular identification methods, and some deep learning models have also recently been introduced for classifying web logs. However, such models are limited by the amount of labeled data available for training, since web logs labeled with specific categories are difficult to obtain. Consequently, it is necessary to address the data generation problem, learning feature representations similar to those of the original data so as to improve the accuracy of the classification model. In this paper, a novel framework is proposed that differs in two important aspects: first, long short-term memory (LSTM) is incorporated into generative adversarial networks (GANs) to generate logs of web attacks; second, a data augmentation model is proposed that adds the GAN-generated web attack logs to the original dataset, improving the performance of the classification model. The experimental results demonstrate the effectiveness of the proposed method, which improved the classification accuracy from 89.04% to 95.04%.
Although the convolutional neural network (CNN) is mature in many domains, the understanding of the directions toward which the parameters of CNNs are learned falls behind, and the functions that convolutional networks (ConvNets) learn are difficult to explore. A method is proposed to guide ConvNets to learn toward an expected direction. First, to facilitate network convergence, a novel feature enhancement framework, namely the enhancement network (EN), is devised to learn parameters according to certain rules. Second, two types of hand-crafted rules, namely feature-sharpening (FS) and feature-amplifying (FA), are proposed to enable effective ENs and are embedded into the CNN for end-to-end learning. Specifically, the former sharpens convolutional features and the latter amplifies convolutional features linearly; both aim at achieving a stronger inductive bias and more straightforward loss functions. Finally, experiments are conducted on the mixed National Institute of Standards and Technology (MNIST) and Canadian Institute for Advanced Research 10 (CIFAR-10) datasets. Experimental results demonstrate that ENs achieve faster convergence by formulating hand-crafted rules.
A critical issue in mobile crowdsensing (MCS) is selecting appropriate users from a number of participants to guarantee the completion of a sensing task. Users may upload unnecessary data to the sensing platform, leading to redundancy and low user selection efficiency. Furthermore, using exact values to evaluate the quality of a user-union further reduces selection accuracy when users form a union. This paper proposes a user selection method based on user-unions and relative entropy in MCS. More specifically, a user-union matching scheme based on similarity calculation is constructed to form user-unions and reduce data redundancy effectively. Then, considering the interval-valued influence, a user-union selection strategy with the lowest relative entropy is proposed. Extensive testing was conducted to investigate the impact of various parameters on user selection. The results obtained are encouraging and provide essential insights into the different aspects impacting the data redundancy and interval-valued estimation of MCS user selection.
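A hedged sketch of the selection criterion only: each candidate user-union is summarized by a discrete quality distribution, and the union whose distribution has the lowest relative entropy (KL divergence) with respect to an ideal distribution is selected. The paper's interval-valued formulation is more involved; the distributions and the smoothing constant here are illustrative assumptions.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def select_union(union_distributions, ideal_distribution):
    """Pick the user-union whose quality distribution is closest to the ideal.
    union_distributions: {union_name: distribution over quality levels}."""
    return min(union_distributions,
               key=lambda u: relative_entropy(union_distributions[u], ideal_distribution))

# Usage with three hypothetical unions and an ideal 'mostly high quality' profile.
unions = {"U1": [0.1, 0.3, 0.6], "U2": [0.4, 0.4, 0.2], "U3": [0.2, 0.2, 0.6]}
print(select_union(unions, ideal_distribution=[0.05, 0.15, 0.8]))
```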
Great challenges and demands are posed by the increasing edge computing services of the current power Internet of things (Power IoT), which must deal with the serious diversity and complexity of these services. To improve the matching degree between edge computing and complex services, a service identification function is necessary for the Power IoT. In this paper, a naive long short-term memory (Naive-LSTM) based service identification scheme for edge computing devices in the Power IoT is proposed, where the Naive-LSTM model uses the most simplified structure and discretizes the long short-term memory (LSTM) model. Moreover, the Naive-LSTM based service identification scheme generates probability outputs to determine the task scheduling policy of the Power IoT. After sufficient training, the Naive-LSTM classification engine modules in the edge computing devices of the Power IoT can perform service identification by extracting key characteristics from various service traffic. Testing results show that the Naive-LSTM based service identification scheme is feasible and efficient in improving the edge computing ability of the Power IoT.
To detect uncorrectable frames and terminate the decoding procedure early, a probability stopping criterion for iterative analog decoding of low density parity check (LDPC) codes is proposed in this paper. By using probabilities of satisfied checks to detect uncorrectable frames and terminate decoding, the proposed criterion could be applied to analog decoders without much structure modifications. Simulation results show that the proposed criterion can reduce the average number of iterations and achieve a better balance in bit error ratio (BER) performance and decoding complexity than other stopping criteria using extrinsic information.
Owing to the unreliability of wireless link and the resource constraints of embedded devices in terms of energy, processing power, and memory size in low power and lossy networks (LLNs), network congestion may occur in an emergency and lead to significant packet loss and end-to-end delay. To mitigate the effect of network congestion, this paper proposes a centralized congestion control routing protocol based on multi-metrics (CCRPM). It combines the residual energy of a node, buffer occupancy rate, wireless link quality, and the current number of sub-nodes for the candidate parent to reduce the probability of network congestion in the process of network construction. In addition, it adopts a centralized way to determine whether the sub-nodes of the congested node need to be switched based on the traffic analysis when network congestion occurs. Theoretical analysis and extensive simulation results show that compared with the existing routing protocol, the performance of CCRPM is improved significantly in reducing the probability of network congestion, prolonging average network lifetime, increasing network throughput, and decreasing end-to-end delay.
Using relay in the wireless communication network is an efficient way to ensure the data transmission to the distant receiver. In this paper, a dynamic power control approach (DPC) is proposed for the amplify-and-forward (AF) relay-aided downlink transmission scenario based on deep reinforcement learning (DRL) to reduce the co-channel interference caused by spectrum sharing among different nodes. The relay works in a two-way half-duplex (HD) mode. Specifically, the power control of the relay is modeled as a Markov decision process (MDP) and the sum rate maximization of the network is formulated as a DRL problem. Simulation results indicate that the proposed method can significantly improve the system sum rate.
A 12-bit 2.6 GS/s radio frequency digital-to-analog converter (RF DAC) based on a 1 μm GaAs heterojunction bipolar transistor (HBT) process is presented. The DAC integrates a 4:1 multiplexer to reduce the data rate of the input ports, which greatly facilitates its application. The DAC core adopts a 4+8 segmented current-steering structure: an R-2R ladder network is used for the 8 least significant bits (LSBs) to realize binary current weighting, and thermometer coding is used for the 4 most significant bits (MSBs). Return-to-zero (RZ) technology is used to extend the effective bandwidth of the DAC output to the third Nyquist band. The proposed DAC has better output power flatness and spurious-free dynamic range (SFDR). Compared to a traditional DAC, measured results demonstrate that the output power of this RZ DAC is increased by 33 dB and the SFDR is enhanced by 27 dB near the second Nyquist band.
In heterogeneous wireless networks with time-varying channels, the video rate is usually adjusted based on the network bandwidth to guarantee ultra-low-latency video transmission under an end-to-end target delay constraint. However, a target delay fixed according to historical experience cannot continuously guarantee video quality, since the wireless network bandwidth changes rapidly, especially when the network deteriorates. An alternative scheme is to dynamically set the target delay according to the network status within an acceptable delay range, but this alone cannot be ensured in heterogeneous wireless networks with time-varying channels. To address this issue, a multi-objective optimization algorithm for the joint optimization of rate control and target delay is proposed, in which the target delay and video rate are jointly and dynamically adjusted. To reduce the optimization complexity due to the multi-objective and multi-parameter characteristics, the multi-objective optimization problem is decomposed and solved by optimizing each independent sub-problem. Finally, the proposed algorithm is verified on a semi-physical simulation platform. Experiments show that the frame loss rate is reduced from 6.65% to 2.06%, and a peak signal-to-noise ratio (PSNR) gain of 18.32% is obtained when the network performance is low.
Data islands and information opacity are two major problems in collaborative administration. Blockchain has the potential to provide a trustable and transparent environment that encourages data sharing among administration members. However, the blockchain only stores hash values and transactions in blocks, which makes it unable to store big data or trace their changes. In this paper, a labor arbitration scheme based on blockchain is proposed to share labor arbitration data. In the system, a collaborative administration scheme providing a big data storage model that combines blockchain and the interplanetary file system (IPFS) is designed; it can store big data and share these data among different parties. Moreover, a file version control mechanism based on blockchain is designed to manage data changes in the IPFS network. It creates a tracing chain consisting of many IPFS objects to track the changes of stored data. The relationship between previous and current IPFS objects recorded on the blockchain describes the changes of administration data and traces the data operations. The proposed platform has been deployed in Rizhao City, China, and the experimental results show that the collaborative administration scheme achieves traceability with high throughput and is more efficient than the traditional hypertext transfer protocol (HTTP) way of sharing data.
Special Topic: Data Security and Privacy Preservation in Cloud/Fog/Edge-Enabled Internet of Things
With the development of Internet of things (IoT), more and more intelligent terminal devices outsource data to cloud servers (CSs). However, the CS is not fully trusted, and the heterogeneity among different domains makes it difficult for third-party auditor (TPA) to conduct an efficient integrity auditing of outsourced data. Therefore, the cross-domain data cloud storage auditing scheme based on certificateless cryptography is proposed, which can effectively avoid the big burden of certificate management or key escrow problems in identity-based cryptography. At the same time, TPA can effectively audit the integrity of outsourced data in different domains. Formal security proof and analysis show that the cloud storage auditing scheme satisfies the security and privacy requirements. Performance analysis demonstrates that the efficiency is acceptable.
Special Topic: Optical Communication and Artificial Intelligence
In current research on intensity-modulation and direct-detection optical orthogonal frequency division multiplexing (IMDD-OOFDM) systems, effective channel compensation is a key factor for improving system performance. In order to improve the efficiency of channel compensation, a deep learning-based symbol detection algorithm is proposed in this paper for the IMDD-OOFDM system. Firstly, a training-sequence-based symbol synchronization algorithm for high-speed data streams is used to ensure accurate symbol synchronization. Then, the traditional channel estimation and channel compensation are replaced by an echo state network (ESN) to restore the transmitted signal. Finally, we collect data from the system experiment and calculate the signal-to-noise ratio (SNR). The analysis of the SNR optimized by the ESN proves that the ESN-based symbol detection algorithm is effective in compensating nonlinear distortion.
In intelligent education, most student-oriented learning path recommendation algorithms are based on either collaborative filtering methods or a 0-1 scoring cognitive diagnosis model. Unfortunately, they fail to provide a detailed report of the students' mastery of knowledge and skills or to explain the recommendation results. In addition, they are unable to offer realistic learning path recommendations based on students' learning progress. A knowledge graph based memory recommendation algorithm (KGM-RA) was proposed to solve these problems. On the one hand, KGM-RA provides more accurate diagnostic information by continuously fitting the students' knowledge and skill proficiency vector (SKSV) in a multi-level scoring cognitive diagnosis model. On the other hand, it proposes the forgetting recall degree (FRD) according to statistical results on the human forgetting phenomenon, and calculates closeness centrality in the knowledge graph to achieve a recommended recall effect consistent with human forgetting. Experiments show that KGM-RA obtains realistic learning path recommendations for students, provides an adjustable FRD, and has better reliability and interpretability.
Traffic congestion occurs frequently in urban areas, while most existing solutions only take effect after congestion has occurred. In this paper, a congestion warning method is proposed based on the Internet of vehicles (IOV) and community discovery in complex networks. The communities in the complex network model of traffic flow reflect the local aggregation of vehicles in the traffic system, and they are used to predict upcoming congestion. The real-time information of vehicles on the roads, including their locations, speeds, and orientations, is obtained from the IOV. The vehicles are then mapped into nodes of a network, and the links between nodes are determined by the correlations between vehicles in terms of location and speed; the complex network model of traffic flow is hereby established. The communities in this complex network are discovered by the fast Newman (FN) algorithm, and congestion warnings are generated according to the communities selected by scale and density. This method can detect the tendency of traffic aggregation and provide warnings before congestion occurs. Simulations show that the proposed method is effective and practicable, and makes it possible to take action before traffic congestion occurs.
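A minimal sketch of the graph-building and community steps under stated assumptions: vehicles are linked when they are close in position and similar in speed, and large communities are flagged as congestion candidates. It uses the greedy modularity (Clauset-Newman-Moore) routine from networkx as a stand-in for the fast Newman algorithm named in the abstract; the distance and speed thresholds, data format, and minimum community size are all illustrative.

```python
import math
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_traffic_graph(vehicles, dist_thr=50.0, speed_thr=5.0):
    """Link vehicles that are close in position and similar in speed.
    vehicles: {vehicle_id: ((x, y), speed)} as reported through the IOV."""
    g = nx.Graph()
    g.add_nodes_from(vehicles)
    ids = list(vehicles)
    for i, a in enumerate(ids):
        (xa, ya), sa = vehicles[a]
        for b in ids[i + 1:]:
            (xb, yb), sb = vehicles[b]
            if math.hypot(xa - xb, ya - yb) <= dist_thr and abs(sa - sb) <= speed_thr:
                g.add_edge(a, b)
    return g

def congestion_candidates(g, min_size=8):
    """Communities large enough to trigger a congestion warning."""
    return [set(c) for c in greedy_modularity_communities(g) if len(c) >= min_size]
```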
Face recognition has been a hot-topic in the field of pattern recognition where feature extraction and classification play an important role. However, convolutional neural network (CNN) and local binary pattern (LBP) can only extract single features of facial images, and fail to select the optimal classifier. To deal with the problem of classifier parameter optimization, two structures based on the support vector machine (SVM) optimized by artificial bee colony (ABC) algorithm are proposed to classify CNN and LBP features separately. In order to solve the single feature problem, a fusion system based on CNN and LBP features is proposed. The facial features can be better represented by extracting and fusing the global and local information of face images. We achieve the goal by fusing the outputs of feature classifiers. Explicit experimental results on Olivetti Research Laboratory (ORL) and face recognition technology (FERET) databases show the superiority of proposed approaches.
The training efficiency and test accuracy are important factors in judging the scalability of distributed deep learning. In this dissertation, the impact of noise introduced into the mixed National Institute of Standards and Technology database (MNIST) and CIFAR-10 datasets, which are selected as benchmarks in distributed deep learning, is explored. The noise in the training set is manually divided into cross-noise and random noise, and each type of noise has a different ratio in the dataset. Under the premise of minimizing the influence of parameter interactions in distributed deep learning, we choose a compressed model (SqueezeNet) together with the proposed flexible communication method, which reduces the communication frequency, and evaluate the influence of noise on distributed deep training under the synchronous and asynchronous stochastic gradient descent algorithms. On the experimental platform TensorFlowOnSpark, we obtain the training accuracy at different noise ratios and the training time for different numbers of nodes. Cross-noise in the training set not only decreases the test accuracy but also increases the time for distributed training, and the noise further undermines the scalability of distributed deep learning.
The new applications surge with the rapid evolution of the mobile communications. The explosive growth of the data traffic aroused by the new applications has posed great computing pressure on the local side. It is essential to innovate the computation offloading methods to alleviate the local computing burden and improve the offloading efficiency. Mobile edge computing (MEC) assisted by reflecting intelligent surfaces (RIS)-based unmanned aerial vehicle (UAV) is a promising method to assist the users in executing the computation tasks in proximity at low cost. In this paper, we propose an energy-efficient MEC system assisted by RIS-based UAV, where the UAV with RIS mounted relays the computation tasks to the MEC server. The energy efficiency maximization problem is formulated by jointly optimizing the UAV's trajectory, the transmission power of all users, and the phase shifts of the reflecting elements placed on the UAV. Considering that the optimization problem is non-convex, we propose a deep deterministic policy gradient (DDPG)-based algorithm. By combining the DDPG algorithm with the energy efficiency maximization problem, the optimization problem can be resolved. Finally, the numerical results are illustrated to show the performance of the system and the superiority compared with the benchmark schemes.
Software module clustering divides a complex software system into many subsystems to enhance the intelligibility and maintainability of the software. To increase the convergence speed and optimize the clustering solution, a density PSO-based (DPSO) software module clustering algorithm is proposed. Firstly, the software system is converted into a complex network diagram, and then the particle swarm optimization (PSO) algorithm is improved: the shortest path method is used to initialize the swarm and a probability selection approach is used to update the particle positions. Furthermore, a density-based modularization quality (DMQ) function is designed to evaluate the clustering quality. Five typical open source projects are selected as benchmark programs to verify the efficiency of the DPSO algorithm. The hill climbing (HC) algorithm, genetic algorithm (GA), PSO, and DPSO algorithm are compared in terms of modularization quality (MQ) and DMQ. The experimental results show that DPSO is more stable and converges better than the other three traditional algorithms, and that the DMQ criterion is more reasonable than the MQ criterion in guiding software module clustering.
Computer Applied Technology
Test case prioritization (TCP) technique is an efficient approach to improve regression testing activities. With the continuous improvement of industrial testing requirements, traditional single-objective TCP is limited greatly, and multi-objective test case prioritization (MOTCP) technique becomes one of the hot topics in the field of software testing in recent years. Considering the problems of traditional genetic algorithm (GA) and swarm intelligence algorithm in solving MOTCP problems, such as falling into local optimum quickly and weak stability of the algorithm, a MOTCP algorithm based on multi-population cooperative particle swarm optimization (MPPSO) was proposed in this paper. Empirical studies were conducted to study the influence of iteration times on the proposed MOTCP algorithm, and compare the performances of MOTCP based on single-population particle swarm optimization (PSO) and MOTCP based on non-dominated sorting genetic algorithm II (NSGA-II) with the MOTCP algorithm proposed in this paper. The results of experiments show that the test case prioritization algorithm based on MPPSO has stronger global optimization ability, is not easy to fall into local optimum, and can solve the MOTCP problem better than test case prioritization algorithm based on the single-population PSO and NSGA-II.
Moving data from the cloud to the edge network can effectively reduce the traffic burden on the core network, and edge collaboration can further improve the edge caching capacity and the quality of service (QoS). However, it is difficult for various edge caching devices to cooperate due to the lack of trust and the existence of malicious nodes. In this paper, blockchain, with its distributed and immutable characteristics, is utilized to build a trustworthy collaborative edge caching scheme that makes full use of the storage resources of various edge devices. The collaboration process is described, and a proof of credit (PoC) protocol is proposed, in which credit and tokens are used to encourage nodes to cache and transmit more content honestly; untrusted nodes pay for malicious actions such as tampering with or deleting cached data. Since each node chooses its strategy independently to maximize its benefit in an environment of mutual influence, a non-cooperative game model is designed to study the caching behavior among edge nodes. The existence of a Nash equilibrium (NE) is proved for this game, so the edge server (ES) can choose the optimal caching strategy for all collaborative devices, including itself, to obtain the maximum reward. Simulation results show that the system saves mining overhead and organizes trusted collaborative edge caching effectively.
Complex Network Identification and Control
This paper aims at solving linear-quadratic optimal control problems (LQOCP) for time-varying descriptor systems in a real Hilbert space. By using Moore-Penrose inverse theory and a space decomposition technique, the descriptor system can be rewritten as a new differential-algebraic equation (DAE), and some novel sufficient conditions for the solvability of LQOCP are then obtained. In particular, the methods proposed in this work are simpler, easier to verify and compute, and can solve LQOCP without the range inclusion condition. In addition, some numerical examples are given to verify the obtained results.
Terahertz and Microwave Microsystem
This article presents the design and performance of a single-pole double-throw (SPDT) switch operating in 50–110 GHz. The switch is fabricated in a 100-nm GaN high-electron-mobility transistor (HEMT) technology. To realize high power capability, the dimensions of the GaN HEMTs are selected by simulation verification. To enhance the isolation, an improved shunt-HEMT structure with two ground holes is employed. To extend the operation bandwidth, an SPDT switch with multi-section resonant units is proposed and analyzed. To verify the design, a prototype operating in 50–110 GHz is fabricated. The measured results show that the fabricated SPDT switch monolithic microwave integrated circuit (MMIC) achieves an input 1 dB compression point (P1dB) of 38 dBm at 94 GHz and an isolation of 33 dB to 54 dB in 50–110 GHz. The insertion loss of the switch is less than 2.1 dB, while the voltage standing wave ratios (VSWR) of the input and output ports are both less than 1.8 over the operation bandwidth. Based on the measured results, the presented SPDT switch MMIC demonstrates high power capability and high isolation compared with other reported millimeter-wave SPDT MMIC designs.
To study the relationship between non-spherical atmospheric charged particles and the attenuation of satellite-ground quantum links, the relationships among the particle concentration, equivalent radius and charge density of the charged particles and the attenuation coefficient and entanglement of the satellite-ground quantum link are first established according to the extinction cross section and spectral distribution function of the non-spherical atmospheric charged particles. The quantitative relationship between atmospheric visibility and the communication fidelity of the satellite-ground quantum link is then analyzed. Simulation results show that the influence of ellipsoidal and Chebyshev atmospheric charged particles on the attenuation of the satellite-ground quantum link increases progressively. When the equivalent particle radius is 0.2 μm and the particle concentration is 50 μg/m3, the attenuation coefficients of the satellite-ground quantum link are 9.21 dB/km and 11.46 dB/km, and the entanglement values are 0.453 and 0.421, respectively. When the atmospheric visibility decreases from 8 km to 2 km, the communication fidelity of the satellite-ground quantum link decreases from 0.52 to 0.08. It is shown that non-spherical atmospheric charged particles and atmospheric visibility greatly influence the performance of the satellite-ground quantum link communication system. Therefore, it is necessary to adjust the parameters of the quantum-satellite communication system according to the atmospheric visibility and the shapes of the charged particles in the atmosphere to improve the reliability of the satellite-ground quantum link.
To improve the security of color image encryption, a color image encryption scheme based on chaotic systems is proposed. Firstly, the proposed scheme treats the color image as a three-dimensional matrix, which is scrambled by an affine transformation. Secondly, a Logistic chaotic sequence, applied to generate the control parameter and auxiliary key, is used to encrypt the three-dimensional matrix. Two encryption routes are considered. One is to generate a chaotic sequence with the Logistic map and Henon map and XOR it with the scrambled components R′, G′ and B′ respectively. The other is to adopt a binary Logistic sequence to select pixel positions in the scrambled components R′, G′ and B′, and then apply the Henon map and Logistic map with the auxiliary key to perform replacement encryption. On this basis, an encrypted image is synthesized. Simulation results show that the proposed image encryption scheme achieves better encryption and higher security performance.
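A minimal sketch of the first encryption route (a Logistic-map keystream XORed with a scrambled channel), assuming the initial value x0 and control parameter mu stand in for key material derived from the auxiliary key; the affine scrambling and Henon-map steps are omitted, and the parameters are illustrative only.

```python
import numpy as np

def logistic_keystream(x0, mu, length):
    """Generate a byte keystream from the Logistic map x_{n+1} = mu * x_n * (1 - x_n)."""
    x = x0
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = mu * x * (1.0 - x)
        stream[i] = int(x * 256) % 256   # quantize the chaotic state to one byte
    return stream

def xor_encrypt_channel(channel, x0, mu):
    """XOR one scrambled color channel (2-D uint8 array) with a chaotic keystream."""
    flat = channel.flatten()
    ks = logistic_keystream(x0, mu, flat.size)
    return np.bitwise_xor(flat, ks).reshape(channel.shape)

# toy usage on a random 4x4 "R prime" channel; decryption is the same XOR
r_scrambled = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
cipher = xor_encrypt_channel(r_scrambled, x0=0.3456, mu=3.99)
plain  = xor_encrypt_channel(cipher, x0=0.3456, mu=3.99)
assert np.array_equal(plain, r_scrambled)
```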
A geometry-based stochastic scattering model (GBSSM) based on geometrical multiple rings and ellipses is proposed for wideband multiple-input multiple-output (MIMO) mobile-to-mobile (M2M) fading channels. The proposed GBSSM is deployed with cross-polarized antennas and can be applied to line-of-sight (LOS) and non-LOS (NLOS) scenarios by considering single-bounced (SB) and double-bounced (DB) components. Compared with the overly complicated analytical solutions available so far, the channel realization is much more straightforward and concise for studying the channel characteristics. Based on the proposed GBSSM and the realized channel, the channel characteristics and parameters at 2 GHz and 5 GHz with 100 MHz bandwidth are further investigated. The results can be used in link- and system-level simulations of mobile-to-mobile radio systems.
The robust minimum class variance twin support vector machine (RMCV-TWSVM), presented previously, achieves better classification performance than the classical TWSVM by introducing the class variance matrices of the positive and negative samples into the construction of the two hyperplanes. However, it does not consider the total structure information of all the samples, which can substantially reduce its classification accuracy. In this paper, a new algorithm named structural regularized TWSVM based on within-class scatter and between-class scatter (WSBS-STWSVM) is put forward. The WSBS-STWSVM makes full use of the total within-class distribution information and the between-class structure information of all the samples. The experimental results illustrate the high classification accuracy and strong generalization ability of the proposed algorithm.
In this paper, resource allocation optimization for simultaneous wireless information and power transfer (SWIPT) full-duplex (FD) relaying networks is investigated, in which the power-constrained relay scavenges energy from the source signal and assists information transmission through FD operation. Taking into account the non-linear energy harvesting (EH) hardware circuit characteristics of the relay, an information rate maximization problem is formulated by jointly optimizing the time-switching (TS) factor and the transmission powers of the source in two different phases. The formulated problem is highly non-convex and difficult to solve, so it is decomposed into two sub-problems with respect to the TS factor and the transmission powers. After solving these two sub-problems, the final sub-optimal solution is obtained by alternating search. Simulation results show that jointly optimizing the TS factor and transmission powers effectively enhances the information rate of the considered networks.
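A hedged sketch of the alternating-search procedure described above; optimize_ts and optimize_powers stand in for the two sub-problem solvers, which are not specified in the abstract, and the rate function and toy solvers below are placeholders, not the paper's formulas.

```python
def alternating_search(rate, optimize_ts, optimize_powers,
                       ts0=0.5, p0=(1.0, 1.0), tol=1e-6, max_iter=100):
    """Alternate between the TS-factor sub-problem and the power sub-problem
    until the achievable rate stops improving (a sub-optimal solution)."""
    ts, powers = ts0, p0
    best = rate(ts, powers)
    for _ in range(max_iter):
        ts = optimize_ts(powers)        # sub-problem 1: fix powers, optimize TS factor
        powers = optimize_powers(ts)    # sub-problem 2: fix TS factor, optimize powers
        current = rate(ts, powers)
        if current - best < tol:        # converged: no significant rate improvement
            break
        best = current
    return ts, powers, best

# toy usage with placeholder solvers (purely illustrative)
toy_rate = lambda ts, p: ts * (1 - ts) * sum(p)
print(alternating_search(toy_rate,
                         optimize_ts=lambda p: 0.5,
                         optimize_powers=lambda ts: (1.0, 1.0)))
```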
It is of great value and significance, both commercially and sociologically, to model the interests of microblog users. This paper presents a framework for mining and analyzing personal interests from microblog text with a new algorithm that integrates term frequency-inverse document frequency (TF-IDF) with TextRank. Firstly, we build a three-tier category system of user interests based on Wikipedia. To obtain the keywords of interest, we preprocess the posts, comments and reposts in different categories and select the keywords that appear both in the category system and in the microblogs. We then assign a weight to each category and calculate the weight of each keyword to obtain TF-IDF factors. Finally, we score and rank each keyword with the TextRank algorithm using the TF-IDF factors. Experiments on real Sina microblog data demonstrate that our approach significantly outperforms existing methods in precision.
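One way to picture the TF-IDF plus TextRank combination is to bias a PageRank-style random walk over a keyword co-occurrence graph with TF-IDF factors. The sketch below is a simplified stand-in, assuming a networkx graph and personalized PageRank; the graph construction, category weighting and all data are illustrative, not from the paper.

```python
from collections import Counter
from math import log
import networkx as nx

def tfidf(term_counts, doc_freq, n_docs):
    """Simple TF-IDF factors for the candidate keywords."""
    total = sum(term_counts.values())
    return {t: (c / total) * log(n_docs / (1 + doc_freq.get(t, 0)))
            for t, c in term_counts.items()}

def tfidf_textrank(cooccurrence_edges, term_counts, doc_freq, n_docs):
    """Rank keywords with personalized PageRank, biased by TF-IDF factors."""
    g = nx.Graph()
    g.add_weighted_edges_from(cooccurrence_edges)
    factors = tfidf(term_counts, doc_freq, n_docs)
    personalization = {n: factors.get(n, 1e-6) for n in g.nodes}
    return nx.pagerank(g, personalization=personalization, weight="weight")

# toy usage with a tiny co-occurrence graph
edges = [("music", "concert", 2.0), ("music", "guitar", 1.0), ("concert", "ticket", 1.0)]
counts = Counter({"music": 5, "concert": 3, "guitar": 2, "ticket": 1})
print(tfidf_textrank(edges, counts, doc_freq={"music": 10, "concert": 4}, n_docs=100))
```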
As a key technology of fifth generation (5G) wireless communications, the sparse code multiple access (SCMA) system has quite high frequency utilization, but its message passing algorithm (MPA) decoder still has high time complexity. With the aid of the proposed multi-level dynamic thresholds, the complexity of MPA decoding can be greatly reduced with little error performance loss. To remove a great deal of insignificant computation from the message update, the multi-level symbol probability products are compared with the optimized multi-level thresholds step by step before they are used in the message update calculation. The dynamic threshold configuration depends on three factors: the level of the input symbol probabilities, the signal-to-noise ratio (SNR) and the number of iterations. In particular, in the joint iterative MPA-Turbo decoding procedure, since most encoded bits converge well, the input thresholds avoid even more unnecessary message-update computation and reduce the decoding time more significantly. The simulation results show that the proposed multi-level dynamic thresholds considerably reduce the decoding delay in both the additive white Gaussian noise (AWGN) channel and a frequency-selective fading channel.
Color image enhancement is an active research field in image processing. Many current image enhancement methods are capable of enhancing the details of a color image, but they process the red, green and blue (RGB) color channels separately, which easily leads to color distortion. To overcome this problem, this paper integrates quaternion theory into the traditional guided filter to obtain a quaternion guided filter (QGF). This method makes full use of the color information of an image to realize holistic processing of the RGB color channels. To preserve color information while enhancing details, this paper further proposes a color image detail enhancement algorithm based on the QGF. Experimental results show that the proposed algorithm is effective for color image detail enhancement, making image edges more prominent and textures clearer while avoiding color distortion. Compared with existing image enhancement methods, the proposed method achieves better enhancement performance in terms of visual quality and objective evaluation indicators.
For complex networks, effectiveness and invulnerability are extremely important, and how to evaluate them has become an important research topic as complex networks develop. The relationship among the many influencing factors is complicated, so it is essential to determine the weighting coefficients of these factors. Principal component analysis (PCA) is proposed to evaluate the performance of complex networks. It can overcome the one-sidedness of a single evaluation index and select different evaluation models for different complex networks, which makes the evaluation result more accurate. The performance of complex networks can then be predicted according to the comprehensive evaluation model. To verify the rationality and validity of this method, several small-world networks with different probability values and a scale-free network are chosen for evaluation. Simulation results show that PCA can be applied to the performance evaluation of complex networks.
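A hedged sketch of how PCA could fold several network indicators into one comprehensive score: components are weighted by their explained-variance ratios. The indicator names and numbers below are illustrative placeholders, not data from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows: candidate networks; columns: illustrative indicators
# (e.g., average degree, clustering coefficient, efficiency, largest-component ratio)
indicators = np.array([
    [4.0, 0.30, 0.45, 0.98],
    [6.0, 0.10, 0.55, 0.95],
    [4.5, 0.25, 0.40, 0.99],
    [8.0, 0.05, 0.60, 0.90],
])

X = StandardScaler().fit_transform(indicators)   # remove the scale effects of the indicators
pca = PCA()
scores = pca.fit_transform(X)

# weight each principal component by its explained-variance ratio
# to obtain a single comprehensive evaluation score per network
weights = pca.explained_variance_ratio_
comprehensive = scores @ weights
print(comprehensive)
```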
Special Topic: Artificial Intelligence of Things
To improve the robustness and efficiency of radio frequency identification (RFID) networks, a random mating mayfly algorithm (RMMA) is proposed. Firstly, RMMA introduces a random mating mechanism into the mayfly algorithm (MA), which improves population diversity, enhances the exploration ability of the algorithm in the early stage, and finds a better solution to the RFID network planning (RNP) problem. Secondly, since tags in RNP are usually placed near the boundaries of the working space, a minimum boundary mutation strategy is proposed so that mayflies that fly beyond the boundary keep their original search direction, enhancing the ability to search near the boundary. Lastly, to measure the performance of RMMA, the algorithm is benchmarked on three well-known classic test functions, and the results are verified by a comparative study with particle swarm optimization (PSO), grey wolf optimization (GWO) and MA. The results show that RMMA provides very competitive results compared with these well-known meta-heuristics. RMMA is also applied to solve RNP problems, and the performance evaluation shows that it achieves higher coverage than the other three algorithms. With the same number of readers, RMMA obtains lower interference and better load balance in each instance than the other algorithms, and it can solve the RNP problem stably and efficiently when the number and positions of tags change over time.
Caching popular content in the storage of small cells is deemed an efficient way to decrease latency, offload backhaul and satisfy users' demands. To investigate the performance of cache-enabled small cell networks, the coverage probability is studied in both single-point transmission and cooperative multipoint (CoMP) transmission scenarios, and caching distributions modeled as Zipf and uniform distributions are both considered. Assuming that small base stations (SBSs) are distributed as a homogeneous Poisson point process (HPPP), closed-form expressions of the coverage probability are derived for the different transmission cases. Simulation results show that CoMP transmission achieves a higher coverage probability than single-point transmission. Furthermore, Zipf distribution-based caching is preferable to uniform distribution-based caching in terms of coverage probability.
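As a side illustration of why Zipf-distributed popularity favors caching the most popular files, the following sketch computes the probability that a request hits a cache holding the C most popular of N contents; the catalogue size, cache size and Zipf exponent are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def zipf_popularity(n_contents, gamma=0.8):
    """Zipf popularity: p_i proportional to 1 / i^gamma for contents ranked i = 1..N."""
    ranks = np.arange(1, n_contents + 1)
    p = ranks ** (-gamma)
    return p / p.sum()

def cache_hit_probability(popularity, cache_size):
    """Hit probability when each SBS caches the cache_size most popular contents."""
    top = np.sort(popularity)[::-1][:cache_size]
    return top.sum()

N, C = 1000, 50
zipf = zipf_popularity(N)
uniform = np.full(N, 1.0 / N)
print("Zipf hit probability:", cache_hit_probability(zipf, C))       # considerably higher
print("Uniform hit probability:", cache_hit_probability(uniform, C)) # equals C / N
```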
To improve the visibility and contrast of low-light images while better preserving their edges and details, a new low-light color image enhancement algorithm is proposed in this paper. First, the image is converted from the red, green and blue (RGB) color space to the hue, saturation and value (HSV) color space, and histogram equalization (HE) is performed on the value component. Next, the non-subsampled shearlet transform (NSST) is applied to the value component to decompose it into a low-frequency sub-band and several high-frequency sub-bands. Then, the low-frequency sub-band and high-frequency sub-bands are enhanced by Gamma correction and improved guided image filtering (IGIF) respectively, and the enhanced value component is formed by the inverse NSST. Finally, the image is converted back to the RGB color space to obtain the enhanced image. Experimental results show that the proposed method not only significantly improves visibility and contrast, but also better preserves the edges and details of images.
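The color-space and brightness steps of this pipeline (RGB to HSV, histogram equalization, gamma correction on the value channel) can be sketched with OpenCV as below; the NSST decomposition and IGIF steps are omitted because they are not available in standard libraries, and the gamma value is an assumption.

```python
import cv2
import numpy as np

def enhance_low_light(bgr, gamma=0.6):
    """Simplified low-light enhancement: equalize and gamma-correct the V channel in HSV."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v_eq = cv2.equalizeHist(v)                         # histogram equalization on the value channel
    v_gamma = np.uint8(255 * (v_eq / 255.0) ** gamma)  # gamma correction (brightens when gamma < 1)
    hsv_enhanced = cv2.merge([h, s, v_gamma])
    return cv2.cvtColor(hsv_enhanced, cv2.COLOR_HSV2BGR)

# usage: img = cv2.imread("low_light.png"); out = enhance_low_light(img)
```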
With the expansion of wind speed data sets, decreasing model training time is of great significance to the time cost of wind speed prediction, and the imperfection of the model evaluation system also affects wind speed prediction. To address these challenges, a hybrid method based on feature extraction, a nested shared weight long short-term memory (NSWLSTM) network and Gaussian process regression (GPR) is proposed. Feature extraction of the wind speed ensures the best performance of the model, and the NSWLSTM model reduces the training time of the long short-term memory (LSTM) network while improving prediction accuracy. In addition, a method combining NSWLSTM with GPR (NSWLSTM-GPR) is adopted to provide probabilistic prediction of wind speed. The probabilistic prediction provides information on the deviation from the predicted value, which is conducive to risk assessment and optimal scheduling. The simulation results show that the proposed method obtains high-precision point predictions, appropriate prediction intervals and reliable probabilistic prediction results with shorter training time for wind speed prediction.
The industrial Internet of things (industrial IoT, IIoT) aims at connecting everything, which poses severe challenges to existing wireless communication. To handle the demand for massive access in future industrial networks, semantic information processing is integrated into communication systems so as to improve the effectiveness and efficiency of data transmission. The semantic paradigm is particularly suitable for the purpose-oriented information exchanging scheme in industrial networks. To illustrate its applicability, typical industrial data are investigated, i.e., time series and images. Simulation results demonstrate the superiority of semantic information processing, which achieves a better rate-utility tradeoff than conventional signal processing.
The analysis of dissolved gas in oil provides an important basis for transformer fault diagnosis. To improve the accuracy of transformer fault diagnosis, a method based on the relational teacher-student network (R-TSN) is proposed by analyzing the relationship between the dissolved gas in the oil and the fault type. R-TSN replaces the original hard labels with soft labels and uses them to measure the similarity between different samples in the feature space; to a certain extent, this captures hidden feature information in the samples and clarifies the classification boundary. Through identification experiments, the effect of the R-TSN diagnosis model is analyzed, and the influence of compound discharge and thermal faults on the diagnosis model is studied. This paper compares R-TSN with support vector machines (SVMs), decision trees and multilayer perceptron models for transformer fault diagnosis. Experimental results show that R-TSN performs better than these methods, and after adding compound faults to the sample set, its accuracy still reaches 86.0%.
When solving unimodal function problems, conventional meta-heuristic algorithms often suffer from low accuracy and slow convergence. Therefore, a novel meta-heuristic optimization algorithm named proton-electron swarm (PES) is proposed in this paper based on physical rules. The algorithm simulates the physical phenomenon that like charges repel each other while opposite charges attract, as between protons and electrons, and establishes a mathematical model to realize the optimization process. By balancing global exploration and local exploitation, the algorithm achieves high accuracy and avoids falling into local optima when solving the target problem. To evaluate its effectiveness, 23 classical benchmark functions were selected for comparative experiments. Experimental results show that, compared with the contrast algorithms, the proposed algorithm not only obtains higher accuracy and convergence speed in solving unimodal function problems, but also maintains strong optimization ability in solving multimodal function problems.
In the post-quantum era, password-based authentication key exchange (PAKE) protocols on lattices have the characteristics of convenience and high efficiency; however, these protocols cannot resist the online dictionary attack, a common method used by attackers. A lattice-based two-factor (biometric and password) authentication key exchange (TFAKE) protocol based on key consensus (KC) is proposed. The protocol encapsulates the hash value of the biometric information and the password through a splittable encryption method, and compares the decapsulated information with the server's stored value to achieve dual identity authentication. The protocol then utilizes an asymmetric hash structure to simplify the calculation steps, which increases the computational efficiency, and the KC algorithm is employed to reduce the data transmission overhead. Compared with current PAKE protocols, the proposed protocol features hybrid authentication and resists online dictionary attacks, while reducing the number of communication rounds and improving the efficiency and security of protocol application.
The controlled quantum key agreement (CQKA) protocol requires a controller to oversee the process of all participants negotiating a key, which satisfies the needs of certain specific scenarios. Existing CQKA protocols are mostly two-party or three-party and do not entirely meet actual needs. To address this problem, this paper proposes new CQKA protocols based on Bell states and Bell measurements. The new CQKA protocols can be implemented for any N parties, not just two. Furthermore, the security and efficiency analyses demonstrate that the new CQKA protocols are not only secure but also more efficient in terms of quantum bits.
Special Topic: Digital Human
As a subtask of open domain event extraction (ODEE), new event type induction aims to discover a set of unseen event types from a given corpus. Existing methods mostly adopt semi-supervised or unsupervised learning, which uses complex and different objective functions for labeled and unlabeled data respectively. To unify and simplify the objective functions, a reliable pseudo-labeling prediction (RPP) framework for new event type induction is proposed. The framework introduces a double label reassignment (DLR) strategy for unlabeled data based on swap-prediction. The DLR strategy alleviates the model degeneration caused by swap-prediction and further combines the real distribution over unseen event types to produce more reliable pseudo labels for unlabeled data. The generated reliable pseudo labels allow the overall model to be optimized with a unified and simple objective. Experiments show that the RPP framework outperforms the state of the art on the benchmark.
The Nakagami-Gamma (NG) shadow fading model based on the moment-based method (MoM) generates a lower-tail approximation that is inaccurate when the lognormal random variables are replaced by gamma random variables. In this paper, the channel parameters of composite NG shadow fading are estimated by minimizing the Kullback-Leibler (KL) divergence, and a closed-form expression for the system bit error rate (BER) is derived. The simulation results show that the KL-estimated parameters solve the lower-tail approximation problem, and the replacement of the lognormal function by the gamma function works better than with the MoM when the shadowing parameters are around the typical values of 4 dB to 9 dB. Moreover, the KL method gives a lower mean square error (MSE) for the channel analysis.
In machine learning, the class imbalance problem is always troubling: one class of samples has a much larger magnitude than the other classes. This biases the classifier toward the majority class, which leads to worse performance on the minority class. We propose an improved boosting tree (BT) algorithm for learning imbalanced data, called cost BT. In each iteration of cost BT, only the weights of the misclassified minority class samples are increased, and the error rate in the weight formula of the base classifier is replaced by 1 minus the F-measure. In this study, the performance of the cost BT algorithm is compared with other known methods on 9 public data sets. The compared methods include the decision tree and random forest algorithms, both combined with sampling techniques such as the synthetic minority oversampling technique (SMOTE), Borderline-SMOTE, the adaptive synthetic sampling approach (ADASYN) and one-sided selection. The cost BT algorithm performed better than the other compared methods in F-measure, G-mean and area under the curve (AUC), and in 6 of the 9 data sets it outperformed the other published methods. It promotes the prediction performance of the base classifiers by increasing the proportion of the minority class in the whole sample set while only increasing the weights of the misclassified minority class samples in each iteration of the BT. In addition, computing the weights of the base classifiers with the F-measure is helpful to the ensemble decisions.
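A hedged sketch of the weight-update idea described above, in a simple AdaBoost-style loop where only misclassified minority-class samples are up-weighted and the base classifier's weight uses 1 minus the F-measure in place of the error rate; the exact formulas in the paper may differ, and labels are assumed to be 0/1 with the minority class as 1.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

def cost_boosting(X, y, minority_label=1, n_rounds=10):
    """Boosting for imbalanced data: up-weight only misclassified minority samples."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        f = f1_score(y, pred, pos_label=minority_label)
        alpha = 0.5 * np.log((f + 1e-10) / (1 - f + 1e-10))   # classifier weight from 1 - F-measure
        mis_minority = (pred != y) & (y == minority_label)
        w[mis_minority] *= np.exp(alpha)                      # increase only these sample weights
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def predict(learners, alphas, X):
    # weighted vote; assumes binary 0/1 labels with minority class mapped to 1
    votes = sum(a * np.where(m.predict(X) == 1, 1, -1) for m, a in zip(learners, alphas))
    return (votes > 0).astype(int)
```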
Flocking of a multi-agent system with high-frequency feedback robust control is presented in detail. A controller with high-frequency feedback robust control is designed, and it is proved, by employing the boundedness theorem and Lyapunov stability theory, that under this control the velocity error is bounded and no collision occurs between agents, so flocking of the multi-agent system can be formed. The flocking is realized in numerical simulation. Compared with the state diagram and velocity error diagram of the original flocking of the multi-agent system in the simulation, flocking with high-frequency feedback robust control has better stability.
Dynamic geometry software, as a type of computer-assisted instruction (CAI) software, is closely and deeply associated with mathematics and is widely applied in mathematics teaching activities in primary and secondary schools. Meanwhile, web technology has also become an important technology for assisting education and teaching. This paper expounds a web-based dynamic geometry software development process and analyses the specific requirements on the graphical application programming interface (API) required by dynamic geometry software. Through experiments and comparison of the two hypertext markup language (HTML) 5 graphical API technologies, i.e., scalable vector graphics (SVG) and Canvas, on different devices and browsers, we conclude that Canvas is the more suitable graphical API technology for web-based dynamic geometry software, and we further propose principles and methods for an object-oriented Canvas design. The dynamic geometry software based on the newly designed Canvas has technical advantages and educational value, well incorporating aesthetic education into mathematics education.
Due to its high spectral efficiency and power efficiency, the continuous phase modulation (CPM) technique with constant envelope is widely used in range telemetry. How to improve the bit error rate (BER) performance of CPM while keeping a reasonable computational complexity is key to the entire telemetry system and the focus of research and engineering design. In this paper, a reduced-state noncoherent maximum likelihood sequence detection (MLSD) method for CPM is proposed. The criterion of noncoherent MLSD is derived for CPM when the carrier phase is unknown, and a novel Viterbi algorithm (VA) with a modified state vector is designed to simplify the implementation of noncoherent MLSD. Both analysis and numerical results show that the proposed method reduces the computational complexity significantly and does not need accurate carrier phase recovery, which overcomes the shortcomings of the traditional MLSD method. Additionally, the proposed method outperforms the traditional MLSD method when carrier phase deviation exists.
Special Topic: Cultural Computing
An Avatar-like robot in a virtual museum environment was designed to perform telepresence and teleoperation and to provide a three-dimensional (3D) effect through a binocular camera and a virtual reality (VR) head-mounted display (HMD). The robot allows users to participate in exhibitions remotely in a new and interactive way in multiple scenarios. The results show that the system has good usability and is worth further optimization.
To provide preferential protection for users while keeping good service utility, a preferential private recommendation framework (named PrefER) is proposed. In this framework, a preferential budget allocation scheme is designed and implemented at the system side to provide multilevel protection, and users' preferences are utilized at the user side to improve recommendation performance without increasing users' burden. The framework is generic enough to be employed with other schemes. The recommendation accuracy of collaborative filtering schemes and PrefER on the MovieLens dataset is compared and analyzed. The experimental results show that PrefER provides preferential privacy protection while improving recommendation accuracy.
Event extraction (EE) is a significant part of natural language information extraction, and it is widely adopted in other natural language processing (NLP) tasks such as question answering and machine reading comprehension. With the development of the NLP field, numerous datasets and approaches for EE have been proposed, raising the need for a comprehensive review. In this paper, the resources for EE are reviewed, and the numerous neural network models currently employed in EE tasks are classified into three types: word sequence-based methods, graph-based neural network methods, and external knowledge-based approaches. The methods are then compared and contrasted in detail, and their flaws and difficulties are analyzed in light of existing research. Finally, future research tendencies for EE are discussed.
For content retrieval, analysis and generation of film and television scene images in the field of intelligent editing, fine-grained emotion recognition and prediction of images is of great significance. In this paper, a fusion of traditional perceptual features, art features and multi-channel deep learning features is used to reflect the emotion expression of an image at different levels. In addition, an integrated learning model with a stacking architecture based on linear regression coefficients and sentiment correlations, called the LS-stacking model, is proposed according to the factor association between multi-dimensional emotions. The experimental results prove that the mixed features and the LS-stacking model predict well on the 16 emotion categories of the self-built image dataset. This study improves the fine-grained recognition ability of computers for image emotion, which helps to increase the intelligence and automation of visual retrieval and post-production systems.
At present, most constructed knowledge graphs, regardless of their scale, suffer from incompleteness, which negatively affects knowledge-graph-based applications. As an important means of knowledge graph completion, link prediction has become a hot research topic in recent years. This paper proposes a performance enhancement scheme for link prediction models based on semi-supervised learning and the model soup idea, which effectively improves the performance of several mainstream link prediction models through only minor changes to the model architecture. The scheme consists of two parts: (1) a semi-supervised learning strategy is used to predict potential fact triples in the graph; (2) semi-supervised learning and model soup are creatively combined to further improve the performance of the final model without significant additional computational overhead. Experiments confirm the effectiveness of the scheme on various link prediction models, especially on datasets with dense relations. For CompGCN, the model with the best overall performance among those tested, the enhancement scheme improves Hits@1 by 14.7% on the FB15K-237 dataset and by 7.8% on WN18RR. Meanwhile, we observe that the semi-supervised learning strategy brings significant improvements to many kinds of link prediction models, whereas the improvement brought by model soup depends on the specific model: some models improve, while others remain largely unchanged.
Content-centric networking (CCN) proposes a content-centric paradigm that changes the waist of the hourglass from the Internet protocol (IP) to content chunks. In this paper, an optimization model based on content chunks is established to minimize the total delay time in information centric networking (ICN), and a branch-and-bound and greedy (BG) algorithm is proposed to obtain the content placement. As multipath is naturally supported in CCN, chunk-based content placement can obviously reduce delay time, even though it increases the amount of calculation, which can easily be handled by the node's capacity. Simulation results indicate that the chunk-based content placement scheme outperforms the single-content cache policy in terms of total network delay time, and that the best number of chunks into which each content is split is decided by the link density and the number of nodes in the network.
Due to the difficulty of deploying Internet protocol (IP) multicast on the Internet at a large scale, overlay multicast has been considered a promising alternative for developing multicast communication in recent years. However, existing overlay multicast solutions suffer from the high cost of maintaining the state information of nodes in the multicast forwarding tree. A stateless overlay multicast scheme is proposed, in which the multicast routing information is encoded by a Bloom filter (BF) and encapsulated into the packet header without any need to maintain the multicast forwarding tree. The scheme leverages node heterogeneity and proximity information in the physical topology and hierarchically constructs a transit-stub overlay topology by assigning geometric coordinates to all overlay nodes. More importantly, the scheme uses the BF to identify the nodes and links of the multicast forwarding tree, which improves the forwarding efficiency and reduces false-positive forwarding loops. The analytical and simulation results show that the proposal achieves high forwarding efficiency and good scalability.
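A small sketch of how link identifiers of a multicast tree could be encoded into a Bloom filter carried in the packet header and checked hop by hop; the hash construction, filter size and link-ID format are assumptions for illustration, not the paper's design.

```python
import hashlib

class BloomFilter:
    """Fixed-size Bloom filter over string identifiers (e.g., directed link IDs)."""
    def __init__(self, m_bits=256, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# encode the multicast tree's links into the header at the source ...
header_bf = BloomFilter()
for link in ["A->B", "A->C", "C->D"]:
    header_bf.add(link)

# ... and at each hop forward only on links whose ID matches the filter
print("A->B" in header_bf, "A->E" in header_bf)   # True, (almost certainly) False
```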
To help people choose a proper medical treatment organization, this paper proposes an opposition raiding wolf pack optimization algorithm using a random search strategy for an adaptive shrinking region (ORRSS-WPOA). Firstly, via the oppositional raiding method (ORM), each wolf has a larger probability of approaching the leader wolf, which enhances the exploration of the wolf pack as a whole; in other words, the wolf pack does not easily fall into local optima. Moreover, a random searching strategy (RSS) for an adaptive shrinking region is adopted to strengthen exploitation, which makes any wolf more likely to find the optimum in a given region, so macroscopically the wolf pack finds the global optimum in the given range more easily. Finally, a fitness function is designed to judge the match between a patient and a hospital. The performance of ORRSS-WPOA is comprehensively evaluated by comparing it with several other competitive algorithms on ten classical benchmark functions and on the simulated fitness function designed for the above problem. Under the same conditions, the experimental results indicate the excellent performance of ORRSS-WPOA in terms of solution quality and computational efficiency.
Hybrid analog-digital beamforming is recognized as a promising solution for the practical implementation of massive multiple-input multiple-output (MIMO) systems based on millimeter-wave (mmWave) technology. In view of the overwhelming hardware cost, excessive power consumption and imperfection of the channel state information (CSI), a robust hybrid beamforming design is proposed for mmWave massive MIMO systems, where robustness is defined with respect to imperfect knowledge or errors of the CSI at the transmitter due to limited feedback and/or imperfect channel estimation. Assuming the CSI errors are bounded, the robust hybrid beamforming design is formulated as a mean squared error (MSE) minimization problem, and an iterative semidefinite programming (SDP) based algorithm is proposed to obtain the beamforming matrices. Simulation results show that the proposed robust design provides more than 4 dB performance gain compared with the non-robust design.
Terahertz and Microwave Microsystem
The method of terahertz (THz) resonance with a high-quality (high-Q) factor offers a vital physical mechanism for metasurface sensors and other high-Q applications. However, it is challenging to excite a high-Q resonance in metasurfaces with proper sensitivity and figure of merit (FOM) values. Here, an all-dielectric metasurface composed of two asymmetrical rectangular blocks is suggested, with quartz and silicon applied as the materials of the substrate and cuboids respectively. The distinct resonance governed by bound states in the continuum (BIC) is excited by forming an asymmetric cluster through a novel hybrid method of cutting and moving the cuboids. The investigation focuses on analyzing the transmission spectra of the metasurface under different variations of the structural parameters and of the loss of the silicon refractive index. When the proposed defective metasurface serves as a transmittance sensor, it shows a Q factor of 1.08×10^4 and achieves an FOM up to 4.8×10^6, obtained with an asymmetric parameter of 1 μm. Simultaneously, the proposed defective metasurface is sensitive to small changes in refractive index: when the thickness of the analyte is 180 μm, the sensitivity reaches a maximum value of 578 GHz/RIU. Hence, the proposed defective metasurface has a wide range of possible applications in filters, biomedical diagnosis, security screening, and so on.
The explosive increase of smart devices and mobile traffic results in a heavy burden on the backhaul and core network, intolerable network latency and degraded service to end users. As a complement to the core network, the edge network contributes to relieving the network burden and improving user experience. To optimize the total consumption in an edge-core network, a system consumption minimization problem is formulated considering both energy consumption and delay. Given that the formulated problem is a mixed nonlinear integer programming (MNIP) problem, a low-complexity workload allocation algorithm based on the interior-point method is subsequently proposed; the algorithm has an extremely short running time in practice. Finally, simulation results show that the edge network can significantly complement the core network with much reduced backhaul energy consumption and delay.
Wireless sensor networks (WSNs) are emerging as essential and popular ways of providing pervasive computing environments for various applications. Unbalanced energy consumption is an inherent problem in WSNs, characterized by multi-hop routing and a many-to-one traffic pattern, and this uneven energy dissipation can significantly reduce network lifetime. In multi-hop sensor networks, information obtained by the monitoring nodes needs to be routed to the sinks, and the energy consumption rate per unit of transmitted information depends on the choice of the next-hop node. In energy-aware routing, most proposed algorithms aim at minimizing the total energy consumption or maximizing network lifetime. In this paper, we propose a novel energy aware hierarchical cluster-based (NEAHC) routing protocol with two goals: minimizing the total energy consumption and ensuring fairness of energy consumption between nodes. We model the relay node choosing problem as a nonlinear programming problem and use the property of convex functions to find the optimal solution. We also evaluate the proposed algorithm via simulations at the end of this paper.
Text classification is a classic task in natural language processing (NLP). Convolutional neural networks (CNNs) have demonstrated their effectiveness in sentence and document modeling. However, most existing CNN models use fixed-size convolution filters and are thereby unable to adapt to different local interdependencies. To address this problem, a deep global-attention based convolutional network with dense connections (DGA-CCN) is proposed. In this framework, dense connections link each convolution layer to all the other layers, so that each layer can accept information from all previous layers and capture local information at multiple scales. The local information extracted by the convolution layers is then reweighted by deep global attention to obtain a sequence representation carrying more valuable information about the whole sequence. A series of experiments is conducted on five text classification benchmarks, and the results show that the proposed model improves upon state-of-the-art baselines on four of the five datasets, demonstrating its effectiveness for text classification.
Blockchain technology is used in edge computing (EC) systems to solve the security problems caused by single points of failure (SPOF) due to data loss, task execution failure, or control by malicious nodes. However, the disadvantage of blockchain is its high latency, which contradicts the strict latency requirements of EC services, and the existing single-level sharded blockchain system (SLSBS) cannot provide different quality of service for different tasks. To solve these problems, a multi-level sharded blockchain system (MLSBS) based on a genetic algorithm (GA) is proposed. The shards are classified according to the delay requirements of the services, and parameters such as the shard size differ across shards. Using the GA, the MLSBS obtains the optimal resource allocation strategy that achieves maximum security. Simulation results show that the proposed scheme outperforms SLSBS.
Special Topic: Optical Communication and Artificial Intelligence
New energy power generation equipment has diurnal, perturbative, seasonal and periodic power generation characteristics, which makes the new power optical communication network (POCN) more dynamic and changeable. This is directly reflected in the dynamics of the link risk and service importance of the POCN. Aiming at the problem of dynamic service importance in the POCN and the resulting decline in network reliability, a new energy POCN dynamic routing intelligence algorithm based on service importance prediction is proposed in this paper. Based on the short-term power generation of new energy power stations, the importance of each service and the risk degree of each link are predicted, link weights are dynamically adjusted, and the k-shortest path (KSP) algorithm is used to calculate routing results. When network resources are insufficient, low-importance services give way to prevent a large number of high-importance services from being blocked. Simulation results show that, compared with the traditional KSP algorithm, the prediction-based dynamic routing intelligent (P-DRI) algorithm reduces the service blocking probability by 55.59% and reduces the average importance of blocked services by 44.77%, at the cost of about 6.17% in calculation delay.
To address the problems of estimation accuracy, inconsistency and robustness in mobile robot simultaneous localization and mapping (SLAM), a novel SLAM algorithm based on an improved Rao-Blackwellized H∞ particle filter (IRBHF-SLAM) is proposed. The iterated unscented H∞ filter (IUHF) is utilized to accurately calculate the importance density function, repeatedly correcting the state mean and the covariance matrix through iterative updates, and the laser sensor's observations are introduced into the sequential importance sampling routine. This avoids the calculation of the Jacobian matrix and the accumulation of linearization errors, while enhancing the robustness of the algorithm. IRBHF-SLAM is compared with FastSLAM 2.0 and unscented FastSLAM (UFastSLAM) under different noise levels in simulation experiments, and the results show that the algorithm improves estimation accuracy and stability. The improved approach, based on the robot operating system (ROS), runs on a Pioneer3-DX robot equipped with a HOKUYO URG-04LX (URG) laser range finder. Experimental results show that the improved algorithm reduces the required number of particles and the running time, and creates online two-dimensional (2-D) grid maps with high precision in different environments.
This paper proposes a technique combining frequency-domain random demodulation (FRD) with broadband digital predistortion (DPD). The technique can linearize power amplifiers (PAs) at a low sampling rate in the feedback loop. Based on the theory of compressed sensing (CS), the FRD method preprocesses the original signal using frequency-domain sampling signals with different stages through multiple parallel channels, and is then applied to the broadband DPD system to restrict the sampling process in the feedback loop. The proposed technique is assessed using a 30 W Class-F wideband PA driven by a 20 MHz orthogonal frequency division multiplexing (OFDM) signal, and a 40 W GaN Doherty PA driven by a 40 MHz 4-carrier long-term evolution (LTE) signal. The simulation and experimental results show that good linearization performance can be achieved at a lower sampling rate, with about 24 dBc adjacent channel power ratio (ACPR) improvement, by applying the proposed FRD-DPD combination. Furthermore, the normalized mean square error (NMSE) and error vector magnitude (EVM) are also much improved compared with the conventional technique.
Packet loss cannot be avoided in wireless networks due to the particularity of the wireless transmission medium, so improving retransmission efficiency is meaningful for wireless transmission. Current retransmission packet selection mechanisms based on opportunistic network coding (ONC) suffer from low retransmission efficiency and high computational complexity. To address these problems, an optimized encoding packet selection mechanism based on ONC for wireless network retransmission (OONCR) is proposed. This mechanism is based on the concepts of mutually exclusive packets and decoding gain, and makes full use of the advantages of ONC. Its main contributions are to control the algorithmic complexity of selecting the maximum encoding packets effectively, to avoid redundant encoding packets caused by overlapping among encoding packets, and to take both local and global optimization of encoding packets into consideration. Retransmission efficiency is evaluated in terms of computational complexity, throughput, retransmission redundancy ratio and the average number of retransmissions. Under various conditions, the average number of retransmissions of OONCR is generally lower than that of other typical retransmission packet selection schemes, the average retransmission redundancy ratios of OONCR are about 5%-40% lower than those of other typical schemes, and the computational complexity of OONCR is comparatively lower as well.
Accurate estimation and real-time compensation of phase offset and Doppler shift are essential for coherent multi-input multi-output (MIMO) systems. Here, a spatial multiplexing MIMO scheme with non-coherent frequency-shift keying (FSK) detection is proposed; it is immune to random phase interference and Doppler shift while increasing capacity. Notably, the proposed spatial multiplexing MIMO based on energy detection (ED) is equivalent to a linear system, and there is no mutual interference caused by the product of simultaneous signals in square-law processing. The equivalent MIMO channel model is derived as a real matrix, which retains the maximal multiplexing capacity and reduces the channel estimation complexity. Simulation results show that the proposed scheme has outstanding performance over Rician flat fading channels, and the experimental system obtains four times the capacity with 4 antennas at both the transmitter and the receiver.
A source enumeration method based on diagonal loading of eigenvalues and construction of second-order statistics is proposed for the case where the signals observed by the antenna array are superimposed with spatially colored noise and the numbers of antennas and snapshots meet the requirement of the general asymptotic regime. Firstly, the sample covariance matrix of the observed signals is obtained, its eigenvalues are acquired by eigenvalue decomposition, and the eigenvalues are diagonally loaded with a new formula for calculating the diagonal loading. The differences of adjacent diagonally loaded eigenvalues are then calculated, along with the statistical variance of these differences. On this basis, second-order statistics of the difference values are constructed, and the number of sources is estimated as the value that minimizes these second-order statistics. The proposed method has wide applicability: it is suitable for both the general asymptotic regime and the classical asymptotic regime, and for both white Gaussian noise and colored noise environments. The method makes up for the lack of source enumeration methods for the general asymptotic regime with colored noise.
This paper presents an interactive segmentation method for natural color images. The method extracts image features using the nonlinear compact structure tensor (NCST) and then uses the GrabCut method to obtain the segmentation, which not only realizes the non-parametric fusion of texture and color information but also improves computational efficiency. The improved GrabCut algorithm is then used to perform foreground target segmentation. For simplicity and efficiency of calculation, this paper also extends the Gaussian mixture model (GMM) constructed in GrabCut to the tensor space and uses the Kullback-Leibler (KL) divergence instead of the usual Riemannian geometry. Lastly, an iteration convergence criterion is proposed to dramatically reduce the iteration time of the GrabCut algorithm while maintaining satisfactory segmentation accuracy. Extensive experiments on synthetic texture images and natural images demonstrate that this method produces more accurate segmentations.
For interference hidden in the expected multi-carrier signal, this paper proposes a novel detection and recognition algorithm. The algorithm can not only detect single-carrier interference (SCI) using high-order cumulants, but also identify the multi-carrier signal based on its spectral characteristics; in addition, it can distinguish the modulation type of the SCI. The algorithm does not depend on any prior knowledge or data aiding, which makes it suitable for practical applications. The analysis and simulation results demonstrate that the proposed algorithm is effective.
The local directional pattern (LDP), which is widely used in texture extraction of the face region, is insensitive to random noise, but it cannot encode the central pixel, so important information is lost. Therefore, a new feature descriptor called the extended local directional pattern (ELDP) is proposed for facial feature extraction. First, the mean value of the eight directional edge response values and the gray value of the center pixel is calculated, and this mean value is taken as the threshold. Then, the expression image is encoded using nine encoded values. To reduce redundant information and obtain more effective information, Gabor filters are used to obtain multi-direction Gabor magnitude maps (GMMs), and the ELDP is then used to encode the GMMs. Finally, a support vector machine (SVM) is applied to classify and recognize facial expressions. The experimental results show that the feature dimension is greatly reduced and the facial expression recognition rate is improved.
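A hedged sketch of the per-pixel encoding step as described above; Kirsch masks are a common choice for the eight directional edge responses, the threshold is read here as the mean of the nine values (one plausible interpretation), and the bit ordering is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve

# eight Kirsch masks, a common choice for directional edge responses
KIRSCH = [np.array(m) for m in (
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],
)]

def eldp_encode(gray):
    """Encode each pixel with 9 bits: 8 directional responses plus the center gray
    value, each compared against their common mean (the adaptive threshold)."""
    gray = gray.astype(np.float64)
    responses = np.stack([convolve(gray, k, mode="nearest") for k in KIRSCH] + [gray])
    threshold = responses.mean(axis=0)                 # mean of the nine values per pixel
    bits = (responses >= threshold).astype(np.uint16)  # 1 where a value exceeds the mean
    weights = (2 ** np.arange(9)).reshape(9, 1, 1)
    return (bits * weights).sum(axis=0)                # 9-bit ELDP code per pixel

# usage: codes = eldp_encode(face_gray); a histogram of codes forms the feature vector
```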
This article puts forward a novel smooth rotated hyperbola model for the support vector machine (RHSSVM) for classification. As is well known, the support vector machine (SVM) is based on statistical learning theory (SLT) and performs data classification with high precision. However, its objective function is non-differentiable at the zero point, so fast algorithms cannot be used to train and test the SVM. To deal with this, the proposed method exploits the approximation property of a hyperbola to its asymptotic lines. Firstly, we describe the development of RHSSVM from the basic linear SVM optimization program and then extend the linear model to a non-linear model. We prove that the solution of RHSSVM is convergent, unique and globally optimal, and show how RHSSVM can be implemented in practice. Finally, theoretical analysis illustrates that, compared with three other typical models, the rotated hyperbola model has the smallest error in approximating the plus function, and computer simulations show that RHSSVM reduces the computing time by up to 54.6% and can efficiently handle large-scale and high-dimensional programming.
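As a hedged illustration of the underlying idea (the exact parametrization in the paper may differ), a rotated hyperbola whose asymptotes are the two branches y = 0 and y = x of the plus function gives a smooth, everywhere-differentiable surrogate:

```latex
% Plus function and one smooth rotated-hyperbola surrogate (illustrative form)
\[
  x_{+} = \max(x, 0), \qquad
  h_{\varepsilon}(x) = \frac{x + \sqrt{x^{2} + 4\varepsilon^{2}}}{2}, \quad \varepsilon > 0.
\]
% h_eps is the upper branch of the rotated hyperbola (y - x/2)^2 - x^2/4 = eps^2,
% whose asymptotes are y = 0 and y = x, so 0 < h_eps(x) - x_+ <= eps for all x
% and h_eps converges uniformly to x_+ as eps -> 0 while staying smooth everywhere.
```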
The Harris hawks optimization (HHO) algorithm is an efficient method for solving function optimization problems. However, it still suffers from limitations in terms of low precision, low convergence speed and stagnation at local optima. To this end, an improved HHO (IHHO) algorithm based on a good point set and a nonlinear convergence formula is proposed. First, a good point set is used to initialize the positions of the population uniformly and randomly over the whole search area. Second, a nonlinear exponential convergence formula is designed to balance the exploration and exploitation stages of the IHHO algorithm, aiming to cover the areas containing solutions more comprehensively and accurately. The proposed IHHO algorithm is tested on 17 functions, and the Wilcoxon test is used to verify its effectiveness. The results indicate that the IHHO algorithm not only converges faster than the comparative algorithms, but also improves solution accuracy effectively and enhances robustness under both low-dimensional and high-dimensional conditions.
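A hedged sketch of the two ingredients named above: a good-point-set initialization (using one common square-root-of-primes construction, which may differ from the paper's) and an illustrative nonlinear exponential convergence factor replacing HHO's linear escape-energy schedule; the decay constant is an assumption.

```python
import numpy as np

# first 15 primes, enough for dim <= 15 in this illustrative construction
PRIMES = np.array([2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47], dtype=float)

def good_point_set(n_agents, dim, lower, upper):
    """Good-point-set initialization: x_ij = frac(i * sqrt(p_j)), mapped to [lower, upper]."""
    r = np.sqrt(PRIMES[:dim])
    i = np.arange(1, n_agents + 1).reshape(-1, 1)
    frac = np.mod(i * r, 1.0)                # well-spread points in the unit hypercube
    return lower + frac * (upper - lower)

def escape_energy(t, t_max, e0=1.0):
    """Illustrative nonlinear exponential factor replacing HHO's linear
    schedule E = 2 * e0 * (1 - t / t_max)."""
    return 2.0 * e0 * np.exp(-4.0 * t / t_max)

# toy usage: 30 hawks in 5 dimensions over [-10, 10]^5
population = good_point_set(30, 5, lower=-10.0, upper=10.0)
print(population.shape, escape_energy(t=50, t_max=100))
```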
Robot grabbing has been successfully applied in a range of challenging environments but has met resource bottlenecks. To address this problem, a hybrid cloud-based robot grabbing system is proposed, which supports centralized bin-picking management and deployment, large-scale storage, and communication technologies. The hybrid cloud combines the powerful computational capability of massive parallel computation and the large data storage facilities of the public cloud with the data privacy of the private data center. Benchmark tasks are evaluated against a public-cloud-based robot grabbing method, and the results indicate that the whole system reduces data collection time, increases elastic resource scheduling, and is suitable for real industrial applications.
An access control scheme is proposed for System Wide Information Management (SWIM) to address the problem of attribute revocation in practical applications. Based on attribute-based encryption (ABE), the scheme introduces a proxy re-encryption mechanism and a key encrypting key (KEK) tree to realize fine-grained access control with attribute revocation, and defines the attributes according to the status quo of civil aviation. Compared with previously proposed schemes, this scheme not only shortens the length of the ciphertext (CT) and the private key but also improves the efficiency of encryption and decryption. The scheme can resist collusion attacks and ensure the security of data in SWIM.
Based on the pseudo-symplectic space F_q^(2ν+1) of characteristic 2, and combining the definition of low density parity check (LDPC) codes with knowledge of graph theory, two kinds of LDPC codes with larger girth are constructed. Using the properties of bipartite graphs and the girth of LDPC codes, the girths of the codes C(m1, 2ν+1, q) and C(m2, 2ν+1, q) are both computed to be 8; the larger the girth, the better the performance of the LDPC codes. Finally, according to the properties of the check matrix and the linear relations between its column vectors, the minimum distances of the two codes are both obtained as 2q+2.
Some emerging services such as augmented/virtual reality need high data rates, whereas the existing 2nd generation (2G), 3rd generation (3G) and 4th generation (4G) networks cannot provide such high transmission rates. To meet these requirements, this paper introduces the technology and realization of a "virtual super terminal" that performs concurrent streaming of video via the cooperation of multiple heterogeneous terminals. A concurrent streaming scheduling algorithm, a buffer overhead and packet delivery method, and a packet loss prevention and recovery method are used to improve the performance of the system. Testbed measurements show that, compared with the scheme without cooperative transmission, the new design can significantly increase system throughput and decrease packet delay.
The integration of the Internet with traditional manufacturing has made the industrial Internet of things (IIoT) a hot research topic. However, traditional industrial networks still face challenges in resource management, raw data storage limitations and computing capability. In this paper, we propose a new software-defined industrial network (SDIN) architecture to address the deficiencies in resource utilization, data processing and storage, and system compatibility existing in the IIoT. The architecture is based on the software-defined networking (SDN) architecture and combines hierarchical cloud-fog computing with content-aware caching technology. Based on the SDIN architecture, two edge computing strategies for industrial applications are discussed, and by considering different scenarios and service requirements, simulation results confirm the feasibility and effectiveness of the SDIN architecture in edge computing offloading applications.
An ensemble learning algorithm based on game theory is proposed to evaluate algorithms for image analysis and image feature extraction. A competition system is established to implement the algorithm and to evaluate the applicability and efficiency of different edge detection algorithms. Through the game in the algorithm competition system, the most suitable algorithm can be selected as the winner of the competition, and a group of optimal parameters for the corresponding edge detector can also be found. Firstly, based on evolutionary game theory, a competition strategy for edge extraction algorithms is developed. Secondly, after selecting the most suitable algorithm from the candidates, its overall parameters are optimized. Experiments show that for a specific class of images, several candidate algorithms can be treated as a class of preferred algorithms based on the final evolutionary result; when analyzing such images, the priority algorithm among these reference algorithms can be recommended as the best edge detection algorithm. The approach is more effective than traditional methods in determining an algorithm and choosing its parameters.
In order to solve the sensing and motion uncertainty problem of motion planning in narrow passage environments, a partition sampling strategy based on the partially observable Markov decision process (POMDP) is proposed, which improves the success rate of robot motion planning in narrow passages. Firstly, the environment is divided into open areas and narrow areas by the partition sampling strategy, and an initial trajectory of the robot is generated with fewer sampling points. Secondly, a locally optimal solution around the initial nominal trajectory is computed by solving the POMDP problem, and the overall optimal trajectory of robot motion is obtained iteratively. The proposed method follows the general POMDP solution framework, in which the belief dynamics is approximated by an extended Kalman filter (EKF), and the value function is represented by a quadratic function in the belief space near the nominal trajectory. A belief-space variant of the iterative linear quadratic Gaussian (iLQG) method performs the value iteration, which results in a linear control policy over the belief space that is locally optimal around the nominal trajectory. A new nominal trajectory is generated by executing the control policy, and the process is repeated until it converges to a locally optimal solution. Finally, the robot obtains an optimal trajectory to safely pass through the narrow passage. The experimental results show that the proposed method efficiently improves the performance of motion planning under uncertainty.
Factors such as production accuracy and completion time determine the optimal scheduling of complex-product workflows, so a main research direction of modern workflow technology is how to maintain a dynamic balance between these factors. Based on workflow technology, constraining the completion time, and analyzing the deficiency of the traditional minimum critical path algorithm, a virtual iterative reduction algorithm (VIRA) is proposed, which can effectively improve production accuracy under a time constraint. With simplification as its core, the VIRA abstracts a virtual task that simplifies the process by combining complex cyclic or parallel structures; then, using the virtual task together with the other tasks in the process (the iterative reduction strategy), it determines a path that balances production accuracy and completion time better than the minimum critical path algorithm. The deadline, the number of tasks, and the number of cyclic structures are used as factors affecting the performance of the algorithm, and analysis of detailed data shows how varying these factors affects performance. Comparison experiments prove the feasibility of the VIRA.
Complex Network Modeling and Application
In recent years, considerable attention and research have been devoted to the higher-order interactions that are prevalent in various real-world networks. Hypergraphs, especially in the study of complex systems, have proved effective in capturing these interactions. To better characterize real systems, this paper proposes a theoretical model of node-interdependent percolation in multiplex hypergraphs that considers "weak" interdependence. The proposed model includes pairwise and higher-order interactions, where the removal of nodes triggers cascading failures. However, interdependent nodes connected to failed nodes experience only partial loss of connections due to "weak" interdependence, reflecting the self-sustaining capabilities of real-world systems. Percolation theory is applied to investigate the percolation threshold and the phase transition. Both analytical and simulation results show that as the strength of interdependence between nodes weakens, the network transitions from a discontinuous to a continuous phase, thereby increasing its robustness.
In response to the challenges posed by system complexity and the difficulty of obtaining accurate channel state information (CSI) in millimeter-wave communication assisted by intelligent reflecting surfaces (IRS), we propose a deep learning-based channel estimation scheme. The proposed scheme employs a hybrid active/passive IRS architecture, wherein the least squares (LS) algorithm is first used to acquire the channel estimate at the active elements. This estimate is then interpolated to obtain a preliminary channel estimate, which is finally refined into an accurate estimate using the channel super-resolution convolutional neural network (Chan-SRCNN). Simulation results demonstrate that the proposed scheme surpasses the LS, orthogonal matching pursuit (OMP), synchronous OMP (SOMP), and deep neural network (DNN) channel estimation algorithms in terms of normalized mean squared error (NMSE), validating the feasibility of the proposed approach.
Due to the diversity of graph computing applications, the power-law distribution of graph data, and the high compute-to-memory ratio, traditional architectures face significant challenges of poor flexibility, imbalanced workload distribution, and inefficient memory access when executing graph computing tasks. A graph computing accelerator, GraphApp, based on a reconfigurable processing element (PE) array is proposed to address these challenges. GraphApp uses 16 reconfigurable PEs for parallel computation; by reasonably dividing the data into tiles, load balancing is achieved and the overall efficiency of parallel computation is enhanced. Additionally, graph data are preprocessed using the compressed sparse columns independently (CSCI) compression format to alleviate the low memory-access efficiency caused by the high memory-access-to-computation ratio. Finally, GraphApp is evaluated with triangle counting (TC) and depth-first search (DFS) algorithms, measuring their execution time in GraphApp against the existing typical graph frameworks Ligra and GraphBIG on six datasets from the Stanford Network Analysis Project (SNAP) database. The results show that GraphApp achieves a maximum performance improvement of 30.86% compared to Ligra and 20.43% compared to GraphBIG when processing the same datasets.
Recently, physical layer security in wireless communication systems has attracted much attention, and the reconciliation protocol plays an important role in the final secure key distillation, since the secret keys extracted from the realistic characteristics of the wireless channel may not be identical at the transmitter and the legitimate receiver. A high-efficiency Polar coding key reconciliation scheme is proposed in this paper to correct these errors. In the scheme, the transmitter generates a random bit stream with known frozen bits and positions, encodes it into a code stream by Polar encoding, and sends the code stream, corrupted by the secret keys, to the legitimate receiver. The receiver decodes the received stream with the Polar successive cancellation decoding algorithm, recovers the random bit stream, and obtains the final secure key by an XOR operation. The results show that the proposed scheme has higher efficiency and lower computational complexity, along with a high success rate, and the keys are highly consistent after reconciliation.
In the 6th generation mobile communication system (6G) era, a large number of delay-sensitive and computation-intensive applications impose great pressure on resource-constrained Internet of things (IoT) devices. Aerial edge computing is envisioned as a promising and cost-effective solution, especially in hostile environments without terrestrial infrastructure. Therefore, this paper focuses on integrating aerial edge computing into 6G to provide ubiquitous computing services for IoT devices. This paper first presents the layered network architecture of aerial edge computing for 6G. The benefits, potential applications, and design challenges are also discussed in detail. Next, several key techniques, such as unmanned aerial vehicle (UAV) deployment, operation mode, offloading mode, caching policy, and resource management, are highlighted to show how to integrate aerial edge computing into 6G. Then, a joint UAV deployment optimization and computation offloading method is designed to minimize the computing delay for a typical aerial edge computing network. Numerical results reveal the significant delay reduction of the proposed method compared with the other benchmark methods. Finally, several open issues for aerial edge computing in 6G are elaborated to provide guidance for future research.
The unique characteristics of opportunistic networks (ONs), such as intermittent connectivity and limited network resources, make it difficult to support quality of service (QoS) provisioning, particularly to guarantee delivery ratio and delivery delay. In this paper, we propose a QoS-oriented packet scheduling scheme (QPSS) that makes bundle transmission decisions to satisfy the delivery ratio and delivery delay constraints of bundles. Different from prior works, a novel bundle classification method based on reliability and latency requirements is used to decide the traffic class of bundles. A scheduling algorithm of traffic class and bundle redundancy maintains a forwarding and dropping priority queue and allocates network resources in QPSS. Simulation results indicate that our scheme not only achieves a higher overall delivery ratio but also obtains an approximately 14% increase in the number of eligible bundles.
With the increasing number of vehicles, parking spaces are gradually becoming narrower, and safe parking places higher requirements on the driver's driving skill. How can a vehicle be parked into a parking space safely, quickly and accurately? This paper presents an automatic parking scheme based on trajectory planning, which analyzes the mechanical model of the vehicle, establishes the vehicle steering model and the parking model, and concludes that the turning radius is independent of the vehicle speed at low speed. Matlab simulations verify the correctness and effectiveness of the proposed parking algorithm, solving a class of automatic parking problems for intelligent vehicles.
The deep convolutional neural network (CNN) has made great breakthroughs in computer vision. Recently, many works have demonstrated that the performance of a CNN depends on its stacked convolutional layers. The features of the fully connected layers lose the topological structure of images, while the convolutional-layer features contain a large amount of redundant information that interferes with model performance. Thus, we propose an effective supervised deep hashing method, enhancing convolutional deep hashing (ECDH), which learns binary codes from a strengthened convolutional layer. Specifically, an enhanced convolutional hash layer is constructed between the top convolutional layer and the output layer; it enhances the local features of the convolutional-layer outputs while learning the binary codes by optimizing an objective function. The proposed method works well with existing deep learning models such as the Alex neural network (AlexNet), the visual geometry group neural network (VGGNet), and the residual neural network (ResNet), and is easier to train. Extensive experiments show that the proposed method achieves better retrieval performance than state-of-the-art methods.
As a special type of distributed denial of service (DDoS) attack, low-rate DDoS (LDDoS) attacks have a low average rate and strong concealment, so they are hard to detect by traditional approaches. Through signal analysis, a new identification approach based on wavelet decomposition and a sliding detection window is proposed. Wavelet coefficients extracted from the traffic are used for multifractal analysis of the traffic over different time scales. A sliding window, borrowed from flow control technology, is designed to distinguish normal from abnormal traffic in real time. Experimental results show that the proposed approach has advantages in detection accuracy and timeliness.
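As a rough illustration of the detection idea described above (not the authors' implementation), the following Python sketch computes wavelet detail-coefficient energies over a sliding window of traffic samples and flags windows that deviate strongly from a slowly updated baseline; the wavelet, window length, threshold, and toy traffic trace are all assumptions for demonstration.

    # Illustrative sketch: wavelet-energy features with a sliding detection window.
    # Assumes the PyWavelets package (pywt); all parameters are placeholders.
    import numpy as np
    import pywt

    def wavelet_energy(window, wavelet="db4", level=3):
        """Energy of detail coefficients at each decomposition scale."""
        coeffs = pywt.wavedec(window, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs[1:]])  # skip approximation

    def detect(traffic, win=256, step=64, threshold=2.5):
        """Flag windows whose detail energy deviates strongly from a running baseline."""
        alarms, baseline = [], None
        for start in range(0, len(traffic) - win + 1, step):
            e = wavelet_energy(traffic[start:start + win])
            if baseline is None:
                baseline = e
                continue
            score = np.linalg.norm((e - baseline) / (baseline + 1e-9))
            alarms.append((start, score > threshold))
            baseline = 0.9 * baseline + 0.1 * e  # slow update of the normal profile
        return alarms

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        traffic = rng.poisson(100, 4096).astype(float)
        traffic[2000:2200] += 80 * (np.arange(200) % 50 < 5)  # toy low-rate burst pattern
        print([a for a in detect(traffic) if a[1]])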
Universality is an important property in software and hardware design. This paper concentrates on the universality of quantum secure multi-party computation (SMC) protocols. First, an in-depth study of universality is conducted, and then a nearly universal protocol is proposed using the Greenberger-Horne-Zeilinger (GHZ)-like state and the stabilizer formalism. The protocol can solve quantum SMC problems that can be reduced to modulo subtraction, and its steps are simple and effective. Second, three quantum SMC protocols based on the proposed universal protocol are presented: a quantum private comparison (QPC) protocol, a quantum millionaire (QM) protocol, and a quantum multi-party summation (QMS) protocol. These protocols are given as examples to illustrate universality. Third, the example protocols are analyzed: their correctness, fairness, and efficiency are confirmed, and the proposed universal protocol is secure against both inside and outside attacks. Finally, the experimental results of the example protocols on the International Business Machines (IBM) quantum platform are consistent with the theoretical results. Our research indicates that the protocol is universal to a certain degree and easy to perform.
Deep learning (DL) requires a massive volume of data to train the network. Insufficient training data causes serious overfitting and degrades classification accuracy. To solve this problem, a method for automatic modulation classification (AMC) using AlexNet with data augmentation is proposed. Three data augmentation methods are considered, i.e., random erasing, CutMix, and rotation. Firstly, modulated signals are converted into constellation representations, and all constellation representations are divided into a training dataset and a test dataset; the training dataset is then augmented by the three methods. Secondly, the optimal execution probabilities for random erasing and CutMix are determined; simulation results show that both perform best with an execution probability of 0.5. Thirdly, the performance of the three data augmentation methods is evaluated. Simulation results demonstrate that all augmentation methods improve the classification accuracy; rotation improves the classification accuracy by 13.04% when the signal-to-noise ratio (SNR) is 2 dB, and among the three methods rotation outperforms random erasing and CutMix when the SNR is greater than -6 dB. Finally, compared with other classification algorithms, the random erasing, CutMix, and rotation used in this paper achieve significantly improved performance; notably, the classification accuracy reaches 90.5% at an SNR of 10 dB.
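To make the rotation augmentation concrete, the short Python sketch below applies fixed phase rotations to complex baseband I/Q samples before they would be rendered as constellation images; the angle set, QPSK toy signal, and noise level are illustrative assumptions rather than the exact pipeline of the paper.

    # Illustrative sketch of rotation-based augmentation for modulated I/Q samples.
    import numpy as np

    def rotate_iq(iq, angle_rad):
        """Rotate complex baseband samples by a fixed angle (phase rotation)."""
        return iq * np.exp(1j * angle_rad)

    def augment_rotations(iq, angles_deg=(90, 180, 270)):
        """Return the original samples plus rotated copies (natural for QAM/PSK grids)."""
        out = [iq]
        for a in angles_deg:
            out.append(rotate_iq(iq, np.deg2rad(a)))
        return out

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        qpsk = (rng.choice([-1, 1], 1024) + 1j * rng.choice([-1, 1], 1024)) / np.sqrt(2)
        noisy = qpsk + 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
        augmented = augment_rotations(noisy)
        print(len(augmented), "views of the same signal")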
Proxy re-encryption (PRE) allows users to transfer decryption rights to a data requester via a proxy. Because current lattice-based PRE (LPRE) schemes cannot achieve chosen-ciphertext attack (CCA) security, an identity-based PRE (IB-PRE) scheme with ciphertext evolution based on the learning with errors over rings (RLWE) assumption (IB-LPRE-CE) is proposed. IB-LPRE-CE generates the private key using the preimage sampling algorithm (SamplePre) and completes the ciphertext delegation using the re-encryption algorithm. In addition, for the problem of ciphertext delegation change caused by long-term secret key updates, the idea of PRE is used to complete ciphertext evolution and the modification of ciphertext delegation, which improves the efficiency of secure data sharing. In terms of security, IB-LPRE-CE is CCA-secure under the RLWE assumption. Compared with current LPRE schemes, IB-LPRE-CE offers stronger security and improves the computational efficiency of the encryption algorithm.
Video description aims to generate descriptive natural language for videos. Inspired by the deep neural networks (DNNs) used in machine translation, video description (VD) models apply a convolutional neural network (CNN) to extract video features and a long short-term memory (LSTM) network to generate descriptions. However, some models generate incorrect words and syntax, probably because previous models only apply an LSTM to generate sentences and thus learn insufficient linguistic information. To solve this problem, an end-to-end DNN model incorporating subject, verb and object (SVO) supervision is proposed. Experimental results on a publicly available dataset, Youtube2Text, show that our model achieves a consensus-based image description evaluation (CIDEr) score of 58.4%, outperforming the mean pool and video description with first feed (VD-FF) models and demonstrating the effectiveness of SVO supervision.
Digital rights management (DRM) applications are usually confronted with threats such as key extraction, code lifting, and illegal distribution. White-box cryptography aims at protecting software implementations of cryptographic algorithms and can be employed in DRM applications to provide security. A general DRM solution based on white-box cryptography is proposed to address the three threats mentioned above. The method constructs a general perturbation-enabled white-box compiler for lookup-table-based white-box block ciphers, such that the white-box program generated by this compiler provides traceability along with resistance against key extraction and code lifting. To obtain a traceable white-box program, a slight perturbation is hidden in the lookup table to perturb its decryption functionality, so that each user can be identified. Security analysis and experimental results show that the proposed DRM solution is secure and practical.
To address the poor performance of low-illumination enhancement algorithms on unevenly illuminated images, a low-light image enhancement (LIME) algorithm based on a residual network is proposed. The algorithm constructs a deep network that uses residual modules to extract image feature information and semantic modules to extract image semantic information at different levels. Moreover, a composite loss function is designed for low-illumination image enhancement, which dynamically evaluates the loss of an enhanced image from three factors: color, structure, and gradient. This ensures that the model enhances image features correctly according to the image semantics, so that the enhancement results better match human visual experience. Experimental results show that, compared with state-of-the-art algorithms, the semantic-driven residual low-light network (SRLLN) effectively improves the quality of low-illumination images and achieves better subjective and objective evaluation indexes on different types of images.
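The sketch below illustrates one plausible form of such a composite loss, combining an L1 color term, a simple per-pixel structure term, and a gradient-consistency term; the specific terms and weights are assumptions for demonstration and are not claimed to be the SRLLN loss.

    # Illustrative composite loss combining color, structure, and gradient terms.
    # The three terms and their weights are assumptions for demonstration only.
    import numpy as np

    def gradients(img):
        """Simple forward-difference gradients of a single-channel image."""
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    def composite_loss(pred, target, w_color=1.0, w_struct=0.5, w_grad=0.5):
        """pred, target: float arrays in [0, 1] with shape (H, W, 3)."""
        color = np.mean(np.abs(pred - target))                       # color fidelity (L1)
        # structure: 1 - cosine similarity of per-pixel RGB vectors
        num = np.sum(pred * target, axis=-1)
        den = np.linalg.norm(pred, axis=-1) * np.linalg.norm(target, axis=-1) + 1e-8
        struct = np.mean(1.0 - num / den)
        # gradient consistency on the luminance channel
        lum_p, lum_t = pred.mean(axis=-1), target.mean(axis=-1)
        gpx, gpy = gradients(lum_p)
        gtx, gty = gradients(lum_t)
        grad = np.mean(np.abs(gpx - gtx)) + np.mean(np.abs(gpy - gty))
        return w_color * color + w_struct * struct + w_grad * grad

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        t = rng.random((64, 64, 3))
        p = np.clip(t + 0.05 * rng.standard_normal(t.shape), 0, 1)
        print(round(composite_loss(p, t), 4))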
The fifth generation mobile communication (5G) system can provide Gbit/s data rates through massive multiple-input multiple-output (MIMO) combined with the emerging use of millimeter wavelengths in small heterogeneous cells. This paper develops an energy-efficiency-based multi-user hybrid beamforming scheme for downlink millimeter wave (mmWave) massive MIMO systems. To make better use of the directivity gains of analog beamforming and the flexible baseband processing of digital beamforming, the analog beamforming selects the optimal beam that maximizes the power of the target user and minimizes the interference to all other users, while the digital beamforming maximizes the energy efficiency of the target user with a zero-gradient-based approach. Simulation results show that the proposed algorithm provides better bit error rate (BER) performance than traditional hybrid beamforming and clearly improves the sum rate as the number of users increases, demonstrating that multi-user MIMO (MU-MIMO) is a strong candidate for mmWave massive MIMO communication systems. Furthermore, the analog beamforming mitigates inter-user interference more effectively by selecting the optimal beam, and the digital beamforming greatly improves system performance through flexible baseband processing.
Due to the high cost and power consumption of radio frequency (RF) chains, it is difficult to implement fully digital beamforming in millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. Fortunately, hybrid beamforming (HBF) overcomes these limitations by splitting the beamforming process between the analog and digital domains. In recent works, most HBF schemes improve spectral efficiency based on greedy algorithms; however, the iterative process in greedy algorithms leads to high computational complexity. In this paper, a new method is proposed to achieve a reasonable compromise between complexity and performance. The algorithm uses the low-complexity Gram-Schmidt method to orthogonalize the candidate vectors; with the orthogonal candidate matrix, the slow greedy search is avoided and the RF vectors are found simultaneously without any iteration. Additionally, phase extraction is applied to satisfy the element-wise constant-magnitude constraint on the RF matrix. Simulation results demonstrate that the new HBF algorithm substantially reduces complexity while maintaining good performance.
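A minimal sketch of the orthogonalize-then-select idea is given below: candidate beam vectors are orthonormalized with Gram-Schmidt, RF vectors are chosen by projecting an unconstrained beamformer onto the orthogonal candidates, and phase extraction enforces the constant-magnitude constraint. The selection rule, dimensions, and random data are assumptions, not the paper's exact algorithm.

    # Illustrative sketch: Gram-Schmidt orthogonalization of candidate beamforming
    # vectors followed by phase extraction for a constant-magnitude RF matrix.
    import numpy as np

    def gram_schmidt(candidates):
        """Orthonormalize the columns of a complex candidate matrix."""
        q = np.zeros_like(candidates, dtype=complex)
        for k in range(candidates.shape[1]):
            v = candidates[:, k].astype(complex)
            for j in range(k):
                v -= (q[:, j].conj() @ candidates[:, k]) * q[:, j]
            q[:, k] = v / (np.linalg.norm(v) + 1e-12)
        return q

    def select_rf_vectors(f_opt, candidates, n_rf):
        """Pick RF vectors by projecting an unconstrained beamformer on orthogonal candidates."""
        q = gram_schmidt(candidates)
        scores = np.linalg.norm(q.conj().T @ f_opt, axis=1)           # per-candidate power
        idx = np.argsort(scores)[::-1][:n_rf]
        f_rf = q[:, idx]
        return np.exp(1j * np.angle(f_rf)) / np.sqrt(f_rf.shape[0])   # phase extraction

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        n_tx, n_cand, n_rf, n_s = 32, 16, 4, 2
        cands = np.exp(1j * 2 * np.pi * rng.random((n_tx, n_cand)))   # DFT-like toy beams
        f_opt = rng.standard_normal((n_tx, n_s)) + 1j * rng.standard_normal((n_tx, n_s))
        print(select_rf_vectors(f_opt, cands, n_rf).shape)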
Cloud download service, a new application that downloads requested content offline and keeps it in cloud storage until users retrieve it, has recently become a trend attracting millions of users in China. Facing the dilemma between the growth of download requests and the limitation of storage resources, cloud servers have to design an efficient resource allocation scheme to enhance storage utilization while satisfying users' needs such as a short download time. Modeling a user's churn behavior as a Markov chain process, it is found that a proper allocation of download speed can optimize storage resource utilization. Accordingly, two dynamic resource allocation schemes, a speed switching (SS) scheme and a speed increasing (SI) scheme, are proposed. Both theoretical analysis and simulation results prove that our schemes can effectively reduce storage resource consumption and keep the download time short enough for a good user experience.
In order to meet various challenges in the Internet of things (IoT), such as identity authentication, privacy preservation of distributed data, and network security, the integration of blockchain and IoT has become a new trend in recent years. As the key supporting technology of blockchain, the consensus algorithm is a hotspot of distributed system research. At present, research on consensus algorithms mainly focuses on improving throughput and reducing delay. However, when blockchain is applied to IoT scenarios, the storage capacity of lightweight IoT devices is limited, and the normal operation of the blockchain system cannot be guaranteed. To solve this problem, an improved version of Raft (Imp Raft) based on Raft and the storage compression consensus (SCC) algorithm is proposed, in which an initialization process and a compression process are added to the Raft workflow. Moreover, a data validation process ensures that blockchain data cannot be tampered with. Experiments and analysis show that the proposed algorithm can effectively reduce the size of the blockchain and the storage burden of lightweight IoT devices.
Aiming at sensor faults of near-space hypersonic vehicles (NSHV), a fault identification method based on the extended state observer and kernel extreme learning machine (ESO-KELM) is proposed in this paper. The method combines a model-based method with a data-driven method. The residual signals, defined as the difference between the ESO output and the sensor measurement, serve as the source for fault diagnosis. The energy of the residual signals is distributed over both low and high frequency bands, whereas the energy of the sensor signal concentrates in the low-frequency bands. Combined with additional features detected by the KELM, the proposed method improves identification accuracy and can also calculate the magnitude of minor faults based on time-frequency analysis. Finally, simulations are performed on the longitudinal channel of the Winged-Cone model published by the National Aeronautics and Space Administration (NASA). The results show the validity of the method and its accuracy in calculating the magnitude of minor faults.
With the recent increase in routing App usage, it is urgent to analyze the impact of the inevitable information delay in routing Apps on traffic. To analyze the negative impact of such delayed routing Apps, this paper investigates the impact of both ideal and delayed routing Apps on traffic. Firstly, the effect of ideal routing Apps is explored with theoretical analysis and derivation based on a macroscopic network model. Then, an extended network model is built to characterize delayed routing. The impact of delayed routing Apps is investigated by simulation experiments that consider the routing usage proportion and the amount of delay. The results demonstrate that ideal navigation does improve traffic efficiency, but in some cases delayed navigation is even worse than no navigation.
Implementing face recognition efficiently on real-world large-scale datasets presents great challenges to existing approaches. The method in this paper learns an identity-distinguishable space for large-scale face recognition in the MSR-Bing image recognition challenge (IRC). Firstly, a deep convolutional neural network (CNN) is used to optimize a 128 B embedding for large-scale face retrieval; the embedding is trained using triplets of aligned face patches from the FaceScrub and CASIA-WebFace datasets. Secondly, the evaluation on MSR-Bing IRC is conducted according to a cross-domain retrieval scheme, and real-time retrieval benefits from K-means clustering performed on the feature space of the training data. Furthermore, large scale similarity learning (LSSL) is applied to the relevant face images to learn a better identity space, and a novel method for selecting similar pairs is proposed for LSSL. Compared with many existing face recognition networks, the proposed model is lightweight and the retrieval method is promising as well.
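For readers unfamiliar with triplet training, the following sketch shows the standard triplet hinge loss on L2-normalized embeddings that this kind of embedding learning relies on; the margin, embedding dimension, and toy data are assumptions.

    # Illustrative triplet-loss computation on L2-normalized embeddings.
    # Margin value and toy data are assumptions; this is not the trained network itself.
    import numpy as np

    def l2_normalize(x, axis=-1):
        return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

    def triplet_loss(anchor, positive, negative, margin=0.2):
        """Hinge loss pushing same-identity pairs closer than different-identity pairs."""
        a, p, n = map(l2_normalize, (anchor, positive, negative))
        d_ap = np.sum((a - p) ** 2, axis=1)
        d_an = np.sum((a - n) ** 2, axis=1)
        return np.mean(np.maximum(d_ap - d_an + margin, 0.0))

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        a = rng.standard_normal((8, 128))           # toy 128-dimensional face embeddings
        p = a + 0.05 * rng.standard_normal(a.shape)
        n = rng.standard_normal(a.shape)
        print(round(triplet_loss(a, p, n), 4))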
Collaborative filtering (CF) is one of the most widely used algorithms in recommender systems, which help users obtain the information they may like. We propose a latent Dirichlet allocation (LDA) model combining time and rating (TR-LDA) for CF. We fit the Ebbinghaus forgetting curve mathematically and introduce time weights based on a time window to capture the impact of time on users' interests. A user's choice of items is influenced not only by his or her interests but also by other users' ratings; according to users' feedback, we derive their rating distribution over items under each interest. Finally, experimental results on the real datasets MovieLens 100K and MovieLens 1M show that the proposed algorithm can effectively predict users' implicit interests and improve recommendation performance.
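A minimal sketch of the time-weighting idea, assuming an exponential-decay approximation of the Ebbinghaus forgetting curve and a simple time window, is shown below; the decay constant and toy ratings are placeholders, not the fitted TR-LDA parameters.

    # Illustrative time weighting inspired by the Ebbinghaus forgetting curve.
    # The decay constant, window length, and rating data are placeholder assumptions.
    import numpy as np

    def forgetting_weight(age_days, strength=30.0):
        """Exponential retention: recent interactions weigh more than old ones."""
        return np.exp(-np.asarray(age_days, dtype=float) / strength)

    def weighted_rating(ratings, ages, window_days=365):
        """Time-weighted average rating over a sliding time window."""
        ratings = np.asarray(ratings, float)
        ages = np.asarray(ages, float)
        mask = ages <= window_days
        w = forgetting_weight(ages[mask])
        return float(np.sum(w * ratings[mask]) / (np.sum(w) + 1e-12))

    if __name__ == "__main__":
        print(weighted_rating([5, 4, 2, 1], [3, 40, 200, 600]))  # older ratings count less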
The accuracy of indoor positioning systems is often affected by non-line-of-sight (NLOS) propagation. In order to improve positioning accuracy in indoor NLOS environments, a method is proposed that uses ultra-wide-band (UWB) technology based on the time of arrival (TOA) principle and combines a Markov chain with fingerprint matching. First, the TOA algorithm is used to locate the target tag. Then, the Markov chain is used to identify whether blocking has occurred and to revise the position result, and fingerprint matching is used to further improve the positioning accuracy. Finally, an experimental system was built to compare the accuracy of the proposed method with the traditional Kalman filter method. The experimental results show that, compared with the traditional Kalman filter method, the proposed method improves positioning accuracy in indoor NLOS environments.
Image fusion is widely used in computer vision and image analysis. Considering that traditional image fusion algorithms have limitations in multi-channel image fusion, a memristor-based multi-channel pulse coupled neural network (M-MPCNN) for image fusion is proposed. Based on a dual-channel pulse coupled neural network (D-PCNN), a novel multi-channel pulse coupled neural network (M-PCNN) is first constructed. Then, an exponentially growing dynamic threshold model is used to improve the pulse generation of the pulse coupled neural network, which not only effectively avoids multiple ignitions but also improves operational efficiency and reduces complexity. At the same time, synchronous capture enhances image edges, which is more conducive to image fusion. Finally, the threshold and synaptic characteristics of pulse coupled neural networks (PCNNs) are realized with a memristor-based pulse generator. Experimental results show that the proposed algorithm fuses multi-source images more effectively than existing state-of-the-art fusion algorithms.
With the rapid development of Internet of things (IoT) technology, it has become a challenge to deal with the increasing number and diverse requirements of IoT services. By combining the burgeoning network function virtualization (NFV) technology with cloud computing and mobile edge computing (MEC), an NFV-enabled cloud-and-edge-collaborative IoT (CECIoT) architecture can efficiently provide flexible service for IoT traffic in the form of service function chains (SFCs) by jointly utilizing edge and cloud resources. In this promising architecture, a difficult issue is how to balance resource and energy consumption in SFC mapping. To overcome this challenge, an intelligent energy-and-resource-balanced SFC mapping scheme is designed in this paper. It takes the comprehensive deployment consumption as the optimization goal and applies a deep Q-learning (DQL)-based SFC mapping (DQLBM) algorithm together with an energy-based topology adjustment (EBTA) strategy to make efficient use of the limited network resources while satisfying users' delay requirements. Simulation results show that the proposed scheme decreases service delay as well as energy and resource consumption.
An artificial rabbit optimization (ARO) algorithm improved by chaotic mapping and Levy flight, called CLARO, is proposed; compared with the traditional ARO algorithm it has better initial population quality and faster convergence. The improvement proceeds from three aspects: a chaotic map is introduced to initialize the population and improve its quality; Levy flight is added in the exploration phase to keep the algorithm from falling into local optima; and the threshold of the energy factor A is optimized to better balance exploration and exploitation. The efficiency of CLARO is tested on a set of 23 benchmark functions by comparing it with ARO and different meta-heuristic algorithms. The comparison experiments conclude that all three improvement strategies enhance the performance of ARO to some extent, with Levy flight providing the most significant improvement. The experimental results show that CLARO achieves better results and faster convergence than the other algorithms, successfully addressing the drawbacks of ARO and being able to face more challenging problems.
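The sketch below shows two of the described building blocks under simple assumptions: logistic-map chaotic initialization of the population and a Levy-flight step generated with Mantegna's method; bounds, population size, and the Levy exponent are illustrative and not the paper's exact settings.

    # Illustrative building blocks: logistic-map population initialization and a
    # Levy-flight perturbation (Mantegna's method). Bounds and sizes are assumptions.
    import numpy as np
    from math import gamma

    def logistic_init(pop, dim, low, high, r=4.0, seed=0.7):
        """Chaotic initialization: iterate the logistic map and scale to the search bounds."""
        x = np.empty((pop, dim))
        c = seed
        for i in range(pop):
            for j in range(dim):
                c = r * c * (1.0 - c)
                x[i, j] = low + c * (high - low)
        return x

    def levy_step(dim, beta=1.5, rng=None):
        """Heavy-tailed step that occasionally makes long jumps out of local optima."""
        rng = rng or np.random.default_rng()
        sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0, sigma, dim)
        v = rng.normal(0, 1, dim)
        return u / np.abs(v) ** (1 / beta)

    if __name__ == "__main__":
        population = logistic_init(pop=20, dim=5, low=-10, high=10)
        print(population.shape, levy_step(5)[:3])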
Gabor features have been shown to be effective for palm vein recognition. This paper presents a novel feature representation method implementing the fusion of local Gabor histograms (FLGH) to improve the accuracy of palm vein recognition systems. A new local descriptor, local Gabor principal differences patterns (LGPDP), encodes the Gabor magnitude using the local maximum difference (LMD) operator, while the corresponding Gabor phase patterns are encoded by local Gabor exclusive OR (XOR) patterns (LGXP). Fisher's linear discriminant (FLD) is then used to reduce the dimensionality of the feature representation, and the low-dimensional Gabor magnitude and phase feature vectors are finally fused to enhance accuracy. Experimental results on the Institute of Automation, Chinese Academy of Sciences (CASIA) database show that the proposed FLGH method achieves better performance by utilizing score-level fusion. The equal error rate (EER) is 0.08%, which outperforms other conventional palm vein recognition methods (with EERs ranging from 0.16% to 2.87%), e.g., the Laplacian palm, minutiae feature, Hessian phase, Eigenvein, local invariant features, mutual foreground local binary patterns (LBP), and multi-sampling feature fusion methods.
Edges are the intrinsic geometric structure of an image, and edge detection methods are key technologies in image processing. In this paper, a multi-scale image edge detection method is proposed to effectively extract image geometric features. A source image is decomposed into high-frequency directional sub-band coefficients and low-frequency sub-band coefficients by the non-subsampled contourlet transform (NSCT). The high-frequency sub-band coefficients are used to detect the abundant details of image edges with the modulus maxima (MM) algorithm, and the low-frequency sub-band coefficients are used to detect the basic contour lines of image edges with a pulse coupled neural network (PCNN). The final edge detection image is reconstructed from the edge information detected at different scales and in different directional sub-bands in the NSCT domain. Experimental results demonstrate that the proposed method outperforms several state-of-the-art image edge detection methods in both visual effect and objective evaluation.
Ultra-dense networks (UDNs) are a promising solution to meet the exponential increase in mobile data traffic, but the ultra-dense deployment of cells inevitably brings complicated inter-cell interference (ICI), and existing interference coordination schemes cannot be directly applied. To minimize the aggregate interference of each small cell, this paper formulates the problem as a distributed non-cooperative game-based interference coordination scheme in ultra-dense networks that considers the real demand rate of each small cell user equipment (SUE), and proves it to be a potential game. An improved no-regret learning algorithm is introduced to converge to the Nash equilibrium (NE) of the formulated game. Simulation results show that the proposed scheme achieves better performance than existing schemes.
To improve the safety and effectiveness of mobile robot path planning, a slime mould rapid-expansion random tree (S-RRT) algorithm is proposed. This path planning algorithm is designed based on a biological optimization model and the rapid-expansion random tree (RRT) algorithm. The S-RRT algorithm uses an optimal-direction function to constrain the generation of new nodes; by controlling the generation direction of new nodes, an optimized path can be achieved, so that path oscillation is reduced and planning time is shortened. Theoretical analysis proves that the S-RRT algorithm overcomes the zigzag-path limitation of the RRT algorithm, and experiments show that S-RRT is superior to RRT in terms of safety and efficiency.
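A minimal sketch of direction-constrained tree extension, assuming a simple goal-biased blend of the random-sample direction and the goal direction, is given below; it is meant only to illustrate how constraining the growth direction reduces path oscillation, not to reproduce the S-RRT algorithm.

    # Illustrative RRT extension step with a directional bias toward the goal.
    # The blend weight, step size, and collision model are toy assumptions.
    import numpy as np

    def steer(nearest, sample, goal, step=0.5, bias=0.4):
        """Blend the random-sample direction with the goal direction before stepping."""
        d_rand = sample - nearest
        d_goal = goal - nearest
        d_rand = d_rand / (np.linalg.norm(d_rand) + 1e-12)
        d_goal = d_goal / (np.linalg.norm(d_goal) + 1e-12)
        direction = (1 - bias) * d_rand + bias * d_goal
        direction = direction / (np.linalg.norm(direction) + 1e-12)
        return nearest + step * direction

    def rrt(start, goal, n_iter=500, collides=lambda p: False, rng=None):
        """Grow a tree of collision-free nodes until the goal region is reached."""
        rng = rng or np.random.default_rng()
        goal = np.asarray(goal, float)
        nodes = [np.asarray(start, float)]
        for _ in range(n_iter):
            sample = rng.uniform(0, 10, 2)
            nearest = min(nodes, key=lambda q: np.linalg.norm(q - sample))
            new = steer(nearest, sample, goal)
            if not collides(new):
                nodes.append(new)
                if np.linalg.norm(new - goal) < 0.5:
                    break
        return nodes

    if __name__ == "__main__":
        tree = rrt((0.0, 0.0), (9.0, 9.0))
        print(len(tree), np.round(tree[-1], 2))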
A wireless powered communication network (WPCN) assisted by an intelligent reflecting surface (IRS) is proposed in this paper, in which information is transferred using non-orthogonal multiple access (NOMA). In this system, to ensure that the hybrid access point (H-AP) can correctly decode user information via successive interference cancellation (SIC), the transmit power of each user needs to satisfy a certain threshold corresponding to the SIC constraints. Therefore, when the number of users transmitting information simultaneously increases, the system performance is greatly restricted. To minimize the influence of the SIC constraints on system performance, users are first clustered, then each cluster harvests energy from the H-AP, and finally users transmit information based on NOMA with the assistance of the IRS. Specifically, this paper aims to maximize the sum throughput of the system by jointly optimizing the IRS beamforming and the system resource allocation. The semi-definite relaxation (SDR) algorithm is employed to alternately optimize the IRS beamforming in each time slot, and the joint optimization problem over users' transmit power and time is transformed into two optimal time allocation sub-problems. The numerical results show that the proposed optimization scheme effectively improves the sum throughput of the system, and further reveal the positive impact of the IRS on improving the sum throughput.
Balancing time, cost, and quality is crucial in intelligent manufacturing. However, finding the optimal value of production parameters is a challenging non-deterministic polynomial (NP)-hard problem, and the actual production process is typically multi-stage and parallel. Therefore, aiming at the difficult problem of multi-stage nonlinear production process optimization, this paper proposes a workflow optimization algorithm based on virtualization and nonlinear production quality under time constraints (T-OVQT). The proposed algorithm first abstracts the actual production process into a virtual workflow model divided into three layers: the bottom production-process collection layer, the middle service-node partial-order composition layer, and the top virtual-node collection layer. Then, virtualization technology is used to reconstruct the node set and divide the task intervals. The optimal solution is obtained through inverse iterative normalization and forward scheduling, and the global optimal solution is obtained by algorithm integration. Experimental results demonstrate that this algorithm meets actual production requirements better than the traditional minimum critical path (MCP) algorithm.
Terahertz and Microwave Microsystem
With the wide application of fifth-generation mobile communication system (5G) technology, wireless communication equipment tends to develop toward miniaturization, high frequency, and low loss. In this paper, a surface acoustic wave (SAW) filter with a center frequency of 3.5 GHz is designed. Firstly, the acoustic waveguide structure for longitudinal leaky SAW (LLSAW) excitation is determined, and a two-dimensional (2D) theoretical model of the device is established in COMSOL Multiphysics. Secondly, the influence of the electrode parameters on the performance of the device is studied, and the electrode parameters are optimized on this basis; by setting the device structure parameters reasonably, spurious responses in the passband can be effectively suppressed. Finally, the mirror T-structure LLSAW filter achieves a center frequency of 3.536 GHz, an insertion loss of -1.414 dB, a -3 dB bandwidth of 276 MHz, and an out-of-band rejection greater than 30 dB.
A gallium nitride (GaN) high electron mobility transistor (HEMT) with a symmetrical structure used as a control device is discussed in this paper. An equivalent circuit model is proposed on the basis of the physical and electrical properties of the GaN HEMT device. A transistor with a 0.5 μm gate length and a 6 × 125 μm gate width is fabricated to verify the model; owing to its ON and OFF states, it can be treated as a single-pole single-throw (SPST) switch. The measurement results show good agreement with the simulation results, which demonstrates the effectiveness of the proposed model.
The trustor and the trustee may have had no previous interaction, so intermediate nodes trusted by both of them are selected to transit trust between them. However, only a few intermediate nodes are key nodes that can significantly affect the transitivity of trust, and to the best of our knowledge there are no algorithms for finding the key nodes of trust transitivity. To solve this problem, the concept of trust is presented and a comprehensive model of the transitivity of trust is provided. Then, the key nodes search (KNS) algorithm is proposed to find the key nodes of trust transitivity. The KNS algorithm is verified on three real social network datasets, and the results show that it can find all the key nodes for each node in directed, weighted, and non-fully connected social Internet of things (SIoT) networks.
Updatable block-level message-locked encryption (MLE) can efficiently update encrypted data, and public auditing can verify the integrity of cloud storage data by utilizing a third-party auditor (TPA). However, few schemes support both updatable block-level deduplication and public auditing. In this paper, an updatable block-level deduplication scheme with efficient auditing is proposed based on a tree-based authenticated structure. In the proposed scheme, the cloud server (CS) performs block-level deduplication, and the TPA carries out the integrity auditing tasks. When a data block is updated, the ciphertext and auditing tags can be updated efficiently. The security analysis demonstrates that the proposed scheme achieves privacy under chosen-distribution attacks in secure deduplication and resists uncheatable chosen-distribution attacks (UNC-CDA) in proof of ownership (PoW). Furthermore, the integrity auditing process is proven secure under adaptive chosen-message attacks. Compared with previous relevant schemes, the proposed scheme achieves better functionality and higher efficiency.
In order to improve the accuracy of text similarity calculation, this paper presents a sentence-vector-based text similarity function, part of speech and word order-smooth inverse frequency (PO-SIF), which optimizes the classical SIF method in two aspects: part of speech and word order. The classical SIF algorithm calculates sentence similarity by obtaining a sentence vector through weighting and noise reduction; however, the choice of weighting or noise-reduction method affects both the efficiency and the accuracy of the similarity calculation. In our proposed PO-SIF, the weight parameters of the SIF sentence vector are first updated by a part-of-speech subtraction factor to determine the most crucial words. Furthermore, PO-SIF calculates sentence vector similarity taking word order into account, which overcomes the drawback of similarity analysis based mostly on word frequency. The experimental results validate that the proposed PO-SIF improves the accuracy of text similarity calculation.
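For context, the sketch below shows the classical SIF sentence-vector computation that PO-SIF builds on: frequency-based word weighting followed by removal of the first principal component. The toy vocabulary, word frequencies, and smoothing constant are assumptions, and the part-of-speech and word-order factors of PO-SIF are not reproduced.

    # Illustrative sketch of the classical SIF sentence vector:
    # frequency-based weighting plus removal of the first principal component.
    import numpy as np

    def sif_embeddings(sentences, word_vecs, word_freq, a=1e-3):
        """sentences: lists of tokens; word_vecs: token -> vector; word_freq: token -> probability."""
        dim = len(next(iter(word_vecs.values())))
        emb = np.zeros((len(sentences), dim))
        for i, sent in enumerate(sentences):
            toks = [t for t in sent if t in word_vecs]
            if not toks:
                continue
            w = np.array([a / (a + word_freq.get(t, 1e-5)) for t in toks])
            emb[i] = (w[:, None] * np.array([word_vecs[t] for t in toks])).mean(axis=0)
        # remove the projection onto the first singular vector (common component)
        u = np.linalg.svd(emb, full_matrices=False)[2][0]
        return emb - np.outer(emb @ u, u)

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        vocab = {w: rng.standard_normal(50) for w in ["cat", "sat", "mat", "dog", "ran"]}
        freq = {w: 0.01 for w in vocab}
        e = sif_embeddings([["cat", "sat", "mat"], ["dog", "ran"]], vocab, freq)
        sim = e[0] @ e[1] / (np.linalg.norm(e[0]) * np.linalg.norm(e[1]) + 1e-12)
        print(round(float(sim), 4))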
This paper proposes a robust adaptive filter based on an exponent sine (ExpSin) cost function to improve robustness against Gaussian and multiple types of non-Gaussian noise when dealing with time-varying/time-invariant linear systems. A variable step-size (VSS)-ExpSin algorithm is further derived. Besides, the step size, convergence, and steady-state performance of the proposed algorithm are analyzed and validated experimentally. Monte Carlo simulation results of linear system identification illustrate the principle and efficiency of the proposed adaptive filtering algorithm, and suggest that it has superior performance when estimating unknown linear systems under multiple types of measurement noise.
Mobile manipulators are used in a variety of fields because of their flexibility and maneuverability. The path planning capability of a mobile manipulator is one of the important indicators of its performance, but it is greatly challenged on maps with narrow channels. To address this problem, an improved hierarchical motion planner (IHMP) is proposed, which consists of a two-dimensional (2D) path planner for the mobile base and a three-dimensional (3D) trajectory planner for the on-board manipulator. Firstly, a hybrid sampling strategy is proposed, which can reduce the number of invalid nodes in the generated probabilistic roadmap: a bridge test locates the narrow channel areas, a Gaussian sampler is deployed in these areas and on their boundaries, and a random sampler is deployed in the remaining areas. The trajectory planner for the on-board manipulator then generates a collision-free and safe trajectory in the narrow channel in collaboration with the 2D path planner. The experimental results show that IHMP is effective for mobile manipulator motion planning in complex static environments, especially in narrow channels.
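A two-dimensional sketch of the hybrid sampling idea is given below, assuming a toy map with a single narrow slot: a bridge test marks configurations inside narrow passages, a Gaussian sampler densifies those areas, and a uniform sampler covers open areas. The map, radii, and sampling ratio are illustrative assumptions, not the IHMP implementation.

    # Illustrative 2D sketch of hybrid sampling with a bridge test for narrow passages.
    import numpy as np

    rng = np.random.default_rng(6)

    def in_collision(p):
        """Toy map: vertical wall at x in [0.45, 0.55] with a free slot at y in [0.45, 0.55]."""
        return 0.45 <= p[0] <= 0.55 and not (0.45 <= p[1] <= 0.55)

    def bridge_test(p, sigma=0.08, tries=10):
        """p passes if it is free but lies midway between two nearby colliding points."""
        if in_collision(p):
            return False
        for _ in range(tries):
            d = rng.normal(0, sigma, 2)
            if in_collision(p + d) and in_collision(p - d):
                return True
        return False

    def hybrid_samples(n, narrow_ratio=0.5, sigma=0.05):
        """Mix Gaussian samples near narrow-passage seeds with uniform samples elsewhere."""
        seeds = [q for q in rng.random((500, 2)) if bridge_test(q)]  # locate narrow areas
        pts = []
        while len(pts) < n:
            if seeds and rng.random() < narrow_ratio:
                q = np.clip(seeds[rng.integers(len(seeds))] + rng.normal(0, sigma, 2), 0, 1)
            else:
                q = rng.random(2)                                    # uniform in open areas
            if not in_collision(q):
                pts.append(q)
        return np.array(pts)

    if __name__ == "__main__":
        print(hybrid_samples(200).shape)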
Most existing handover decision system (HDS) designs are monolithic, resulting in high computational cost and load imbalance across the network. A novel modular handover algorithm with a comprehensive load index is proposed for 5th generation (5G) heterogeneous networks (HetNets). In this paper, the handover parameters serving as the basis for handover are classified into a network quality of service (QoS) module, a user preference (UP) module, and a degree of satisfaction (DS) module according to the new modular HDS design. To optimize the switching process, the comprehensive network load index is derived using the triangle module fusion operator. Compared with existing handover algorithms, the simulation results indicate that the proposed algorithm reduces the handover frequency and maintains user satisfaction at a higher level; meanwhile, owing to its block-wise calculation, it improves execution time by about 1.4 s.
To extract and express the knowledge hidden in information systems, the discernibility matrix and its extensions have been introduced and applied successfully in many real-life applications. The binary discernibility matrix, as a representative approach, has many interesting properties and has been rapidly developed to find intuitive and easy-to-understand knowledge. However, at present the binary discernibility matrix is mainly adopted in complete information systems, and how to achieve attribute reduction using a binary discernibility matrix in incomplete information systems remains a challenging topic. A generalized binary discernibility matrix is developed for a number of representative extended rough set models that deal with incomplete information systems. Some useful properties and criteria are introduced for judging the attribute core and the attribute relative reduction. Thereafter, a new algorithm is formulated that computes the attribute core and attribute relative reduction based on the generalized binary discernibility matrix. This algorithm is suitable not only for consistent information systems but also for inconsistent information systems. The feasibility of the proposed methods is demonstrated by worked examples and experimental analysis.
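To illustrate the underlying structure, the sketch below builds an ordinary binary discernibility matrix for a small complete decision table and reads the attribute core from it; the toy table is an assumption, and the generalization to incomplete systems described in the paper is not reproduced.

    # Illustrative sketch: a binary discernibility matrix for a small decision table.
    import numpy as np
    from itertools import combinations

    def binary_discernibility_matrix(data, decision):
        """Rows: object pairs with different decisions; entry is 1 if the attribute discerns the pair."""
        rows, pairs = [], []
        for i, j in combinations(range(data.shape[0]), 2):
            if decision[i] != decision[j]:
                rows.append((data[i] != data[j]).astype(int))
                pairs.append((i, j))
        return np.array(rows), pairs

    def attribute_core(matrix):
        """An attribute that alone discerns some pair belongs to the core."""
        core = set()
        for row in matrix:
            if row.sum() == 1:
                core.add(int(np.argmax(row)))
        return sorted(core)

    if __name__ == "__main__":
        # columns: a1, a2, a3 (condition attributes); d: decision attribute
        data = np.array([[1, 0, 2], [1, 1, 2], [0, 1, 1], [0, 0, 1]])
        d = np.array([0, 1, 1, 0])
        mat, pairs = binary_discernibility_matrix(data, d)
        print(mat)
        print("core attributes (by column index):", attribute_core(mat))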
The application of frequency selective surfaces (FSSs) is limited by large area, narrow bandwidth, low stopband suppression, and large passband ripple. A method for designing a high-order wideband miniaturized-element frequency selective surface (MEFSS) with capacitive loading is introduced. The proposed structure is composed of multiple sub-wavelength interdigital capacitive layers and sub-wavelength inductive wire grids separated by dielectric substrates. A simple equivalent circuit model, composed of short transmission lines coupled with shunt inductors and capacitors, is presented for this structure. Using the equivalent circuit model and an electromagnetic (EM) model, an analytical synthesis procedure is developed that can synthesize the MEFSS from its desired system-level performance indicators, such as the center frequency of operation, bandwidth, and stopband suppression. Using this procedure, a prototype of the proposed MEFSS with a third-order bandpass response, a center frequency of 2.75 GHz, and a fractional bandwidth of 8% is designed, fabricated, and measured. The measurement results confirm the theoretical predictions and the design procedure, and demonstrate that the proposed MEFSS has a stable frequency response with respect to the angle of incidence of the EM wave within ±30°, with an in-band return loss greater than 18 dB and a stopband rejection greater than 25 dB at 3.2 GHz.
Given the large volume of video content and the diversity of user attention, it is of great importance to understand the characteristics of online video popularity for technological, economic and social reasons. In this paper, based on data collected from Youku, a leading online video service provider in China, the dynamics of online video popularity are analyzed in depth from four key aspects: the overall popularity distribution, individual popularity distributions, popularity evolution patterns, and the early-future popularity relationship. How the popularity of a set of newly uploaded videos is distributed throughout the observation period is studied first. Then the notion of active days is proposed, and the per-day and per-hour popularity distributions of individual videos are carefully studied. Next, how the popularity of an individual video evolves over time is investigated; evolution patterns are defined according to the number and temporal locations of popularity bursts in order to describe the popularity growth trend. Finally, the linear relationship between early and future video popularity is examined on a log-log scale. This relationship is found to be largely affected by the popularity evolution patterns, so specialized models are proposed to describe the correlation according to these patterns. Experimental results show that the specialized models fit the correlation better than a general model. The analysis results in our work can provide direct practical help for interested parties of online video services, such as service providers, online advertisers, and network operators.
A novel deep reinforcement learning-based steering control method for autonomous vehicles is proposed. A distortionless compression method for the action space is presented, and convolutional neural networks (CNNs) are designed to serve as the action policy. Driver experience is investigated and modeled to optimize the exploration of new actions. Experimental results show that the proposed algorithm has better robustness and smoothness, and is applicable to different roads, velocities and wire-control systems.
Integrated Circuit Design
This paper presents a wideband variable gain amplifier (VGA) featuring a decibel-linear gain control characteristic. The decibel-linear gain control function is realized using two VGA cells and a control signal converter, and the bandwidth is extended using a cascode architecture together with an active inductive load. To achieve small parasitics and low area, direct current (DC) coupling is adopted in the circuit, while a DC offset cancellation circuit (DCOC) is introduced to cancel the DC offset. Fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, the chip occupies an area of 0.53 mm × 0.48 mm (including pads) and draws a total current of 9 mA from a 1.8 V supply. The measurement results show that the gain of the VGA varies from -40 dB to 18 dB as the control voltage varies from 0 to 1.8 V, resulting in a total gain control range of 58 dB, and the 3 dB bandwidth is larger than 260 MHz at maximum gain.
A joint hybrid beamforming and power splitting (JHBPS) design problem for simultaneous wireless information and power transfer (SWIPT) in millimeter-wave (mmWave) systems is studied. In the considered scenario, a multi-antenna base station (BS) transfers information and energy simultaneously to multiple single-antenna receivers; the BS adopts a hybrid digital and analog beamforming architecture to reduce hardware cost, and the receivers split the acquired signals with power splitters for either information decoding (ID) or energy harvesting (EH). The aim is to minimize the total transmission power by jointly designing the hybrid beamforming and the power splitting (PS) ratios under ID and EH requirements. Since the analog and digital beamformers are multiplied together, it is difficult to obtain the optimal hybrid beamformer directly, so a two-stage algorithm is proposed. In the first stage, the optimal beamformer and PS ratios are obtained by solving the joint transmission beamforming and PS design problem; in the second stage, the optimal beamformer is approximated by the product of an analog beamformer and a digital beamformer. The superiority of the proposed algorithm over existing algorithms is demonstrated through simulations, and the effectiveness of the approximation is verified.
The harm caused by malware in cloud computing environments is increasingly serious. Traditional anti-virus software is itself in danger of being attacked when deployed in virtual machines on a large scale, and its performance overhead tends to be unacceptable to tenants. In this paper, a method for scanning malicious programs from outside the virtual machine is proposed and a prototype is implemented. The method maps the memory of the virtual machine to the host machine so that the host can access it; the user space and kernel space of the virtual machine memory are then analyzed semantically, and suspicious processes are scanned against a signature database. Experimental results show that malicious programs can be effectively scanned from outside the virtual machine with a low performance impact on the virtual machine, meeting the needs of tenants.
The color, shape, and other appearance characteristics of the flames emitted by different engines differ. In order to make a preliminary judgment about the category of the device a flame belongs to by studying its exterior characteristics, this paper uses the flames of matches, lighters, and candles to simulate different types of flames, with the goal of locating and classifying flames by detecting their characteristics with an object detection algorithm. First, images of the different types of fire are collected to build the experimental dataset. The MMDetection toolbox is then used to build several object detection frameworks, in which the dataset is trained and tested, and the object detection model suitable for this kind of problem is selected through analysis of the evaluation indexes. The chosen model is a ResNet50-based faster region-based convolutional neural network (Faster R-CNN), whose mean average precision (mAP) is 93.6%. Besides, after cropping the detected flames from the object detection results, a similarity fusion algorithm is used to aggregate and classify the three types of flames. Finally, the color components are analyzed to obtain the red, green, blue (RGB) color histograms of the three flames.
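The following sketch shows one simple way to compare flame crops by color, assuming normalized RGB histograms and histogram intersection as the similarity measure; the bin count and toy image crops are placeholders, not the paper's similarity fusion algorithm.

    # Illustrative sketch: RGB histogram extraction and a simple similarity measure.
    import numpy as np

    def rgb_histogram(img, bins=32):
        """img: uint8 array (H, W, 3); returns a normalized concatenated R/G/B histogram."""
        hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
        h = np.concatenate(hists).astype(float)
        return h / (h.sum() + 1e-12)

    def histogram_similarity(h1, h2):
        """Histogram intersection: 1.0 means identical color distributions."""
        return float(np.minimum(h1, h2).sum())

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        crop_a = rng.integers(150, 256, (64, 64, 3), dtype=np.uint8)  # warm, bright toy crop
        crop_b = rng.integers(140, 256, (64, 64, 3), dtype=np.uint8)
        print(round(histogram_similarity(rgb_histogram(crop_a), rgb_histogram(crop_b)), 3))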
In IEEE 802.11 networks, many access points (APs) are required to cover a large area due to the limited coverage range of an AP, and frequent handoffs may occur while a station (STA) moves through an area covered by several APs. However, traditional handoff mechanisms at STAs introduce a delay of a few hundred milliseconds, far longer than what can be tolerated by multimedia streams such as voice over Internet protocol (VoIP), so supporting seamless handoff in IEEE 802.11 networks is a challenging issue. In this paper, we propose a pre-scan-based fast handoff scheme for an IEEE 802.11 enterprise wireless local area network (EWLAN) environment. The proposed scheme helps the STA obtain the best alternative AP in advance through the pre-scan process, so that when the handoff is actually triggered, the STA can perform the authentication and reassociation process toward the alternative AP directly. Furthermore, we adopt a Kalman filter to smooth the fluctuation of the received signal strength (RSS), thus reducing unnecessary pre-scans and handoffs. Simulation results show that the proposed scheme can effectively reduce the handoff delay.
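As a concrete illustration of the RSS smoothing step, the sketch below runs a one-dimensional Kalman filter with a constant-level model over a noisy RSS trace; the noise variances and the toy trace are assumptions rather than the parameters used in the paper.

    # Illustrative one-dimensional Kalman filter for smoothing noisy RSS samples.
    import numpy as np

    def smooth_rss(measurements, process_var=0.05, meas_var=4.0):
        """Constant-level model: the true RSS changes slowly, measurements are noisy."""
        x, p = measurements[0], 1.0           # initial state estimate and its variance
        out = []
        for z in measurements:
            p = p + process_var                # predict: uncertainty grows slightly
            k = p / (p + meas_var)             # Kalman gain
            x = x + k * (z - x)                # update with the new measurement
            p = (1.0 - k) * p
            out.append(x)
        return np.array(out)

    if __name__ == "__main__":
        rng = np.random.default_rng(8)
        true_rss = np.linspace(-60, -75, 100)  # STA slowly moving away from the AP
        noisy = true_rss + rng.normal(0, 2.0, 100)
        print(np.round(smooth_rss(noisy)[-5:], 1))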
In this paper, channel spatial characteristics, which mainly depend on the spatial correlation, are selected as the significant factors in over-the-air (OTA) testing for multiple input multiple output (MIMO) devices. The multi-probe anechoic chamber method, a promising candidate among MIMO OTA testing methods, can reproduce multipath environments in a controllable manner. A novel physical configuration based on varying the relative positions of probes in a MIMO OTA setup is put forward to obtain better spatial characteristics. Two physical configurations are presented and compared with the typical configuration in this paper. The simulation results show that with a proper probe configuration, good channel emulation accuracy can be achieved. Meanwhile, in order to better emulate the channel spatial characteristics, the probes in the first and the last probe rings should be placed symmetrically in the three-dimensional (3D) physical probe configuration.
Voice conversion (VC) based on the Gaussian mixture model (GMM) is the most classic and common method, which converts the source spectrum to the target spectrum. However, this method is prone to over-fitting because of its frame-by-frame conversion. VC with non-negative matrix factorization (NMF) is presented in this paper, which can keep the spectrum from over-fitting by adjusting the size of the basis vectors (dictionary). In order to better realize the non-linear mapping, kernel NMF (KNMF) is adopted to achieve spectrum mapping. In addition, to increase the accuracy of conversion, KNMF combined with GMM (GKNMF) is also introduced into VC. In the end, KNMF, GKNMF, GMM, principal component regression (PCR), PCR combined with GMM (GPCR), partial least square regression (PLSR), NMF correlation-based frequency warping (NMF-CFW) and deep neural network (DNN) methods are compared with each other. The proposed GKNMF gets better performance in both objective evaluation and subjective evaluation.
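The core of NMF-based spectrum mapping is estimating nonnegative activations of a source dictionary and reusing them with a paired target dictionary. The numpy sketch below shows this exemplar-style step with the standard multiplicative update for a fixed dictionary; the dictionary sizes and random spectra are illustrative assumptions, and the kernelized (KNMF) and GMM-combined variants of the paper are not covered.

import numpy as np

def nmf_activations(V, W, n_iter=200, eps=1e-9):
    # Estimate nonnegative activations H so that V ~= W @ H, with the
    # dictionary W held fixed (multiplicative update for the Frobenius objective).
    H = np.abs(np.random.rand(W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# toy spectra: 64 frequency bins, 100 frames, 32-atom dictionaries (hypothetical)
rng = np.random.default_rng(0)
V_src = np.abs(rng.standard_normal((64, 100)))   # source spectrogram
W_src = np.abs(rng.standard_normal((64, 32)))    # source dictionary (exemplars)
W_tgt = np.abs(rng.standard_normal((64, 32)))    # paired target dictionary

H = nmf_activations(V_src, W_src)                # activations on source atoms
V_converted = W_tgt @ H                          # reuse activations with target atoms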
(k, n) halftone visual cryptography (HVC) based on Shamir's secret sharing (HVCSSS) is proposed; with this method a binary secret image can be hidden into n halftone images, and the secret image can be revealed from any k of the halftone images. Firstly, using Shamir's secret sharing, the binary secret image is shared into n meaningless shares; secondly, the n shares are hidden into n halftone images through a self-hiding method; then n extracted shares can be obtained from the n halftone images through a self-decryption method; finally, picking any k shares from the n extracted shares, the secret image can be revealed by using Lagrange interpolation. The main contribution is the application of Shamir's secret sharing to realize a (k, n) HVC, and this method neither requires a code book nor suffers from pixel expansion. Experimental results show that HVCSSS can realize a (k, n) HVC in gray-scale and color halftone images, and the correct decoding rate (CDR) of the revealed secret image can be guaranteed.
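The share-and-reveal primitive behind the scheme is ordinary Shamir secret sharing over a prime field, with Lagrange interpolation at zero for reconstruction. The sketch below shares a single pixel value; the prime and parameters are illustrative assumptions, and the halftone self-hiding/self-decryption steps are not modeled.

import random

P = 257  # prime larger than any 8-bit pixel value

def make_shares(secret, k, n):
    # Split one pixel value into n shares; any k of them reveal it (Shamir).
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def reveal(shares):
    # Lagrange interpolation at x = 0 over GF(P).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(200, k=3, n=5)     # hide pixel value 200 in 5 shares
print(reveal(shares[:3]))               # any 3 shares recover 200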
Currently, the photovoltaic (PV) cloud network is an important research point in the energy Internet. From the perspective of robustness analysis of the PV cloud network, the traditional robustness index, the relative size of the giant connected component before and after cascading failures, focuses on the giant component and is not appropriate for the distributed PV network because of the big difference between PV generators and the large power grid. In this paper, a new index, the minimum removed nodes number (MRNN), i.e., the minimum number of removed nodes that makes the entire system collapse, is proposed to evaluate the robustness of the PV cloud network system. The simulation results show that MRNN can clearly indicate the system robustness under different parameters.
Graphics processing is an increasingly important application domain driven by the demands of real-time rendering, video streaming, virtual reality, and so on. Illumination is a critical module in graphics rendering and is typically compute-bound, memory-bound, or power-bound in different application cases. It is crucial to decide how to schedule illumination algorithms with different features according to practical requirements in reconfigurable graphics hardware. This paper analyzes the performance characteristics of four main-stream lighting algorithms, the Lambert, Phong, Blinn-Phong, and Cook-Torrance illumination algorithms, using hardware performance counters on the x86 processor platform KabyLake (KBL). The data movement, computation, power consumption, and memory accesses are evaluated over a range of application scenarios. Further, by analyzing the system-level behavior of these illumination algorithms, the pros and cons of the specific algorithms were obtained. The relationship between performance/energy and the evaluated metrics was analyzed through Pearson correlation coefficient (PCC) analysis. Based on these performance characterization data, this paper presents some reconfiguration suggestions for reconfigurable graphics processors.
Complex Network Modeling and Application
A geometry-based stochastic model (GBSM) for the unmanned aerial vehicle to vehicle (UAV-V) multiple-input multiple-output (MIMO) wideband channel is proposed to investigate the characteristics of the UAV-V channel. Based on the proposed model, a three-dimensional (3D) wideband channel matrix with respect to channel number, time and delay is constructed. Important channel characteristic parameters, such as the power delay profile (PDP), root mean square (RMS) delay spread, RMS Doppler spread, channel gain and Doppler power spectral density (PSD), are investigated for different vehicle velocities. The proposed model is much simpler and clearer than complex analytical derivations. The results are compared with validated analysis to confirm the theoretical analysis.
Electrical connectors play a significant role in electronic and communication systems. As they are often exposed to the atmospheric environment, electrical contact failure can easily occur, so it is essential to carry out reliability modeling and lifetime prediction. In the present work, an accelerated lifetime testing method based on the uniform design method was designed to obtain degradation data for electrical connectors under the multiple environmental stresses of temperature and particulate contamination. Based on the degradation data, the pseudo life can be acquired. Then the reliability model was established by analyzing the pseudo life. Accordingly, the reliability function and reliable lifetime function were set up, and the reliable lifetime of the connectors under the multiple environmental stresses of temperature and particulate contamination could be predicted.
As a way of training a single hidden layer feedforward network (SLFN), the extreme learning machine (ELM) is rapidly becoming popular due to its efficiency. However, ELM tends to overfit, which makes the model sensitive to noise and outliers. To solve this problem, the L2,1-norm is introduced to ELM and an L2,1-norm robust regularized ELM (L2,1-RRELM) is proposed. L2,1-RRELM gives constant penalties to outliers to reduce their adverse effects by replacing the least squares loss function with a non-convex loss function. In light of the non-convex feature of L2,1-RRELM, the concave-convex procedure (CCCP) is applied to solve its model. The convergence of L2,1-RRELM is also given to show its robustness. In order to further verify the effectiveness of L2,1-RRELM, it is compared with three popular extreme learning algorithms on an artificial dataset and University of California Irvine (UCI) datasets. Each algorithm is tested in different noise environments with two evaluation criteria, root mean square error (RMSE) and fitness. The simulation results indicate that L2,1-RRELM has smaller RMSE and greater fitness under different noise settings. Numerical analysis shows that L2,1-RRELM has better generalization performance, stronger robustness, and higher anti-noise ability and fitness.
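For orientation, a baseline ELM trains a random hidden layer and solves the output weights in closed form; the robust L2,1-norm loss and CCCP solver of the paper replace that last step. The numpy sketch below shows only the baseline under stated assumptions (tanh activation, ridge regularization, toy data), not the proposed L2,1-RRELM itself.

import numpy as np

def train_elm(X, y, n_hidden=100, reg=1e-2, rng=np.random.default_rng(0)):
    # Basic regularized ELM: random hidden layer, closed-form output weights.
    # (The paper replaces this ridge/least-squares step with an L2,1-norm
    # robust loss solved by CCCP; that extension is not sketched here.)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer output
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression: y = sin(x) with noise
X = np.linspace(-3, 3, 300).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.randn(300)
W, b, beta = train_elm(X, y)
rmse = np.sqrt(np.mean((predict_elm(X, W, b, beta) - y) ** 2))
print(rmse)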
Network attacks have evolved from single-step, simple attacks to complex multistep attacks. Current methods of multistep attack detection usually match multistep attacks from intrusion detection system (IDS) alarms based on the correlation between attack steps. However, IDS has false negatives and false positives, which leads to incomplete or incorrect multistep attack reconstructions. Association based on simple similarity can hardly obtain an accurate attack cluster, while association based on prior knowledge such as attack graphs can hardly guarantee a complete attack knowledge base. To solve the above problems, a heuristic multistep attack scenario construction method based on the kill chain (HMASCKC) model is proposed. The attack model graph can be obtained from dual data sources, and heuristic multistep attack scenarios can be obtained through graph matching. The model graph of the attack and the predicted value of the next attack are obtained by calculating the matching value. According to the purpose of the multistep attack, the kill chain model is used to define the initial multistep attack model, which serves as the initial graph for graph matching. Experimental results show that the HMASCKC model can better fit multistep attack behavior, its performance has some advantages over the longest common subsequence (LCS) algorithm, and it can approach or match the prediction error of the judge evaluation of attack intention (JEAN) system. The method can match multistep attack models for unknown attacks, so it has some advantages in practical application.
One-bit compressed sensing (CS) technology reconstructs the sparse signal when the available measurements are reduced to only their sign bit. It is well known that CS reconstruction needs to know the measurement matrix exactly to obtain a correct result. However, the measurement matrix is probably perturbed in many practical scenarios. An iterative algorithm called perturbed binary iterative hard thresholding (PBIHT) is proposed to reconstruct the sparse signal from the binary measurements (sign measurements) when the measurement matrix experiences a general perturbation. The proposed algorithm can reconstruct the original data without any prior knowledge about the perturbation. Specifically, using the idea of gradient descent, PBIHT iteratively estimates the signal and the perturbation until the estimation converges. Simulation results demonstrate that, under certain conditions, PBIHT improves the performance of signal reconstruction in the perturbation scenario.
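To make the iteration concrete, the numpy sketch below implements the unperturbed baseline, binary iterative hard thresholding (BIHT): a gradient-like step on the sign-consistency residual followed by hard thresholding to the K largest entries. The perturbation-estimation part of PBIHT is not sketched, and the problem sizes are illustrative assumptions.

import numpy as np

def biht(y, Phi, K, step=1.0, n_iter=100):
    # Recover a K-sparse, unit-norm signal from sign measurements y = sign(Phi @ x).
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        a = x + (step / m) * Phi.T @ (y - np.sign(Phi @ x))  # sign-consistency step
        idx = np.argsort(np.abs(a))[-K:]                     # keep K largest entries
        x = np.zeros(n)
        x[idx] = a[idx]
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x   # sign measurements lose amplitude

# toy example
rng = np.random.default_rng(1)
n, m, K = 128, 400, 5
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x_true /= np.linalg.norm(x_true)
Phi = rng.standard_normal((m, n))
x_hat = biht(np.sign(Phi @ x_true), Phi, K)
print(np.linalg.norm(x_hat - x_true))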
To achieve a higher compression ratio (CR) in compressed sensing, time-sparse bio-signals such as the electrocardiograph (ECG) are generally filtered directly via a dynamic or fixed threshold, which, however, inevitably leads to the loss of critical diagnostic bio-information. We propose a compression scheme to reduce this transmission loss. Instead of directly utilizing the original ECG data, the residuals between the original and synthetic ECG signals are applied as the input signal. We employ a dynamic model to guarantee the consistency between the waves (P, Q, R, S, and T) of the synthetic ECG signals and the originals. The feasibility of the proposed method is tested on ECG signals recorded from a healthy human. In building the simulation platform, the sparsity, the percentage root mean difference (PRD) versus sampling frequency, and the signal reconstruction algorithm are fully taken into account. Before compression, we set a threshold to filter the residual waves; utilizing the residuals as input data with thresholds of 0.01 mV and 0.08 mV reduced the amount of transmitted data by 18% and 81.2%, respectively. The simulation results show that the CR can reach 2.75 when the PRD value is less than 9%.
To solve the problems of security and efficiency of anonymous authentication in the vehicular Ad-hoc network (VANET), a conditional privacy protection authentication scheme for vehicular networks is proposed based on bilinear pairings. In this scheme, the tamper-proof device in the roadside unit (RSU) is used to complete the message signature and authentication process together with the vehicle, which makes communication between the RSU and the trusted authority (TA) more secure and makes it faster to update system parameters and revoke vehicles. This is also cheaper than installing a tamper-proof device in each vehicle unit. Moreover, the scheme provides a provable security proof under the random oracle model (ROM), which shows that the proposed scheme can meet security requirements such as conditional privacy, unforgeability, and traceability. The results of simulation experiments demonstrate that this scheme not only achieves high efficiency but also has a low message loss rate.
In this paper, a power allocation that maximizes the tradeoff between spectrum efficiency (SE) and energy efficiency (EE) is considered for the downlink non-orthogonal multiple access (NOMA) system with arbitrary numbers of clusters and users, where the subcarriers of the clusters are mutually orthogonal. Specifically, an optimization problem of maximizing the SE-EE tradeoff is formulated by optimizing the power allocation among users under the constraints of user rate requirements. Then, the optimization problem is decomposed into a group of sub-problems aiming at maximizing the SE-EE tradeoff for each cluster, which are solved by using the bisection method and the monotonicity of the objective function. Finally, the power allocation optimization problem among users is transformed into one between clusters, and a two-step inter-cluster power allocation algorithm is developed to solve this problem. Simulation results show that the SE-EE tradeoff of the proposed scheme is better than that of the existing schemes.
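Since each per-cluster sub-problem is solved by exploiting monotonicity and the bisection method, a minimal sketch of that primitive is given below, applied to a stand-in rate-constraint equation rather than the paper's actual SE-EE objective; the channel gain, noise and rate values are assumptions for illustration.

import math

def bisect(f, lo, hi, tol=1e-9, max_iter=200):
    # Find a root of a monotone function f on [lo, hi] by bisection
    # (the same primitive the per-cluster SE-EE subproblem relies on).
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if hi - lo < tol:
            return mid
        if f(mid) * flo <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# stand-in example: minimum power p such that log2(1 + p*g/noise) equals the rate target
g, noise, rate = 0.8, 0.1, 2.0
p_min = bisect(lambda p: math.log2(1 + p * g / noise) - rate, 0.0, 100.0)
print(p_min)   # approximately 0.375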
Research on emotion recognition based on electroencephalogram (EEG) signals often ignores the correlations between the brain electrode channels and the contextual emotional information existing in EEG signals, which may contain important characteristics related to emotional states. Aiming at the above defects, a spatiotemporal emotion recognition method based on a 3-dimensional (3D) time-frequency domain feature matrix is proposed. Specifically, the extracted time-frequency domain EEG features are first expressed in a 3D matrix format according to the actual positions on the cerebral cortex. Then, the input 3D matrix is processed successively by a multivariate convolutional neural network (MVCNN) and long short-term memory (LSTM) to classify the emotional state. The spatiotemporal emotion recognition method is evaluated on the DEAP dataset, and achieves accuracies of 87.58% and 88.50% on the arousal and valence dimensions respectively in binary classification tasks, as well as an accuracy of 84.58% in the four-class classification task. The experimental results show that the 3D matrix representation can represent emotional information more reasonably than the two-dimensional (2D) one. In addition, MVCNN and LSTM can utilize the spatial information of the electrode channels and the temporal context information of the EEG signal, respectively.
Addressing the impact of capacitor mismatch on the conversion accuracy of the successive approximation register analog-to-digital converter (SAR ADC), a 12-bit 1 MS/s sub-binary SAR ADC designed using variable step size digital calibration was proposed. The least mean square (LMS) calibration algorithm was employed with a ramp signal used as the calibration input signal. Weight errors, extracted under injected disturbances, underwent iterative training to optimize the weight values. To address the trade-off between conversion accuracy and speed caused by a fixed step size, a novel variable step size algorithm tailored for SAR ADC calibration was proposed. The core circuit and layout of the SAR ADC were implemented using the Taiwan Semiconductor Manufacturing Company (TSMC) 0.35 μm complementary metal-oxide-semiconductor (CMOS) commercial process. Simulation of the SAR ADC calibration algorithm was conducted using Simulink, demonstrating quick convergence and meeting conversion accuracy requirements compared to the fixed step size simulation. The results indicated that the convergence speed of the LMS digital calibration algorithm with variable step size was approximately eight times faster than that with a fixed step size, while also yielding a lower mean square error (MSE). After calibration, the simulation results for the SAR ADC exhibited an effective number of bits (ENOB) of 11.79 bit and a signal-to-noise and distortion ratio (SNDR) of 72.72 dB, signifying a notable enhancement in SAR ADC performance.
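To illustrate the general idea of an LMS loop whose step size shrinks as the error settles, the numpy sketch below runs a variable-step LMS filter on a toy system-identification problem; the step-size rule, constants, and the FIR stand-in for the capacitor-weight errors are illustrative assumptions, not the paper's calibration circuit.

import numpy as np

def lms_variable_step(x, d, n_taps=8, mu_max=0.05, mu_min=0.001, alpha=0.5):
    # LMS adaptive filter whose step size shrinks with the smoothed error energy,
    # trading fast initial convergence for low steady-state misadjustment.
    w = np.zeros(n_taps)
    err_power = 1.0
    errors = []
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # x[k], x[k-1], ..., x[k-n_taps+1]
        e = d[k] - w @ u                    # instantaneous error
        err_power = alpha * err_power + (1 - alpha) * e * e
        mu = np.clip(mu_max * err_power, mu_min, mu_max)  # variable step size
        w += mu * e * u                     # LMS weight update
        errors.append(e)
    return w, np.array(errors)

# toy identification of an unknown 8-tap FIR system (stand-in for weight errors)
rng = np.random.default_rng(0)
h = rng.standard_normal(8)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = lms_variable_step(x, d)
print(np.max(np.abs(w - h)), np.abs(e[-100:]).mean())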
From the perspective of compressed sensing (CS) theory, the channel estimation problem in the large-scale multiple input multiple output (MIMO)-orthogonal frequency division multiplexing (OFDM) system is investigated. According to the theory, the smaller the mutual coherence of the reconstruction matrix, the higher the success probability of the estimation. Aiming to design a pilot that makes the mutual coherence of the system reconstruction matrix as small as possible, this paper proposes a low-complexity joint algorithm and obtains a kind of non-orthogonal pilot pattern. Simulation results show that, compared with the conventional orthogonal pilot pattern, applying the proposed pattern in CS channel estimation obtains better normalized mean square error performance. Moreover, the bit error rate performance of the large-scale MIMO-OFDM system is also improved.
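The design criterion is the mutual coherence of the reconstruction matrix, i.e., the largest absolute normalized inner product between distinct columns. A minimal numpy sketch of that metric is given below on a random stand-in matrix; the matrix size is an assumption and the paper's actual pilot optimization is not reproduced.

import numpy as np

def mutual_coherence(A):
    # Largest absolute normalized inner product between distinct columns of A,
    # the quantity the pilot design tries to minimize.
    An = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(An.conj().T @ An)                        # Gram matrix
    np.fill_diagonal(G, 0.0)
    return G.max()

# toy usage on a random Gaussian stand-in for the reconstruction matrix
rng = np.random.default_rng(0)
A_rand = rng.standard_normal((32, 128))
print(mutual_coherence(A_rand))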
The underflow concentration prediction of the deep-cone thickener is a difficult problem in paste filling. Existing prediction models only determine the influence of some parameters on the underflow concentration, and lack a model that comprehensively considers the thickening process and its various factors. This paper proposes a model that analyzes the variation of the underflow concentration from a number of influencing factors in the concentrating process and can accurately predict the underflow concentration. After preprocessing and feature selection of the historical dataset of the deep-cone thickener, the model uses eXtreme gradient boosting (XGBOOST) to learn the relationship between the influencing factors and the underflow concentration, so as to achieve a more comprehensive prediction of the underflow concentration of the deep-cone thickener. The experimental results show that the underflow concentration prediction model based on XGBOOST achieves a mean absolute error (MAE) of 0.31% and a running time of 1.6 s on the test set constructed in this paper, which fully meets the demand. By comparing three classical algorithms, back propagation (BP) neural network, support vector regression (SVR) and linear regression, we further verified the superiority of XGBOOST under the conditions of this study.
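As a rough sketch of this kind of tabular regression pipeline, the snippet below fits an XGBoost regressor on synthetic stand-in features and reports the MAE; the feature meanings, data, and hyperparameters are illustrative assumptions, not the paper's historical dataset or tuned settings.

import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# hypothetical feature table standing in for thickener process variables
rng = np.random.default_rng(0)
X = rng.random((2000, 4))
y = 68 + 4 * X[:, 0] - 3 * X[:, 1] + 2 * X[:, 2] * X[:, 3] + 0.2 * rng.standard_normal(2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))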
Massive multiple input multiple output (MIMO) systems can greatly increase capacity and reliability. However, extremely high hardware costs and computational complexity lead to the demand for reasonable antenna selection. Aiming at the problem that the traditional antenna selection algorithm based on maximizing the sum capacity has high complexity and poor bit error rate (BER) performance, a two-step selection algorithm is proposed, which first selects a part of the antennas based on norm-based antenna selection (NBS), and then selects antennas based on maximizing capacity via convex optimization. The simulation results show that the improved algorithm has better BER performance than the traditional algorithms, while greatly reducing the computational complexity.
Vehicle trajectory modeling is an important foundation for urban intelligent services, and trajectory prediction of cars is a hot topic. A model combining a convolutional neural network (CNN) and long short-term memory (LSTM), named trajectory-CNN-LSTM (TCL), was proposed. The CNN extracts the spatial features of the trajectory in the input image, while the LSTM extracts the time-series features of the input trajectory. The model then uses fully connected layers to merge the two features for the final prediction. Experiments on the Porto dataset of The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) show that the average prediction error of TCL is reduced by 0.15 km, 0.42 km, and 0.39 km compared to the trajectory-convolution (T-CONV), multi-layer perceptron (MLP), and recurrent neural network (RNN) models, respectively.
The sensing light source of a line scan camera cannot be fully exposed in a low-light environment due to the extremely small number of photons and high noise, which leads to a reduction in image quality. A multi-scale fusion residual encoder-decoder (FRED) was proposed to solve this problem. By directly learning the end-to-end mapping between dark and bright images, FRED can enhance the image's brightness while fully restoring the details and colors of the original image. A residual block (RB) was added to the network structure to increase feature diversity and speed up network training. Moreover, the addition of a dense context feature aggregation module (DCFAM) makes up for the deficiency of spatial information in the deep network by aggregating the context's global multi-scale features. The experimental results show that FRED is superior to most other algorithms in visual effect and in the quantitative evaluation of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Because FRED can restore the brightness of images while effectively preserving edges and colors, a satisfactory visual quality is obtained under low-light enhancement.
In view of privacy security issues such as location information leakage in the interaction between the base station and the sensor nodes in the sensor-cloud system, a base station location privacy protection algorithm based on local differential privacy (LDP) is proposed. Firstly, through the local obfuscation algorithm (LOA), the base station obtains the data of the real location and a pseudo location by flipping a coin and sends the data to the fog layer, where the obfuscated location domain set is obtained. Secondly, in order to reconstruct the location distribution of the real location and the pseudo location at the base station, the location domain of the base station is divided into several decentralized sub-regions, and a privacy location reconstruction algorithm (PLRA) is performed in each sub-region. Finally, the base station correlates the location information of each sub-region, and then uploads the data containing the disturbed location to the fog node layer. The simulation results show that, compared with the existing base station location anonymity and security technique (BLAST) algorithm, the proposed method not only reduces the algorithm's running time and network delay, but also improves data availability. Therefore, the proposed method can protect the location privacy of the base station more safely and efficiently.
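The coin-flip obfuscation and subsequent distribution reconstruction follow the same pattern as generalized randomized response under LDP. The sketch below perturbs a discretized sub-region index and inverts the perturbation channel to estimate the true per-region counts; the retention probability, region count, and function names are illustrative assumptions rather than the paper's exact LOA/PLRA definitions.

import numpy as np

def perturb(true_region, n_regions, p=0.75, rng=np.random.default_rng()):
    # Generalized randomized response: report the true sub-region with
    # probability p, otherwise a uniformly random (pseudo) sub-region.
    if rng.random() < p:
        return true_region
    return rng.integers(n_regions)

def estimate_counts(reports, n_regions, p=0.75):
    # Unbiased reconstruction of the true per-region counts from the
    # perturbed reports (invert the randomized-response channel).
    n = len(reports)
    obs = np.bincount(reports, minlength=n_regions).astype(float)
    q = (1 - p) / n_regions
    return (obs - n * q) / p

# toy run: 10 000 reports, the true region is always region 3 of 8
rng = np.random.default_rng(0)
reports = np.array([perturb(3, 8, rng=rng) for _ in range(10000)])
print(np.round(estimate_counts(reports, 8)))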
A tracking structure suitable for the L6 signal of the quasi-zenith satellite system (QZSS) was proposed in order to track the L6 signal without assistance from other frequencies. Moreover, the tracking structure does not change the receiver's hardware structure. The main difference between the proposed and the traditional tracking structure lies in the generation of the local codes of the E, P and L branches. The local code generation is designed in a two-stage manner. The first stage is the generation of the P branch local code with the fast Fourier transform (FFT). In the second stage, the local codes of the E and L branches are obtained with the code-chip interval. The tracking structure can track and decode the L6 signal separately, and can track code shift keying (CSK) modulated signals as well. The structure was verified using both simulation data generated under different conditions and actual data obtained from QZSS satellites. The results show that the improved tracking loop is able to track the L6 signal without assistance from other frequencies. Furthermore, the bit error ratio (BER) of the L6 tracking algorithm is lower than that of the L1C/A-assisted L6 algorithm when the Doppler remains constant and Reed-Solomon (RS) encoding is applied. To be more specific, with the proposed structure the BER decreased by 11.40%, 17.07%, 15.00%, 11.15%, and 5.19% when the carrier-to-noise ratio (CNR) is 36 dB·Hz to 40 dB·Hz.
The flexibility of the media access control (MAC) layer has always been an important concern in the existing communication architecture. To meet the more stringent requirements under large-scale connections, the MAC layer structure needs to be optimized carefully. This paper proposes a new architecture of the MAC layer to optimize the complex communication backhaul link structure, which will increase the flexibility of the system and decrease the transmission delay. Moreover, an adaptive transmission time interval (TTI) bundling with self-healing scheme is proposed to further decrease the transmission delay and improve the quality of service (QoS). The simulation results show that the average transmission delay is greatly reduced with our proposed scheme. The bit error rate (BER) and the block error rate are also improved even if the channel changes drastically.
In order to solve the problem of high computational complexity in demodulation of the multi-h continuous phase modulation (CPM) signal, a maximum cumulative measure combined with the Laurent decomposition (MCM-LD) scheme is proposed to reduce the number of grid states and the required number of matched filters, which decreases the demodulation complexity at the receiver. The advanced range telemetry (ARTM) Tier 2 CPM signal is adopted to evaluate the performance in simulation. The results show that, compared with the traditional maximum likelihood sequence detection (MLSD), MCM-LD can reduce the numbers of grid states and matched filters from 256 to 32 and from 128 to 48, respectively, with negligible performance loss, which effectively decreases the computational complexity for the multi-h CPM signal.
Existing level set segmentation methods have drawbacks such as poor convergence, poor noise resistance, and long iteration times. In this paper, a fractional order distance regularized level set segmentation method with bias correction is proposed. This method firstly introduces a fractional order distance regularized term to penalize the deviation between the level set function (LSF) and the signed distance function. Secondly, a series of covering templates is constructed to calculate the fractional derivative and its conjugate for each image pixel. Thirdly, by introducing the offset correction term and fully using the local clustering property of image intensity, the local clustering criterion of image intensity is defined and integrated with the neighborhood center to obtain the global criterion of image segmentation. Finally, the fractional distance regularization, offset correction, and external energy constraints are combined, and the energy optimization segmentation method for noisy images is established by the level set. Experimental results show that the proposed method can accurately segment images, and effectively improves the efficiency and robustness of existing state-of-the-art level-set-related algorithms.
A 20 GHz - 24 GHz three-stage low noise amplifier (LNA) was implemented using the GaAs pseudomorphic high electron mobility transistor (PHEMT) process. The schematic design and optimization of the LNA were carried out using the advanced design system (ADS). The three-stage series structure is used to increase the gain of the amplifier. Additionally, a self-biasing network and negative feedback circuit can expand the bandwidth while increasing the stability of the circuit and obtaining better input matching and noise. The test results show that the gain in the 20 GHz - 24 GHz band is greater than 20 dB, the noise figure (NF) is 2.1 dB, and the input and output reflection coefficients are less than -10 dB, which meets the design requirements. The amplifier serves a wide range of applications, including wireless communications, radar systems, satellite communications, and other areas that require high-frequency amplification to enhance system performance and sensitivity.
Non-binary low density parity check (NB-LDPC) codes are considered a preferred candidate where short/medium codeword lengths and better performance at low signal-to-noise ratios (SNR) are required. They have better burst error correcting performance, especially over high-order Galois fields (GF). A shared comparator (SCOMP) architecture for the elementary check node (ECN)/elementary variable node (EVN) is introduced to reduce decoder complexity, because the high complexity of the check node (CN) and variable node (VN) prevents NB-LDPC decoders from wide application. The decoder over GF(16) is based on the extended min-sum (EMS) algorithm. The decoder matrix has an irregular structure, as it can provide better performance than regular ones. In order to provide higher throughput and increase parallel processing efficiency, a clock running at 8 times the system frequency is adopted in this paper to drive the CN/VN modules. The decoder complexity can be reduced by 28% compared with the traditional decoder when the shared comparator architecture is introduced. The synthesis results show that the throughput can achieve 34 Mbit/s at 10 iterations. The proposed architecture can be conveniently extended to higher-order fields such as GF(64) or GF(256). Compared with previous works, the decoder proposed in this paper has better hardware efficiency for practical applications.
The prediction of colorectal cancer (CRC) survivability has always been a challenging research issue. Considering the importance of predicting CRC patients' survival rates, we compared the performance of three data mining methods, decision trees (DTs), artificial neural networks (ANNs) and support vector machines (SVMs), for predicting the 5-year survival of CRC patients to assist clinicians in making treatment decisions. The CRC dataset used to build the prediction model comes from the surveillance, epidemiology, and end results (SEER) program. The 5-fold cross-validation and the random forest algorithm were utilized for measuring the model predictive accuracy and the importance of features, respectively. Experimental results show that the predictive accuracies of ANNs (0.73) and SVMs (0.75) were higher than that of DTs, and they also achieved the best result in the area under the receiver operating characteristic (ROC) curve (area under curve (AUC) = 0.82). This result may indicate the high predictive power of ANNs and SVMs for predicting 5-year survival of CRC patients.
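A minimal sketch of this kind of comparison, assuming scikit-learn and synthetic stand-in data in place of the SEER features, is shown below: each classifier is scored with 5-fold cross-validated AUC. The hyperparameters are illustrative, not the paper's settings.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# stand-in binary survival data (the SEER features themselves are not reproduced here)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

models = {
    "DT":  DecisionTreeClassifier(max_depth=5, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")  # 5-fold CV AUC
    print(name, "mean AUC:", round(auc.mean(), 3))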
Integrated Circuit Design
The ever-increasing complexity of on-chip interconnection poses great challenges for the architecture of the conventional system-on-chip (SoC) in the semiconductor industry. The rapid development of process technology enables the creation of stacked 3-dimensional (3D) SoCs by means of through-silicon vias (TSVs). Stacked 3D SoC testing consists of two major issues, test architecture optimization and test scheduling. This paper proposes game theory based optimization of test scheduling and test architecture to achieve a win-win result as well as individual rationality for each player in a game. Game theory helps to achieve equilibrium between two correlated sides to find an optimal solution. Experimental results on handcrafted 3D SoCs built from ITC'02 benchmarks demonstrate that the proposed approach achieves comparable or better test times at negligible computing time.
Primitive assembly is an inevitable procedure of graphics rendering which prepares the objects for the following steps; however, conventional approaches suffer from issues such as missing surface attributes, color-mode mismatch for clipped primitives, and performance bottlenecks in the rendering pipeline. This paper takes all these issues into consideration and proposes a parallel primitive assembly accelerator (PPAA) which not only solves the functional problems but also improves the shading performance. The register transfer level (RTL) circuit is designed and the detailed approach is presented. Prototype systems are implemented on the Xilinx field programmable gate array (FPGA) XC6VLX550T and the Altera FPGA EP2C70F896C6. The experimental results show that PPAA accomplishes the assembly tasks correctly and achieves 1.5x and 2.5x the performance of two previous implementations. For the most frequent independent primitives, PPAA can efficiently enhance the throughput by squeezing out pipeline bubbles and by balancing the pipeline stages.
To enhance the segmentation performance and robustness of kernel weighted fuzzy local information C-means (KWFLICM) clustering for image segmentation in the presence of high noise, an improved KWFLICM algorithm aggregating neighborhood membership information is proposed. This algorithm firstly constructs a linear weighted membership function by combining the membership degrees of the current pixel and its neighborhood pixels. Then it is normalized to meet the constraint that the sum of the membership degrees of a pixel belonging to different classes is 1. In the end, the normalized membership is used to update the clustering centers of the KWFLICM algorithm. Experimental results show that the proposed adaptive KWFLICM (AKWFLICM) algorithm outperforms existing state-of-the-art fuzzy clustering-related segmentation algorithms for images with high noise.
As a more tractable performance metric than secrecy capacity, the secure degrees of freedom (SDoF) are widely studied for most multiuser networks in the high signal-to-noise ratio (SNR) region. However, the SDoF for these networks under rank deficiency and arbitrary antenna configurations have not yet been determined. In this paper, the SDoF of the two-user general multiple-input multiple-output (MIMO) interference channel with confidential messages (ICCM) under rank deficiency are derived. For the two-user rank-deficient MIMO ICCM, the model is generalized to fully asymmetric settings, where the transmitters and receivers are equipped with arbitrary numbers of antennas. The outer bound of the SDoF is the union of three outer bounds that are based on Fano's inequality and the secrecy constraints, the secrecy penalty lemma and the role of a helper lemma, and transmitter cooperation, respectively. The SDoF region is subdivided into five regions according to the number of transceiver antennas, and each region has an achievability scheme with designed null space transmission and alignment techniques. Numerical results indicate that the SDoF increase at first and then decrease as the rank of the channel matrix decreases. The SDoF improve by increasing the number of transmitting antennas or reducing the number of receiving antennas, but the effect of the transmitting antennas is greater.
On-demand routing protocols are widely used in the mobile Ad-hoc network (MANET). Flooding is an important dissemination scheme in the route discovery of on-demand routing protocols. However, in a high-density MANET, redundant flooding packets lead to dramatic deterioration of performance, which is called the broadcast storm problem (BSP). A location-aided probabilistic broadcast (LAPB) algorithm for routing in MANET is proposed in this paper to reduce the number of routing packets produced by flooding. In order to reduce redundant packets, only nodes in a specific area propagate the routing packets, with a probability computed from location information and neighbor knowledge. Simulation results demonstrate that the LAPB algorithm can reduce the number of packets and the discovery delay (DD) in the route discovery phase.
Cloud storage is getting more and more popular as a new trend of data management. Data replication has been widely used as a means of increasing data availability in large-scale cloud storage systems where failures are normal. However, most data replication schemes do not fully consider cost and latency issues when users need large amounts of remote replicas. We present an improved dynamic replication management scheme (IDRMS). By adding a prediction model, the optimal allocation of replicas among the cloud storage nodes is determined so that the total communication cost and network delay are minimized. When a local data block is frequently requested, its replicas can be moved to a closer or cheaper node for cost reduction and increased efficiency. Moreover, we replace the B+ tree with the B* tree to speed up the search and reduce the workload with the lowest blocking probability. We define a popularity value to adjust the placement of replicas dynamically, and divide the data nodes in the network into hot nodes and cool nodes. By visiting cool nodes instead of hot nodes, we can balance the workload in the network. Finally, we implement IDRMS on the Matlab simulation platform, and the simulation results demonstrate that IDRMS outperforms other replication management schemes in terms of communication cost and load balancing for large-scale cloud storage.
Sharing electronic medical records among different hospitals raises serious concerns about the leakage of individual privacy due to the semi-trustworthiness of the medical cloud platform, and the tracking and revocation of malicious users have become urgent problems. To solve these problems, this paper proposes a traceable and directly revocable medical data sharing scheme. In the scheme, a unique identity parameter (ID), generated and embedded in the private key generation phase by the medical service provider (MSP), is used to identify legally authorized users and trace malicious users. Only when the attributes satisfy the access policy and the user's ID is not in the revocation list can the user calculate the decryption key. Malicious users can be tracked and directly revoked by using the revocation list. Under the decisional bilinear Diffie-Hellman (DBDH) assumption, this paper proves that the scheme achieves security against chosen-plaintext attack (CPA). The performance analysis demonstrates that the sizes of the public key and private key are shorter, and the time overhead is lower than that of other schemes in the public-private key generation, data encryption and data decryption stages.
With the rapid growth of location-based social networks (LBSNs), point-of-interest (POI) recommendation has become an important research problem. As one of the most representative social media platforms, Twitter provides various real-life information for POI recommendation in real time. Although POI recommendation has been actively studied, tweet images have not been well utilized for this research problem. State-of-the-art visual features like convolutional neural network (CNN) features have shown significant performance gains over the traditional bag-of-visual-words in unveiling an image's semantics. Unfortunately, they have not been employed for POI recommendation from social websites. Hence, how to make the most of tweet images to improve the performance of POI recommendation and visualization remains open. In this paper, we thoroughly study the impact of tweet images on POI recommendation for different POI categories using various visual features. A novel topic model called social media Twitter-latent Dirichlet allocation (SM-TwitterLDA), which jointly models five Twitter features (i.e., text, image, location, timestamp and hashtag), is designed to discover POIs from the sheer amount of tweets. Moreover, each POI is visualized by representative images selected on three predefined criteria. Extensive experiments have been conducted on a real-life tweet dataset to verify the effectiveness of our method.
Anomaly detection in the smart grid is critical to enhancing the reliability of power systems. Excessive manpower has to be involved in analyzing the measurement data collected from intelligent monitoring devices, while the performance of anomaly detection is still not satisfactory. This is mainly because the inherent spatio-temporality and multi-dimensionality of the measurement data cannot be easily captured. In this paper, we propose an anomaly detection model based on an encoder-decoder framework with a recurrent neural network (RNN). In the model, an input time series is reconstructed, and an anomaly can be detected by an unexpectedly high reconstruction error. Both the Manhattan distance and the edit distance are used to evaluate the difference between an input time series and its reconstruction. Finally, we validate the proposed model using power demand data from the University of California, Riverside (UCR) time series classification archive and IEEE 39-bus system simulation data. The results demonstrate that the proposed encoder-decoder framework is able to successfully capture anomalies with a precision higher than 95%.
The popularity of IEEE 802.11 based wireless local area networks (WLANs) has increased significantly in recent years and resulted in dense WLAN deployments. While densification can increase coverage, it can also increase interference and cannot ensure high spatial reuse due to the current physical carrier sensing of IEEE 802.11. To tackle these challenges, dynamic sensitivity control (DSC) is considered in IEEE 802.11ax, which dynamically selects the appropriate carrier sensing threshold (CST) to improve spectrum efficiency and enhance spatial reuse in densely deployed networks. A dynamic Q-learning based CST selection method is proposed to enable a network to select the optimal CST according to the channel condition. Simulation results show that the proposed scheme provides a 40% aggregate throughput gain in a dense network when compared with legacy IEEE 802.11.
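A minimal sketch of tabular Q-learning for CST selection is given below, assuming a discretized channel-condition state, a handful of candidate thresholds, and a made-up reward standing in for measured throughput; none of the constants come from the paper.

import numpy as np

# hypothetical setup: quantized channel-condition states and candidate CSTs (dBm)
states = 10
cst_candidates = [-82, -77, -72, -67, -62]
Q = np.zeros((states, len(cst_candidates)))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def choose_cst(state):
    # epsilon-greedy selection of the carrier sensing threshold
    if rng.random() < eps:
        return int(rng.integers(len(cst_candidates)))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    # standard one-step Q-learning update
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])

# toy interaction loop with a made-up reward (stand-in for measured throughput)
state = 0
for _ in range(5000):
    a = choose_cst(state)
    next_state = int(rng.integers(states))           # observed channel condition
    reward = -abs(cst_candidates[a] + 72) / 10.0     # pretend -72 dBm is best here
    update(state, a, reward, next_state)
    state = next_state
print(cst_candidates[int(np.argmax(Q[0]))])          # learned choice for state 0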
The cellular heterogeneous network (HetNet) with ultra dense small cells is called the ultra cellular HetNet. The energy efficiency of this network is very important for future green wireless communications. The data rates and power consumptions of three parts (i.e., macro cells, small cells, and mixed backhaul links) in the ultra cellular HetNet are jointly formulated to model the downlink energy efficiency, considering both active and inactive base stations (BSs). Then, in order to decrease the downlink co-channel interference, interference price functions are also jointly set up for the three parts in the ultra cellular HetNet. Next, an energy efficiency optimization iterative algorithm using fractional programming and Lagrangian multipliers, with constraints on the density of ultra dense small cells and the fraction of mixed backhaul links, is presented with interference pricing. The convergence and computational complexity of this scheme are also proved. The numerical simulations finally demonstrate the convergence behavior of the proposed algorithm. By comparison, some conclusions can be drawn. The maximum system energy efficiency decreases as the small-cell density increases. Maximizing energy efficiency with the interference price outperforms doing so without the interference price. And the energy efficiency increases as the fraction of mixed backhaul links becomes higher, because more power is consumed in the microwave backhaul links.
Unmanned aerial vehicles (UAVs) are applied widely and profoundly in various fields, and high-precision positioning and tracking in multiple scenarios are core requirements for UAV usage. To ensure stable communication of UAVs in denial environments with substantial electromagnetic interference, a systematic solution is proposed based on a deep learning algorithm for target detection and visible light for UAV tracking. Considering the cost and computational power limitations of the hardware, the you only look once (YOLO) v4-Tiny model is used for static target detection of the UAV model. For UAV tracking, a light tracker that can adjust the angle of the emitted light and focus it on the target is used for dynamic tracking. Thus, the primary conditions of UAV optical communication with good secrecy are achieved, and the solution is also suitable for dynamic situations. The UAV tracker positions the UAV model by returning the coordinates and calculating the time delay, and then controls the spotlight to target the UAV. In order to facilitate the deployment of deep learning models on hardware devices, the lighter and more efficient model is selected after comparison. The trained model can achieve 99.25% accuracy on the test set. Dynamic target detection can reach 20 frames per second (FPS) on a computer with an MX520 graphics processing unit (GPU) and 6 GB of random access memory (RAM), and 5.4 FPS on a Jetson Nano.
Existing learning-based super-resolution (SR) reconstruction algorithms are mainly designed for single images, ignoring the spatio-temporal relationship between video frames. Aiming at applying the advantages of learning-based algorithms to the video SR field, a novel video SR reconstruction algorithm based on a deep convolutional neural network (CNN) and spatio-temporal similarity (STCNN-SR) was proposed in this paper. It is a deep learning method for video SR reconstruction, which considers not only the mapping relationship between associated low-resolution (LR) and high-resolution (HR) image blocks, but also the spatio-temporal non-local complementary and redundant information between adjacent low-resolution video frames. The reconstruction speed can be improved obviously with the pre-trained end-to-end reconstruction coefficients. Moreover, the performance of video SR is further improved by the optimization process with spatio-temporal similarity. Experimental results demonstrate that the proposed algorithm achieves competitive SR quality in both subjective and objective evaluations, compared with other state-of-the-art algorithms.
This paper presents a modified circular-cut multiband fractal antenna with good radiation patterns designed for the digital cellular system (DCS), personal communication system (PCS), 2.4/5.2/5.8 GHz wireless local area network (WLAN) and 2.5/3.5/5.5 GHz worldwide interoperability for microwave access (WiMAX) applications simultaneously. Originally, the modified circular monopole antenna is designed to resonate at around 2.1 GHz and 3.6 GHz. After subtracting the circular iterative tree fractal structure, it produces three other resonances at around 5.6 GHz, 6.47 GHz and 7.89 GHz. Besides, as the number of iterations increases, not only do new frequency bands appear (demonstrating the good self-similarity property of the proposed antenna), but the operating bands also shift from high frequency to low frequency (showing the good space-filling property). Furthermore, the proposed antenna has a compact structure and achieves a relatively high gain of 5.28 dBi. The measured results are basically consistent with the simulated results, which proves the effectiveness of the proposed antenna.
A feature fusion approach is presented to extract the region of interest (ROI) from stereoscopic video. Based on the human vision system (HVS), the depth feature, the color feature and the motion feature are chosen as vision features. The algorithm is as follows. Firstly, color saliency is calculated at the superpixel scale. The color space distribution of the superpixel and the color difference between the superpixel and background pixels are used to describe color saliency, and the color salient region is detected. Then, the classic visual background extractor (Vibe) algorithm is improved in terms of the update interval and update region of the background model. The update interval is adjusted according to the image content, and the update region is determined through non-obvious movement region and background point detection, so the motion region of the stereoscopic video is extracted using the improved Vibe algorithm. The depth salient region is detected by selecting the region with the highest gray value. Finally, the three regions are fused into the final ROI. Experimental results show that the proposed method can extract the ROI from stereoscopic video effectively. In order to further verify the proposed method, a stereoscopic video coding application is also carried out on the joint model (JM) encoder with different bit allocations in the ROI and the background region.
In order to solve the energy crisis and pollution problems, the smart grid is widely used. However, there are many challenges during its construction, such as the management of distributed energy. Blockchain, as an emerging technology, can provide a secure and transparent solution for the decentralized network. Meanwhile, a fog computing network is considered to avoid the high deployment cost, since the edge servers have abundant computing and storage resources to act as nodes in the grid blockchain. In this paper, an innovative structure of a smart grid blockchain integrated with fog computing is proposed, and a new consensus mechanism called scalable proof of cryptographic selection (SPoCS) is designed to adapt to the hybrid networks. The mechanism not only includes a special index, the contribution degree, to measure the loyalty of fog nodes and the probability of being a function node, but also has a flexible block interval adjustment method. Meanwhile, the number of function nodes (validating nodes and ordering nodes) can also be adjusted, and a deep reinforcement learning (DRL) method is used to select the appropriate quantity to improve the performance under the strict constraints of security and decentralization. The simulation shows the scheme performs well in throughput, cost and latency.
In order to solve the impact of image degradation on object detection, an object detection method based on light field super-resolution (LFSR) is proposed. This method takes LFSR as an image enhancement step to provide high-quality images for object detection without using expensive imaging equipment. To evaluate this method, three types of objects, person, bicycle, and car, are chosen, and the results are compared in five aspects: detected object quantity, mean confidence score, detection results in different scenes, error detection, and detection results for different image sizes and detection speeds. Experimental results based on the common objects in context (COCO) dataset show that the method incorporating LFSR improves the performance of object detection models.
Complex Network Modeling and Application
In order to improve the reliability and resource utilization efficiency of the vehicle-to-vehicle (V2V) communication system, the fairness optimization and power allocation for the cognitive V2V network that takes into account a realistic three-dimensional (3D) channel are investigated in this paper. Large-scale and small-scale fading are considered in the proposed channel model. An adaptive non-orthogonal multiple access (NOMA)/orthogonal multiple access (OMA) scheme is proposed to reduce the complexity of successive interference cancellation (SIC) in decoding and improve spectrum utilization. Also, a fairness index that takes into account each user's requirements is proposed to indicate the optimal point clearly. Under imperfect SIC, the optimization problem of maximizing user fairness is formulated. Then, a subgradient descent method is proposed to solve the optimization problem with customizable precision, and the computational complexity of the proposed method is analyzed. The achievable rate, outage probability and user fairness are analyzed. The results show that the proposed adaptive NOMA/OMA (A-NOMA/OMA) scheme outperforms both NOMA and OMA. The simulation results are compared with validated analysis to confirm the theoretical analysis.
Considering the shortcomings of existing vehicle-to-vehicle (V2V) communication antennas, this paper proposes a regular hexagon broadband microstrip antenna. By loading shorting pins and etching V-shaped slots of different sizes at each corner of the regular hexagonal patch, it realizes impedance matching and obtains better impedance bandwidth. The simulated results show that the relative bandwidth of this antenna reaches 35.55%, covering the frequency band from 4.74 GHz to 6.79 GHz. The antenna acquires an omni-directional radiation pattern in the horizontal plane with an out-of-roundness of less than 0.5 dB. In addition, the antenna was manufactured and tested, and the measured results are basically consistent with the simulated results. Because the height of the antenna is 3 mm, it is easy to hide on the roof of a vehicle for V2V communication.
An area-efficient design methodology is proposed for the analog decoding implementation of the rate-1/2 accumulate repeat-4 jagged-accumulate (AR4JA) low density parity check (LDPC) code. The proposed approach uses an optimized decoding architecture and a regularized routing network, in such a way that the overall wiring overhead is minimized and the silicon area utilization is significantly improved. The prototyping chip used to verify the approach is fully integrated in a four-metal double-poly 0.35 μm complementary metal oxide semiconductor (CMOS) technology, and includes an input-output interface that maximizes the decoder throughput. The decoding core area is 2.02 mm2 with a post-layout area utilization of 80%. The decoder was successfully tested at the maximum data rate of 10 Mbit/s, with a core power consumption of 6.78 mW at 3.3 V, which corresponds to an energy per decoded bit of 0.677 nJ. The proposed analog LDPC decoder with low processing power and high reliability is suitable for space- and power-constrained spacecraft systems.
For a large-scale radio frequency identification (RFID) indoor positioning system, the positioning scale is relatively large, with less labeled data and more unlabeled data, and it is easily affected by multipath and white noise. An RFID positioning algorithm based on semi-supervised actor-critic co-training (SACC) was proposed to solve this problem. In this research, positioning is regarded as a Markov decision process. Firstly, the actor-critic was combined with random actions, and the best unlabeled received signal strength indication (RSSI) data were selected by semi-supervised co-training. Secondly, the actor and the critic were updated by employing the Kronecker-factored approximate curvature (K-FAC) natural gradient. Finally, the target position was obtained by co-locating with the labeled RSSI data and the selected unlabeled RSSI data. The proposed method reduces the cost of indoor positioning significantly by decreasing the amount of labeled data. Meanwhile, with the increase of positioning targets, the actor can quickly select unlabeled RSSI data and update the location model. Experiments show that, compared with other RFID indoor positioning algorithms, such as twin delayed deep deterministic policy gradient (TD3), deep deterministic policy gradient (DDPG), and actor-critic using Kronecker-factored trust region (ACKTR), the proposed method decreased the average positioning error by 50.226%, 41.916%, and 25.004%, respectively. Meanwhile, the positioning stability was improved by 23.430%, 28.518%, and 38.631%.
Attribute-based broadcast encryption (ABBE) under the continual auxiliary leakage-resilient (CALR) model can enhance the security of shared data in a broadcasting system, since the CALR model brings the possibility of new leakage-resilient (LR) guarantees. However, there are many shortcomings in the existing works, such as reliance on strong assumptions, low computational efficiency and large ciphertext size. How to resolve the trade-off between security and efficiency is a challenging problem at present. To solve these problems, this paper gives an ABBE scheme resisting continual auxiliary leakage (CAL) attacks. The ABBE scheme achieves constant-size ciphertexts, and the computational complexity of decryption only depends on the number of receivers instead of the maximum number of receivers of the system. Additionally, it achieves adaptive security in the standard model, where the security is reduced to the general subgroup decision (GSD) assumptions (also called static assumptions in the subgroup). Furthermore, it can tolerate leakage on the master secret key and private key with continual auxiliary inputs. Performance analysis shows that the proposed scheme is more efficient and practical than the available schemes.
Mobile robots have been used in many industrial scenarios, where they can replace human workers in automated manufacturing processes. To improve the quality of the optimal rapidly-exploring random tree (RRT*) for path planning in dynamic environments, a high-quality dynamic rapidly-exploring random tree (HQD-RRT*) algorithm is proposed in this paper, which generates a high-quality solution with optimal path length in a dynamic environment. This method proceeds in two stages: initial path generation and path re-planning. Firstly, the initial path is generated by an improved smart rapidly-exploring random tree (RRT*-SMART) algorithm, and the state tree information is stored as prior knowledge. During path execution, an obstacle avoidance strategy is proposed to avoid moving obstacles, in which the cost and smoothness of the path are considered when re-planning the initial path to improve its quality. Compared with related work, a higher-quality path in a dynamic environment can be achieved, and the HQD-RRT* algorithm can obtain an optimal path with better stability. Simulations in static and dynamic environments are conducted to demonstrate the efficiency of HQD-RRT* in avoiding unknown obstacles.
In modern society, it is often necessary to perform secure computations on private sets held by different entities. For instance, two merchants may wish to calculate the number of common customers and the total number of users without disclosing their own private data. To solve this problem, a semi-quantum protocol for private computation of set cardinalities based on Greenberger-Horne-Zeilinger (GHZ) states is proposed for the first time in this paper, where all parties only perform single-particle measurements when necessary. With the assistance of a semi-honest third party (TP), two semi-quantum participants can simultaneously obtain the intersection cardinality and the union cardinality. Furthermore, security analysis shows that the presented protocol can withstand well-known quantum attacks such as the intercept-measure-resend attack and the entangle-measure attack. Compared with existing quantum protocols for private set intersection cardinality (PSI-CA) and private set union cardinality (PSU-CA), the proposed protocol requires neither complicated oracle operations nor powerful quantum capabilities. Therefore, it is more appropriate for implementation with current technology.
A compact common-mode filter is proposed to suppress common-mode noise in high-speed differential signal traces. The filter adopts one large C-shaped defected ground structure (DGS) cell on the left side of the ground plane and two small C-shaped DGS cells with opposite orientation on the right side. Because these DGS cells have different dimensions, the filter has three adjacent equivalent resonant points, which suppress wideband common-mode noise effectively. The left C-shaped DGS cell and its adjacent C-shaped DGS cell form an approximately closed structure, which efficiently reduces the influence of the mutual capacitance. The filter provides common-mode suppression of more than 15 dB from 3.6 GHz to 14.4 GHz while occupying a small size of 10 mm × 10 mm. The fractional bandwidth of the filter is 120%, and the differential signals still maintain good signal integrity. The experimental results are in good agreement with the simulated results.
Objective assessment of network video quality is challenging because video quality can be distorted by various factors, including transmission and compression. To improve objective assessment, a method based on a Mamdani fuzzy inference system is proposed. Firstly, six quality parameters are introduced, and all of them are fed into a fuzzy logic controller system. Secondly, the outputs are used as the inputs of another fuzzy logic controller system to infer the objective quality of the network video. Lastly, the performance of the proposed method is validated on four videos under different network environments, and the method is compared with other methods. The experimental results show that the proposed method can improve the consistency between subjective and objective assessment.
Lattice-based hierarchical identity-based broadcast encryption (H-IBBE) schemes have broad application prospects in the quantum era, because they reduce the burden of the private key generator (PKG) and are suitable for one-to-many communication. However, previous lattice-based H-IBBE schemes are mostly constructed in the random oracle model with complex trapdoor delegation processes, which limits their practical applicability. A lattice-based H-IBBE in a fixed dimension under the standard model is proposed, which mainly consists of the binary tree encryption (BTE) system, the MP12 trapdoor function and the ABB10b trapdoor delegation algorithm. First, this paper uses the BTE system to eliminate the random oracle so that the scheme can be implemented under the standard model, and it uses the MP12 trapdoor function to reduce trapdoor generation complexity and obtain a secure and efficient trapdoor matrix. Second, this paper uses the ABB10b trapdoor delegation algorithm to delegate the user's private key, and the dimensions of the trapdoor matrices are the same before and after the trapdoor delegation. Comparative analysis shows that the trapdoor delegation process reduces complexity, and the sizes of the ciphertext and the trapdoor matrix do not increase as the trapdoor delegation goes deeper. The scheme achieves indistinguishability of ciphertexts under a selective chosen-ciphertext and chosen-identity attack (INDr-sID-CCA) in the standard model based on the learning with errors (LWE) hardness assumption.
With the popularity of adaptive multi-rate wideband (AMR-WB) audio in mobile communication, many AMR-WB based techniques have been proposed to transmit covert messages, such as using a similar compression architecture to embed secret information during compression. However, if the sender does not have the original WAV audio, such an architecture cannot be used. In this paper, a new covert message method is proposed that takes effect after WAV audio has been compressed into AMR-WB speech. The method takes advantage of the algebraic codebook search. Aiming at improving speed and reducing the search space, it performs the algebraic codebook search without the optimal search algorithm and without reaching the positions of non-zero pulses via the depth-first tree search that characterizes the energy of the audio. According to the features of the search methods and the construction of the codebook index, every track in each subframe is analyzed to find proper positions for embedding secret information. Experimental results show that the proposed method achieves satisfactory capacity and simplicity regardless of the compression process.
The rapid development of building information modelling (BIM) and its enabling technologies has attracted extensive attention in the field of architecture, engineering and construction (AEC). By combining BIM models with the real world, the potential of BIM can be further exploited with the help of augmented reality (AR) technology. However, a BIM model usually involves a huge amount of data. Considering the limited computing capability of current mobile devices, such applications suffer from significant performance problems, especially in model loading and rendering. To this end, an AR-based multi-user BIM collaboration system is proposed, which realises on-demand dynamic loading of the BIM model through a block-wise loading strategy based on model transformation, thereby solving the problem of model loading delay. In addition, dynamic rendering technology is adopted to solve the problem of rendering lag. Experimental results show that the realisation of virtual-reality fusion and interaction for the BIM model, together with remote multi-user collaboration, can effectively improve work efficiency and intelligence in the engineering field.
In convolutional neural networks (CNNs), the floating-point computation in the traditional convolutional layer is enormous, and the execution speed of the network is limited by this intensive computing, which makes it challenging to meet the real-time response requirements of complex applications. This work is based on the principle that the time-domain convolution result equals the frequency-domain point-wise multiplication result, which reduces the amount of floating-point calculation for convolution. The input feature map and the convolution kernel are converted to the frequency domain by the fast Fourier transform (FFT), the corresponding point-wise multiplication is performed, and the frequency-domain result is then converted back to the time domain to obtain the convolution output. In a CNN, the input feature map is usually much larger than the convolution kernel, resulting in many invalid operations, so the overlap-add method is adopted to reduce invalid calculations and further speed up network execution. This work designs a hardware accelerator for frequency-domain convolution and verifies its efficiency on the Xilinx Zynq UltraScale+ MPSoC ZCU102 board. For the visual geometry group 16 (VGG16) network on the ImageNet dataset, the frequency-domain convolution accelerator is 8.5 times faster than traditional time-domain convolution.
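As a plain illustration of the time-frequency equivalence the accelerator relies on (a software sketch only, not the hardware design), the following NumPy snippet computes a 2D convolution by zero-padded FFT point-wise multiplication and checks it against a direct time-domain convolution; the feature-map and kernel sizes are arbitrary examples.

```python
# Frequency-domain convolution: pad to the full linear-convolution size,
# multiply the spectra point-wise, and transform back to the time domain.
import numpy as np

def fft_conv2d(x, k):
    out_shape = (x.shape[0] + k.shape[0] - 1, x.shape[1] + k.shape[1] - 1)
    X = np.fft.rfft2(x, out_shape)
    K = np.fft.rfft2(k, out_shape)
    return np.fft.irfft2(X * K, out_shape)

rng = np.random.default_rng(0)
feat = rng.standard_normal((32, 32))          # toy input feature map
kern = rng.standard_normal((3, 3))            # toy convolution kernel

# Direct (time-domain) full convolution for comparison.
direct = np.zeros((34, 34))
for i in range(3):
    for j in range(3):
        direct[i:i + 32, j:j + 32] += kern[i, j] * feat

assert np.allclose(fft_conv2d(feat, kern), direct)
```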
In challenging environments, sensory data must be stored inside the network in case of sink failures, so overflowing data items need to be redistributed from the storage-depleted source nodes to sensor nodes with available storage space and residual energy. We design a distributed energy-efficient data storage algorithm named distributed data preservation with priority (D2P2). The algorithm takes both data redistribution costs and data retrieval costs into account and combines the two problems into a single one. D2P2 effectively realizes data redistribution through cooperative communication among sensor nodes. To solve the redistribution contention problem, we introduce the concept of data priority, which avoids contention negotiations between source nodes and reduces energy consumption. Finally, we verify the performance of the proposed algorithm by both theory and simulations, and demonstrate that D2P2 is close to the optimal centralized algorithm in terms of energy consumption and shows superiority in terms of data preservation time.
High efficiency video coding (HEVC) uses half the bitrate of H.264/advanced video coding (AVC) to encode the same sequence with similar quality. Thanks to the advanced hierarchical structures of coding units (CUs), prediction units (PUs), and transform units (TUs), HEVC adapts better when encoding full high definition (HD) and ultra high definition (UHD) videos. This coding efficiency comes at the cost of sharply increased complexity compared with H.264/AVC, mainly due to the quad-tree structure used to split pictures. In this study, the probability distribution generated by the rate-distortion optimization (RDO) cost is analyzed, and an early termination method is then proposed to decrease the complexity of HEVC based on these probability distributions. Experiments show that the coding time is reduced by 44.9% for HEVC intra coding, at the cost of a 0.61% increase in the Bjøntegaard delta rate (BD-rate) on average.
A hybrid model for broadband multiple-input multiple-output (MIMO) relay-aided indoor power line communication (PLC) systems is proposed in this paper. The proposed model combines the top-down and bottom-up approaches and is extended to a two-hop relay-aided cooperative system with variable-gain relaying in amplify-and-forward (AF) mode. Based on the proposed PLC model and the generated channel, the channel statistical characteristics are further investigated over the 2 MHz - 100 MHz bandwidth. Simulation results show that the proposed model overcomes the difficulty that existing models require extensive network topology information or measurement data, and it provides a practical simulation and analysis method for cooperative relay MIMO-PLC systems. The results also show that cooperative MIMO relaying can improve indoor PLC performance and communication reliability.
Currently, radio frequency identification (RFID) technology has been widely used in many kinds of applications. Store retailers use RFID readers with multiple antennas to monitor all tagged items. However, because of environmental interference and the limitations of radio frequency technology, RFID tags may be identified by more than one RFID antenna, leading to false positive readings. To address this issue, we propose an RFID data stream cleaning method based on K-means to remove these false positive readings within the sampling time. First, we formulate a new data stream model adapted to our cleaning algorithm. Then we present the preprocessing of the data stream model, including sliding window setting, feature extraction of the data stream and normalization. Next, we introduce a K-means clustering method to clean false positive readings. Last, the effectiveness and efficiency of the proposed method are verified by experiments, and it achieves a good balance between performance and cost.
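A rough sketch of the clustering step alone is given below; the per-(tag, antenna) window features (read count and mean RSSI) and the rule for picking the "true" cluster are illustrative assumptions rather than the paper's exact design.

```python
# Cluster (tag, antenna) statistics from one sliding window into two groups and
# keep the stronger group as true reads; the weaker group is treated as false positives.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

def clean_window(readings):
    """readings: iterable of (tag_id, antenna_id, rssi) tuples from one window."""
    groups = {}
    for tag, ant, rssi in readings:
        groups.setdefault((tag, ant), []).append(rssi)
    keys = list(groups)
    if len(keys) < 2:                      # nothing to separate in this window
        return set(keys)
    feats = np.array([[len(v), np.mean(v)] for v in groups.values()], dtype=float)
    X = MinMaxScaler().fit_transform(feats)                     # normalization step
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    strong = max((0, 1), key=lambda c: X[labels == c].mean())   # stronger cluster wins
    return {k for k, lab in zip(keys, labels) if lab == strong}
```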
Radio-frequency identification (RFID) antennas are critical components in wireless communication networks for the Internet of things (IoT), and RFID systems make it possible to realize the dynamic interconnection of various things. To better summarize the operating principles of RFID antennas and to associate antennas with specific complex applications, a review of RFID systems and antennas is necessary. In this paper, a review of reader antennas for ultra-high frequency (UHF) RFID systems is presented, and the categories of RFID systems are summarized for the first time. The antennas are classified according to the reading region and the operating principle. The reading region determines the most crucial performance that should be focused on when designing an antenna, while the operating principle affects the current distribution on the surface of the antenna and thus its electromagnetic radiation. This summary of RFID systems and antennas can help future researchers understand the operating principles of RFID antennas, which is helpful for the advanced design and implementation of RFID antennas. In addition, taking engineering requirements into account, the future prospects of RFID applications are discussed, as well as the challenges to be addressed.
To apply quasi-cyclic low density parity check (QC-LDPC) codes to different scenarios, a data-driven pipelined macro-instruction set and a reconfigurable processor architecture are proposed for typical QC-LDPC algorithms. The data-level parallelism is improved by instructions that dynamically configure the multi-core computing units. Simultaneously, an intelligent adjustment strategy based on a programmable wake-up controller (WuC) is designed so that the computing mode, operating voltage and frequency of the QC-LDPC algorithm can be adjusted, which improves the computing efficiency of the processor. The QC-LDPC decoders are verified on the Xilinx ZCU102 field programmable gate array (FPGA) board and the computing efficiency is measured. The experimental results indicate that the QC-LDPC processor can support two encoding lengths of three typical QC-LDPC algorithms and 20 adaptive operating modes of operating voltage and frequency. The maximum efficiency can reach 12.18 Mbit/(s·mW), which is more flexible than existing state-of-the-art QC-LDPC processors.
In order to improve the accuracy of used car price prediction, a machine learning prediction model based on the retention rate is proposed in this paper. Firstly, a random forest algorithm is used to screen the variables in the data, and seven main characteristic variables that affect used car prices, such as new car price, service time and mileage, are selected. Then, a linear regression classification method is introduced to divide the test data into high and low retention-rate groups. After that, an extreme gradient boosting (XGBoost) regression model is built for each of the two datasets. The prediction results show that the comprehensive evaluation index of the proposed model is 0.548, a significant improvement over the 0.488 of the original XGBoost model. Finally, compared with other representative machine learning algorithms, the model shows clear advantages in terms of mean absolute percentage error (MAPE), 5% accuracy rate and the comprehensive evaluation index. As a result, the retention rate-based machine learning model established in this paper has significant advantages in the accuracy of used car price prediction.
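A minimal sketch of the two-stage idea follows; the column names, the retention-rate definition and the 0.5 threshold are hypothetical placeholders rather than the paper's actual setup, and the xgboost package is assumed to be available.

```python
# Stage 1: classify samples into high/low retention-rate groups with linear regression.
# Stage 2: fit one XGBoost regressor per group and predict used-car prices separately.
import pandas as pd
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor

def fit_price_models(df: pd.DataFrame, feature_cols, threshold=0.5):
    retention = df["used_price"] / df["new_car_price"]   # assumed definition
    splitter = LinearRegression().fit(df[feature_cols], retention)
    high = splitter.predict(df[feature_cols]) >= threshold
    models = {}
    for name, mask in [("high", high), ("low", ~high)]:
        reg = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
        reg.fit(df.loc[mask, feature_cols], df.loc[mask, "used_price"])
        models[name] = reg
    return splitter, models  # route a new sample with `splitter`, then use the matching regressor
```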
Arbitrated quantum signature (AQS) is an important branch of quantum cryptography used to authenticate quantum information, and cryptanalysis of AQS protocols helps to evaluate and improve their security. Recently, it was discovered that an AQS protocol based on the chained controlled-NOT (CNOT) algorithm is vulnerable to a novel attack, because the transformation from binary keys into permutations and the chained CNOT algorithm have special properties that enable a malicious receiver to forge signatures with probability 1/2; moreover, a malicious signer can deny his signatures with probability 1/4. Two possible improved methods are then presented to resist these attacks: one pads constants to reduce the probability of successful attacks, and the other uses a circular chained CNOT algorithm to invalidate the attack strategy. The security analysis shows that both improved methods can resist these attacks well.
Many people in China suffer from physical and mental diseases and need physical and psychological rehabilitation. Traditional treatments work, but they require considerable money and labor. In recent years, virtual reality (VR) technology has become a new direction for innovating rehabilitation. This paper summarizes research results on VR technology in rehabilitation for stroke, anxiety, depression and autism. Based on these results, we propose a framework for a VR rehabilitation system with virtual agents, and a prototype system is developed to corroborate the proposed design guidelines. With the prototype rehabilitation system, autistic children can train their life skills in different scenarios. Preliminary test results show that the proposed framework can encourage users to take rehabilitation training actively and can serve as a new method for rehabilitation training.
In order to detect and cancel the self-interference (SI) signal from desired binary phase-shift keying (BPSK)
signal, a polarization-based optimal detection (POD) scheme for cancellation of digital SI in a full-duplex (FD) system is proposed. The POD scheme exploits the polarization domain to isolate the desired signal from the SI signal and then cancels the SI to obtain the interference-free desired signal at the receiver. In FD communication, after antenna and analog cancellation, the receiver still contains residual SI due to the non-linearities of hardware imperfections. In the POD scheme, a likelihood ratio expression is obtained, which isolates and detects the SI bits from the desired bits; after isolating these signal points, the POD scheme cancels the residual SI. Compared with conventional schemes, the proposed POD scheme gives a significantly lower bit error rate (BER), a clear constellation diagram for the boundary between desired and SI signal points, and better receiver SI cancellation performance in low signal-to-interference ratio (SIR) environments.
To solve the problem of mixing matrix estimation in underdetermined blind source separation (UBSS) when the number of sources is unknown, this paper proposes a novel mixing matrix estimation method based on average information entropy and cluster validity indices (CVIs). Firstly, the initial cluster centers are selected using the fuzzy C-means (FCM) algorithm and the corresponding membership matrix is obtained; the number of clusters is then determined by the joint decision of a CVI and the average information entropy of the membership matrix, and multiple cluster-number estimates are obtained by using multiple CVIs. Next, based on these estimates, the number of radiation sources is determined by majority voting. The cluster center vectors obtained by clustering with the estimated number of radiation sources are then fused according to their similarity, which yields the estimated mixing matrix. When the source signals are not sufficiently sparse, time-frequency single-source detection can be combined with the proposed method to estimate the mixing matrix. The effectiveness of the proposed method is validated by experiments.
The physical principle of infrared imaging leads to low contrast of the whole image and blurred contour and edge details, and infrared images are also sensitive to noise. To improve the quality and visual effect of infrared images, an adaptive weighted guided filter (AWGF) enhancement algorithm is proposed. The core idea of AWGF is an adaptive strategy for updating the weights of the guided filter (GF) parameters, which not only improves the accuracy of regularization parameter estimation in GF theory, but also removes infrared image noise and improves detail contrast. A large number of real infrared images were used to verify the AWGF algorithm, and good experimental results were obtained. Compared with other guided filtering algorithms, the halo phenomenon at image edges is significantly reduced by the AWGF algorithm, and the evaluation values of information entropy (IE), average gradient (AG), and moment of inertia (MI) are relatively high, which shows that the quality of infrared images processed by the AWGF algorithm is better.
Emotional space refers to a multi-dimensional emotional model that describes a group of subjective feelings or emotions. Since the existing discrete emotional spaces are mainly aimed at humans' primary emotions, they cannot describe the complex emotions evoked when watching movies. To solve this problem, an emotional fusion space for videos is constructed by selecting movies and TV dramas with rich emotional semantics as the research objects. Firstly, emotional words for movie and TV drama videos are acquired and analyzed using subjective evaluation and semantic analysis methods. Then, the emotional word vectors obtained from this analysis are fused, reduced in dimension by the t-distributed stochastic neighbor embedding (t-SNE) algorithm, and clustered by the bisecting K-means algorithm to obtain a discrete emotional space for movie and TV drama videos. This emotional fusion space can yield different category granularities by changing the number of emotion classes without re-labeling or re-computation.
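The dimensionality-reduction and clustering step can be sketched as follows; the word-vector matrix is random placeholder data, and BisectingKMeans assumes scikit-learn 1.1 or newer.

```python
# Reduce fused emotional word vectors with t-SNE, then regroup them with bisecting
# K-means at several granularities; changing n_clusters needs no re-labeling.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import BisectingKMeans

rng = np.random.default_rng(0)
word_vectors = rng.standard_normal((200, 300))     # placeholder for fused word vectors

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(word_vectors)
for n_clusters in (4, 6, 8):                       # different emotion-category granularities
    labels = BisectingKMeans(n_clusters=n_clusters, random_state=0).fit_predict(embedding)
    print(n_clusters, np.bincount(labels))
```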
Face anti-spoofing is used to assist face recognition systems in judging whether a detected face is real or fake. Traditional face anti-spoofing methods use hand-crafted features to describe the difference between living and fraudulent faces, but such features do not generalize to the various conditions of an unconstrained environment. Convolutional neural networks (CNNs) for face spoofing detection achieve considerable results; however, most existing neural network-based methods simply use neural networks to extract single-scale features from single-modal data, ignoring multi-scale and multi-modal information. To address this problem, a novel face anti-spoofing method based on multi-modal and multi-scale feature fusion (MMFF) is proposed. Specifically, a residual network (ResNet)-34 is first adopted to extract features of different scales from each modality, then these multi-scale features are fused by a feature pyramid network (FPN), and finally a squeeze-and-excitation fusion (SEF) module and a self-attention network (SAN) are combined to fuse features from different modalities for classification. Experiments on the CASIA-SURF dataset show that the proposed MMFF-based method achieves better performance than most existing methods.
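To make the fusion idea concrete, here is a small PyTorch sketch of squeeze-and-excitation style fusion of two modality feature maps; the module layout, channel size and two-modality setting are assumptions for illustration, not the paper's exact SEF design.

```python
# Concatenate RGB and depth feature maps, squeeze to channel statistics, and
# re-weight the concatenated channels with a learned excitation vector.
import torch
import torch.nn as nn

class SEFusion(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, 2 * channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(2 * channels // reduction, 2 * channels),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, depth_feat):
        x = torch.cat([rgb_feat, depth_feat], dim=1)      # (N, 2C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                   # squeeze then excite: (N, 2C)
        return x * w.unsqueeze(-1).unsqueeze(-1)          # channel-wise re-weighting

fused = SEFusion(256)(torch.randn(2, 256, 14, 14), torch.randn(2, 256, 14, 14))
```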
Current encrypted traffic classification methods typically use a single network framework such as a CNN, an RNN or an SAE and only construct a shallow network to extract features, which leads to low classification accuracy. To address this, we propose an encrypted traffic classification framework based on the fusion of Vision Transformer and temporal features. The framework uses BoTNet to extract spatial features and BiLSTM to extract temporal features in parallel, applies early fusion to combine the features from the two sub-networks, and finally identifies encrypted traffic from the fused features. The experimental results show that the proposed method enhances the performance of encrypted traffic classification by fusing multi-dimensional features: the accuracy of VPN/non-VPN binary classification reaches 99.9%, and the accuracy of fine-grained twelve-class encrypted traffic classification also exceeds 99%.
A novel high power supply rejection ratio (PSRR), high-order temperature-compensated subthreshold metal-oxide-semiconductor (MOS) bandgap reference (BGR) is proposed in the Semiconductor Manufacturing International Corporation (SMIC) 0.13 μm complementary MOS (CMOS) process. By adopting subthreshold MOS field-effect transistors (MOSFETs) and a piecewise-curvature temperature-compensation technique, the temperature performance of the output reference voltage of the subthreshold MOS BGR is effectively improved, and high PSRR performance is achieved by adopting a pre-regulator. Simulation results show that the temperature coefficient (TC) of the subthreshold MOS BGR is 1.38× /°C when the temperature varies from 40 °C to 125 °C with a power supply voltage of 1.2 V. The subthreshold MOS BGR achieves PSRR of 104.54 dB, 104.54 dB, 104.5 dB, 101.82 dB and 79.92 dB at 10 Hz, 100 Hz, 1 kHz, 10 kHz and 100 kHz, respectively.
Compressed sensing matrices based on affine symplectic space are constructed, and a comparison is made with the compressed sensing matrices constructed by DeVore based on polynomials over finite fields. Moreover, we merge our binary matrices with other low-coherence matrices, such as Hadamard matrices and discrete Fourier transform (DFT) matrices, using the embedding operation. Numerical simulations show that our matrices and the modified matrices are superior to Gaussian matrices and DeVore's matrices in recovering original signals.
NAND flash chips have evolved from two-dimensional (2D) designs based on planar NAND cells to three-dimensional (3D) designs based on vertical NAND cells. Two types of NAND flash technology, charge-trap (CT) and floating-gate (FG), are presented in this paper to introduce NAND flash designs in detail. The physical characteristics of CT-based and FG-based 3D NAND flash are analyzed, and the advantages and disadvantages of the two technologies in architecture, manufacture, interference and reliability are studied and compared.
The compatible-invariant subset of deterministic finite automata (DFA) is investigated to solve the problem of subset stabilization under the framework of the semi-tensor product (STP) of matrices. The concepts of compatible-invariant subset and largest compatible-invariant subset are introduced inductively for Moore-type DFA, and a necessary condition for the existence of the largest compatible-invariant subset is given. Meanwhile, by using the STP of matrices, a compatible feasible event matrix is defined with respect to the largest compatible-invariant subset. Based on this matrix, an algorithm to calculate the largest compatible-invariant subset contained in a given subset is proposed. Finally, an illustrative example is given to validate the results.
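For readers unfamiliar with the STP of matrices used here (and again in the Petri net abstract further below), the following NumPy sketch computes the standard left semi-tensor product A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), where A is m×n, B is p×q and t = lcm(n, p); the test matrices are arbitrary.

```python
# Left semi-tensor product: pad each factor with a Kronecker identity block so the
# inner dimensions meet at their least common multiple, then multiply normally.
import numpy as np
from math import lcm   # Python 3.9+

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

A = np.arange(6).reshape(2, 3)          # 2 x 3
B = np.arange(12).reshape(6, 2)         # 6 x 2, inner dimensions 3 and 6 -> t = 6
C = np.arange(6).reshape(3, 2)          # 3 x 2, dimensions already match

print(stp(A, B).shape)                  # (4, 2): rows grow by the padding factor t/n
assert np.allclose(stp(A, C), A @ C)    # reduces to the ordinary product when n == p
```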
Network virtualization provides a powerful way of sharing substrate networks, and efficient allocation of network resources for multiple virtual networks (VNs) has always been a challenging task. In particular, as the demand for customized VN requests increases, many problems arise as network conditions change dynamically. Especially when resource conflicts appear during the lifetime of VNs, the service provider (SP) needs to provide a fast and effective solution. Recently, software defined networking (SDN) has emerged as a new networking paradigm; its centralized control and customizable routing features present new opportunities for convenient and flexible embedding of VNs. However, due to the limitations of SDN, replacing all legacy devices in current operational networks with SDN-enabled switches is impractical in the short term. Thus, our study focuses on the scenario of VN embedding (VNE) in software-defined hybrid networks. In this work, we first propose partially deploying SDN nodes, then use the characteristics of SDN to allocate resources for VN requests and to redirect paths for conflicting requests in the hybrid SDN network. We formulate the problems and provide simple algorithms to solve them. Simulation results show that our scheme achieves high responsiveness and a high acceptance ratio.
The premise of image emotion recognition is to determine representative emotional adjectives and establish a quantifiable emotion space. In this paper, focusing on the emotions aroused by film and television (TV) scene images, a method of selecting emotional adjectives and establishing the emotion space based on subjective perception experiments is proposed. Firstly, a special dataset of film and TV scene images was established and a set of initial emotional adjectives was collected. Then a subjective perception experiment was designed in which subjects evaluated the affective relevance of all the initial adjectives while watching these scene images. The method of principal basis analysis was then used for variable selection, and finally factor analysis was applied to accomplish a second dimension reduction and form a 5-dimensional (5D) orthogonal emotion space. The optimized emotion space can explain more than 94% of the original emotional adjectives, which greatly reduces the dimension of the emotional adjectives and lays a foundation for further research on image content and emotion recognition.
Cross-project defect prediction (CPDP) uses one or more source projects to build a defect prediction model and applies the model to a target project. There is usually a large difference between the data distributions of the source and target projects, which makes it difficult to construct an effective defect prediction model. To alleviate the negative transfer between the source and target projects in CPDP, this paper proposes an integrated transfer adaptive boosting (TrAdaBoost) algorithm based on multi-source data sets (MSITrA). The algorithm uses an existing two-stage data filtering algorithm to obtain source project data related to the target project from multiple source projects, and then uses the proposed integrated TrAdaBoost algorithm to build the CPDP model. Experimental results on 15 public data sets from the Promise repository show that: 1) the proposed cross-project software defect prediction model performs best among all tested CPDP methods; 2) in the within-project software defect prediction (WPDP) experiments, the proposed CPDP method achieves better results than the tested WPDP methods.
With the development of maritime informatization and the increasing generation of marine data, the demand for efficient and reliable maritime communication is surging. However, the harsh and dynamic marine communication environment can distort the transmitted signal, which significantly weakens communication performance. Therefore, for maritime wireless communication systems, channel estimation is often required to track a channel that suffers from the impacts of changing environmental factors. Since there is no universal maritime communication channel model and the channel varies dynamically, the channel estimation method needs to make decisions dynamically without prior knowledge of the channel distribution. This paper studies the radio channel estimation problem for wireless communications over the sea surface. To improve estimation accuracy, this paper casts the uncertainty of channel state information (CSI) as a multi-armed bandit (MAB) problem and proposes a dynamic channel estimation algorithm that explores the globally changing channel information and asymptotically minimizes the estimation error. With the aid of the MAB formulation, the estimation not only adapts dynamically to channel variation, but also does not need to know the channel distribution. Simulation results show that the proposed algorithm achieves higher estimation accuracy than matching pursuit (MP)-based and fractional Fourier transform (FrFT)-based methods.
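The abstract does not give the exact bandit formulation, so the snippet below is only a generic illustration of the MAB machinery it invokes: a UCB1 learner picking among a few hypothetical candidate estimators, with a made-up accuracy probability as the reward.

```python
# UCB1 over hypothetical channel-estimation "arms"; rewards are Bernoulli draws
# whose success probabilities stand in for (unknown) estimation accuracy.
import numpy as np

rng = np.random.default_rng(1)
true_accuracy = np.array([0.55, 0.70, 0.62])       # assumed, unknown to the learner
n_arms, horizon = len(true_accuracy), 2000
counts, rewards = np.zeros(n_arms), np.zeros(n_arms)

for t in range(horizon):
    if t < n_arms:                                  # initialise: play each arm once
        arm = t
    else:
        ucb = rewards / counts + np.sqrt(2 * np.log(t + 1) / counts)
        arm = int(np.argmax(ucb))
    counts[arm] += 1
    rewards[arm] += rng.binomial(1, true_accuracy[arm])

print("empirical means:", np.round(rewards / counts, 3), "pulls:", counts.astype(int))
```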
This paper focuses on linear transceiver design for the multiple-input multiple-output (MIMO) interference channel (IC), in which a bounded channel error model is assumed. Two optimization problems are formulated: minimizing the maximum per-user mean square error (MSE) and minimizing the sum MSE, both under per-transmitter power constraints. Since these optimization problems are not jointly convex in their variable matrices, the transmitter and receiver are optimized alternately. For each matrix, an approximated approach is presented in which an upper bound of the constraint is derived so that fewer semidefinite constraints are involved; the problem can then be cast as second-order cone programming (SOCP) with lower computational complexity. Compared with the conventional S-procedure method, the proposed approach achieves similar performance but reduces the complexity significantly, especially for systems with a large number of antennas.
In modern data centers, the power consumed by the network is a noticeable portion of the total energy budget, so improving the energy efficiency of data center networks (DCNs) truly matters. One effective way to achieve this is to make the size of DCNs elastic along with traffic demands through flow consolidation and bandwidth scheduling, i.e., turning off unnecessary network components to reduce power consumption. Meanwhile, with its intrinsic support for data center management, software defined networking (SDN) provides a paradigm to elastically control the resources of DCNs. To achieve such power savings, most prior efforts adopt simple greedy heuristics to reduce computational complexity; however, due to the inherent limitations of greedy algorithms, a good-enough optimization cannot always be guaranteed. To address this problem, a modified hybrid genetic algorithm (MHGA) is employed to improve the solution's accuracy, and the fine-grained routing function of SDN is fully leveraged. The simulation results show that more efficient power management can be achieved than in previous studies, increasing network energy savings by about 5%.
Human motion prediction is a critical issue in human-robot collaboration (HRC) tasks. To reduce the local error caused by the limited capture range and sampling frequency of the depth sensor, a hybrid human motion prediction algorithm, optimized sliding window polynomial fitting with recursive least squares (OSWPF-RLS), is proposed. The OSWPF-RLS algorithm takes the human body joint data obtained in the HRC task as input and uses recursive least squares (RLS) to predict the human movement trajectories within the time window. Then, optimized sliding window polynomial fitting (OSWPF) is used to calculate the multi-step prediction values, and the increment of the multi-step prediction values is appropriately constrained. Experimental results show that, compared with existing benchmark algorithms, the OSWPF-RLS algorithm improves the multi-step prediction accuracy of human motion and enhances the ability to respond to different human movements.
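A rough sketch of the two ingredients, not of OSWPF-RLS itself, is given below: a standard exponentially weighted RLS predictor for one joint coordinate and a sliding-window polynomial fit extrapolated a few steps ahead; the tap count, window length, polynomial order and synthetic trajectory are all assumptions.

```python
# Recursive least squares one-step prediction plus sliding-window polynomial
# extrapolation for multi-step prediction of a 1-D joint trajectory.
import numpy as np

def rls_predict(samples, taps=4, lam=0.98):
    w, P = np.zeros(taps), np.eye(taps) * 1e3
    for t in range(taps, len(samples)):
        x = samples[t - taps:t][::-1]                 # regression vector
        k = P @ x / (lam + x @ P @ x)                 # RLS gain
        w += k * (samples[t] - w @ x)                 # coefficient update
        P = (P - np.outer(k, x @ P)) / lam            # inverse-correlation update
    return w @ samples[-taps:][::-1]                  # one-step-ahead prediction

def poly_extrapolate(window, steps=3, order=2):
    t = np.arange(len(window))
    coeff = np.polyfit(t, window, order)              # fit the sliding window
    return np.polyval(coeff, t[-1] + np.arange(1, steps + 1))

traj = np.sin(np.linspace(0, 4, 120)) + 0.01 * np.random.default_rng(0).normal(size=120)
print(rls_predict(traj), poly_extrapolate(traj[-20:]))
```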
Objective video quality assessment methods often evaluate all frames regardless of their importance. For wireless distorted videos, not every frame contributes equally to the final overall quality because of channel fading and interference, which cause temporal capacity variation. Besides, given the content similarity and error propagation patterns in the temporal domain, it is possible to evaluate the overall quality with only part of the frames. In this paper, a demonstration shows that video quality can be evaluated with a reduced frame set (RFS), and a state transition model is proposed to extract the RFS. Finally, a video quality assessment (VQA) method is developed based on the RFS. Compared with several state-of-the-art methods, our method achieves suitable accuracy while processing fewer frames.
The reachability problem of synchronizing-transition bounded Petri net systems (BPNSs) is investigated in this paper by constructing a mathematical model for the dynamics of BPNSs. Using the semi-tensor product (STP) of matrices, the dynamics of BPNSs, which can be viewed as a combination of several small bounded subnets via synchronizing transitions, are described by an algebraic equation. Once the algebraic form of the dynamics is established, a necessary and sufficient condition is presented for the reachability between any marking (or state) and the initial marking. A corresponding algorithm is also given to calculate all of the transition paths between the initial marking and any target marking. Finally, an example is shown to illustrate the proposed results. The key advantage of our approach is that the set of reachable markings of a BPNS can be expressed via the reachable markings of its subnets, so the large reachability set of the BPNS does not need to be generated, which partly avoids the state explosion problem of Petri nets (PNs).
An ant colony optimization (ACO) based load-balancing routing and wavelength assignment (RWA) algorithm (ALRWA) is put forward to achieve fair load balancing over the entire optical satellite network. A multi-objective optimization model is established considering the characteristics of the global traffic distribution; it not only employs the traffic intensity to modify the light path cost, but also monitors the wavelength utilization of optical inter-satellite links (ISLs). An ACO algorithm is then utilized to solve this model and find an optimal light path for every connection request. The optimal light path has the minimum light path cost while satisfying the constraints on wavelength utilization, transmission delay and wavelength continuity. Simulation results show that ALRWA performs well in blocking probability and realizes efficient load balancing, while the average transmission delay can meet the basic requirement of real-time business transmission.
In heterogeneous wireless networks, there are various kinds of service demands from users. A network selection algorithm based on the analytic hierarchy process (AHP) and similarity is proposed to solve this problem. The services are divided into three classes: conversational, streaming and interactive. According to the characteristics of each service, a different judgment matrix is assigned, and the AHP method is then used to calculate the network attribute weights. Taking the dynamic changes in user demands and the network environment into account, a formula based on the Lance distance is derived to compute attribute similarity, which evaluates the degree of conformity between user requirements and network attributes; the similarity between the user requirements and each network attribute is calculated and then weighted to obtain the total similarity. The network with the largest total similarity is the best choice. Simulation results demonstrate the effectiveness of the proposed scheme in improving the quality of service (QoS) according to the user requirements under the three kinds of services.
Adaptive learning paths provide individual learning objectives that best match a learner's characteristics, which is especially helpful when learners need to balance limited available learning time and multiple learning objectives. The automatic generation of personalized learning paths to improve learning efficiency has therefore attracted significant interest. However, most current research only focuses on providing learners with adaptive objects and sequences according to their own interests or learning goals under a normal amount of time or ordinary conditions. Little research helps learners obtain the most important knowledge for a test in the shortest time possible, which is a typical scenario in examination-oriented education systems. This study aims to solve this problem by introducing a new approach that builds on existing methods. First, the eight properties of Gardner's multiple intelligence theory are introduced into the knowledge and learner models to define the relationship between learning objects (LOs) and learners, thereby improving recommendation accuracy. Then, a novel adaptive learning path recommendation model is presented in which viable knowledge topologies, knowledge bases and the previously established properties relating to a learner's ability are combined by Dempster-Shafer (D-S) evidence theory. A series of practical experiments were performed to assess the approach's adaptability, the appropriateness of the selected evidence and the effectiveness of the recommendations. The results show that the proposed learning path recommendation model helps learners learn the most important elements and obtain superior test grades when confronted with limited learning time.
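Because the recommendation model combines evidence with Dempster-Shafer theory, a compact sketch of Dempster's rule of combination is given below; the frame of discernment and the two mass functions are toy values, not the paper's actual evidence sources.

```python
# Dempster's rule: multiply the two mass functions over intersecting focal sets
# and renormalize by 1 - K, where K is the total mass assigned to conflicts.
from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Toy frame {L, M, H}, e.g. low/medium/high mastery of a learning object.
m1 = {frozenset("L"): 0.5, frozenset("MH"): 0.3, frozenset("LMH"): 0.2}
m2 = {frozenset("M"): 0.4, frozenset("LM"): 0.4, frozenset("LMH"): 0.2}
print(combine(m1, m2))
```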
A high-resolution, fast-conversion-rate time-to-digital converter (TDC) based on a time amplifier (TA) is proposed. The pulse-train TA employs a two-step scheme: the input time interval is first amplified by an N-times TA and the effective time is extracted in a pulse train using a time register; the resulting interval is then further amplified by another pulse-train amplifier to obtain the final result. The two-step TA can thus achieve the large gain that is critical for a high-resolution TDC. Simulation results in a 1.2 V, 65 nm technology show that, for a 10 bit TDC, a resolution of 0.8 ps and a conversion rate of 150 MS/s are achieved while consuming 2.1 mW.
Pushing popular contents to the edge of the network can meet the growing demand for data traffic, reduce latency and relieve the pressure on the backhaul. However, considering the limited storage space of base stations, it is impossible to cache all contents, especially in ultra-dense networks (UDN). Furthermore, the uneven distribution of mobile users results in load imbalance among small base stations (SBSs) in both time and space, which also affects the caching strategy. To overcome these shortcomings, the impact of the changing load imbalance in UDN is investigated, and a dynamic hierarchical collaborative caching (DHCC) scheme is proposed to optimize latency and cache hit rate. The storage of each SBS is logically divided into an independent caching layer and a collaborative caching layer. The independent caching layer caches the most popular contents for local users' interest, while the collaborative caching layer caches as many contents as possible for the benefit of content diversity in the region. Different SBSs have different layer division ratios according to their real-time traffic load: for SBSs with heavy load, the independent caching layer is allocated more space; otherwise, the collaborative caching layer can store more contents with larger space. The simulation results show that DHCC improves both transmission latency and hit rate compared with existing caching schemes.
Aiming at factories with high complexity and multiple terminals in the industrial Internet of things (IIoT), a hierarchical edge networking collaboration (HENC) framework based on cloud-edge collaboration and computing first networking (CFN) is proposed to effectively improve task-processing capability with fixed computing resources at the edge. To optimize the delay and energy consumption in HENC, a multi-objective optimization (MOO) problem is formulated. Furthermore, to improve the efficiency and reliability of the system, a resource prediction model based on ridge regression (RR) is proposed to forecast the task size of the next time slot, and an emergency-aware (EA) computing resource allocation algorithm is proposed to reallocate tasks in the edge CFN. Simulation results show that the EA algorithm is superior to greedy resource allocation in terms of time delay, energy consumption and quality of service (QoS), especially with limited computing resources.
To reduce the side-lobe level of an L-shaped expansion array and improve the output signal to interference and noise ratio (SINR), a side-lobe constraint algorithm based on minimum variance distortionless response (MVDR-SC) is proposed. Firstly, the mixed diagonal loading and Mailloux-Zatman (DLMZ) approach is used to taper the covariance matrix of the expansion array. Then, second-order cone programming (SOCP), obtained by constructing a new matrix, is used to control the beam side-lobe. Finally, the new adaptive weights are constructed by adjusting the proportion between DLMZ and SOCP. Simulation results show that the MVDR-SC algorithm can effectively reduce the side-lobe of beamforming for the L-shaped expansion array and obtain a larger output SINR, while also being robust to mutual coupling errors.
Hard competition learning has the feature that each point modifies only the one cluster centroid that wins, whereas soft competition learning lets each point modify not only the winning centroid but also many other centroids near the point. A soft competition learning method is proposed. Centroid all rank distance (CARD), CARDx, and centroid all rank distance batch K-means (CARDBK) are three clustering algorithms that adopt the proposed soft competition learning method. In these algorithms, the extent to which a point affects a cluster centroid depends on the distances from the point to the other nearer cluster centroids, rather than just on the rank of the distance from the point to this centroid among the distances from the point to all centroids. In addition, validation experiments are carried out to compare the three soft competition learning algorithms CARD, CARDx, and CARDBK with several hard competition learning algorithms as well as the neural gas (NG) algorithm on five data sets from different sources. Judging from the values of five performance indexes in the clustering results, this kind of soft competition learning method achieves better clustering effect and efficiency, and has linear scalability.
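The abstract does not spell out the CARD weighting, so the sketch below only illustrates soft competition learning in general: every sample pulls every centroid, with a weight that decays with the centroid's distance rank for that sample (a neural-gas-style stand-in rather than the CARD rule); the data are synthetic.

```python
# Soft competitive update: the winning centroid moves most, but lower-ranked
# centroids are also nudged toward each sample with exponentially smaller weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ((0, 0), (3, 0), (0, 3))])
centroids = rng.normal(size=(3, 2))

lr, decay = 0.05, 0.5
for epoch in range(40):
    for x in rng.permutation(X):
        dists = np.linalg.norm(centroids - x, axis=1)
        ranks = np.argsort(np.argsort(dists))        # 0 = winner, 1 = runner-up, ...
        weights = np.exp(-ranks / decay)             # soft, rank-based influence
        centroids += lr * weights[:, None] * (x - centroids)

print(np.round(centroids, 2))                        # should sit near the three cluster centres
```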
Nowadays, network video services are increasing explosively, but quality of experience (QoE) models for network video quality remain unstable, since video quality can be impaired by many factors. This paper proposes QoE models for network video quality consisting of two components. 1) A perceptual video quality model that considers the impairment factors related to video content as well as the distortion caused by content and transmission; the model is built with a decision tree using a set of features measured from the network video, and it can qualitatively give the grade of video quality and improve prediction accuracy. 2) Based on the above model, another model is proposed to give a concrete objective score of video quality; it also considers the original impairment factors and predicts video quality using a fuzzy decision tree. The two models have their own advantages: the first has lower computational complexity, while the second is more precise. Both models are validated by actual experiments and improve the accuracy of objective assessment; detailed results are presented.
The sudden surge of various applications poses great challenges to the computation capability of mobile devices. To address this issue, computation offloading to multi-access edge computing (MEC) has been proposed as a promising paradigm. This paper studies a partial computation offloading scenario that considers both time delay and energy consumption, where a task can be split into several blocks and computed partly on local devices and partly on the MEC. Since the formulated problem is nonconvex, this paper proposes an ant colony-based algorithm to obtain a suboptimal solution. Specifically, the proposed method first establishes a multi-user, single-MEC scenario in which user devices can offload part of a task to the MEC server, and then develops an ant colony-based algorithm to decide the offloading portions and the MEC resource allocation strategy that minimize the system cost. Finally, simulation results show the effectiveness of the proposed algorithm in terms of system cost and demonstrate that it outperforms other existing methods.
Electromagnetic Field and Microwave Technology
A lumped-element lowpass filter (LPF) for ultra-high frequency (UHF) radio frequency (RF) front-end systems is presented based on multilayer liquid crystal polymer (LCP). The lumped-element LPF achieves miniaturization and one transmission zero in the stopband by means of an 8-shaped inductor. The filter is fabricated on a 4-layer LCP substrate with a compact size of 9 mm × 14 mm × 0.193 mm. The measured cutoff frequency of the lumped-element LPF is 0.5 GHz with an insertion loss (IL) of less than 0.37 dB. Both measured and simulated results suggest that it is a possible candidate for UHF RF front-end systems.
A two-dimensional direction-of-arrival (DOA) estimation method for a non-uniform two-L-shaped array is presented in which the element spacing is larger than half a wavelength. To automatically extract paired low-variance cyclically ambiguous direction cosines and high-variance unambiguous direction cosines from the sub-blocks, the proposed method constructs and partitions the cross-correlation matrices. Then, the low-variance unambiguous direction cosines are obtained using an ambiguity resolution technique. Simulation results demonstrate that the proposed method has lower computational complexity and higher resolution than existing methods, especially when the elevation angles are between 70 and 90 degrees.
Real-time graphics processing has always been a crucial task for mobile devices and is conventionally supported by a programmable graphics processing unit (GPU). These GPUs are designed to flexibly support vertex and pixel processing with classic techniques such as on-chip caches and dynamic programmable pipelining. However, it is difficult for the vertex shader and pixel shader to achieve high utilization of hardware resources, even with a reasonable ratio of processor quantities. In this paper, a unified render shader based on a very long instruction word (VLIW) processor is designed, and the viewport transformation algorithm and the mipmap mapping algorithm are mapped onto the shader, with the purpose of providing an energy-efficient and flexible hardware platform for graphics processing in mobile devices. The implementation runs at up to 134 MHz on a Xilinx XC7Z045-2-FFG900 field programmable gate array (FPGA), and the unified architecture shader achieves 134 Mpixels/s in pixel fill rate and 546 Mtexels/s in texel fill rate.
Traditional methods for removing ocular artifacts (OAs) from electroencephalography (EEG) signals often involve a large number of EEG electrodes or require an electrooculogram (EOG) as the reference; these constraints make subjects uncomfortable during acquisition and increase the complexity of brain-computer interfaces (BCI). To address these limitations, a method combining a convolutional autoencoder (CAE) and a recursive least squares (RLS) adaptive filter is proposed. The proposed method consists of offline and online stages. In the offline stage, the peak and local mean of the four-channel EOG signals are automatically extracted to train the CAE model; once the model is trained, the EOG channels are no longer needed. In the online stage, the CAE model identifies the OAs from a single-channel raw EEG signal, and the identified OAs and the raw EEG signal are used as the reference and input of an RLS adaptive filter. Experiments show that the root mean square errors (RMSE) of the CAE-RLS algorithm and independent component analysis (ICA) are 1.2533 and 1.2546 respectively, and the power spectral density (PSD) curve of the CAE-RLS is similar to that of the original EEG signal. These results indicate that, using only a couple of EEG channels, the proposed method can effectively remove OAs without parallel EOG recordings and accurately reconstruct the EEG signal. In addition, the processing time of the CAE-RLS is shorter than that of ICA, so the CAE-RLS algorithm is well suited to BCI systems.
Mobile edge computing (MEC) networks can provide a variety of services for different applications. End-to-end
performance analysis of these services serves as a benchmark for the efficient planning of network resource allocation
and routing strategies. In this paper, a performance analysis framework is proposed for the end-to-end data-flows in
MEC networks based on stochastic network calculus (SNC). Due to the random nature of routing in MEC
networks, probability parameters are introduced in the proposed analysis model to characterize this randomness in
the derived expressions. Taking actual communication scenarios into consideration, the end-to-end performance of
three network data-flows is analyzed, namely, voice over Internet protocol (VoIP), video, and file transfer protocol
(FTP). These network data-flows adopt the preemptive priority scheduling scheme. Based on the arrival processes
of these three data-flows, the effect of interference on their performances and the service capacity of each node in
the MEC networks, closed-form expressions are derived to show the relationship between delay, backlog upper
bounds, and violation probability of the data-flows. Analytical and simulation results show that delay and backlog
performances of the data-flows are influenced by the number of hops in the network and the random probability
parameters of interference-flow (IF).
To deal with the secrecy and energy efficiency issues in unmanned aerial vehicle (UAV) assisted communication systems, a UAV-enabled multi-hop mobile relay system is studied in an urban environment. Multiple rotary-wing UAVs with energy budget considerations are employed as relays to forward confidential information between two ground nodes in the presence of multiple passive eavesdroppers. The system secrecy energy efficiency (SEE), defined as the ratio of the minimum achievable secrecy rate (SR) to the total propulsion energy consumption (PEC), is maximized by jointly optimizing the trajectory and transmit power of each UAV relay. To solve the formulated non-convex fractional optimization problem subject to mobility, transmit power and information-causality constraints, an effective iterative algorithm is proposed by applying an update-rate-assisted block coordinate descent method, the successive convex approximation (SCA) technique and the Dinkelbach method. Simulation
results demonstrate the effectiveness of the proposed joint trajectory design and power control scheme.
In this paper, a modified susceptible-infected-susceptible (SIS) epidemic model is proposed on community-structure networks considering the birth and death of nodes. Since node deaths change the topology of the global network, the characteristics of a network with a death rate are discussed. We then study the epidemic behavior based on mean-field theory and derive the relationships between the epidemic threshold and other parameters, such as the modularity coefficient, the birth rate and the death rates (caused by disease or other reasons). In addition, the stability of the endemic equilibrium is analyzed. Theoretical analysis and simulations show that the epidemic threshold increases with the increase of the two kinds of death rates, while it decreases with the increase of the modularity coefficient and the network size.
Compressed sensing (CS) provides a new approach to data acquisition as a sampling technique and ensures that a sparse signal can be reconstructed from few measurements. The construction of sensing matrices is a central problem in compressed sensing. This paper provides a construction of deterministic CS matrices, which are also disjunct and inclusive matrices, from singular pseudo-symplectic space over finite fields of characteristic 2. Our construction is superior to DeVore's construction under some conditions and can be used to reconstruct sparse signals through an efficient algorithm.
Auctions have been widely used to tackle spectrum allocation and sharing in the secondary market under the condition of spectrum scarcity. In real communication systems, such as broadband communication, the utilization of spectrum resources varies because of different requirements and complex application scenarios, so existing schemes cannot be directly applied to such wireless communication systems. To solve this problem, a new model in which sellers/buyers can sell/buy multiple units of heterogeneous spectrum is proposed, and a truthful multi-unit double auction framework is designed for heterogeneous spectrum trading. A valuation function is first applied to represent the buyer's true valuation of the sub-band and reflect the buyer's satisfaction degree, and a novel concept termed 'virtual player' is introduced. Then the buyer group is constructed based on the conflict graph to reuse the same spectrum among interference-free buyers in both spatial and temporal domains. The winner determination strategy and the clearing price algorithm are designed elaborately. According to the theoretical analysis, the scheme satisfies three critical economic properties: truthfulness, individual rationality, and budget balance. Finally, simulation results show that the proposed scheme achieves better user satisfaction, auction efficiency and spectrum reuse rate for the real communication system. The proposed auction framework is practical and effective.
To progressively provide the competitive rate-distortion performance for aerial imagery, a quantized block compressive sensing (QBCS) framework is presented, which incorporates two measurement-side control parameters: measurement subrate (S) and quantization depth (D). By learning how different parameter
combinations may affect the quality-bitrate characteristics of aerial images, two parameter allocation models are derived between a bitrate budget and its appropriate parameters. Based on the corresponding allocation models, a model-guided image coding method is proposed to pre-determine the appropriate (S, D) combination for acquiring an aerial image via QBCS. The data-driven experimental results show that the proposed method can achieve near-optimal quality-bitrate performance under the QBCS framework.
The indoor positioning system based on fingerprints receives more and more attention due to its high positioning accuracy and time efficiency. In existing positioning approaches, much consideration is given to improving positioning accuracy by using the angle of the signal, but the optimization of access point (AP) deployment is ignored. To address this, an adaptive AP deployment approach is proposed. First of all, a criterion for the effective coverage of reference points (RPs) is proposed, and the number of APs to deploy in the target environment is obtained by using the region partition algorithm and the full coverage algorithm. Secondly, the wireless signal propagation model is established for the target environment, and meanwhile, based on the initial AP deployment, a simulated fingerprint database is constructed in order to establish the discrimination function with respect to the fingerprint database. Thirdly, the greedy algorithm is applied to optimize the AP deployment. Finally, extensive experiments show that the proposed approach is capable of achieving adaptive AP deployment as well as improving positioning accuracy.
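As a rough illustration of the greedy deployment step named above, the sketch below repeatedly picks the candidate AP location that covers the most still-uncovered reference points; the candidate locations, coverage sets and target AP count are illustrative assumptions rather than the paper's actual data.

```python
# Minimal sketch of a greedy AP selection step: repeatedly pick the candidate
# location covering the most still-uncovered reference points (RPs).
# The candidate/coverage data and the target AP count are illustrative assumptions.

def greedy_ap_deployment(coverage: dict[str, set[int]], num_aps: int) -> list[str]:
    """coverage maps a candidate AP location to the set of RP indices it covers."""
    chosen, covered = [], set()
    for _ in range(num_aps):
        # pick the candidate that adds the largest number of new RPs
        best = max(coverage, key=lambda ap: len(coverage[ap] - covered))
        if not coverage[best] - covered:
            break  # no candidate improves coverage any further
        chosen.append(best)
        covered |= coverage[best]
    return chosen

candidates = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {2, 6}}
print(greedy_ap_deployment(candidates, num_aps=2))  # e.g. ['A', 'C']
```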
Application programming interface (API) is a procedure call interface to operating system resources. API-based behavior features can capture the malicious behaviors of malware variants. However, existing malware detection approaches involve many complex operations in graph construction and matching. Furthermore, graph matching, which is adopted in many approaches, is a nondeterministic polynomial (NP)-complete problem because of its computational complexity. To address these problems, a novel approach is proposed to detect malware variants. Firstly, the APIs of the malware are divided by their functions and parameters. Then, a classified behavior graph (CBG) is constructed from the API call sequences. Finally, a signature based on CBGs is generated for each malware family. Besides, the malware variants are classified by an ensemble learning algorithm. Experiments on 1 220 malware samples show that the true positive rate (TPR) is up to 89.0% with a low false positive rate (FPR) of 3.7% by ensemble learning.
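To make the graph construction idea concrete, the following sketch builds a tiny transition graph from an API call sequence and compares two samples with a crude Jaccard measure; the category names and the comparison are illustrative assumptions, not the paper's exact CBG or signature definition.

```python
# Illustrative sketch of turning an API call sequence into a small behavior
# graph: nodes are API categories, edges count observed call transitions.
# Category labels and the Jaccard comparison are assumptions for illustration only.
from collections import Counter

def behavior_graph(call_sequence: list[str]) -> Counter:
    """Return a Counter of (caller_category, callee_category) transition edges."""
    return Counter(zip(call_sequence, call_sequence[1:]))

def edge_jaccard(g1: Counter, g2: Counter) -> float:
    """Crude graph similarity: Jaccard index over the sets of edges."""
    e1, e2 = set(g1), set(g2)
    return len(e1 & e2) / len(e1 | e2) if e1 | e2 else 0.0

sample_a = ["file_open", "file_write", "net_connect", "net_send"]
sample_b = ["file_open", "file_write", "reg_set", "net_connect", "net_send"]
print(edge_jaccard(behavior_graph(sample_a), behavior_graph(sample_b)))
```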
City-wide ridesharing package delivery is becoming popular as it provides benefits such as extra profit for the vehicle's driver and higher traffic efficiency for the city. Vehicle dispatching is a significant issue for improving ridesharing efficiency in package delivery. Classic one-hop ridesharing package delivery requires highly similar paths between the package and the vehicle under the limited detour time, which depresses the ridesharing efficiency. To tackle this problem, a city-wide vehicle dispatching strategy for multi-hop ridesharing package delivery was proposed, where a package is permitted to be delivered sequentially by different vehicles until it arrives at the destination. The study formulates vehicle dispatching as a maximum multi-dimensional bipartite matching problem with the goal of maximizing the total saved distance under the limited detour time and ridesharing capacity. A multi-hop ridesharing vehicle dispatching algorithm was proposed to solve this problem by selecting the farthest reachable locker and multi-dimensional matching. Simulation results based on a real vehicle dataset of Beijing demonstrate the effectiveness and efficiency of the proposed vehicle dispatching strategy.
Existing news recommendation algorithms lack in-depth analysis of news texts and timeliness. To address these issues, an algorithm for news recommendation based on time factor and word embedding (TFWE) was proposed to improve the interpretability and precision of news recommendations. First, TFWE used term frequency-inverse document frequency (TF-IDF) to extract news feature words and used the bidirectional encoder representations from transformers (BERT) pre-training model to convert the feature words into vector representations. By calculating the distance between the vectors, TFWE analyzed the semantic similarity to construct a user interest model. Second, considering the timeliness of news, a method of calculating news popularity by integrating time factors into the similarity calculation was proposed. Finally, TFWE combined the similarity of news content with the similarity of collaborative filtering (CF) and recommended some news with higher rankings to users. In addition, results of experiments on a real dataset showed that TFWE significantly improved precision, recall, and F1 score compared to the classic hybrid recommendation algorithm.
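For intuition about combining content similarity with a time factor, the sketch below scores candidate news by TF-IDF cosine similarity scaled with an exponential time decay; plain TF-IDF stands in for the BERT embeddings used in the paper, and the decay constant and corpus are assumptions.

```python
# Rough sketch of scoring candidate news for a user: content similarity from
# TF-IDF vectors (a stand-in for BERT embeddings) scaled by an exponential
# time-decay factor. The decay constant and corpus are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

user_history = ["league final ends in penalty shootout"]
candidates = ["cup final decided on penalties", "central bank raises interest rates"]
age_hours = np.array([2.0, 30.0])   # how long ago each candidate was published

vec = TfidfVectorizer()
tfidf = vec.fit_transform(user_history + candidates)
content_sim = cosine_similarity(tfidf[:1], tfidf[1:]).ravel()

time_factor = np.exp(-age_hours / 24.0)       # fresher news gets a larger weight
scores = content_sim * time_factor
print(sorted(zip(candidates, scores), key=lambda p: -p[1]))
```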
Polar codes have become the coding scheme for control channels of enhanced mobile broadband (eMBB) scenarios in the fifth generation (5G) communication system due to their excellent decoding performance. For the cell search procedure in the 5G system, some common information bits (CIBs) are transmitted in consecutive synchronization signal blocks (SSBs). In this paper, a dual-cyclic redundancy check (dual-CRC) aided encoding scheme is proposed, and the corresponding dual-successive cancellation flip (dual-SCFlip) algorithm is given to further improve the performance of polar codes in the low signal-to-noise ratio (SNR) environment. In the dual-CRC aided encoding structure, cyclic redundancy check (CRC) sequences are appended separately to the CIBs and the different information bits (DIBs) of the polar codes in different transmission blocks. The structure enlarges the size of the CIBs to improve the block error ratio (BLER) performance of the system. The dual-SCFlip decoder can perform bit flips immediately once the CIBs are decoded completely, and then decode the DIBs or terminate decoding in advance according to the CRC result, which reduces the decoding delay and mitigates the error propagation effect. Simulation results show that the dual-CRC aided encoding scheme and the dual-SCFlip decoder achieve significant performance improvements compared with other existing schemes at low SNR.
Complex Network Modeling and Application
In order to study the financial behavior of investors in the spot market, the transmission process from futures prices to spot prices is analyzed. Firstly, a coarse-graining method is proposed to construct a dual-layer coupled complex network of spot prices and futures prices. Then, to characterize the financial behavior of investors in the spot market, a price coupling strength indicator is introduced to capture investors' overreaction and underreaction behavior. The simulation results show that, despite the focus of researchers on arbitrage opportunities between the futures and spot markets, investors in the spot market neither overreact nor underreact when their acceptance level of price fluctuations remains unchanged. On the contrary, when the stability coefficient of the price difference between the futures and spot markets remains unchanged, investors undergo a nonlinear process of overreaction followed by underreaction as their acceptance level of price fluctuations increases.
Because of its wide application in anonymous authentication and attribute-based messaging, the attribute-based signature scheme has attracted public attention since it was proposed in 2008. However, most of the existing attribute-based signature schemes are no longer secure in the quantum era. Fortunately, lattice-based cryptography offers the hope of withstanding quantum computers, and its implementation simplicity, provable security reductions and quantum immunity have elevated lattices to the status of a promising alternative to cryptography based on the discrete logarithm and factoring problems. In this paper, the first lattice-based attribute-based signature scheme in the random oracle model is proposed, which is proved to be existentially unforgeable and perfectly private. Compared with current attribute-based signature schemes, the new scheme can resist quantum attacks and has a much shorter public-key size and signature size. Furthermore, the scheme is extended into an attribute-based signature scheme on the number theory research unit (NTRU) lattice, which remains secure even in the quantum era and has much higher efficiency than the former.
In order to change the path candidates, reduce the average list size, and make more paths pass the cyclic redundancy check (CRC), a multiple CRC-aided variable successive cancellation list (SCL) decoding algorithm is proposed. In the decoding algorithm, the unfrozen bits are divided into several parts and each part is concatenated with a corresponding CRC code, except the last part, which is concatenated with a CRC code over all the unfrozen bits. Each CRC detection is performed, and only the paths satisfying the CRC of each part become path candidates. A variable list size is set up for each part to reduce the time complexity until one surviving path in each part passes its corresponding CRC. The results show that the proposed algorithm can reduce the average list size while preserving the frame error rate (FER) performance, and performs better as the number of parts increases.
Accurate modeling and recognition of the brain activity patterns for reliable communication and interaction are still a challenging task for the motor imagery (MI) brain-computer interface (BCI) system. In this paper, we propose a common spatial pattern (CSP) and chaotic particle swarm optimization (CPSO) twin support vector machine (TWSVM) scheme for classification of MI electroencephalography (EEG). The self-adaptive artifact removal and CSP were used to obtain the most distinguishable features. To improve the recognition results, CPSO was employed to tune the hyper-parameters of the TWSVM classifier. The usefulness of the proposed method was evaluated using the BCI competition IV-IIa dataset. The experimental results showed that the mean recognition accuracy of our proposed method was increased by 5.35%, 4.33%, 0.78%, 1.45%, and 9.26% compared with the CPSO support vector machine (SVM), particle swarm optimization (PSO) TWSVM, linear discriminant analysis (LDA), back propagation (BP) and probabilistic neural network (PNN), respectively. Furthermore, it achieved a faster or comparable central processing unit (CPU) running time over the traditional SVM methods.
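A rough sketch of the hyper-parameter tuning idea is given below: a plain particle swarm search over an SVM's (C, gamma), scored by cross-validation. A standard sklearn SVC stands in for the TWSVM and the chaotic initialization of CPSO is omitted; the data, bounds and swarm settings are assumptions.

```python
# Hedged sketch: plain PSO over an SVM's (C, gamma) hyper-parameters, scored by
# cross-validation. sklearn's SVC is a stand-in for the TWSVM; data, bounds and
# swarm settings are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(params):
    C, gamma = 10.0 ** params          # search in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
low, high = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # log10 bounds
pos = rng.uniform(low, high, size=(12, 2))                  # 12 particles
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(15):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best log10(C, gamma):", gbest, "cv accuracy:", pbest_fit.max())
```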
Image segmentation directly determines the performance of automatic screening techniques. However, nuclei images contain overlapping nuclei, which poses a challenge for nuclei segmentation. To solve this problem, a segmentation method for overlapping cervical nuclei based on identification is proposed. This method consists of three stages: classifier training, recognition, and fine segmentation. In classifier training, feature selection and classifier selection are used to obtain a classifier with a high recognition rate. In recognition, the outputs of the rough segmentation are classified and processed according to their labels. In fine segmentation, the severely overlapping nuclei are further segmented based on the prior knowledge provided by the recognition stage. Experiments show that this method can accurately segment overlapping nuclei.
The new coding tools of high efficiency video coding (HEVC) make the interpolation operation in motion compensation (MC) more complex for better video compression, and thus impose higher requirements on the computational efficiency and control logic of the hardware architecture. A reconfigurable array processor can balance computational efficiency with flexible switching of algorithms very well. By mining the data dependency and parallelism of the interpolation operation, this paper presents a parallelization method based on the dynamic reconfigurable array processor proposed by the project team. The number of pixels loaded from external memory is reduced significantly by multiplexing the common data between the previous reference block and the current reference block. Flexible switching of variable block operations is realized by using a dynamic reconfiguration mechanism. A 16 × 16 processor element (PE) array is used to dynamically process block sizes from 4 × 4 to 64 × 64. The experimental results show that the reference block update speed is increased by 39.9%. In the case of an array size of 16 PEs, the number of pixels processed in parallel reaches 16.
The energy-efficiency (EE) optimization problem was studied for resource allocation in an uplink single-cell network, in which multiple mobile users with different quality of service (QoS) requirements operate under a non-orthogonal multiple access (NOMA) scheme. Firstly, a multi-user feasible power allocation region is derived as a multidimensional body, which provides an efficient scheme to determine the feasibility of the original channel and power assignment problem. Then, the size of the feasible power allocation region is introduced as the utility function of the subchannel-user matching game in order to obtain high system EE and fairness among the users. Moreover, in the power allocation optimization for EE maximization, the EE is proved to be a monotonically decreasing function. The simulation results show that compared with conventional schemes, the network connectivity of the proposed scheme is significantly enhanced and, moreover, for low-rate massive-connectivity networks, the proposed scheme obtains performance gains in system EE.
To improve the efficiency and stability of data transmission in the long-range (LoRa) Internet of things (IoT), a hybrid time slot allocation algorithm is proposed, which implements a priority mechanism with high-priority nodes sending data in fixed time slots and low-priority nodes using the carrier sense multiple access (CSMA) algorithm to compete for shared time slots to transmit data. To improve network efficiency, a gateway is used to adjust the time slot allocation policy according to the network status and to balance the number of fixed and shared time slots. Moreover, a retransmission time slot is added to the time slot allocation algorithm, which redesigns the time frame structure and adopts a retransmission mechanism to improve communication reliability. Simulation and measurement results show that the packet loss rate and transmission delay of the proposed hybrid algorithm are smaller than those of the fixed slot allocation algorithm, making the proposed algorithm more suitable for LoRa IoT.
In order to solve the hole-filling mismatch problem in virtual view synthesis, a three-step repairing (TSR) algorithm was proposed. Firstly, the image with marked holes is decomposed by the non-subsampled shearlet transform (NSST), which generates high-/low-frequency sub-images with different resolutions. Then the improved Criminisi algorithm is used to repair the texture information in the high-frequency sub-images, while the improved curvature driven diffusion (CDD) algorithm is used to repair the low-frequency sub-images containing the image structure information. Finally, the repaired high-frequency and low-frequency sub-images are synthesized to obtain the final image through the inverse NSST. Experiments show that the peak signal-to-noise ratio (PSNR) of the TSR algorithm is improved by an average of 2-3 dB and 1-2 dB compared with the Criminisi algorithm and the nearest neighbor interpolation (NNI) algorithm, respectively.
Concerning current deep learning-based electrocardiogram (ECG) classification methods, there exists a domain discrepancy between the data distributions of the training set and the test set in the inter-patient paradigm. To reduce the negative effect of this domain discrepancy on the classification accuracy of ECG signals, this paper incorporates transfer learning into ECG classification, aiming to apply the knowledge learned from the training set to the test set. Specifically, this paper first develops a deep domain adaptation network (DAN) for ECG classification based on the convolutional neural network (CNN). Then, the network is pre-trained with training set data obtained from the well-known Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) ECG arrhythmia database. On this basis, by minimizing the multi-kernel maximum mean discrepancy (MK-MMD) between the data distributions of the training set and the test set, the pre-trained network is adjusted to learn transferable feature representations. Finally, with the low-density separation of unlabeled target data, the feature representations become more transferable. Extensive experimental results show that the proposed domain adaptation method achieves a 7.58% improvement in overall classification accuracy on the test set and performs competitively with other state-of-the-art methods.
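As a small illustration of the adaptation criterion, the sketch below estimates the multi-kernel MMD between source and target feature batches as an unweighted sum of RBF kernels at several bandwidths; the bandwidths and toy data are assumptions, not the paper's configuration.

```python
# Numpy sketch of the multi-kernel maximum mean discrepancy (MK-MMD) statistic
# between source and target feature batches: an unweighted sum of RBF kernels
# at several bandwidths. Bandwidths and toy data are illustrative assumptions.
import numpy as np

def mk_mmd(source: np.ndarray, target: np.ndarray, bandwidths=(0.5, 1.0, 2.0)) -> float:
    def kernel_mean(a, b):
        # mean of a mixture of RBF kernels evaluated over all pairs (a_i, b_j)
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return sum(np.exp(-d2 / (2.0 * s ** 2)).mean() for s in bandwidths)
    return kernel_mean(source, source) + kernel_mean(target, target) \
        - 2.0 * kernel_mean(source, target)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 8))       # e.g. training-set features
tgt = rng.normal(0.5, 1.0, size=(64, 8))       # e.g. shifted test-set features
print("MK-MMD^2 estimate:", mk_mmd(src, tgt))
```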
To achieve the confidentiality and retrievability of outsourced data simultaneously, a dynamic multi-keyword fuzzy ranked search scheme (DMFRS) with leakage resilience over encrypted cloud data based on a two-level index structure was proposed. The first-level index adopts an inverted index and an orthogonal list, combined with 2-grams and locality-sensitive hashing (LSH), to realize fuzzy matching. The second-level index achieves user search permission decisions and search result ranking by combining coordinate matching with term frequency-inverse document frequency (TF-IDF). A verification token is generated within the results to verify the search results, which prevents potential malicious tampering by cloud service providers (CSP). The semantic security of DMFRS is proved by the defined leakage function, and the performance is evaluated through simulation experiments. The analysis results demonstrate that DMFRS gains certain advantages in security and performance over similar schemes, and it meets the needs of storage and privacy preservation for outsourced sensitive data.
Because of environmental clutter, radar false alarm plots are unavoidable. Suppressing false alarm plots has always been a key issue in radar plot processing. In this paper, a radar false alarm plot elimination method based on multi-feature extraction and classification is proposed to effectively eliminate false alarm plots. Firstly, the density-based spatial clustering of applications with noise (DBSCAN) algorithm is used to cluster the radar echo data processed by constant false-alarm rate (CFAR) detection, and multiple features, including scale features, time domain features and transform domain features, are extracted. Secondly, a feature evaluation method combining the Pearson correlation coefficient (PCC) and the entropy weight method (EWM) is proposed to evaluate the interrelation among features, and effective feature combination sets are selected as inputs of the classifier. Finally, the plots classified as clutter are eliminated. The experimental results show that the proposed method can eliminate about 90% of false alarm plots with a low target loss rate.
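The clustering step can be illustrated with an off-the-shelf DBSCAN call that groups CFAR detections into plots and marks isolated points as noise; the coordinates, eps and min_samples below are assumptions, not the paper's settings.

```python
# Quick sketch of the clustering step: group CFAR detections into plots with
# DBSCAN so that isolated clutter points fall out as noise. Coordinates, eps
# and min_samples are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

# each row: (range in m, azimuth in deg) of a CFAR detection
detections = np.array([
    [1000.0, 30.1], [1002.0, 30.3], [1001.0, 29.9],   # dense group -> likely a target plot
    [5200.0, 75.0],                                    # isolated point -> likely clutter
    [3000.0, 10.0], [3001.5, 10.2],
])

labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(detections)
print(labels)   # points labelled -1 are treated as noise/clutter candidates
```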
To solve the efficiency problem of batch anonymous authentication for vehicular ad-hoc networks (VANET), an improved scheme is proposed by using bilinear pairing on elliptic curves. The signature is jointly generated by the roadside unit (RSU) node and the vehicle, thus reducing the burden of the VANET certification center, improving the authentication efficiency, and making it more difficult for an attacker to extract the key. Furthermore, a security proof under the random oracle model (ROM) is provided. Analyses show that the proposed scheme achieves anonymity, unforgeability, forward security and backward security, and can defend against many kinds of attacks, such as man-in-the-middle (MITM) attacks and collusion attacks, while the computational overhead is significantly reduced and the authentication efficiency is effectively improved. Therefore, the scheme has great theoretical significance and application value in computationally constrained Internet of things (IoT) environments.
To realize the distributed storage and management of a secret halftone image in a blockchain, a secure separable reversible data hiding (RDH) scheme for halftone images in the blockchain (SSRDHB) was proposed. A secret halftone image can be used as the original image to generate multiple share images, which can be stored in a distributed manner at each node of the blockchain, and additional data can be hidden to manage each share image. Firstly, the secret halftone image was encrypted through the Zu Chongzhi (ZUC) algorithm by using the encryption key (EK). Secondly, a method of using the parity of the share data was proposed to hide data, and a share dataset can be generated by using polynomial operations. Thirdly, multiple share images can be obtained by selecting share data, different additional data can be hidden by controlling the parity of the share data, and the additional data can be protected by using the data-hiding key (DK). After the sharing process, if the receiver has both keys, the halftone image can be recovered and the additional data can be revealed, and the two processes are separable. Experimental results show that multiple share images carrying hidden additional data can be obtained through SSRDHB, the halftone image can be recovered with 100% accuracy by picking any part of the share images, and one piece of additional data can be revealed with 100% accuracy by picking any one share image.
In response to the growing complexity and performance of integrated circuits (IC), there is an urgent need to enhance the testing and stability of IC test equipment. A method was proposed to predict equipment stability using the upper boundary value of a normal distribution. Initially, the K-means clustering algorithm classifies and analyzes the sample data. The accuracy of this boundary value is compared under two common confidence levels to select the optimal threshold, and a range is then defined to categorize unqualified test data. Experimental verification shows that the method can quantify the stability of IC test equipment through a deterministic threshold value and judge the stability of the equipment by comparing the number of unqualified data points with the threshold, which realizes the goal of long-term operation monitoring and stability analysis of IC test equipment.
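A minimal sketch of the thresholding idea, under assumed data, cluster count and confidence level, is given below: cluster the measurements with K-means and take the upper boundary mean + z·std of the dominant cluster as the stability threshold.

```python
# Hedged sketch: cluster test measurements with K-means, then use the upper
# boundary of a fitted normal distribution (mean + z * std at a chosen
# confidence level) of the dominant cluster as the stability threshold.
# Cluster count, confidence level and data are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
measurements = np.concatenate([rng.normal(1.00, 0.02, 300),    # normal behaviour
                               rng.normal(1.30, 0.05, 15)])    # occasional outliers

km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(measurements.reshape(-1, 1))
main_cluster = measurements[km.labels_ == np.bincount(km.labels_).argmax()]

confidence = 0.95
z = norm.ppf(confidence)                                   # one-sided upper bound
threshold = main_cluster.mean() + z * main_cluster.std()

unqualified = (measurements > threshold).sum()
print(f"threshold = {threshold:.3f}, unqualified samples = {unqualified}")
```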
Currently, most deep learning methods used for Parkinson's disease (PD) detection lack reliability assessment. This makes it difficult to identify erroneous results in practice, leading to potentially serious consequences. To address this issue, a prior network with a distance measure (PNDM) layer was proposed in this paper. The PNDM layer consists of two modules: the prior network (PN) and the distance measure (DM) layer. The prior network is employed to estimate data uncertainty, and the DM layer is utilized to estimate model uncertainty. The goal of this work is to provide accurate and reliable PD detection through uncertainty estimation. Experiments show that the PNDM layer can effectively estimate both model uncertainty and data uncertainty, rendering it more suitable for uncertainty estimation in PD detection than existing methods.
This paper investigates the propagation of computer viruses and establishes a novel propagation model. In contrast to existing models, this model can directly indicate the impact of removable media and external computers on the propagation of computer viruses. The stability results for the equilibrium points are derived by the Hurwitz criterion and the Bendixson-Dulac criterion. The effectiveness of the proposed results is shown by numerical simulation. To show the superiority of the proposed model, some comparisons with existing models are presented. The acceptable threshold and reasonable strategies for suppressing the propagation of computer viruses are also suggested.
Aiming at the shortcomings of current gesture tracking methods in accuracy and speed, a real-time hand tracking method combining the deep learning You Only Look Once version 4 (YOLOv4) model with a Kalman filter was proposed. The new algorithm addresses several problems in hand tracking technology, such as detection speed, accuracy and stability. The convolutional neural network (CNN) model YOLOv4 is used to detect the tracking target in the current frame, and the Kalman filter is applied to predict the next position and bounding box size of the target according to its current position. The detected target is tracked by comparing the estimated result with the detection in the next frame and, finally, the real-time hand movement track is displayed. The experimental results validate the proposed algorithm with an overall success rate of 99.43% at a speed of 41.822 frames/s, achieving better results than other algorithms.
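For reference, the sketch below shows the standard predict/update cycle of a constant-velocity Kalman filter tracking a bounding-box centre fed by a detector; the state layout, time step and noise covariances are assumptions, not the paper's exact filter.

```python
# Minimal predict/update cycle of a constant-velocity Kalman filter tracking a
# bounding-box centre; the detector (YOLOv4 in the paper) supplies the measured
# centre each frame. State layout, time step and noise covariances are assumptions.
import numpy as np

dt = 1.0                                   # one frame
F = np.array([[1, 0, dt, 0],               # state: [cx, cy, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only the centre (cx, cy) is measured
Q = 0.01 * np.eye(4)                       # process noise
R = 4.0 * np.eye(2)                        # measurement noise (pixels^2)

x = np.array([100.0, 80.0, 0.0, 0.0])      # initial state
P = 10.0 * np.eye(4)

for z in [np.array([103.0, 82.0]), np.array([107.0, 85.0]), np.array([112.0, 88.0])]:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the detected centre
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    print("estimated centre:", x[:2], "velocity:", x[2:])
```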
The effect of electrocardiogram (ECG) signal wavelet denoising depends on the optimal configuration of its
control parameters and the selection of the optimal decomposition level. Nevertheless, the existing optimal
decomposition level selection scheme has some problems, such as lack of reliable theoretical guidance and
insufficient accuracy, which need to be solved urgently. To solve this problem, this paper proposes an optimal
decomposition level selection method based on multi-index fusion, which is used to select the optimal decomposition
level for wavelet threshold denoising of ECG signal. In the stage of index selection, in order to overcome the
limitation of a single evaluation index, the optimal multi-evaluation index is selected through the joint analysis of
the geometric and physical significance of traditional evaluation indexes. In the stage of index fusion, based on the
method of weighting the selected multiple indexes by the information entropy weight method and the coefficient of
variation method, an optimal decomposition level selection method based on the evaluation index Z is proposed to
improve the accuracy of the optimal decomposition level selection. Finally, extensive experiments are carried out on
the real ECG signal from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia
database and simulated signal to test the performance of the proposed method. The experimental results show that
the accuracy of this method is superior to other related methods, and it can achieve better denoising effect of ECG
signal.
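A compact sketch of the fusion step is given below: per-index weights from the information entropy weight method and the coefficient of variation method are combined into a composite score Z over candidate decomposition levels; the index values and the equal mix of the two weightings are assumptions.

```python
# Hedged sketch of the index-fusion step: normalize several denoising evaluation
# indexes computed for each candidate decomposition level, weight them with the
# information entropy weight method and the coefficient-of-variation method, and
# pick the level with the best fused score Z. Index values and the equal mix of
# the two weighting schemes are illustrative assumptions.
import numpy as np

# rows: candidate decomposition levels, columns: evaluation indexes
# (already oriented so that larger is better)
scores = np.array([[0.62, 0.55, 0.40],
                   [0.81, 0.74, 0.66],
                   [0.90, 0.88, 0.72],
                   [0.76, 0.70, 0.58]])

p = scores / scores.sum(axis=0)                          # column-wise proportions
entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(scores))
w_entropy = (1 - entropy) / (1 - entropy).sum()          # entropy weight method

cv = scores.std(axis=0) / scores.mean(axis=0)
w_cv = cv / cv.sum()                                     # coefficient of variation method

w = 0.5 * w_entropy + 0.5 * w_cv                         # simple fusion of both weightings
Z = scores @ w
print("fused scores Z:", Z, "-> best level index:", Z.argmax())
```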
Authentication codes with arbitration are able to resolve disputes between the sender and the receiver. They fall into two families: codes with a trusted arbiter and codes with a distrusted arbiter. As an expansion of the former, the latter describe an authentication system that is closer to the real environment and therefore have more extensive application value. In this paper, we construct a class of such codes with a distrusted arbiter based on polynomials over finite fields, give the parameters of the constructed codes, and calculate the maximum success probabilities of a variety of cheating attacks. In a special case, the constructed codes are perfect. Compared with a known type of codes, they provide almost the same security level while requiring less storage; therefore, our codes have more advantages.
Object recognition in very high-resolution remote sensing images is a basic problem in the field of aerial and satellite image analysis. With the development of sensor technology and aerospace remote sensing technology, the quality and quantity of remote sensing images have improved. Traditional recognition methods have certain limitations in describing higher-level features, whereas object recognition methods based on convolutional neural networks (CNN) can not only deal with large-scale images but also learn features automatically with high efficiency, making them well suited to object recognition in remote sensing images. In this paper, an AlexNet CNN model is trained using 2 100 remote sensing images, and the correct recognition rate reaches 97.6% after 2 000 iterations. Then, based on the trained model, a parallel design of the CNN for remote sensing image object recognition based on a data-driven array processor (DDAP) is proposed, and the consumed cycles are counted. The proposed architecture is also realized on a Xilinx V6 development board and synthesized with SMIC 130 nm complementary metal oxide semiconductor (CMOS) technology. The experimental results show that the proposed architecture has a certain degree of parallelism to achieve the purpose of accelerating calculations.
Coaxial connectors are generally regarded as a potential passive non-linear source when magnetic materials are applied in the coating or under-plating, which may result in serious passive intermodulation (PIM) interference and degrade the communication quality. In this paper, the effect of connector coating materials on PIM is theoretically studied using finite element analysis (FEA) and circuit simulations. Considering the material composition of both the central and the outer conductor, an FEA model of the connector is proposed to identify the current density in the magnetic material region. An equivalent circuit model expressing the nonlinearity of the coating material is developed, coupled with the non-linear transfer model. The PIM product power of the connector with the related material configuration is predicted by harmonic balance simulation. Intentionally designed connector samples are used in PIM tests, and the measurement results are consistent with the theoretical predictions. The PIM performance of coaxial connectors is thus demonstrated from the perspectives of both modeling analysis and experimental investigation.
In this communication, a frequency-, radiation pattern- and polarization-reconfigurable antenna employing liquid metal is presented. Two crossed dipole antennas are surrounded by four independent reflectors and directors to realize multi-beam switching. The length of the dipole arms can be adjusted by extracting the liquid metal from the needle tube to achieve frequency reconfiguration, and the polarization can be switched by injecting liquid metal into different dipole microfluidic channels. The antenna is simple in design and has multiple reconfigurable capabilities. A prototype with a relative frequency tuning range of 35.8%, extending from 2.43 GHz to 3.49 GHz, is fabricated. It can also perform six kinds of beam steering over 360° coverage and switch between two different polarizations. The antenna has potential for cognitive radio (CR) and base station applications in wireless systems.
Electromagnetic Field and Microwave Technology
In this paper, a W-band broadband vialess microstrip (MS)-to-MS vertical transition in multilayer liquid crystal
polymer (LCP) substrate is presented, which consists of two MS lines in the top layer, a common ground plane and
slotline resonators in the second layer, and a closed-loop transmission line in the third layer. To increase the
passband of the vialess vertical transition, an H-shaped slotline resonator is introduced, which greatly improves the
impedance performance of the slotline resonator, and the full-wave simulated results indicate that insertion loss
(IL) is less than 2 dB and return loss (RL) is better than 10 dB at W-band. To verify this design, the broadband
vertical transition is fabricated and measured. The measured results indicate that a broadband vertical transition
with RL better than 10 dB and IL less than 5.67 dB can be obtained in the frequency range from 70.00 GHz to
104.09 GHz. Due to the fabrication error in the preparation process, the measured results deteriorate
compared to the simulated results, and the investigation indicates that the deviation is caused by the thickness error
of the LCP substrate.
Conventional outdoor navigation systems are usually based on orbital satellites, e.g., the global positioning system (GPS) and the global navigation satellite system (GLONASS). The latest advances in wearables, e.g., Baidu Eye and Google Glass, have enabled new approaches to leverage information from the surrounding environment. For example, they enable the change from passively receiving information to actively requesting information, and such changes might inspire brand new application scenarios that were not possible before. In this work, we propose a vision-based navigation system based on wearables like Baidu Eye. We discuss the associated challenges and propose potential solutions for each of them. The system utilizes crowd sensing to collect and build a traffic signpost database for positioning reference. It then leverages context information, such as cell identification (Cell ID), signal strength, and altitude, combined with traffic sign detection and recognition, to enable real-time positioning. A hybrid cloud architecture is proposed to enhance the capability of the sensing devices (SD) and realize the proposed vision.
Robot error compensation is a technique for enhancing the positioning accuracy of the system. This paper presented an error measuring technique for serial robots based on the multi-hole measuring method, combined with the intelligent particle swarm optimisation (PSO) to obtain the optimal solution of the robot's error compensation values, thereby improving the positioning accuracy of the robot. In the experiment, the robot error was measured using self-made multi-hole measuring plates and probes, and the experimental data were combined with PSO for the error comprehensive analysis. The results showed that on this type of serial robot, the multi-hole measuring method and PSO algorithm had obvious error compensation effects, which effectively improved the positioning accuracy of the robot, with the error reduced by 35% after compensation.
In view of the fact that current data delivery methods are neither secure nor flexible enough to meet the requirements of today's distributed crowd sensing, this paper proposes a crowd sensing data interaction method based on a tangle directed acyclic graph (DAG) network. In this method, users and platforms are regarded as nodes of the network during the execution of crowd sensing tasks. First, the heaviest chain is generated through the main chain strategy to ensure the stability of the network. Next, a hidden Markov model (HMM) prediction model is used to improve the correlation of the perceived data and thus the performance. Then, a confidential transaction and commitment algorithm is used to ensure the reliability of transactions, overcome the security risks posed by a trusted third party, and simplify the crowd sensing transaction mode. Finally, simulation experiments verify the security and feasibility of the proposed crowd sensing data delivery method based on the tangle DAG network.
To solve the satellite repeater's flexible and wideband frequency conversion problem, we propose a novel microwave photonic repeater system, which can convert the uplink signal's carrier to six different frequencies. The scheme employs one 20 GHz bandwidth dual-drive Mach-Zehnder modulator (MZM) and two 10 GHz bandwidth MZMs. The basic principle of this scheme is to filter out two optical sidebands after optical carrier suppression (OCS) modulation and to combine two sidebands modulated by the input radio frequency (RF) signal. This structure can realize simultaneous multi-band frequency conversion with only one frequency-fixed microwave source and prevents the generation of harmful interference sidebands by using two corresponding optical filters after optical modulation. In the simulation, one C-band signal on a 6 GHz carrier can be successfully converted to 12 GHz (Ku-band), 28 GHz, 34 GHz, 40 GHz, 46 GHz (Ka-band) and 52 GHz (V-band), which makes this an attractive method for realizing a multi-band microwave photonic satellite repeater. Alternatively, the scheme can be configured to generate multi-band local oscillators (LOs) for satellite onboard clock distribution when the input RF signal is replaced by the internal clock source.
The finite-difference time-domain (FDTD) method is extensively applied to time-domain microwave imaging (MWI) problems since it is robust, fast, and simple to implement. However, the FDTD method is an explicit time-stepping technique, and due to the constraint of the Courant-Friedrich-Levy (CFL) stability condition, the time step needs to be as small as the size of the fine cells, which brings a major increase in computational cost. A fast nonlinear electromagnetic reconstruction algorithm for layered lossy media using the alternating-direction implicit finite-difference time-domain (ADI-FDTD) method is proposed. This algorithm is based on an adjoint method; the nonlinear iterations apply the ADI-FDTD method to calculate the forward and adjoint fields and adopt the Polak-Ribiere-Polyak conjugate-gradient (PRP-CG) optimization scheme. By comparing the simulation results of the ADI-FDTD method and the FDTD method, the validity and efficiency of the proposed algorithm are proved. Furthermore, relative residual errors (RRE) are introduced as the termination condition of the iterative computation, which further confirms the accuracy of this algorithm.
In this paper, a new spatial quadrature modulation (NSQM) scheme is proposed to improve the error
performance of indoor visible light communication (VLC) systems. NSQM is different from generalized spatial
quadrature modulation (SQM) in two aspects. First, the transmitted optical signal is directly detected at the
receiver, which does not need to estimate the indices of the transmitted antenna. Second, an optimization approach
is used with NSQM to minimize the upper error bound of the transmitted signals. In addition, several NSQM
schemes are described in detail. Numerical results show that the proposed NSQM scheme achieves superior error
performance compared with the SQM scheme.
Personalized recommender systems provide various personalized recommendations for different users through the analysis of their respective historical data. Currently, the "filter bubble" problem, which has to do with over-specialization, persists. Serendipity (SRDP), one of the evaluation indicators, can provide users with unexpected and useful recommendations, helping to mitigate the filter bubble problem, enhance users' satisfaction levels and diversify recommendations. Since SRDP is highly subjective and challenging to study, only a few studies have focused on it in recent years. In this study, the research results on SRDP are summarized, the various definitions of SRDP and its applications are discussed, the specific SRDP calculation process from qualitative to quantitative perspectives is presented, and the challenges and development directions are outlined to provide a framework for further research.
Complex Network Modeling and Application
The signal is subjected to many kinds of interference in vehicle-to-vehicle (V2V) channel propagation, resulting in reception errors. Two-dimensional (2D) and three-dimensional (3D) geometrical channel models are used to depict wideband V2V multiple-input multiple-output (MIMO) channels. Using the channel models, Turbo codes and low-density parity-check (LDPC) codes are investigated for the wideband V2V MIMO system, together with their encoding and decoding schemes. The bit error rate (BER), channel capacity and outage probability of the wideband V2V MIMO system using Turbo codes and LDPC codes are analyzed at different typical speeds. The results show that the wideband V2V MIMO system using Turbo codes outperforms that using LDPC codes, the performance is affected by the transmitting and receiving speeds under the same coding scheme, and the channel capacity of the 3D channel is larger than that of the 2D channel.
With the widespread deployment of firewalls on the Internet, user datagram protocol (UDP) based voice over Internet protocol (VoIP) systems may be unable to transmit voice data. This paper proposes a novel method to transmit voice data based on the transmission control protocol (TCP). The method adopts an out-of-order TCP delivery strategy, which allows discontinuous data packets in TCP queues to be read directly by the application layer without waiting for the retransmission of lost data packets. A byte stream boundary identification algorithm based on the consistent overhead byte stuffing (COBS) algorithm is designed to efficiently identify complete voice data packets from the disordered TCP packets that have arrived, so that the data can be passed to the audio processing module in time. Then, by implementing and testing a prototype system, we verify that the proposed algorithm can solve the high delay, jitter and discontinuity problems of the standard TCP protocol when transmitting voice data packets, which are caused by its error control and retransmission mechanism. This shows that the method proposed in this paper is effective and practical.
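Since the boundary identification builds on COBS, the sketch below shows textbook COBS decoding of a single frame, which is what makes a zero byte usable as an unambiguous packet delimiter; it is not the paper's full boundary-identification algorithm.

```python
# Standard consistent-overhead-byte-stuffing (COBS) decoding of a single frame.
# The encoder removes all zero bytes from the payload, so a zero byte can serve
# as an unambiguous frame delimiter in the byte stream. Textbook COBS only,
# not the paper's full boundary-identification algorithm.
def cobs_decode(frame: bytes) -> bytes:
    """Decode one COBS-encoded frame (the trailing 0x00 delimiter already stripped)."""
    out = bytearray()
    i = 0
    while i < len(frame):
        code = frame[i]                       # distance to the next (removed) zero
        if code == 0:
            raise ValueError("unexpected zero byte inside COBS data")
        i += 1
        out += frame[i:i + code - 1]
        i += code - 1
        if code < 0xFF and i < len(frame):
            out.append(0)                     # re-insert the zero the encoder removed
    return bytes(out)

# b"\x03\x11\x22\x02\x33" encodes the payload b"\x11\x22\x00\x33"
print(cobs_decode(b"\x03\x11\x22\x02\x33").hex())   # -> 11220033
```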
The state-of-the-art soft-output decoder for polar codes is the soft cancellation (SCAN) decoding algorithm, which performs well at the cost of plentiful computations. Based on the SCAN decoding algorithm, a modified method with a revised iterative formula, termed modified min-sum SCAN (MMS-SCAN), is proposed. The proposed algorithm simplifies the node update formula and reduces the complexity of the iterative decoding process through a piecewise approximation function. Meanwhile, the bit error rate (BER) of the proposed method approaches that of the original SCAN decoding method without noticeable performance loss. The simulation results reveal that the BER curve of the MMS-SCAN decoding algorithm almost coincides with that of the original SCAN decoding algorithm.
As a significant branch of intelligent vehicle networking technology, intelligent fatigue driving detection is introduced in this paper to recognize the fatigue state of the vehicle driver and avoid traffic accidents. The disadvantages of traditional fatigue driving detection methods are pointed out through a study of traditional eye tracking technology and traditional artificial neural networks. On the basis of image topological analysis technology, Haar-like features and the extreme learning machine algorithm, a new intelligent fatigue driving detection method is proposed, and its detailed algorithm and implementation scheme are put forward as well. Finally, comparison of simulation results verifies that the new method achieves better robustness, efficiency and accuracy in monitoring and tracking drivers' fatigue by using human eye tracking technology.
As the telecommunication market in China becomes increasingly mature, operators have begun to focus their primary effort on user management; within this focus, determining the proper tariff package for users and offering them relevant recommendations are key issues to resolve. This paper introduces a matching model that links tariff packages and users' usage behavior (e.g., the total minutes used, data usage, etc.) based on market segmentation theory. Microsoft Visual FoxPro 9.0 is selected as the development tool to implement the matching model, while the tariff packages and user behavior data of a city branch of China Mobile are used to validate the model.
To reduce the cost of fetching from a remote source, it is natural to cache information near the users who may access it later. However, with the development of 5G ultra-dense cellular networks and mobile edge computing (MEC), reasonably selecting among edge servers for content delivery becomes a problem when the mobile edge has sufficient replica servers. In order to minimize the total cost accounting for both the caching and fetching processes, we study the optimal resource allocation for the deployment of content replica servers. We decompose the total cost as the superposition of the costs over several coverages. In particular, we consider the criterion for determining the coverage of a replica server and formulate the coverage as a tradeoff between caching cost and fetching cost. According to this criterion, a coverage isolation (CI) algorithm is proposed to solve the deployment problem. The numerical results show that the proposed CI algorithm can reduce the cost and obtain a higher tolerance for different centrality indices.
A novel low-profile multiband rectenna was proposed for harvesting second generation (2G)/third generation (3G)/fourth generation (4G)/wireless local area network (WLAN) electromagnetic wave energy. The proposed rectenna consists of a novel multiband antenna and a rectifier. The multiband antenna includes a radiating element on one side of a single-layer dielectric substrate and a feeding spiral balun on the other side of the substrate, with a conductive via connected between the balun and the radiating element. In the radiating element, a deformed dipole structure is connected with an equiangular spiral slot structure and is used to generate a low-frequency radiation around 900 MHz. The multiband antenna can work simultaneously at 0.869 GHz-0.948 GHz, 1.432 GHz-2.173 GHz, and 2.273 GHz-2.465 GHz with peak gains of 7.1 dBi at 903 MHz, 4.1 dBi at 1800 MHz, and 5.2 dBi at 2430 MHz. The radio frequency to direct current (RF-to-DC) conversion efficiencies of the rectifier are 58%-62% at these three frequencies for an input power of 0 dBm. The overall measurement results validate that the rectenna is suitable for energy harvesting and exhibits approximate maximum efficiencies of 58% at 0.9 GHz, 56% at 1.8 GHz, and 55% at 2.4 GHz with a low incident power density of 8 μW/cm2.
The network attack profit graph (NAPG) model and the attack profit path prediction algorithm are presented herein to address the lack of consideration of the attacker's subjective factors in existing network attack path prediction methods. Firstly, the attack profit is introduced, with the attack profit matrix designed and the attack profit matrix generation algorithm given accordingly. Secondly, a path profit feasibility analysis algorithm is proposed to analyze the feasibility of realizing the profit of an attack path in the network. Finally, the opportunity profit path and the optimal profit path are introduced, with the selection algorithm and the prediction algorithm designed for accurate prediction of these paths. According to the experimental test, the network attack profit path prediction algorithm is applicable for accurate prediction of the opportunity profit path and the optimal profit path.
Joint calibration of sensors is an important prerequisite for intelligent driving scene retrieval and recognition. A simple and efficient solution is proposed for the automatic joint calibration and registration of a monocular camera and a 16-line lidar. The study is divided into two parts, single-sensor independent calibration and multi-sensor joint registration, in which a selected target in the physical world is used, and the system associates the lidar coordinates with the camera coordinates. The lidar and the camera are used to obtain, with appropriate algorithms, the normal vectors of the calibration plate and the point cloud data representing the calibration plate. The iterative closest point (ICP) method is used for iterative refinement of the registration.
In recent years, with the rapid development of Internet of things (IoT) technology, radio frequency identification (RFID) technology, as the core of IoT technology, has received more and more attention, and RFID network planning (RNP) has become a primary concern. Compared with traditional methods, meta-heuristic methods are widely used in RNP. Aiming at the target requirements of RFID, such as using fewer readers, covering more tags, reducing the interference between readers and saving costs, this paper proposes a hybrid gray wolf optimization-cuckoo search (GWO-CS) algorithm. This method uses an input representation based on random gray wolf search and evaluates the tag density and location to determine the combined performance of the readers' propagation areas. Compared with the particle swarm optimization (PSO) algorithm, the cuckoo search (CS) algorithm and the gray wolf optimization (GWO) algorithm under the same experimental conditions, the coverage of GWO-CS is 9.306% higher than that of the PSO algorithm, 6.963% higher than that of the CS algorithm, and 3.488% higher than that of the GWO algorithm. The results show that the GWO-CS algorithm can not only improve the global search range, but also improve the local search depth.
A novel asymmetrical Pi-shaped defected ground structure (DGS) with 3-iteration Koch fractal curves is proposed to design a microstrip low-pass filter (LPF) with an ultra-wide stop-band (SB). LPFs with a single resonator and with two cascaded resonators are both designed, simulated, manufactured and measured. Simulation and experimental results demonstrate that the designed LPF has a very sharp transition band (TB) and ultra-wide SB performance compared with existing similar symmetrical and asymmetrical DGSs. The proposed LPF with two cascaded resonators has a compact size of 36.8 mm × 24.0 mm, a very low insertion loss of less than 0.7 dB below 1.9 GHz, and a wide SB from 2.2 GHz to 8 GHz with a rejection larger than 30 dB.
The number of short videos on the Internet is huge, but most of them are unlabeled. In this paper, a rough labelling method for short videos based on an image classification neural network is proposed. A convolutional auto-encoder is trained on unlabeled video frames in order to obtain features at a certain level of the network. Using these features, key-frames of the video are extracted by the proposed feature clustering method. These key-frames, which represent the video content, are fed into the image classification network to obtain labels for every video clip. Different architectures of the convolutional auto-encoder are also compared, and the better-performing architecture is selected and optimized according to the experimental results. In addition, the video frame features from the convolutional auto-encoder are compared with features from other extraction methods. On the whole, this paper proposes a label-transfer method for the rough labelling of short videos, which can be applied to video classes with few labeled samples.
Simultaneous localization and mapping (SLAM) technology is becoming more and more important in robot localization. The purpose of this paper is to improve the robustness of visual features to lighting changes and to increase the recall rate of map re-localization under different lighting environments by optimizing the image transformation model. An image transformation method based on matches and photometric error (named MPT) is proposed, and it is seamlessly integrated into the pre-processing stage of the feature-based visual SLAM framework. The experimental results show that the MPT method has a better matching effect on different visual features. In addition, the image transformation module, encapsulated as a robot operating system (ROS) node, can be used with multiple visual SLAM systems and improves their re-localization performance under different lighting environments.
There are many studies on sales forecasting in e-commerce, most of which focus on how to forecast sales volume with related e-commerce operation data. In this paper, a deep learning method named FS-LSTM was proposed, which combines long short-term memory (LSTM) and a feature selection mechanism to forecast the sales volume. The indicators with the largest contributions according to the extreme gradient boosting (XGBoost) model are selected as the input features of the LSTM model. The FS-LSTM method achieves lower mean absolute error (MAE) and mean squared error (MSE) in forecasting e-commerce sales volume compared with the LSTM model without feature selection. The results show that FS-LSTM can improve the performance of the original LSTM for forecasting sales volume.
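A hedged sketch of the FS-LSTM pipeline is shown below: rank indicators with XGBoost feature importances, keep the top-k, and feed sliding windows of those indicators into a small LSTM regressor; the data, window length, k and model sizes are assumptions, not the paper's configuration.

```python
# Hedged sketch of the FS-LSTM idea: rank indicators with XGBoost feature
# importances, keep the top-k, and feed sliding windows of those indicators
# into a small LSTM regressor. Data, window length, k and model sizes are
# illustrative assumptions.
import numpy as np
import tensorflow as tf
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
T, n_features, k, window = 500, 12, 4, 7
X = rng.normal(size=(T, n_features))            # daily operation indicators
y = X[:, 0] * 2.0 + X[:, 3] - X[:, 7] + rng.normal(scale=0.1, size=T)  # sales volume

# 1) feature selection with XGBoost importances
ranker = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)
top_idx = np.argsort(ranker.feature_importances_)[::-1][:k]

# 2) build sliding windows over the selected indicators for the LSTM
Xs = X[:, top_idx]
windows = np.stack([Xs[i:i + window] for i in range(T - window)])
targets = y[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, k)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(windows, targets, epochs=5, batch_size=32, verbose=0)
print("selected indicators:", top_idx, "MSE:", model.evaluate(windows, targets, verbose=0))
```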
The distributed parameters of transmission lines have a significant impact on signal propagation. In the conventional method of distributed parameter extraction, the discontinuity of inverse trigonometric or hyperbolic functions gives rise to the phase ambiguity problem, which causes significant errors in transmission models. A difference iteration method (DIM) is proposed for extracting the distributed parameters of high-frequency transmission line structures in order to overcome the phase ambiguity in the conventional method. The formulations of the proposed method are first derived for two-conductor and multi-conductor lines. Then validation is performed on microstrip transmission line models. Numerical results demonstrate that the proposed difference iteration method can solve the phase ambiguity problem, and the extracted distributed parameters are accurate and efficient over a wide range of frequencies of interest and line lengths.
In the post-quantum era, public key cryptographic schemes based on lattices are considered the most promising cryptosystems that can resist quantum computer attacks. However, there are still few efficient lattice-based key agreement protocols. To solve this issue, an improved key agreement protocol with post-quantum security is proposed. Firstly, by analyzing the Wess-Zumino model+ (WZM+) key agreement protocol based on the small integer solution (SIS) hard problem, it is found that the protocol has fatal defects and cannot resist man-in-the-middle attacks. Then, based on the bilateral inhomogeneous small integer solution (Bi-ISIS) problem, a mutual authenticated key agreement (AKA) protocol with key confirmation is designed. Compared with the Diffie-Hellman (DH) protocol, the WZM+ key agreement protocol, and the AKA protocol based on the ideal lattice, the improved protocol is provably secure under the extended Canetti-Krawczyk (eCK) model and can resist man-in-the-middle attacks, replay attacks and quantum computing attacks.
This paper considers a wireless powered communication network (WPCN) based on non-orthogonal multiple access (NOMA) technology aided by an intelligent reflecting surface (IRS). The WPCN mainly
focuses on downlink energy transfer (ET) and uplink information transmission (IT). At the ET phase, a dedicated
multi-antenna power station (PS) is equipped to supply power to users with the assistance of IRS, and at the IT
phase, the IRS adjusts the phase to assist the user in applying NOMA technology to transmit information to the base
station (BS), thus minimizing the impact of dynamic IRS on the system. Based on the above settings, the
maximization of sum-throughput of the system under this working mode is studied. Due to the non-convexity of
maximization problem of the sum-throughput of this system, block coordinate descent (BCD) technology is applied
for alternating optimization of each system block via semidefinite relaxation (SDR) and particle swarm optimization (PSO), respectively. Numerical results show that, compared with the baseline scheme, the proposed optimization scheme provides greater system sum-throughput.
To improve the coverage, interference rate, load balance, and power performance in radio frequency identification (RFID) network planning, this paper proposes an elite opposition-based learning and Levy flight sparrow search algorithm (ELSSA). First, the algorithm initializes the population with an elite opposition-based learning strategy to enhance population diversity. Second, Levy flight is introduced into the scrounger's position update formula so that, with a certain probability, the current position is perturbed by a Levy step, which helps the algorithm jump out of local optima. Finally, the proposed method is compared with the particle swarm optimization (PSO) algorithm, the grey wolf optimizer (GWO) algorithm, and the original sparrow search algorithm (SSA) in multiple simulation tests. The simulation results show that, with the same number of readers, the average fitness of ELSSA is improved by 3.36%, 5.67%, and 18.45%, respectively. When the number of readers is varied, ELSSA uses fewer readers than the other algorithms. These results show that the proposed method can ensure satisfactory coverage with fewer readers while achieving higher overall performance.
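As an illustration of the Levy-flight perturbation used in the scrounger update, the sketch below generates Levy-distributed steps with Mantegna's algorithm and applies them to a candidate position with a fixed probability. The step scale, dimensionality, probability, and the example positions are assumed values, not the paper's exact update rule or settings.

```python
# Illustrative Levy-flight perturbation (Mantegna's algorithm); parameters are assumptions.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-distributed step of length `dim` using Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def perturb(position, best, p_levy=0.8, scale=0.01, rng=None):
    """With probability p_levy, move a candidate position by a Levy step toward the best."""
    rng = rng or np.random.default_rng()
    if rng.random() < p_levy:
        return position + scale * levy_step(position.size, rng=rng) * (position - best)
    return position

x = np.array([3.2, 7.5, 1.0])       # e.g., a candidate reader position/power vector
best = np.array([4.0, 6.0, 0.8])
print(perturb(x, best))
```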
Received signal strength (RSS) based positioning schemes ignore the actual environmental feature that the volatility of RSS increases as signal propagation distance grows. Therefore, RSS over long distance generally has relatively large measurement error and degrades the positioning performance. To reduce the negative impact of these RSSs over long distances, a weighted semidefinite programming (WSDP) positioning scheme was proposed. The WSDP positioning scheme first assesses the signal propagation quality using the average variance of all RSS sets. Then appropriate weighting factors are set based on the variance of each RSS set, and a weighted semidefinite programming optimizer is formulated to estimate the positions of target nodes. Simulation results show that the WSDP positioning scheme can effectively improve the positioning performance.
Solving differential equations and studying the properties of their solutions have always been important topics in the study of differential equations. In practical applications and scientific research, it is difficult to obtain analytical solutions for most differential equations. In recent years, with the development of computer technology, new intelligent algorithms have been used to solve differential equations. They overcome the drawbacks of traditional methods and provide approximate solutions in closed form (i.e., continuous and differentiable). The least squares support vector machine (LS-SVM) has attractive properties for solving differential equations. To further improve the accuracy of approximate analytical solutions and to facilitate calculation, a novel method combining numerical methods and LS-SVM is presented for solving linear ordinary differential equations (ODEs). In this approach, a high-precision numerical solution is added as a constraint to the nonlinear LS-SVM regression model, and the optimal parameters of the model are adjusted to minimize an appropriate error function. Finally, the approximate solution in closed form is obtained by solving a system of linear equations. Numerical experiments demonstrate that the proposed method improves the accuracy of the approximate solutions.
A novel single-cavity equilateral triangular substrate integrated waveguide (TSIW) bandpass filter (BPF) with a complementary triangular split ring resonator (CTSRR) is designed in this paper. A metallic via-hole is used to split the degenerate modes and adjust the transmission zeros (TZs) properly. The CTSRR is utilized as a resonator working together with the degenerate modes of the TSIW cavity; its resonant frequency can be adjusted by its own size, and a TZ is observed in the lower band due to the CTSRR. Finally, a triple-mode TSIW BPF with a 16% 3 dB fractional bandwidth (FBW) and three TZs in the lower and upper bands is simulated, fabricated, and measured. The simulated and measured results are in good agreement.
The statistical sparse decomposition principle (SSDP) method for underdetermined blind source signal recovery requires the number of active signals to equal the number of observed signals, which severely limits its applicability. To overcome this, an improved SSDP (ISSDP) method is proposed. Based on the principle of recovering the source signals by minimizing the correlation coefficients within a fixed time interval, the method for selecting the mixing matrix's column vectors used in signal recovery is modified, so that the column vectors are chosen adaptively according to the number of active source signals. The proposed method is validated by simulation experiments. It is applicable to cases where the number of active signals is equal to or less than the number of observed signals, offering a new approach to underdetermined blind source signal recovery.
A low-loss, non-blocking, scalable passive optical interconnect network-on-chip (LOOKNoC) structure is proposed based on 2×2 optical exchange switches, using wavelength division multiplexing (WDM) technology to expand to 8×8, 16×16, 32×32, and 64×64 passive optical interconnection networks that achieve non-blocking communication. The experimental results show that, for the 16×16 optical interconnection network structure, the number of microring resonators (MRs) in LOOKNoC is reduced by 90.9%, 90.9%, 20.0%, and 75.0% compared with the generic wavelength-routed optical router (GWOR), the λ-router topology, the Mesh structure, and the CrossBar structure, respectively. Performance parameters of the 16×16 structure were tested on the OMNeT++ platform, and the results show that the average insertion loss of LOOKNoC is 3.0%, 11.6%, 4.8%, and 16.7% lower than that of the GWOR, λ-router, Mesh, and CrossBar structures, respectively.
In the design of a graphics processing unit (GPU), the processing speed of triangle rasterization is an important factor determining GPU performance. An architecture for a multi-tile parallel-scan rasterization accelerator is proposed in this paper. The accelerator uses a bounding-box algorithm to improve scanning efficiency; it rasterizes multiple tiles in parallel and scans multiple lines at the same time within each tile. This highly parallel approach drastically improves rasterization performance. Using the 65 nm standard cell library of Semiconductor Manufacturing International Corporation (SMIC), the accelerator can be synthesized to a maximum clock frequency of 220 MHz. An implementation on the Genesys2 field programmable gate array (FPGA) board fully verifies the functionality of the accelerator and shows a significant improvement in rendering speed and efficiency, proving its suitability for high-performance rasterization.
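The bounding-box scan that the accelerator parallelizes can be illustrated in software as follows: for each pixel center inside the triangle's bounding box, three edge functions decide coverage (a common choice for the coverage test; the abstract does not specify the hardware's exact test). This is a scalar reference sketch only, whereas the hardware evaluates many tiles and scanlines of this loop in parallel.

```python
# Reference (scalar) bounding-box rasterizer using edge functions.
def edge(ax, ay, bx, by, px, py):
    """Signed area test: >= 0 when (px, py) lies on the left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(v0, v1, v2):
    """Return the integer pixel centers covered by triangle (v0, v1, v2)."""
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    covered = []
    for y in range(int(min(ys)), int(max(ys)) + 1):        # scan the bounding box
        for x in range(int(min(xs)), int(max(xs)) + 1):
            w0 = edge(v1[0], v1[1], v2[0], v2[1], x, y)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], x, y)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], x, y)
            # Pixel is inside when all three edge functions share the same sign.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((x, y))
    return covered

print(len(rasterize((0, 0), (8, 0), (0, 8))))   # pixel count for a small right triangle
```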
Text classification assigns a document to one or more classes or categories according to its content and provides convenience for users in obtaining data. Because of the polysemy of text data, multi-label classification can handle text data more comprehensively, and multi-label text classification has become a key problem in data mining. To improve the performance of multi-label text classification, semantic analysis is embedded into the classification model to perform label correlation analysis, and the structure, objective function, and optimization strategy of the model are designed. Then, a convolutional neural network (CNN) model based on semantic embedding is introduced. Finally, the Zhihu dataset is used for evaluation. The results show that this model outperforms related work in terms of recall and area under the curve (AUC) metrics.
An on-chip debug circuit based on the Joint Test Action Group (JTAG) interface is proposed for the L-digital signal processor (L-DSP), providing debug functions such as storage resource access, central processing unit (CPU) pipeline control, hardware breakpoints/watchpoints, and parameter statistics. Compared with the traditional debug mode, the proposed debug circuit transfers data directly between peripherals and memory by adding a data test-direct memory access (DT-DMA) module, which greatly improves debug efficiency. The proposed circuit was designed in a 0.18 μm complementary metal-oxide-semiconductor (CMOS) process with an area of 167 234.76 μm² and a power consumption of 8.89 mW, and the debug circuit and L-DSP were verified on a field programmable gate array (FPGA). Experimental results show that the proposed circuit has complete debug functions and that the DT-DMA transfers debug data three times faster than the CPU.
To balance computing efficiency and flexibility in calculating transcendental functions, this paper proposes a reconfigurable transcendental function generator. The generator has a reconfigurable array structure composed of 30 processing elements (PEs), on which the coordinate rotation digital computer (CORDIC) algorithm is implemented. Different functions, such as sine, cosine, arctangent, logarithm, etc., can be calculated by reconfiguring the functions of the PEs. Functional simulation and field programmable gate array (FPGA) verification show that the proposed method achieves great flexibility with acceptable performance.
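For reference, a software model of the rotation-mode CORDIC iteration that such a PE array would implement for sine and cosine is sketched below; the iteration count is an assumption, and the other functions mentioned above (arctangent, logarithm) use the same shift-and-add structure in different CORDIC modes.

```python
# Software model of rotation-mode CORDIC computing (cos z, sin z) with shift-add iterations.
import math

def cordic_sin_cos(z, iterations=32):
    """Return (cos z, sin z) for |z| <= pi/2 using rotation-mode CORDIC."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Gain of the micro-rotations; pre-divide so the result needs no final scaling.
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y = K, 0.0
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                 # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y

c, s = cordic_sin_cos(0.5)
print(c, s, math.cos(0.5), math.sin(0.5))           # should agree to roughly 1e-9
```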
For uniform linear antenna array (ULA) based millimeter wave communications, the maximum capacity can be achieved with the optimal antenna separation product (ASP). However, due to practical size limitations, it is necessary to reduce the ULA length. In this paper, an optimization problem is formulated to minimize the ULA length for millimeter wave communications while maintaining maximum capacity. We decompose the problem into two subproblems: length selection optimization and orientation deployment optimization. The optimal length selection is obtained when the transmit and receive ULAs have equal length. Using properties of trigonometric functions, we derive the optimal orientation deployment and study the influence of orientation deviation on the ULA length. Simulation results are presented to validate the analyses.
Due to the scattering effect of suspended particles in the atmosphere, the visibility and contrast of foggy-day images are significantly reduced. Considering the loss of detail and the uneven defogging results of the contrast limited adaptive histogram equalization (CLAHE) algorithm, a foggy-day image enhancement algorithm based on the curvelet transform and contrast-adaptive clipped histogram equalization (HE) is proposed. The proposed algorithm transforms an image to the curvelet domain and enhances image detail via a nonlinear transformation of the high-frequency curvelet coefficients. After curvelet reconstruction, the contrast-adaptive clipped HE method is adopted to enhance the overall image contrast together with the contrast and detail of the foggy-day image. During histogram clipping, the clip limit is adaptively selected based on the image contrast and the histogram variance of each sub-block. A comparative analysis of the enhancement results obtained with CLAHE, several classical single-image defogging algorithms, and the proposed algorithm is also conducted, using objective parameters to demonstrate the effectiveness of the proposed algorithm.
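For comparison with the baseline, the standard CLAHE step can be reproduced with OpenCV as below. The fixed clip limit and tile size shown here are common defaults and the input file name is assumed; the proposed method instead adapts the clip limit to the image contrast and sub-block histogram variance and adds the curvelet-domain detail enhancement, which is not shown.

```python
# Baseline CLAHE step (fixed clip limit) for comparison with the proposed adaptive method.
import cv2

img = cv2.imread("foggy.jpg")                         # assumed input file name
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)            # enhance the luminance channel only
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)
out = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("foggy_clahe.jpg", out)
```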
White-box cryptography is critical in a communication system to protect the secret key from being disclosed in a cryptographic algorithm's code implementation. The stream cipher is a main data-flow encryption approach in mobile communication, but research on white-box cryptographic implementations of stream ciphers is rare. A new white-box Zu Chongzhi-128 (ZUC-128) cryptographic implementation named WBZUC is proposed. WBZUC adopts lookup tables and random encodings in the non-linear function to obscure the intermediate values without changing the final encryption result, so its security is improved compared with the original ZUC-128 algorithm. As for efficiency, experiments on WBZUC show that the average speeds of key generation, encryption, and decryption reach 33.74 kbit/s, 23.31 kbit/s, and 24.06 kbit/s, respectively. Although its running speed is somewhat lower than that of the original ZUC-128 algorithm, WBZUC provides better security and overall performance in mobile communication system environments.
An optimized Neumann series (NS) approximation based on Frobenius matrix decomposition is described; it aims to reduce the high complexity caused by the large matrix inversion in detection algorithms for massive multiple-input multiple-output (MIMO) systems. The large matrix to be inverted is decomposed into the sum of a hollow matrix and a Frobenius matrix, where the Frobenius matrix contains the diagonal elements and the first column of the large matrix. To ensure that the detection performance approaches that of the minimum mean square error (MMSE) algorithm, the first three terms of the series approximation are needed, which results in a high complexity of O(K³), where K is the number of users. This paper further optimizes the third term of the series approximation to reduce the computational complexity from O(K³) to O(K²). Complexity analysis and simulation results show that the performance of the proposed algorithm approaches that of the MMSE algorithm with low complexity O(K²).
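To illustrate the role of the series, a plain (unoptimized) Neumann-series approximation of the MMSE filtering matrix inverse is sketched below, using the common diagonal splitting A = D + E; the paper's Frobenius-matrix splitting and its optimized third term are not reproduced here, and the system dimensions and noise variance are assumptions.

```python
# Plain Neumann-series approximation of the MMSE inverse with diagonal splitting A = D + E.
import numpy as np

def neumann_inverse(A, terms=3):
    """Approximate A^{-1} by sum_{n<terms} (-D^{-1} E)^n D^{-1}, with D = diag(A)."""
    D_inv = np.diag(1.0 / np.diag(A))
    E = A - np.diag(np.diag(A))
    M = -D_inv @ E
    approx = np.zeros_like(A)
    term = D_inv
    for _ in range(terms):
        approx = approx + term
        term = M @ term
    return approx

rng = np.random.default_rng(1)
K, N = 8, 64                                   # users, base-station antennas (assumed)
H = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
A = H.conj().T @ H + 0.1 * np.eye(K)           # MMSE Gram matrix (noise variance 0.1)
err = np.linalg.norm(neumann_inverse(A, 3) - np.linalg.inv(A))
print("approximation error:", err)
```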
Due to its flexible mobility, the unmanned aerial vehicle (UAV) is exploited as a cost-efficient mobile platform to assist remote data collection in fifth-generation and beyond (5G/B5G) wireless systems. Compared with static terrestrial base stations, the line-of-sight (LoS) links between UAVs and ground nodes are stronger because of the UAVs' flexibility in three-dimensional (3D) space. Since flexible mobility requires high propulsion power, the limited on-board energy constrains the performance of UAV-assisted data collection. UAVs can be categorized into rotary-wing and fixed-wing UAVs, each with its own propulsion energy consumption characteristics. In this article, a comprehensive review of state-of-the-art trajectory design schemes for rotary-wing UAVs and aerodynamic-aware attitude control strategies for fixed-wing UAVs is provided. Then, two case studies on energy-efficient data collection, using rotary-wing and fixed-wing UAVs respectively, are presented. Specifically, an age-energy aware data collection scheme is demonstrated for rotary-wing UAVs to optimize the timeliness of the collected data, and an aerodynamic-aware attitude control strategy for fixed-wing UAVs is demonstrated under data collection requirements.
The frame-independence assumption in vocal effect (VE) recognition makes it difficult for frame-based spectral features to describe the intrinsic temporal correlation and dynamic change information in speech. A novel VE detection method based on the echo state network (ESN) is presented. Input sequences are mapped into fixed-dimensionality vectors in a high-dimensional coding space by the reservoir of the ESN. Then, radial basis function (RBF) networks are employed to fit the probability density function (pdf) of each VE mode using the vectors in the high-dimensional coding space. Finally, minimum-error-rate Bayesian decision is employed to determine the VE mode. Experiments conducted on an isolated-word test set achieve 79.5% average recognition accuracy, and the results show that the proposed method effectively overcomes the shortcomings of the frame-independence assumption.
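A minimal sketch of the reservoir mapping is given below: an input sequence of frame-level spectral features is driven through a fixed random recurrent reservoir, and the final reservoir state serves as the fixed-dimensional code vector. The reservoir size, spectral radius, and leak rate are assumptions, and the subsequent RBF density estimation and Bayesian decision are not shown.

```python
# Minimal echo state network reservoir: maps a variable-length feature sequence
# to a fixed-dimensional code vector (the final reservoir state). Sizes are assumptions.
import numpy as np

class Reservoir:
    def __init__(self, n_in, n_res=200, spectral_radius=0.9, leak=0.3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale so the largest eigenvalue magnitude equals the spectral radius.
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W, self.leak = W, leak

    def encode(self, seq):
        """seq: (T, n_in) frame features -> (n_res,) fixed-dimensional code vector."""
        x = np.zeros(self.W.shape[0])
        for u in seq:
            x_new = np.tanh(self.W_in @ u + self.W @ x)
            x = (1 - self.leak) * x + self.leak * x_new   # leaky integration
        return x

frames = np.random.randn(120, 13)      # e.g., 120 frames of 13-dim spectral features
code = Reservoir(n_in=13).encode(frames)
print(code.shape)                      # (200,)
```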
A compact dual-band bandpass filter (BPF) for 5th generation mobile communication technology (5G) radio frequency (RF) front-end applications is presented based on multilayer stepped impedance resonators (SIRs). The multilayer dual-band SIR BPF achieves high selectivity and four transmission zeros (TZs) near the passband edges using quarter-wavelength tri-section SIRs. It is fabricated on a 3-layer FR-4 substrate with a compact size of 5.5 mm × 5.0 mm × 1.2 mm. The measured passbands are 3.3 GHz-3.5 GHz and 4.8 GHz-5.0 GHz, each with insertion loss (IL) less than 2 dB. Both measured and simulated results suggest that the filter is a possible candidate for 5G RF front-end applications in the sub-6 GHz band.
Modeling and matching texts is a critical issue in natural language processing (NLP) tasks. To improve the accuracy of text matching, a multi-granularity capture of matching features (MG-CMF) model is proposed. The model uses convolution operations to construct text representations at multiple granularities, uses max-pooling to filter more reasonable text representations, and builds a matching matrix at each granularity. Then, a convolutional neural network (CNN) is used to capture the matching information at each granularity. Finally, the captured matching features are fed into a fully connected neural network to obtain the matching similarity. Experimental results indicate that the MG-CMF model not only obtains multi-granularity representations of sentences but also captures matching information across granularities better than other text matching models.
To characterize the differences in brain temporal dynamics between two representative types of puns, homonymic and semantic puns, an alternate presentation of words and phrases (APWP) paradigm is proposed. The highlight of the APWP paradigm is that sentences are strictly presented in a word-phrase-word-phrase-word form, which helps relieve the visual fatigue of a monotonous presentation and prevents disturbance from a fixed position of the ending word. Following the APWP paradigm, participants were invited to read puns presented in this form while event-related potentials (ERPs) were recorded from their electroencephalogram (EEG) data. By observing two linguistic cognitive indexes of the EEG data, the N400 and P600 elicited by puns, significant differences in the logical mechanisms of homonymic and semantic puns were found. For homonymic puns, a significant P600 effect without any obvious N400 amplitude was elicited by the pronunciation of the heterograph. For semantic puns, an apparent N400 amplitude may reflect the ambiguity and difficulty of integrating a homonym into its discourse context. This study also shows that the APWP paradigm is a good model for sentence research and can be applied to other linguistic phenomena requiring complete context, such as metaphor, irony and jokes, as well as sentence pattern and syntactic research.
In this paper, an optimal user power allocation scheme is proposed to maximize the energy efficiency for downlink
non-orthogonal multiple access (NOMA) heterogeneous networks (HetNets). Considering channel estimation errors
and inter-user interference under imperfect channel state information (CSI), the energy efficiency optimization
problem is formulated, which is non-deterministic polynomial (NP)-hard and non-convex. To cope with this
intractable problem, the optimization problem is converted into a convex problem and addressed by the Lagrangian
dual method. However, it is difficult to obtain closed-form solutions since the variables are coupled with each
other. Therefore, a Lagrangian and sub-gradient based algorithm is proposed. In the inner layer loop, optimal
powers are derived by the sub-gradient method. In the outer layer loop, optimal Lagrangian dual variables are
obtained. Simulation results show that the proposed algorithm can significantly improve energy efficiency compared
with traditional power allocation algorithms.
To analyze the impact of eclipse attacks on the blockchain network and to comprehensively and accurately evaluate the security of the current system, a blockchain network security situation awareness method based on a Markov differential game model is proposed; it addresses the difficulty of capturing the multi-stage, continuous, real-time randomness of the attack-defense process in current blockchain networks. The method analyzes the security data generated by the eclipse attack, establishes the corresponding attack graph, and classifies the offensive and defensive strengths of the two parties. Through a multi-stage offensive and defensive game, the security level of each node of the blockchain system is combined with the final objective function value to comprehensively evaluate the real-time security status of the system. Simulation results show that the proposed model and algorithm can not only effectively evaluate the overall security of the blockchain network but are also feasible for predicting its future security status.
To study and optimize the performance of general-purpose computing on graphics processing units (GPGPU) based on a single instruction multiple threads (SIMT) processor for neural network applications, this work contributes a self-developed SIMT processor named Pomelo and the corresponding assembly program. The parallel mechanism of the SIMT computing mode and the self-developed Pomelo processor is briefly introduced. A common convolutional neural network (CNN) is built to verify the compatibility and functionality of the Pomelo processor, and a CNN computing flow with task-level and hardware-level optimization is adopted on it. A specific algorithm for organizing a Z-shaped memory structure is developed, which reduces memory accesses in mass data computing tasks. With the combined adaptation and optimization strategy, the experimental results illustrate that reducing memory accesses in the SIMT computing mode plays a crucial role in improving performance, and a 6.52-fold performance improvement is achieved in the four-processing-element case.
This paper focuses on gesture recognition and interactive lighting control. Gesture data are collected with the Myo armband, which provides surface electromyography (sEMG) signals. Considering that many factors affect sEMG, a customized classifier based on user calibration data is used for gesture recognition. Machine learning classifiers suitable for small sample sets, namely k-nearest neighbor (KNN), support vector machines (SVM), and the naive Bayesian (NB) classifier, are selected to classify four gesture actions. The performance of the three classifiers is tested under different training parameters, different input features, including root mean square (RMS), mean absolute value (MAV), waveform length (WL), slope sign change (SSC) count, zero crossing (ZC) count, and variance (VAR), and different input channels. Experimental results show that the NB classifier, which assumes a multinomial prior distribution for the features, performs best, reaching more than 95% accuracy. Finally, an interactive stage lighting control system based on Myo armband gesture recognition is implemented.
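The time-domain features listed above and the three classifiers can be prototyped as in the sketch below; the window data are synthetic and the classifier settings are library defaults, so this illustrates the feature/classifier combination rather than the calibrated system described in the paper.

```python
# Sketch of the time-domain sEMG features and the three classifiers on synthetic windows.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

def features(win):
    """win: (n_samples,) one sEMG channel window -> 6 time-domain features."""
    diff = np.diff(win)
    return np.array([
        np.sqrt(np.mean(win ** 2)),                     # RMS
        np.mean(np.abs(win)),                           # MAV
        np.sum(np.abs(diff)),                           # WL
        np.sum(diff[:-1] * diff[1:] < 0),               # SSC count
        np.sum(win[:-1] * win[1:] < 0),                 # ZC count
        np.var(win),                                    # VAR
    ])

rng = np.random.default_rng(0)
X = np.array([features(rng.normal(scale=0.5 + g * 0.3, size=200))
              for g in range(4) for _ in range(50)])    # 4 gestures x 50 windows each
y = np.repeat(np.arange(4), 50)

for clf in (KNeighborsClassifier(), SVC(), MultinomialNB()):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```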
With the application of various information technologies in smart manufacturing, the new intelligent production mode places higher demands on the real-time performance and robustness of production scheduling. For the production scheduling problem in large-scale manufacturing environments, digital twin (DT) technology places high demands on the data processing capability of terminals, requiring both global prediction and real-time response abilities. To address this problem, a DT-based edge-cloud collaborative intelligent production scheduling (DTECCS) system is proposed, and its scheduling model and method are introduced. DT-based edge-cloud collaboration (ECC) can predict the production capacity of each workshop, reassemble customer orders, optimize the allocation of global manufacturing resources in the cloud, and carry out distributed scheduling on the edge side to improve scheduling and task processing efficiency. During production, the DTECCS system adjusts scheduling strategies in real time in response to changes in production conditions and order fluctuations. Finally, simulation results show the effectiveness of the DTECCS system.
Deep learning has recently been progressively introduced into the field of modulation classification due to its wide
application in image, vision, and other areas. Modulation classification is not only a priority for cognitive radio and spectrum sensing, but also a key link in signal demodulation. Combining the advantages of convolutional
neural network (CNN), long short-term memory (LSTM), and residual network (ResNet), a modulation
classification method based on dual-channel CNN-LSTM and ResNet is proposed to automatically classify the
modulation signal more accurately. Specifically, CNN and LSTM are initially used to form a dual-channel structure
to effectively explore the spatial and temporal features of the original complex signal. It solves the problem of only
focusing on temporal or spatial aspects, and increases the diversity of features. Secondly, the features extracted
from CNN and LSTM are fused, making the extracted features richer and conducive to signal classification. In
addition, a convolutional layer is added within the residual unit to deepen the network depth. As a result, more
representative features are extracted, improving the classification performance. Finally, simulation results on the
radio machine learning (RadioML) 2018.01A dataset signify that the network's classification performance is
superior to many classifiers in the literature.
Stable opto-electronic oscillators (OEOs) are realized using long fiber delay lines, and changes in the index of refraction of the high quality factor delay line result in the temperature sensitivity of OEOs. The temperature sensitivity of various OEOs is measured to compare the index-of-refraction variation of standard fiber (SMF-28) and photonic crystal fiber (PCF). Both hollow-core (HC) and solid-core (SC) versions of PCF are quantified. The SC-PCF-based OEO exhibited roughly a factor-of-three reduction in the rate of index-of-refraction change with temperature (about +4.7 ppm/°C) compared with the SMF-28-based OEO (about +12 ppm/°C). Although HC-PCF has greater attenuation per unit length, it has demonstrated a negative rate of change of the effective index of refraction with temperature (about -0.6 ppm/°C), so the prospect of passively stabilizing the OEO thermally is promising when a combination of HC-PCF and SMF-28 is employed as the fiber delay line.
Because of the wide application and great market potential of location-aware services, wireless location techniques for fourth-generation (4G) mobile communications are receiving increasing attention. Wireless cognitive location (WCL) techniques for next-generation wireless networks have been proposed in recent years. This article investigates how the positioning accuracy of the WCL algorithm changes when different methods are adopted to measure the short-range (SR) information. A Cramér-Rao lower bound (CRLB) analysis of the WCL algorithm with SR measurements based on time of arrival (TOA) and received signal strength (RSS) shows that TOA-based or time difference of arrival (TDOA)-based SR measurements allow WCL algorithms to achieve higher accuracy than the RSS mode, which is also verified by numerical simulation. These conclusions can guide the design of novel WCL-based location algorithms.
Sensing the spectrum in a reliable and efficient manner is crucial to cognitive radio. To combat the channel fading suffered by a single radio, cooperative spectrum sensing, which combines the detections of multiple radios, is employed. In this article, the optimization of detection efficiency under a detection probability constraint is investigated, and an algorithm to evaluate the required number of radios and sensing time for maximal detection efficiency is presented. To show the effect of cooperation on detection efficiency, the proposed algorithm is applied to cooperative sensing using the spectral correlation detector under the Rayleigh flat fading channel.
Customers' satisfaction with services is reflected by the quality of experience (QoE). So far, most studies on cooperative communication have focused on improving the QoE of source users. However, the improvement of a source user's QoE comes at the cost of degrading the relay user's QoE. On the other hand, cooperative communications can achieve performance similar to that of a conventional multiple-input multiple-output (MIMO) system by forming virtual MIMO arrays. Hence, to improve the QoE of relay users, this article proposes the concept of a belief threshold at the destination user and a new cooperative scheme based on the belief threshold destination (BTD) technique, without degrading the bit error rate (BER) performance of the communication system.
In adaptive channel allocation for the secondary user (SU) of a cognitive radio (CR) system, it is necessary to consider the allocation process from the temporal perspective. In this article, a chain store game is modeled to achieve the SU's equilibrium state. Because of the computational complexity of solving for equilibrium states, the authors explore the correlated equilibrium (CE) by introducing signaling mechanisms based on time and sequence number, and correlated-equilibrium-based game algorithms are presented. Simulations show that these algorithms are superior to other allocation algorithms in both channel utilization and communication time.
A wireless sensor network is typically composed of hundreds, even thousands of tiny sensors used to monitor physical phenomena. As data collected by the sensors are often redundant, data aggregation is important for conserving energy. In this paper, we present a new routing protocol with optimal data aggregation. This routing protocol has good performance due to its optimal selection of aggregation point locations. This paper details the optimal selection of aggregation point locations.
Wireless sensor networks are being widely researched and are expected to be used in several scenarios. On the leading edge of these trends, an on-demand, high-reliability, and low-latency routing protocol is desirable for indoor applications. This article proposes a routing scheme called robust multi-path routing that establishes and uses multiple node-disjoint routes. Providing multiple routes helps to reduce the route recovery process and the control message overhead. The performance comparison of this protocol with dynamic source routing (DSR) by OPNET simulations shows that it achieves a remarkable improvement in packet delivery ratio and average end-to-end delay.
Based on the amplify-and-forward relaying mode, a two-hop distributed cooperative multi-relay system is proposed in combination with the space-time block coded OFDM (STBC-OFDM) technique. Taking the maximum end-to-end data rate as the optimization criterion, the signal-to-noise ratio (SNR) of the receiving terminal is derived. Based on water-filling theory, the optimal power allocation (OPA) is obtained for each subcarrier on each antenna and each relay node (RN) of the two hops, realizing resource optimization. The Monte Carlo method is adopted in simulation. The simulation results show that, compared with the uniform resource allocation scheme, the proposed OPA strategy improves the system capacity and decreases the energy consumption per transmitted bit, indicating improved resource efficiency. When the total power is limited, the system performance can be further enhanced by the distributed cooperative multi-relay scheme through the diversity gain.
In wireless ad-hoc networks, where mobile hosts are powered by batteries, the entire network may be partitioned because of the drainage of a small set of batteries. Therefore, the crucial issue is to improve energy efficiency with the objective of balancing energy consumption. A greedy algorithm called weighted minimum spanning tree (WMST) has been proposed, in which the time complexity is . This algorithm takes into account the initial energy of each node and the energy consumption of each communication. Simulations demonstrate that the proposed algorithm improves load balance and prolongs the network lifetime.
The increasing demand for interactive mobile multimedia services is driving the integration of third-generation (3G) cellular systems and wireless broadcast systems. The key challenge is to support data dissemination with low response time, low request drop rate, and low unfairness of request drops. This article proposes a novel scheduling algorithm called DAG (on-demand scheduling utilizing the analytic hierarchy process (AHP) and grey relational analysis (GRA)), which takes multiple factors (waiting time, number of active requests, and deadline) into consideration and models data scheduling as a multi-factor decision-making and best-option-selecting process. The proposed approach comprises two parts: the first applies AHP to decide the relative weights of the decision factors according to user requests, while the second adopts GRA to rank the data item alternatives by the similarity between each option and the ideal option. Simulation results demonstrate that DAG performs well on the multiple criteria mentioned above.
The problem of blind channel identification in a multiuser system is considered in this article. For this purpose, a blind identification algorithm is proposed based on the conjugate cyclostationarity of the received signal. The new approach contains a two-stage identification procedure. First, the separation technique in the cyclic domain is used to separate the second-order cyclic statistics for each user. Second, a subspace algorithm based on the rational subspace theory is exploited to estimate the desired channel. Theoretical analysis and simulation results show that this algorithm is suitable for a multiuser system. Compared with other methods, the algorithm shows good performance even in a bad situation when the number of users is large and the diversity condition is unavailable.
The concepts of information and communication technology (ICT) and "intelligence" are first defined, and the environment and requirements for ICT are then analyzed. Based on these definitions and analyses, a survey of intelligence approaches for ICT is made. The major conclusion drawn from the survey is the recommendation that intelligence approaches are becoming the nucleus for the further development of ICT as a whole and therefore should receive much more attention from ICT researchers in the coming years.
This article analyzes the diversity order of several proposed schemes in which transmit antenna selection (TAS) strategies are combined with low-complexity decode-and-forward (DF) protocols in the multiple-input multiple-output (MIMO) relaying scenario. Although antenna selection is a suboptimal form of beamforming, it enjoys the advantages of tractable optimization and low feedback overhead. Specifically, this article proposes schemes that combine TAS strategies with the fixed decode-and-forward (FDF) and selection decode-and-forward (SDF) protocols. The asymptotic expressions of the outage probabilities are then derived, and the diversity order of the proposed schemes is analyzed. Such combinations of transmit antenna selection strategies and low-complexity decode-and-forward protocols achieve partial diversity order in the MIMO relaying scenario. Numerical simulations verify the analysis.
This article covers laser configurations, design, and experiments of photonic microelectromechanical systems (MEMS) tunable laser sources. Three types of MEMS tunable lasers, namely MEMS coupled-cavity lasers, injection-locked laser systems, and dual-wavelength tunable lasers, are demonstrated as examples of the natural synergy of MEMS with photonics. The expansion and penetration of MEMS technology into silicon optoelectronics creates on-chip optical systems at an unprecedented scale of integration. While providing better integration with robustness and compactness, MEMS improves the functionality and specifications of laser chips. In addition, MEMS tunable lasers feature small size, high tuning speed, wide tuning range, and CMOS-compatible integration, which broadens their applications to many fields.
This article proposes a time/frequency synchronization algorithm for multiple input multiple output (MIMO) systems, in which perfect complete generalized complementary orthogonal loosely synchronous code groups are used as the synchronization sequence. The synchronization algorithm is divided into four stages: 1) time-domain synchronization by signal autocorrelation; 2) frequency-domain synchronization by fast Fourier transform (FFT); 3) multipath dissociation using coherent detection and fine time synchronization; 4) fine frequency offset estimation by phase rotation. For the perfect complete generalized complementary orthogonal loosely synchronous code groups, the cross-correlation and out-of-phase auto-correlation are zero for any relative shift between any two codes. This ideal property makes the time/frequency synchronization algorithm simple and efficient. The simulation results show that, even in a multipath fast fading channel with low signal-to-noise ratio (SNR), the MIMO system can achieve both time-domain and frequency-domain synchronization with high stability and reliability.
This article describes a new model of a cooperative file sharing system in a wireless mesh network. The authors' approach is to develop an efficient and cooperative file sharing mechanism based on opportunistic random linear network coding. Within this mechanism, every node transmits random linear combinations of its packets according to a cooperative priority, which is computed in a distributed manner according to the node's possible contribution to its neighbor nodes. With this mechanism, the more a node contributes to others, the better its chances of recovering the entire file first. The performance metrics of interest are the delay until all the packets in a file have been delivered to all nodes and the ideal packet size that yields the minimum transmission delay. Through extensive simulation, the authors compare their mechanism with the current transmission process in a wireless mesh network without random linear network coding, and find that with their mechanism the nodes can cooperatively share the entire file with less transmission time and delay.
Cooperative spectrum sensing (CSS) is an approach to combat fading environments. However, in conventional cooperative spectrum sensing, the differences among secondary users (SUs) are ignored even though the SUs suffer from different fading. In this paper, we propose a signal-to-noise ratio (SNR)-based weighted spectrum sensing scheme to improve the sensing performance, and derive its sensing performance, which previous work has not addressed. Considering the minor contribution of SUs with small weighting factors, we further propose a selective cooperative spectrum sensing scheme in which SUs with low SNR are not selected for cooperation. The simulation results confirm the analytical results, and the performance of the weighted scheme is better than that of conventional schemes. When the SNRs of the SUs are randomly distributed, the performance of the selective scheme is almost the same as that of the weighted scheme, while the number of cooperating SUs is reduced, saving system resources with little additional complexity.
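The weighting idea can be illustrated with a simple soft-combining sketch: each SU reports an energy statistic, and the fusion center weights the reports in proportion to the SU's SNR before thresholding. The weight rule, number of samples, and threshold below are illustrative assumptions and not the weighting scheme derived in the paper.

```python
# Illustrative SNR-weighted soft combining for cooperative sensing (assumed weight rule).
import numpy as np

rng = np.random.default_rng(0)
M, N = 6, 200                                   # number of SUs, samples per sensing slot
snr_lin = 10 ** (rng.uniform(-12, 0, M) / 10)   # per-SU SNR between -12 dB and 0 dB

def energy_stats(primary_present):
    """Each SU's normalized energy over N samples (noise variance 1)."""
    stats = np.empty(M)
    for i in range(M):
        s = np.sqrt(snr_lin[i]) * rng.normal(size=N) if primary_present else 0.0
        stats[i] = np.mean((s + rng.normal(size=N)) ** 2)
    return stats

w = snr_lin / snr_lin.sum()                     # weights proportional to SNR (assumption)
threshold = 1.0 + 2.5 * np.sqrt(np.sum(w ** 2) * 2.0 / N)   # rough noise-only quantile

trials = 2000
pd = np.mean([(w @ energy_stats(True)) > threshold for _ in range(trials)])
pf = np.mean([(w @ energy_stats(False)) > threshold for _ in range(trials)])
print(f"detection prob ~ {pd:.3f}, false alarm prob ~ {pf:.3f}")
```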
In this article, a bridge between the expected complexity and the performance of sphere decoding (SD) is built. The expected complexity of SD for infinite lattices is then investigated, which is naturally an upper bound on that for any finite lattice given the same channel matrix and signal-to-noise ratio (SNR). This expected complexity is an important characterization of SD in multi-antenna systems, because the upper bound holds regardless of the modulation scheme used in practice (which generally has a finite constellation size). The bridge also leads to a new method of determining the radius for SD. Numerical results show both the actual value and the upper bound of the average number of candidates searched by SD for a 16-QAM modulated system using the proposed radius determination method. Most importantly, new insights into the expected complexity of SD are given based on the theoretical analysis and numerical results.
The problem of improving the performance of min-sum decoding of low-density parity-check (LDPC) codes is considered in this paper. Based on the min-sum algorithm, a novel modified min-sum decoding algorithm for LDPC codes is proposed. The proposed algorithm modifies the variable node message in each iteration by averaging the new message and the previous message when their signs differ. Compared with the standard min-sum algorithm, the modification incurs only a small increase in complexity but significantly improves decoding performance for both regular and irregular LDPC codes. Simulation results show that the performance of the modified decoding algorithm is very close to that of the standard sum-product algorithm for moderate-length LDPC codes.
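The modification itself is small and can be stated as a variable-node message rule: if a newly computed variable-to-check message and the corresponding message from the previous iteration have different signs, the new message is replaced by their average. A sketch of just this rule inside a generic variable-node update is shown below; the rest of the min-sum decoder (check-node update, hard decision, syndrome check) is assumed to exist, and the dense message arrays are a simplification.

```python
# Variable-node update of the modified min-sum rule: when the sign of the new
# variable-to-check message differs from the previous iteration's message, send
# the average of the two.  The surrounding min-sum decoder is assumed to exist.
import numpy as np

def variable_node_update(llr_channel, check_to_var, prev_var_to_check):
    """
    llr_channel:       (n,) channel LLR of each variable node
    check_to_var:      (m, n) messages from check nodes (0 where H[i, j] == 0)
    prev_var_to_check: (m, n) variable-to-check messages from the previous iteration
    returns:           (m, n) new variable-to-check messages
    """
    totals = llr_channel + check_to_var.sum(axis=0)            # per-variable total LLR
    new_msg = totals[np.newaxis, :] - check_to_var             # extrinsic: exclude own edge
    flip = np.sign(new_msg) * np.sign(prev_var_to_check) < 0   # sign changed since last iter
    new_msg[flip] = 0.5 * (new_msg[flip] + prev_var_to_check[flip])
    return new_msg
```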
Fractional frequency reuse (FFR) is an effective technique to mitigate co-channel interference in orthogonal frequency division multiple access (OFDMA)-based broadband cellular systems. In this paper, we present a generalized model for FFR under which all existing FFR schemes can be considered special cases. In addition, a quality factor is proposed to indicate the quality of a subband. An interesting conclusion is that, as the power ratio in FFR is adjusted continuously, the corresponding quality factor varies smoothly. Subsequently, simulations are conducted based on worldwide interoperability for microwave access (WiMAX), and the results agree well with our theoretical analysis. Finally, an effective range for the power ratio is presented, which is very instructive for practical system design.
Digital proportional control is introduced to improve the performance of the analog adaptive interference cancellation system (ICS). Since the high-frequency components of the signals after the multiplier are not required, the sampling frequency need not satisfy the sampling theorem for those components. Because sampling, calculation, and output consume time in digital control, the ideal condition, the delay condition, and the delay-wait condition are taken into account. By analyzing the system model under these three conditions, we obtain the stability conditions of the system, the optimal step factors that make the system converge fastest, and the formulas for the interference cancellation ratio (ICR). One-step convergence can be achieved under the ideal condition, whereas the system cannot converge in one step under the delay and delay-wait conditions. The calculation results show that convergence under the delay-wait condition is slower than under the delay condition. The ICR improves as the step factor increases within the stable bound, but the convergence speed decreases if the step factor exceeds the optimal value. To avoid this limitation, a method of amending the steady-state weight to improve the ICR is proposed. The analyses agree with the computer simulations.
Multiple input multiple output (MIMO) relaying techniques can greatly improve spectral efficiency and extend network coverage for future wireless systems. This article investigates a multiuser MIMO relay channel in which a base station (BS) with multiple antennas communicates with multiple mobile stations (MSs) via a relay station (RS) with multiple antennas. The RS applies linear processing to the received signal and then forwards the processed signal. The dual channel conditions between the MIMO relay multiple access channel (MAC) and the broadcast channel (BC) are first developed for the single-relay scenario with white Gaussian noise. The MAC-BC duality for MIMO relay systems is then established by proving that the capacity region of the MIMO relay MAC is equal to that of the dual MIMO relay BC under the same total network transmit power constraint. In addition, the duality is extended to the multi-relay scenario with arbitrary noise. Finally, several simple numerical examples are provided to illustrate the effectiveness of the MIMO relay MAC-BC duality.
Data generated in wireless multimedia sensor networks (WMSNs) may have different importance, and it has been argued that the network should exert more effort in servicing applications carrying more important information. Nevertheless, the importance of packets generally cannot be accurately represented by a static priority value. This article presents a dynamic priority based congestion control (DPCC) approach that makes two major innovations in WMSNs. First, DPCC employs dynamic priority to represent packet importance. Second, it prioritizes the local traffic of motes near the base station when the WMSN is highly congested. Simulation results confirm the superior performance of the proposed approach with respect to energy efficiency, loss probability, and latency.
Cooperative relaying is emerging as an effective technology to fulfill requirements for high-data-rate coverage in next-generation cellular networks, such as long term evolution-advanced (LTE-Advanced). In this paper, we propose a distributed joint relay node (RN) selection and power allocation scheme for multihop relaying cellular networks toward LTE-Advanced, taking both the wireless channel state and the RNs' residual energy into consideration. We formulate the multihop relaying cellular network as a restless bandit system, using a first-order finite-state Markov chain to characterize the time-varying channel and residual-energy state transitions. With this stochastic optimization formulation, the optimal policy has the indexability property, which dramatically reduces the computational complexity. Simulation results demonstrate that the proposed scheme can efficiently enhance the expected system reward compared with other existing algorithms.
This paper proposes a distributed relay and modulation and coding scheme (MCS) selection in wireless cooperative relaying networks where the adaptive modulation and coding (AMC) scheme is applied. First-order finite-state Markov channels (FSMCs) are used to model the wireless channels and make prediction. The objective of the relay policy is to select one relay and MCS among different alternatives in each time-slot according to their channel state information (CSI) with the goal of maximizing the throughput of the whole transmission period. The procedure of relay and MCS selection can be formulated as a discounted Markov decision chain, and the relay policy can be obtained with recent advances in stochastic control algorithms. Simulation results are presented to show the effectiveness of the proposed scheme.
Power allocation (PA) plays an important role in capacity improvement for cooperative multiple-input multiple-output (Co-MIMO) systems. Many contributions address the PA problem under a total power constraint (TPC) on the sum of transmit power from all nodes. However, in practical implementations, each transmit node is equipped with its own power amplifier and is limited by an individual power constraint (IPC), so PA methods derived under a TPC are not realizable in practical systems. Meanwhile, the PA problem under IPC is essential but has not been studied. This paper extends the traditional non-cooperative water-filling PA algorithm to IPC-based Co-MIMO systems. The PA matrix is derived based on the compound channel matrix from all the cooperative nodes to the user, so the cooperative gain of the IPC-based Co-MIMO system is fully exploited and the maximal instantaneous capacity is achieved. Numerical simulations validate that, under the same IPC conditions, the proposed PA scheme considerably outperforms the non-cooperative water-filling PA and uniform PA designs in terms of ergodic capacity.
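As background for the extension, classical water-filling over parallel channels under a single node's power budget can be computed as in the sketch below (a bisection on the water level). How the compound channel matrix couples the per-node budgets in the Co-MIMO case is the subject of the paper and is not reproduced here; the example gains and budget are assumptions.

```python
# Classical water-filling over parallel channels under one node's power budget,
# implemented by bisection on the water level.
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    """gains: channel gain-to-noise ratios of the parallel sub-channels."""
    lo, hi = 0.0, p_total + 1.0 / gains.min()          # water level always lies in [lo, hi]
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)                           # candidate water level
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)

g = np.array([2.0, 1.0, 0.5, 0.1])
p = water_filling(g, p_total=4.0)
print(p, p.sum(), np.sum(np.log2(1 + g * p)))          # powers, total power, capacity
```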
The long term evolution advanced (LTE-Advanced) standards target system performance comparable or superior to the requirements of International Mobile Telecommunications-Advanced (IMT-Advanced). To support backward compatibility with LTE, most of the key technologies have been retained in LTE-Advanced, one of which is the discontinuous reception (DRX) mechanism. LTE-Advanced adopts carrier aggregation to extend the system bandwidth, which requires the LTE DRX designed for the single-transceiver scenario to be adapted to the multi-transceiver scenario with multiple component carriers. Carrier aggregation clearly influences DRX performance, so it is worth studying the impact of the coexistence of LTE DRX and carrier aggregation on system performance, e.g., the system delay. In this paper, an overview of DRX in the carrier aggregation scenario is first given; it is then modeled as a Markov process based on queuing theory. Simulation results show that the independent component carrier configuration with a uniform inactivity timer achieves superior service delay performance compared with other reference schemes.
Femtocell is a promising technology to improve network performance with low-power and cost-effective small base stations. However, the interference-limited reality of femtocell networks makes interference and resource management the key to achieving their benefits. In this paper, the following contributions are made step by step. First, based on the interference temperature model (ITM) from cognitive radio (CR) and the network architecture of the third generation partnership project (3GPP) long term evolution advanced (LTE-A), the problem of optimizing the capacity of the femtocell-reused subchannel is formulated under a frequency partitioning strategy, jointly considering the average and instantaneous interference constraints. Second, using convex optimization theory, the optimal power allocation of the femtocell-reused subchannel is derived. Third, under the Rayleigh fading channel, closed-form expressions for the subchannel reusable probability and capacity are derived. Finally, numerical results are presented to confirm the analytical results, which can provide theoretical guidance for frequency resource allocation in femtocell network deployments.
This paper investigates the tradeoff between energy-efficiency capacity and spectrum sensing under a hybrid spectrum sharing model, where the spectrum sharing method is based on the sensing results of the secondary user (SU). The metric 'bits per joule', which captures the energy overhead of spectrum sensing, is adopted to evaluate energy-efficiency capacity. We first formulate the tradeoff between energy-efficiency capacity and spectrum sensing as an optimization problem with a mixed constraint on sensing time and detection threshold. Under a certain condition on the domain of the detection threshold, in which energy-efficiency capacity cannot be improved by increasing the detection probability, the original optimization problem reduces to a new unconstrained one that involves only the sensing time. The existence and uniqueness of the optimal sensing time that achieves the maximum energy-efficiency capacity are then discussed, and a low-complexity algorithm is proposed to obtain the optimal solution. Finally, numerical simulation verifies the theoretical analysis. The simulation results indicate that hybrid spectrum sharing is remarkably beneficial to energy-efficient transmission in cognitive radio networks (CRNs), and the proposed algorithm converges quickly to the optimal solution.
In this paper, we focus on antenna array design for mobile phones with finite volume and propose a novel antenna element structure using capacitive feeding and capacitive loading based on the planar inverted-F antenna (PIFA). State-of-the-art development on this issue is reviewed, and the capacitively fed and capacitively loaded PIFA structure is proposed and studied. Experimental results show that the structure can reduce the coupling between antenna elements from 13.4 dB to 24.5 dB. Finally, a design with a bandwidth of 100 MHz centered at 2.35 GHz and an envelope correlation coefficient of 0.012 is provided, and the diversity performance of the dual-element modified PIFA array is evaluated in both simulation and measurement. In summary, the proposed design achieves broadband operation, miniaturization, and high isolation, and offers excellent diversity performance.
One of the remarkable features of the next-generation network is the integration of heterogeneous wireless networks, which enables mobile users with multi-mode terminals to access the best available network seamlessly. However, most previous work takes into account either maximizing a single user's utility or the whole network's payoff, and rarely considers the negotiation between them. In this paper, we propose a novel network selection approach using an improved multiplicative multi-attribute auction (MMA). First, an improved MMA method is put forward to define the user's utility. Additionally, the user cost is defined by considering the allocated bandwidth, network load intensity, and a cost factor parameter. Finally, the most suitable network is selected according to the user's performance-cost ratio. Simulation results confirm that the proposed scheme outperforms the existing scheme in terms of network selection fairness, user performance-cost ratio, load balancing, and the number of accommodated users.
This paper first presents a geometrically based statistical channel model in which scatterers follow an inverted parabolic spatial distribution around the mobile station (MS) within a circle containing both the base station (BS) and the MS. A technique is then presented to simply derive the probability density functions (PDFs) of the angle of arrival (AOA), the time of arrival (TOA), and the Doppler spectra, characterizing outdoor macrocell and microcell wireless environments with various BS-MS distances or different sizes of the circular region. Employing this channel model, we analyze the impact of the main-lobe width of a directional antenna at the BS on the fading and the Doppler spectra.
To take advantage of the multiuser diversity resulting from the variation in channel conditions among users, it has become an interesting and challenging problem to efficiently allocate resources such as subcarriers, bits, and power. Most current research solves the resource allocation problem for all users together in a centralized way, which incurs high computational complexity and is impractical for real systems. Therefore, a coalitional game framework for downlink multi-user resource allocation in the long term evolution (LTE) system is proposed, based on the divide-and-conquer idea. The goal is to maximize the overall system data rate under the constraints of each user's minimal rate requirement and the maximal transmit power of the base station, while considering fairness among users. In this framework, a coalition formation algorithm is proposed to achieve optimal coalition formation, and a two-user bargaining algorithm is designed to bargain channel assignment between two users. The total computational complexity is greatly reduced compared with conventional methods. The simulation results show that the proposed algorithms achieve a good tradeoff between the overall system throughput and fairness, compared with the maximal-rate and max-min schemes.
This paper proposes a chip correlation indicator (CCI)-based link quality estimation mechanism for wireless sensor networks under non-perceived packet loss. On the basis of analyzing all related factors, it can be concluded that the signal-to-noise ratio (SNR) is the main factor causing non-perceived packet loss. In this paper, the relationship model between CCI and the non-perceived packet loss rate (NPLR) is established from related models, namely SNR versus packet success rate (PSR), CCI versus SNR, and CCI versus NPLR. Due to the large fluctuation range of the raw CCI, a Kalman filter is introduced to de-noise the raw CCI. The cubic model and the least squares method are employed to fit the relationship between CCI and SNR. In the experiments, multiple groups of comparisons have been conducted, and the results show that the proposed mechanism can achieve a more accurate measurement of the non-perceived packet loss than existing approaches. Moreover, it has the advantage of decreasing the extra energy consumption caused by sending a large number of probe packets.
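As a hedged sketch of the estimation chain described above (not the authors' implementation), the following Python snippet applies a scalar random-walk Kalman filter to de-noise raw CCI samples and then maps the filtered CCI to SNR through a cubic least-squares fit; the noise variances and the calibration pairs are assumed values.

```python
# Sketch (assumed parameters): 1-D Kalman filtering of noisy raw CCI readings,
# followed by a cubic least-squares fit of the CCI -> SNR relationship.
import numpy as np

def kalman_denoise(cci, q=1e-3, r=1.0):
    """Scalar random-walk Kalman filter; q, r are assumed noise variances."""
    x, p, out = cci[0], 1.0, []
    for z in cci:
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with raw CCI measurement z
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Hypothetical calibration data: (CCI, SNR) pairs collected offline
cci_cal = np.array([55, 60, 70, 80, 90, 100, 105], dtype=float)
snr_cal = np.array([-2, 0, 3, 7, 12, 18, 22], dtype=float)
coeff = np.polyfit(cci_cal, snr_cal, deg=3)     # cubic model, least squares

raw_cci = 80 + 10 * np.random.default_rng(1).standard_normal(200)
snr_est = np.polyval(coeff, kalman_denoise(raw_cci))
print(snr_est[-5:])
```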
Piecewise companding transform is a flexible and efficient way to solve the high peak-to-average power ratio (PAPR) problem for orthogonal frequency division multiplexing (OFDM) systems. A novel threshold-based piecewise companding transform is proposed in this paper. Based on the statistical characteristics of the amplitudes, OFDM signals are classified into three groups (i.e., small, average and large signals). Different from conventional approaches, two dedicatedly designed thresholds are set to amplify the small signals and compress the large signals, respectively. Simulation results verify the improvement in PAPR reduction of the proposed scheme. Moreover, a smaller bit error rate (BER) performance loss can be obtained by introducing iterative detection with a moderate increase in complexity.
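A minimal sketch of the two-threshold idea is given below; the thresholds and the gain factors for the small and large signal groups are assumed values, not the paper's optimized parameters, and the companding functions are simple linear stand-ins.

```python
# Sketch of a two-threshold piecewise companding transform on OFDM amplitudes.
# Thresholds t1 < t2 and the gains g_small, g_large are assumed values.
import numpy as np

def piecewise_compand(x, t1, t2, g_small=1.3, g_large=0.7):
    a = np.abs(x)
    phase = np.exp(1j * np.angle(x))
    out = a.copy()
    out[a < t1] = g_small * a[a < t1]                 # amplify small signals
    out[a > t2] = t2 + g_large * (a[a > t2] - t2)     # compress large signals
    return out * phase                                # average signals untouched

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

rng = np.random.default_rng(0)
N = 256
sym = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
ofdm = np.fft.ifft(sym) * np.sqrt(N)                  # time-domain OFDM block
companded = piecewise_compand(ofdm, t1=0.5, t2=1.5)
print(papr_db(ofdm), papr_db(companded))
```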
With increasing energy consumption, energy efficiency (EE) has come to be considered as important a metric for wireless communication networks as spectrum efficiency (SE). In this paper, the EE optimization problem for the downlink multi-user multiple-input multiple-output (MU-MIMO) system with massive antennas is investigated. According to convex optimization theory, there exists a unique globally optimal power allocation achieving the optimal EE, and a closed-form expression of the optimal EE, which depends only on channel state information, is derived analytically. Then both approximate and accurate power allocation algorithms with different complexity are proposed to achieve the optimal EE. Simulation results show that the optimal EE obtained by the approximate algorithm coincides with that achieved by the accurate algorithm within a controllable error limit, and these proposed algorithms perform better than the existing equal power allocation algorithm. The optimal EE and the corresponding SE increase with the number of antennas at the base station, which is promising for the next generation of wireless communication networks.
This paper investigates the resource allocation problem for the cluster-based cooperative multicast in orthogonal frequency division multiplexing (OFDM)-based cognitive radio (CR) systems. Aiming at maximizing the system sum rate, an efficient clustering scheme is proposed. It begins with the clustering phase where secondary users (SUs) with good channel conditions are selected as cluster heads, while others decide to which cluster they belong. When the clusters are organized, it turns to a two-stage data transmission phase: in stage 1, the secondary base station (BS) transmits data to the cluster heads; in stage 2, the cluster heads forward the received data to their cluster members. Based on this scheme, a joint subcarrier and power allocation algorithm is proposed. Simulation results show that the proposed scheme significantly outperforms the conventional multicast (CM) as well as the multiple description coding multicast (MDCM) in terms of the system sum rate.
In the future, wireless communication networks can be visualized as the integration of different radio access technologies (RATs), which are referred to as heterogeneous wireless networks (HWNs). In this paper, the traffic split scheme in HWNs integrating the long term evolution (LTE) and high speed downlink packet access (HSDPA) networks is investigated. Assuming that the networks support multi-homing access and a user can be served by both networks simultaneously, the traffic split problem is described as an optimization problem with the aim of maximizing the throughput. By solving the problem, a dynamic traffic split scheme is proposed. In theory, the split ratios in the scheme should be proportional to the transmission rates, which are hard to describe in closed form. An adaptive algorithm is therefore proposed to obtain the split ratios. Simulation results show that the scheme with the adaptive algorithm provides better performance than the scheme without it over both the additive white Gaussian noise (AWGN) channel and the Rayleigh fading channel.
Due to its opportunistic spectrum sharing capability, cognitive radio (CR) has been proposed as a fundamental solution to alleviate the contradiction between spectrum scarcity and the inefficient utilization of licensed spectrum. In a CR system (CRS), to efficiently utilize the spectrum resource, one important issue is to allocate the sensing and transmission durations reasonably. In this paper, energy efficiency, which represents the total number of bits delivered per joule of energy consumed, is adopted as the evaluation metric for the proposed scheme. We study a joint design of energy-efficient sensing and transmission durations to maximize the energy efficiency capacity (EEC) of the CRS. The tradeoff between the EEC and the sensing and transmission durations is formulated as an optimization problem under constraints on the target detection probability of secondary users (SUs) and the tolerable interference threshold of primary users (PUs). To obtain the optimal solution, the sensing duration and the transmission duration are first optimized separately. Then, a joint optimization iterative algorithm is proposed to search for the optimal pair of sensing and transmission durations. Analytical and simulation results show that there exists a unique duration pair where the EEC is maximized, and that the EEC of the proposed joint optimization algorithm outperforms that of existing algorithms. Furthermore, the simulation results also reveal that the performance of the proposed low-complexity iterative algorithm is comparable with that of the exhaustive search scheme.
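The following sketch illustrates the joint iterative idea with an alternating one-dimensional grid search; the eec() expression is a placeholder surrogate (the paper's EEC formula and its detection/interference constraints are not reproduced here), and all constants are assumed.

```python
# Illustrative alternating search for the (sensing, transmission) duration pair
# that maximizes an energy-efficiency-capacity (EEC) surrogate.  The eec()
# expression below is a stand-in, not the paper's derived formula.
import numpy as np

def eec(ts, tt, p_s=0.1, p_t=1.0, p_c=0.05, snr=10.0):
    pd = 1 - np.exp(-50 * ts)                    # stand-in detection probability
    thr = pd * (tt / (ts + tt)) * np.log2(1 + snr)
    energy = p_s * ts + p_t * tt + p_c * (ts + tt)
    return thr / energy

def alternating_search(iters=10):
    ts, tt = 0.01, 0.05
    grid = np.linspace(1e-3, 0.1, 200)
    for _ in range(iters):                        # optimize one duration at a time
        ts = grid[int(np.argmax([eec(g, tt) for g in grid]))]
        tt = grid[int(np.argmax([eec(ts, g) for g in grid]))]
    return ts, tt, eec(ts, tt)

print(alternating_search())
```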
Research on two-dimensional (2D) multiple-input multiple-output (MIMO) currently concentrates on propagation in the horizontal plane, and the impact of the elevation angle is not considered. However, due to the three-dimensional (3D) character of the real MIMO channel, 2D MIMO cannot achieve the optimal system throughput. A multi-user MIMO (MU-MIMO) user pairing scheme is proposed in which the vertical dimension is taken into consideration. In the proposed scheme, a 3D codebook based on the full-dimension MIMO channel is designed; then two 3D MU-MIMO user pairing schemes are proposed, combining the proposed joint and separate 3D codebooks. Simulations evaluate the proposed 3D codebook aided user pairing scheme and compare it with the previous 2D MU-MIMO user pairing technology. Owing to the additional spatial degree of freedom in the vertical dimension, the proposed 3D MU-MIMO user pairing schemes can effectively improve the overall system performance.
In-network caching is one of the most important issues in content centric networking (CCN) and may strongly influence the performance of the caching system. Although much work has been done on in-network caching scheme design in CCN, most of it does not jointly address multiple network attribute parameters during caching algorithm design. Hence, to fill this gap, a new in-network caching scheme based on grey relational analysis (GRA) is proposed. The authors first define two new metric parameters, named the request influence degree (RID) and the cache replacement rate, respectively. The RID indicates the importance of a node along the content delivery path from the viewpoint of the arriving interest packets. The cache replacement rate is used to denote the caching load of the node. Then, combined with the number of hops a request travels from the user and the node traffic, four network attribute parameters are considered during the in-network caching algorithm design. Based on these four network parameters, a GRA-based in-network caching algorithm is proposed, which can significantly improve the performance of CCN. Finally, extensive simulations based on ndnSIM demonstrate that the GRA-based caching scheme can achieve a lower load on the source server and fewer average hops than the existing betweenness (Betw) scheme and the ALWAYS scheme.
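To make the GRA step concrete, the sketch below ranks candidate caching nodes by their grey relational grade over the four attributes named above; the attribute directions, weights and resolution coefficient rho are illustrative assumptions.

```python
# Sketch of grey relational analysis (GRA) over the four node attributes
# (RID, cache replacement rate, hops from the user, node traffic).
import numpy as np

def gra_rank(X, benefit, weights, rho=0.5):
    """X: nodes x attributes; benefit[j] is True if larger is better."""
    X = np.asarray(X, float)
    span = X.max(0) - X.min(0) + 1e-12
    norm = np.where(benefit, (X - X.min(0)) / span, (X.max(0) - X) / span)
    diff = np.abs(1.0 - norm)                      # distance to the ideal sequence
    coef = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
    return coef @ np.asarray(weights)              # grey relational grade

# columns: RID, cache replacement rate, hops from user, node traffic (assumed data)
nodes = [[0.8, 0.2, 3, 120],
         [0.5, 0.6, 1, 300],
         [0.9, 0.4, 4,  80]]
grade = gra_rank(nodes, benefit=[True, False, False, False],
                 weights=[0.4, 0.2, 0.2, 0.2])
print(grade, "-> cache at node", int(np.argmax(grade)))
```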
In wireless cellular networks, interference alignment (IA) is a promising technique for interference management. A new IA scheme for the downlink cellular network with multiple cells and multiple users is proposed. In the proposed scheme, the interference in the network is divided into inter-cell interference (ICI) among cells and inter-user interference (IUI) in each cell. The ICI is aligned onto a multi-dimensional subspace by multiplying by the ICI alignment precoding matrix, which is designed by the singular value decomposition (SVD) scheme at the base station (BS) side. The aligned ICI is eliminated by multiplying by the interference suppression matrix, which is designed by the zero-forcing (ZF) scheme at the user equipment (UE) side. Meanwhile, the IUI is aligned by multiplying by the IUI alignment precoding matrix, which is designed based on the Nash bargaining solution (NBS) in game theory. The NBS is solved by the particle swarm optimization (PSO) method. Simulations show that, compared with the traditional ZF IA scheme, the proposed scheme can obtain a higher data rate and guarantee the data rate fairness of UEs with little additional complexity.
A simplified parametric channel estimation approach is proposed for orthogonal frequency division multiplexing (OFDM) systems. Based on a parametric channel model, the algorithm is composed of two parts: the estimation of channel parameters and channel interpolation. The exponentially embedded family (EEF) criterion is exploited to determine the number of channel paths as well as the multipath time delays. Consequently, the channel frequency response is acquired via the estimated parameters. Additionally, the authors' scheme is computationally efficient because it requires neither eigenvalue decomposition nor the estimation of signal parameters by the rotational invariance technique (ESPRIT). Simulations are provided to validate the performance of the algorithm in terms of the probability of correct estimation and the mean square error (MSE). It is demonstrated that the approach exhibits superior performance over existing algorithms.
Massive multiple-input multiple-output (MIMO) requires a large number (tens or hundreds) of base station antennas serving a much smaller number of terminals, with large gains in energy efficiency and spectral efficiency compared with traditional MIMO technology. Large-scale antennas imply large-scale radio frequency (RF) chains. Considering the considerable power consumption and high cost of RF chains, antenna selection is necessary for massive MIMO wireless communication systems at both the transmitting and receiving ends. An energy-efficient antenna selection algorithm based on convex optimization is proposed for massive MIMO wireless communication systems. On the condition that the channel capacity of the cell is larger than a certain threshold, the number of transmit antennas, the subset of transmit antennas and the servable mobile terminals (MTs) are jointly optimized to maximize energy efficiency. The joint optimization problem is analyzed in detail. The proposed algorithm is verified by analysis and numerical simulations, and a good energy efficiency gain is obtained compared with the case of no antenna selection.
The authors of this article investigate how to cancel multi-user interference with a low feedback amount over the 3-user multiple-input multiple-output (MIMO) interference channel using space-time codes and precoders. A space-time block code with coding rate 2 is designed, and zero vectors are introduced into each codeword. The multi-user interference is mitigated by precoding at the transmitters and by nonlinear operations and a unidirectional cooperation link at the receivers. Compared with the existing scheme for the same scenario, the proposed scheme greatly reduces the feedback amount and improves the sum degrees of freedom (DOF). Simulations demonstrate the validity of the proposed scheme.
Financial and environmental considerations have given rise to a new trend in wireless networks known as green communication. As one of the most promising network architectures, device-to-device (D2D) communication should take energy efficiency seriously into account. Most of the existing work in the area of D2D communication focuses only on direct communication; however, direct-link D2D communication is limited in practice because of long distances, poor propagation media, cellular interference, etc. A new energy-efficient multi-hop routing algorithm is investigated for a multi-hop D2D system by jointly optimizing channel reuse and power allocation. Firstly, the energy-efficient multi-hop routing problem is formulated as a combinatorial optimization problem. Secondly, to obtain a desirable solution with reasonable computation cost, a heuristic multi-hop routing algorithm is presented to solve the formulated problem and achieve satisfactory energy-efficiency performance. Simulation shows the effectiveness of the proposed routing algorithm.
A distributed power allocation scheme is presented to maximize the system capacity in dense small cell networks. A new signaling quantity called the inter-cell signal to interference plus noise ratio (ISINR), as well as its modification, is defined to reveal the algebraic properties of the system capacity. With the help of the ISINR, the local monotonicity of the system capacity can be identified easily. Then, on each subchannel in each iteration, the small cell evolved node Bs (SeNBs) are divided into different subsets. For the first subset, the sum rate is convex with respect to the power domain and the power is allocated optimally. For the second subset, the sum rate is monotonically decreasing and the SeNBs abandon the subchannel in this iteration. The two strategies are applied iteratively to improve the system capacity. Simulations show that the proposed scheme can achieve a much larger system capacity than the conventional ones, and that it achieves a promising tradeoff between performance and signaling overhead.
This paper presents a wide supply voltage range, high speed true random number generator (TRNG) based on ring oscillators that have different prime numbers of inverters. A simple Von Neumann corrector is also realized as post-processing to improve the randomness of the data. Prototypes have been implemented and fabricated in 0.18 μm complementary metal oxide semiconductor (CMOS) technology with a wide supply voltage range from 1.8 V to 3.6 V. The circuit occupies 4 500 μm2 and dissipates a minimum of 160 μW of power at a sampling frequency of 20 MHz. The output bit rate ranges from 100 kbit/s to 20 Mbit/s. Statistical test results obtained from the DieHard battery of tests demonstrate that the output random numbers have good randomness characteristics.
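The Von Neumann corrector itself is simple enough to show directly: non-overlapping bit pairs (0,1) and (1,0) map to 0 and 1 respectively, while (0,0) and (1,1) are discarded, which removes bias from the raw oscillator bits at the cost of throughput.

```python
# Minimal Von Neumann corrector over non-overlapping bit pairs.
def von_neumann(bits):
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)          # keep the first bit of an unequal pair
    return out

raw = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1]
print(von_neumann(raw))             # -> [0, 1, 0]
```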
Based on the fractional discrete cosine transform (DCT) via polynomial interpolation (PI-FrDCT) and the dependent scrambling and diffusion (DSD), an image encryption algorithm is proposed. Under certain conditions, the introduction of PI-FrDCT reduces computational complexity compared with fractional DCT (FrDCT). By using a sigmoid function, the encrypted results are limited within a range from 0 to 255. The real-valued output of PI-FrDCT is beneficial to the storage, display and transmission of the cipher-text. During the stage of confusion and diffusion, the values of all PI-FrDCT coefficients change simultaneously as their locations are replaced. DSD enhances the scrambling and diffusion level of encrypted images and provides nonlinearity to the encryption system. Simulation results demonstrate that the proposed encryption algorithm is feasible, effective and secure.
As for safety applications of the vehicular ad-hoc network (VANET), many valuable broadcast protocols have been proposed, most of which are based on either senders or receivers. In fact, sender-based protocols tend to become invalid due to the high mobility of the network, while receiver-based ones generate extra delay. Combining the advantages of the two schemes, this paper proposes an efficient and reliable broadcast protocol based on the quality of forwarding (ERBPQF) of candidate nodes. In ERBPQF, a double-phase relay scheme is presented to reach fast message dissemination in the first phase and to ensure a high packet delivery ratio (PDR) in the second phase. Then, considering signal fading, channel contention, queuing delay, broadcast interference and the high mobility of vehicles, a new metric called quality of forwarding (QoF) is further proposed for relay selection. The simulation results show that the delay and dissemination efficiency (DE) of ERBPQF outperform those of the slotted-1 protocol, while achieving a PDR of more than 95%.
The output of each individual channel in a multi-carrier system can be processed to detect moving targets by the approach used in traditional narrowband pulse Doppler (PD) radar, and non-coherent integration is then used to improve the signal-to-noise ratio (SNR). However, due to the difference of Doppler shifts across sub-carriers, Doppler dispersion occurs during non-coherent integration, which causes attenuation and spreading of the target's amplitude. In particular, it can deteriorate the performance of target detection in wideband multicarrier systems or fast-moving target scenes. In this paper, a modified Fourier transform kernel is proposed to resolve the Doppler dispersion for multi-carrier chirp signals. It can achieve accumulation at the same frequency point for the target's Doppler on each subcarrier. The simulation results indicate that this method can effectively eliminate the integration loss caused by Doppler dispersion.
This paper proposes a new secure oblivious transfer protocol from indistinguishability obfuscation. The main technical tools in this paper are the recently introduced candidate indistinguishability obfuscation and a dual-mode cryptosystem. Following these works, a new k-out-of-l oblivious transfer protocol is presented here, and its realization from the decisional Diffie-Hellman (DDH) assumption is described, in which indistinguishability obfuscation is combined with the dual-mode cryptosystem. The security of our scheme mainly relies on the indistinguishability of the obfuscated branches (corresponding to the two modes of the dual-mode model). Our paper explores a new way to apply indistinguishability obfuscation.
In distributed systems, it is important to adjust load distribution dynamically based on server performance and load information. Meanwhile, gray release and rapid expansion are the basic requirements to ensure reliability and stability for systems with short version iteration cycles. The traditional Hash algorithm performs poorly in gray release, rapid expansion, and load distribution. To solve these problems, a novel Hash-based dynamic mapping (HDM) load balancing algorithm was proposed. On the one hand, this algorithm can adjust the load distribution dynamically based on server performance and load information. On the other hand, it implements gray release by controlling the ratio of requests assigned to the changed nodes. Additionally, HDM has a higher expansion efficiency. Experiments show that the HDM distributes the load more reasonably, provides a more stable gray release ratio, and has a higher expansion efficiency.
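The abstract does not spell out the HDM mapping rules, so the following is only a hedged sketch of the general idea: requests are hashed to virtual slots, slots are remapped to servers according to dynamic weights, and a configurable share of slots is directed to the changed (gray) nodes; the slot count, weight semantics and server names are hypothetical.

```python
# Hedged sketch of a hash-to-slot mapping with dynamic weights and a
# gray-release ratio; not the paper's HDM algorithm.
import hashlib

SLOTS = 1024

def slot_of(request_key):
    # stable hash of the request key onto a fixed slot space
    return int(hashlib.md5(request_key.encode()).hexdigest(), 16) % SLOTS

def build_table(weights, gray_nodes=(), gray_ratio=0.0):
    """weights: {server: weight derived from performance/load reports};
    a gray_ratio share of slots goes to the changed (gray) nodes."""
    table = []
    if gray_nodes:
        per_gray = int(SLOTS * gray_ratio) // len(gray_nodes)
        for s in gray_nodes:
            table += [s] * per_gray
    total = sum(weights.values())
    rest = SLOTS - len(table)
    for s, w in weights.items():
        table += [s] * int(round(rest * w / total))
    last = list(weights)[-1]
    return (table + [last] * SLOTS)[:SLOTS]           # absorb rounding drift

weights = {"srv-a": 3, "srv-b": 2, "srv-c": 1}        # hypothetical servers
table = build_table(weights, gray_nodes=["srv-c-v2"], gray_ratio=0.05)
print(table[slot_of("user-42/session-7")])
```

Rebuilding the table whenever the reported weights change is what makes the distribution dynamic in this sketch; the gray_ratio knob controls the fraction of traffic steered to the changed nodes.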
To improve error-correcting performance, an iterative concatenated soft decoding algorithm for Reed-Solomon (RS) codes is presented in this article. The algorithm offers performance advantages over currently popular soft decoding algorithms at the cost of some additional complexity. The proposed algorithm consists of two powerful soft decoding techniques, adaptive belief propagation (ABP) and the box and match algorithm (BMA), which are serially concatenated through the accumulated log-likelihood ratio (ALLR). Simulation results show that, compared with the ABP and ABP-BMA algorithms, the proposed algorithm can bring more decoding gain and a better tradeoff between decoding performance and complexity.
This article proposes a joint spatial multiplexing and diversity transmit structure for the multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) system. The structure uses space-frequency block coding (SFBC) over all its spatial multiplexing layers to obtain both space diversity and frequency diversity gain. The article derives a Monte Carlo probabilistic data association (PDA) detector to obtain better bit error rate (BER) performance than the generic PDA detector. Computer simulation results show that the proposed detector can greatly reduce the inter-symbol interference (ISI), significantly improve the system performance, and lower the computational complexity to a reasonable level.
The impact of mutual coupling on the performance of multiple-input multiple-output (MIMO) systems with compact antenna arrays is analyzed. This article uses the S-parameter modeling approach to examine the impact of mutual coupling on three system-related performance measures: antenna correlation, efficiency and bandwidth. It is shown that implementing a good matching network can drastically improve the system performance in the presence of strong mutual coupling. Experimental results indicate the superiority of the cross-shaped antenna over the dipole antenna.
This paper describes an onboard computer with dual processing modules. Each processing module is composed of a 32 bit ARM reduced instruction set computer processor and other commercial off-the-shelf devices. A set of fault handling mechanisms is implemented in the computer system, which enables the system to tolerate a single fault. The onboard software is organized around a set of processes that communicate with each other through a routing process. Meeting an extremely tight set of constraints that include mass, volume, power consumption and space environmental conditions, the fault-tolerant onboard computer has excellent data processing capability that can meet the requirements of microsatellite missions.
The design of medium access control (MAC) protocols for wireless sensor networks (WSNs) based on the idea of cross-layer design attracts more and more attention. The MAC protocol can be improved by obtaining certain information from the network layer and the physical layer. This article synthesizes and optimizes certain existing cross-layer protocols. On the basis of the routing and topology information in the network layer and the transmission power information in the physical layer, the time slot assignment algorithm is improved in the MAC layer. By using the geographical adaptive fidelity (GAF) algorithm to divide the grids, controlling the transmission power and scheduling the work/sleep duty cycle of sensor nodes, a new MAC protocol is proposed to decrease energy consumption and extend the lifetime of WSNs. Simulation results show that the MAC protocol functions well.
One of the biggest challenges in ultra-wideband (UWB) radio is accurate timing acquisition at the receiver. In this article, we develop a novel data-aided synchronization algorithm for pulse amplitude modulation (PAM) UWB systems. Pilot and information symbols are transmitted simultaneously by an orthogonal code division multiplexing (OCDM) scheme. At the receiver, an algorithm based on the minimum average error probability (MAEP) of the coherent detector is applied to estimate the timing offset. The multipath interference (MI) problem for timing offset estimation is considered. The mean-square-error (MSE) and bit-error-rate (BER) performances of our proposed scheme are simulated. The results show that our algorithm outperforms the algorithm based on the maximum correlator output (MCO) in multipath channels.
Cognitive radio is a new intelligent wireless communication technique that has emerged in recent years to remedy the shortage of spectrum resources. Secondary users have to pay when they share the available spectrum with primary users, so price is an important factor in spectrum allocation. Based on game theory, an improved pricing function is proposed by considering the expectation of primary users. In this article, the expectation represents the positivity of sharing spectrum with primary users. By introducing the positivity, the price not only differs for different secondary users, but can also be adjusted according to the positivity. It is proved that the Nash equilibrium of the new utility function exists. The simulation results show that spectrum sharing is determined not only by the channel quality of secondary users, but can also be adapted according to the expectation of primary users. Besides, the proposed algorithm improves the fairness of sharing.
This article studies closed-form expressions of the outage performance for opportunistic relaying under an aggregate power constraint in decode-and-forward (DF) relay networks over Rayleigh fading channels, assuming that multiple antennas are available at the relay node. According to whether the instantaneous signal-to-noise ratio (SNR) or the average SNR can be utilized for relay selection, two opportunistic relay schemes, opportunistic multi-antenna relay selection (OMRS) and average best relay selection (ABRS), are proposed. The performances of both schemes are evaluated by means of theoretical analysis and simulation. It is observed that OMRS is outage-optimal among multi-antenna relay selection schemes and closely approaches the beamforming (BF) scheme, which is known to be theoretically outage-optimal. Compared with the previous single-antenna opportunistic relaying (OR) scheme, OMRS brings a remarkable performance improvement, which is obtained from maximum ratio combining (MRC) and beamforming techniques. It is also shown that the performance of ABRS in asymmetric channels is close to that of OMRS in the low and medium SNR range.
The multi-cell uplink power allocation problem for orthogonal frequency division multiplexing access (OFDMA) cellular networks is investigated with the uplink transmission power allocation on each co-frequency subchannel being defined as a multi-cell non-cooperative power allocation game (MNPG). The principle of the design of the utility function is given and a novel utility function is proposed for MNPG. By using this utility function, the minimum signal to interference plus noise ratio (SINR) requirement of a user can be guaranteed. It can be shown that MNPG will converge to the Nash equilibrium and that this Nash equilibrium is unique. In considering the simulation results, the effect of the algorithm parameters on the system performance is discussed, and the convergence of the MNPG is verified. The performance of MNPG is compared with that of traditional power allocation schemes, the simulation results showing that the proposed algorithm increases the cell-edge user throughput greatly with only a small decrease in cell total throughput; this gives a good tradeoff between the throughput of cell-edge users and the system spectrum efficiency.
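As an illustration of how such a non-cooperative power game can be driven to its Nash equilibrium, the sketch below runs best-response dynamics for three users on one co-frequency subchannel; the utility (log-rate minus a linear price), the gain matrix and the price are assumed, not the utility function proposed in the paper.

```python
# Illustrative best-response iteration for a multi-cell uplink power game on
# one co-frequency subchannel; all gains, prices and the utility are assumed.
import numpy as np

G = np.array([[1.00, 0.15, 0.10],      # G[i, j]: gain from user j to cell i
              [0.12, 1.00, 0.20],
              [0.08, 0.18, 1.00]])
noise, price, p_max = 0.1, 0.5, 4.0
grid = np.linspace(0.0, p_max, 400)

def utility(i, p_i, p):
    interf = noise + sum(G[i, j] * p[j] for j in range(len(p)) if j != i)
    return np.log2(1 + G[i, i] * p_i / interf) - price * p_i

p = np.full(3, 1.0)
for _ in range(50):                                   # best-response dynamics
    for i in range(3):
        p[i] = grid[int(np.argmax([utility(i, g, p) for g in grid]))]
print(p)                                              # approximate Nash point
```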
In order to reduce equipment cost and meet the demand for higher capacity, wireless mesh network (WMN) router devices usually have several interfaces and work on multiple channels. Jointly considering channel allocation, interface assignment and routing can efficiently improve the network capacity. This paper presents an efficient channel assignment scheme combined with the MR-LQSR routing protocol, which is called channel assignment with MR-LQSR (CA-LQSR). In this scheme, a physical interference model is established: calculated transmission time (CTT) is proposed as the metric of channel assignment, which can best reflect the real network environment and channel interference, and enhanced weighted cumulative expected transmission time (EWCETT) is proposed as the routing metric, which preserves load balancing and link bandwidth. Meanwhile, the expression of EWCETT contains the value of CTT, so the total cost of channel assignment and routing can be reduced. Simulation results show that our method has the advantages of higher throughput, lower end-to-end delay, and less network cost compared with other existing methods.
The technology of cognitive radio networks has emerged as an effective method to enhance the utilization of the radio spectrum, where primary users have priority to use the spectrum and secondary users try to exploit the spectrum unoccupied by the primary users. In this paper, considering the non-saturated condition, a performance analysis for IEEE 802.11-based cognitive radio networks is presented for the single-channel and multi-channel cases, respectively. For the single-channel case, an absorbing Markov chain model describing the system transitions is constructed, and the one-step transition probability matrix of the Markov chain is given. By using the method of the probability generating function, the non-saturated throughput of the secondary users is obtained. For the multi-channel case, taking into account the negotiation-based sensing policy, the mean number of unused channels perceived by the secondary users is given, and then the non-saturated aggregate throughput of the secondary users is derived. Finally, numerical examples are provided to show the influences of the non-saturation degree, the number of secondary users and the channel utilization of the primary users on the non-saturated throughput with a single channel and the non-saturated aggregate throughput with multiple channels.
A new variable step-size (VSS) affine projection algorithm (APA), VSS-APA, is proposed for adaptive feedback cancellation in hearing aids. A nonlinear function between the step size and the estimation error is established and automatically adjusted according to the change of the estimation error, which leads to low misalignment and fast convergence speed. Analysis shows that the proposed algorithm has a strong capability of converging to the objective system. Simulation shows that the proposed algorithm achieves lower misalignment and faster convergence speed compared with the fixed step-size APA and conventional adaptive algorithms.
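A minimal sketch of a variable step-size affine projection update is shown below; the nonlinear step-size rule mu(e) (a bounded function of the error norm) and all parameters are assumed forms for illustration, not the exact function established in the paper.

```python
# Sketch of an affine projection algorithm with a variable step size; the
# step-size rule mu(e) and the scenario (feedback path, noise level) are assumed.
import numpy as np

def vss_apa(x, d, L=16, K=4, delta=1e-3, mu_max=1.0, alpha=5.0):
    w = np.zeros(L)
    for n in range(L + K, len(x)):
        # K most recent input regressors (rows) and desired samples
        A = np.array([x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
        dv = d[n - K + 1:n + 1][::-1]
        e = dv - A @ w
        mu = mu_max * (1 - np.exp(-alpha * np.dot(e, e)))   # assumed VSS rule
        w += mu * A.T @ np.linalg.solve(A @ A.T + delta * np.eye(K), e)
    return w

rng = np.random.default_rng(0)
h = rng.standard_normal(16) * np.exp(-0.3 * np.arange(16))  # unknown feedback path
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = vss_apa(x, d)
print(np.linalg.norm(w - h))                                 # final misalignment
```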
In order to make full use of the radio resources of heterogeneous wireless networks (HWNs) and improve the quality of service (QoS) of multi-homing users for video communication, a bandwidth allocation algorithm based on multi-radio access is proposed in this paper. The proposed algorithm adopts an improved distributed common radio resource management (DCRRM) model which can sufficiently reduce the signaling overhead. The scheme is divided into two phases. In the first phase, the candidate network set of each user is obtained according to the received signal strength (RSS), and the simple additive weighting (SAW) method is employed to determine the active network set. In the second phase, the utility optimization problem is formulated by linearly combining the video communication satisfaction model, the cost model and the energy efficiency model, and the optimal bandwidth allocation scheme is found with the Lagrange multiplier method and the Karush-Kuhn-Tucker (KKT) conditions. Simulation results show that the proposed algorithm improves the network load performance and guarantees that users obtain the best joint utility under the current conditions.
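The SAW step of the first phase can be sketched as a weighted sum of normalized attributes; the candidate attributes, their benefit/cost directions and the weights below are illustrative assumptions.

```python
# Sketch of simple additive weighting (SAW) over RSS-qualified candidate networks.
import numpy as np

def saw_scores(X, benefit, weights):
    """X: candidate networks x attributes; benefit[j] is True if larger is better."""
    X = np.asarray(X, float)
    span = X.max(0) - X.min(0) + 1e-12
    norm = np.where(benefit, (X - X.min(0)) / span, (X.max(0) - X) / span)
    return norm @ np.asarray(weights)

# columns: available bandwidth (Mbit/s), delay (ms), cost per Mbyte (assumed data)
candidates = [[20.0, 40.0, 0.8],
              [ 6.0, 80.0, 0.2],
              [12.0, 60.0, 0.5]]
scores = saw_scores(candidates, benefit=[True, False, False],
                    weights=[0.5, 0.3, 0.2])
print(scores, "-> networks ranked:", np.argsort(scores)[::-1])
```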
This article proposes a multistage soft decision equalization (SDE) technique for block transmission over frequency-selective multi-input multi-output (MIMO) channels. Using the Toeplitz structure, the general signal model can be converted into a series of small-sized sub-signal models. For each sub-signal model, soft interference cancellation (SIC) is first used to remove part of the effects of interfering symbols, and then a max-log-MAP sphere decoder is applied to obtain the desired a posteriori information. Simulation shows that, with lower complexity, the proposed method outperforms the probabilistic data association SDE and the Schnorr-Euchner sphere decoder.
A novel topology scheme, the cell with multiple mobile sinks method (CMMSM), is proposed in this article for information collection and environment monitoring in wireless sensor networks. The system consists of many static sensors scattered in a large-scale sensing field and multiple mobile sinks cruising among the clusters. Conservation of energy and simplification of the protocol are important design considerations in this scheme. The non-interference topology scheme largely simplifies the fully distributed communication protocol with the ability of collision avoidance and random routing. The total number of cluster heads in such a topology is analyzed, and then an approximate evaluation of the total energy consumption in one round is carried out. Simulation results show that CMMSM can save considerable energy and obtain higher throughput than low-energy adaptive clustering hierarchy (LEACH) and geographical adaptive fidelity (GAF).
A new distributed node localization algorithm named mobile beacons-improved particle filter (MB-IPF) is proposed. In the algorithm, mobile nodes equipped with the global positioning system (GPS) move around in the wireless sensor network (WSN) field according to the Gauss-Markov mobility model and periodically broadcast beacon messages. Each unknown node estimates its location in a fully distributed mode based on the received mobile beacons. The localization algorithm is based on the improved particle filter (IPF), and several refinements, including the proposed weighted centroid algorithm, the residual resampling algorithm and the Markov chain Monte Carlo (MCMC) method, are also introduced for performance improvement. The simulation results show that our proposed algorithm is efficient for most applications.
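The residual resampling refinement mentioned above is a standard particle-filter step and can be sketched directly: each particle is first replicated floor(N*w) times, and the remaining slots are filled by multinomial sampling from the residual weights.

```python
# Residual resampling: deterministic replication plus multinomial remainder.
import numpy as np

def residual_resample(weights, rng=np.random.default_rng(0)):
    N = len(weights)
    w = np.asarray(weights, float) / np.sum(weights)
    counts = np.floor(N * w).astype(int)          # deterministic part
    residual = N * w - counts
    n_rest = N - counts.sum()
    if n_rest > 0:
        residual /= residual.sum()
        counts += rng.multinomial(n_rest, residual)
    return np.repeat(np.arange(N), counts)        # indices of surviving particles

w = [0.02, 0.40, 0.08, 0.35, 0.15]
print(residual_resample(w))
```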
We first review the efforts in the literature on ultra-wideband (UWB)-over-fiber systems. Secondly, we present experimental results on the photonic generation of high-speed UWB signals by both direct modulation and external optical injection of an uncooled semiconductor laser. Furthermore, we introduce the use of digital signal processing (DSP) technology to receive the generated UWB signal at 781.25 Mbit/s. Error-free transmission is achieved.
The rapid variation of the channel can induce intercarrier interference in orthogonal frequency-division multiplexing (OFDM) systems. Intercarrier interference significantly increases the difficulty of OFDM channel estimation because too many channel coefficients need to be estimated. In this article, a novel channel estimator is proposed to resolve this problem. The estimator consists of two parts: the channel parameter estimation unit (CPEU), which is used to estimate the number of channel taps and the multipath time delays, and the channel coefficient estimation unit (CCEU), which is used to estimate the channel coefficients by using the estimated channel parameters provided by the CPEU. In the CCEU, the over-sampling basis expansion model is employed to solve the problem that a large number of channel coefficients need to be estimated. Finally, simulation results are given to evaluate the performance of the proposed scheme.
Coordinated multi-point transmission and reception (CoMP) for single user, named as SU-CoMP, is considered as an efficient approach to mitigate inter-cell interference in orthogonal frequency division multiple access (OFDMA) systems. Two prevalent approaches in SU-CoMP are coordinated scheduling (CS) and joint processing (JP). Although JP in SU-CoMP has been proved to achieve a great link performance improvement for the cell-edge user, efficient resource allocation (RA) on the system level is quite needed. However, so far limited work has been done considering JP, and most existing schemes achieved the improvement of cell-edge performance at cost of the cell-average performance degradation compared to the single cell RA. In this paper, a two-phase strategy is proposed for SU-CoMP networks. CS and JP are combined to improve both cell-edge and cell-average performance. Compared to the single cell RA, simulation results demonstrate that, the proposed strategy leads to both higher cell-average and cell-edge throughput.
A novel network protocol, enhanced cooperative medium access control (ECoop MAC), is presented in this article. Its function is to guarantee the quality of service (QoS) in wireless local area networks. For the sake of supporting different application scenarios, two proposed schemes, namely E-scheme I for lower priority traffic and E-scheme II for higher priority traffic, can be adopted independently or in combination. ECoop MAC takes request failure problems into account, and utilizes cooperative protocol information to boost the system performance as well as to effectively cut the control packet overhead. Simulation results show that the proposed protocol can not only improve network throughput, but also reduce network delays for individual packets.
This paper proposes a compressed sensing (CS) scheme to reconstruct and estimate signals. In this scheme, the framework of CS is used to break the Nyquist sampling limit, making it possible to reconstruct and estimate signals via fewer measurements than are traditionally required. However, the reconstruction algorithms based on CS are normally non-deterministic polynomial hard (NP-hard) in mathematics, which makes it difficult to obtain real-time analysis results. Therefore, a new compressed sensing scheme based on a back propagation (BP) neural network is proposed under the assumption that every sub-band is the same. In this new scheme, the BP neural network is added into the detection process, replacing signal reconstruction and decision-making. By doing this, the heavy calculation cost of reconstruction is moved into the pre-training period, which can be done before the real-time analysis, bringing about a sharp reduction in time consumption. For simplicity, 1-bit quantization is applied to the compressed signals. Simulations demonstrate the performance enhancement of the proposed scheme: compared with the normal CS-based scheme, the proposed one presents a much shorter response time as well as better robustness to noise with fewer measurements.
In long term evolution (LTE) uplink single carrier frequency division multiple access (SC-FDMA) system, the restriction that multiple resource blocks (RBs) allocated to a user should be adjacent, makes the resource allocation problem hard to solve. Moreover, with the practical constraint that perfect channel state information (CSI) cannot be obtained in time-varying channel, the resource allocation problem will become more difficult. In this paper, an efficient resource allocation algorithm is proposed in LTE uplink SC-FDMA system with imperfect CSI assumption. Firstly, the resource allocation problem is formulated as a mixed integer programming problem. Then an efficient algorithm based on discrete stochastic optimization is proposed to solve the problem. Finally, simulation results show that the proposed algorithm has desirable system performance.
This paper considers cooperative amplify-and-forward (AF) two-way relay networks (TWRNs) with opportunistic relay selection (ORS) in two-wave with diffuse power (TWDP) fading channels. To investigate the system performance, we first derive an easy-to-compute approximate expression for the exact outage probability to reduce the computational cost. Furthermore, we present compact expressions for the asymptotic outage probability and the asymptotic symbol error rate, which characterize the two factors governing the network performance at high signal-to-noise ratio (SNR), namely the diversity order and the coding gain. Additionally, based on the asymptotic outage probability, we determine the optimal power allocation between the relay and the sources to minimize the overall outage probability under the assumption that both sources have identical transmit power. The correctness of the analysis is validated through Monte Carlo simulations. Our derived results can be applied to general operating scenarios with distinct TWDP fading parameters, which encompass Rayleigh and Rician fading as special cases, and an arbitrary number of relays.
To ensure the integrity and security of cloud tenants' workloads and to prevent unexpected interference among tenants, the cloud platform must make sure that system behaviors are trusted. By analyzing the threats that exist in the cloud platform, a novel trusted domain hierarchical model (TDHM) based on noninterference theory is proposed in this paper to solve these problems. First of all, the abstract modeling of tenants' computing environments and trusted domains (TDs) is introduced to design TDHM with formal methods. Secondly, the corresponding constraints for trusted running are given to satisfy the security requirements of tenants' TDs, and the security properties of TDHM are analyzed. After that, the trusted behavior of a TD is defined based on these properties, and the corresponding decision theorem is proved. This illustrates that the design and implementation of TDs in the cloud follow the model, with the characteristics of trusted behavior. Finally, the implementation of a prototype system is introduced based on our previous work, and evaluation results show that the performance loss is within an acceptable range.
Non-orthogonal time-frequency division multiplexing (NTFDM) transmission scheme has been proposed to further improve the bandwidth efficiency and overcome the drawbacks of the conventional orthogonal frequency division multiplexing (OFDM) method. Based on such approach, the fast signal detection algorithm, semidefinite programming (SDP) detection, has been studied. As the coefficient matrix tends to be ill conditioned, the modified SDP algorithm combined with successive interference cancellation (SIC) has been developed. The improved algorithm is a good tradeoff between performance and detection complexity. Simulation results show that the proposed algorithm can achieve better performance than cutting plane aided SDP method.
Wireless networks contain an inherent distributed spatial diversity that can be exploited by relays. Relay networks can take advantage of the broadcast nature of wireless transmission, but require more radio resources to transmit data because of their multi-hop character. Fortunately, the incremental relaying technique, which can choose direct or multi-hop transmission adaptively, can utilize resources efficiently. This article focuses on incremental transmission with amplify-and-forward (AF) relays. A practical hybrid automatic repeat request (HARQ) protocol is designed, and the related optimal relay selection strategy is proposed. To analyze the cooperative diversity of the system with the proposed protocol, a lower bound on the capacity is deduced. Simulation and analytical results indicate that, by adopting the optimal relay selection strategy, the system with the proposed HARQ protocol can achieve an order of cooperative diversity that equals the aggregate number of relay and source nodes.
A technique named overlapped frequency-time division multiplexing (OVFTDM) is proposed in this article. The technique is derived from the Nyquist system and the frequency-time division multiplexing system. When signals are compactly overlapped without orthogonality in the time domain, the technique is named overlapped time division multiplexing (OVTDM), whereas when signals are compactly overlapped without orthogonality in the frequency domain, the technique is called overlapped frequency division multiplexing (OVFDM). To further improve spectral efficiency, OVFTDM, in which signals are overlapped in both the frequency domain and the time domain, is explored. OVFTDM does not depend on orthogonality in either the time domain or the frequency domain, as the Nyquist system or OFDM system does, but on the convolutional constraint relationship among signals. Therefore, not only the spectral efficiency but also the reliability is improved. Simulations verify the validity of this theory.
As is well known, the MIMO technology plays an important role for the link transmissions. This paper considers the general case for the ergodic capacity in doubly correlated frequency-selective MIMO channel. In the study, the geometrical MIMO channel model is presented. Based on the formula of MIMO ergodic capacity, the capacity limits are investigated with arbitrary finite number of antennas in the frequency-selective MIMO channel. It first derives the exact expressions for the upper bound and lower bound in doubly correlated MIMO channel. The results for the single-ended correlation and independent identically distributed (i.i.d.) MIMO channel are also obtained as special cases. Then the simple expressions of the capacity bounds are attained at high SNR. Finally, some results are provided by Monte Carlo simulations to verify the tightness of the derived bounds.
In the two-tier femtocell network, a central macrocell is underlaid with a large number of shorter range femtocell hotspots, which is preferably in the universal frequency reuse mode. This kind of new network architecture brings about urgent challenges to the schemes of interference management and the radio resource allocation. Motivated by these challenges, three contributions are made in this paper: 1) A novel joint subchannel and power allocation problem for orthogonal frequency division multiple access (OFDMA) downlink based femtocells is formulated on the premise of minimizing radiated interference of every Femto base station. 2) The pseudo-handover based scheduling information exchange method is proposed to exchange the co-tier and cross-tier information, and thus avoid the collision interference. 3) An iterative scheme of power control and subchannel is proposed to solve the formulated problem in contribution 1), which is an NP-complete problem. Through simulations and comparisons with four other schemes, better performance in reducing interference and improving the spectrum efficiency is achieved by the proposed scheme.
This article analyzes the energy-efficiency performances of fixed relaying schemes, selection relaying schemes and incremental relaying schemes in the three-node relay network. The closed-form asymptotic energy per good-bit (EPG) expressions for the state-of-the-art relaying protocols at high signal-to-noise ratio (SNR) regime are derived. In the formulation of the energy consumption model, the transmission, circuit and retransmission energies are all taken into account. To facilitate the comparison of energy-efficiency performances between different relaying protocols, the link reliabilities and retransmission probabilities are determined by the asymptotic outage probabilities at high SNR regime under the Rayleigh fading assumption. Computer simulations are carried out in both symmetric and asymmetric relay networks. The simulation results show the differences of system energy expenditure between these state-of-the-art relaying protocols. Finally some practical implications can be made from the observation.
In the analysis of overlaid wireless ad-hoc networks, the underlying node distributions are commonly assumed to be two independent homogeneous Poisson point processes. In this paper, by using stochastic geometry tools, a new inhomogeneous overlaid wireless ad-hoc network model is studied and the outage probability is analyzed. By assuming that the primary (PR) network nodes are distributed as a Poisson point process (PPP) and the secondary (SR) network nodes are distributed as a Matern cluster process, upper and lower bounds for the transmission capacity of the primary network and that of the secondary network are presented. Simulation results show that the transmission capacities of the PR and SR networks both have a small increment due to the inhomogeneity of the SR network.
The hybrid mobile satellite system operating in a single frequency network (SFN) mode is becoming increasingly attractive. The combination of the satellite component (SC) and the terrestrial component (TC) promises a better quality of service (QoS). Multimedia broadcast and multicast services (MBMS) are expected to prevail in this kind of system. Several space frequency (SF) or space time (ST) codes have been proposed to enhance the system performance, owing to the lack of a reverse link and the omni-directional transmission. However, they mostly consider a system with only one SC and one TC and fail to make full use of the available diversities. This paper presents a novel way to realize dual-polarization multiple input multiple output (MIMO) transmission by using a space time frequency (STF) code. The theoretical analysis and simulations indicate that the application of the STF code can improve the system performance dramatically. A higher diversity gain can be achieved due to the cooperative transmission of SC and TC, while the coding gain can be enhanced by reusing the STF code between SCs or TCs. Even if some of the links are lost, the system can still work properly and benefit from the STF code. The relative delay can result in a degradation of up to 0.5 dB in the coding gain.
Considering joint channel estimation and data detection in time-varying orthogonal frequency division multiplexing (OFDM) systems, and addressing the transmission performance degradation induced by severe inter-carrier interference (ICI) at very high speeds, a new progressive iterative channel estimation scheme is proposed. To alleviate the error propagation of inaccurate data due to ICI, the measurement subcarriers in the Kalman filter are designed to be extended progressively from the pilot subcarriers to all the subcarriers through the iterations. Furthermore, in the iteration process, the interference of the non-pilot data on the measurement subcarriers is treated as part of the noise in the modified Kalman filter, which improves the estimation accuracy. Simulation indicates that the proposed scheme improves the performance in fast time-varying situations.
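The progressive idea can be sketched with a per-subcarrier random-walk Kalman filter whose measurement set starts from the pilot subcarriers and is extended to all subcarriers in later iterations, with detected data symbols treated as noisier pseudo-pilots; the channel model, noise variances and the single-symbol simplification (ICI is ignored) are assumptions, not the paper's modified filter.

```python
# Sketch: per-subcarrier Kalman estimation with a measurement set that grows
# from pilots to all subcarriers over the iterations (assumed parameters).
import numpy as np

def qpsk_slice(s):
    return (np.sign(s.real) + 1j * np.sign(s.imag)) / np.sqrt(2)

rng = np.random.default_rng(0)
N, pilots = 64, np.arange(0, 64, 8)
taps = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
h_true = np.fft.fft(taps, N)                      # smooth frequency response
tx = qpsk_slice(rng.standard_normal(N) + 1j * rng.standard_normal(N))
rx = h_true * tx + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

h_est, p = np.zeros(N, complex), np.ones(N)
q, r_pilot, r_data = 1e-3, 0.05, 0.2              # assumed noise variances
for it in range(3):
    meas = pilots if it == 0 else np.arange(N)    # progressively extended set
    for k in meas:
        known = k in pilots
        x_k = tx[k] if known else qpsk_slice(rx[k] / (h_est[k] + 1e-12))
        r = r_pilot if known else r_data          # data acts as a noisier pilot
        p[k] += q                                                 # predict
        kg = p[k] * np.conj(x_k) / (p[k] * abs(x_k) ** 2 + r)     # gain
        h_est[k] += kg * (rx[k] - x_k * h_est[k])                 # update
        p[k] *= 1 - (kg * x_k).real
    if it == 0:                                   # fill non-pilot subcarriers once
        h_est = (np.interp(np.arange(N), pilots, h_est[pilots].real)
                 + 1j * np.interp(np.arange(N), pilots, h_est[pilots].imag))
print("MSE:", np.mean(np.abs(h_est - h_true) ** 2))
```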
It is necessary to estimate channel quality in order to put Bluetooth adaptive packet selection strategies into practice. However, the current Bluetooth channel quality estimation algorithms are either poor in timeliness or not applicable to systems which only support basic rate (BR) data packets (the Gaussian frequency shift keying (GFSK) modulation scheme). This paper investigates applying a channel quality estimation algorithm based on the power spectrum to Bluetooth adaptive packet selection strategies. Simulation results and analysis show that the proposed power-spectrum-based channel quality estimation algorithm can achieve an accuracy of better than 0.2 dB in the estimation range required by Bluetooth adaptive packet selection strategies. It requires simple calculation and has good timeliness. The algorithm is also suitable for the different modulation schemes of Bluetooth data packets. It provides a good precondition for the realization of Bluetooth adaptive packet selection strategies.
IEEE 802.15.4 is one of the low-layer communication standards for personal area networks (PANs) and wireless sensor networks (WSNs), which may suffer interference from other wireless devices in the industrial, scientific and medical (ISM) frequency bands, especially in the home environment, such as IEEE 802.11b devices, Bluetooth, cordless telephones and microwave oven radiation. This article examines the mutual interference effects of 2.4 GHz devices widely deployed at home, via both theoretical analysis and real-life experiments. An analytical model is proposed to estimate the packet error rate (PER) of radio frequency (RF) coexistent networks. The model is verified through a series of experiments. The experimental results also show that Bluetooth has little interference impact on IEEE 802.15.4 sensor networks, and that the effect of microwave oven radiation on IEEE 802.15.4 sensor devices is tolerable if the device is a few meters away from the oven. In contrast, IEEE 802.11b wireless networks can cause problems for IEEE 802.15.4, although the effects can be significantly reduced by proper channel selection. This article also proposes an interference duration model, which will be helpful for modeling coexistence in simulation. Simulation results show that the stationary scenario agrees with the experimental results very well.
Target tracking is one of the most important applications of wireless sensor networks. Optimized computation and energy dissipation are critical requirements to save the limited resource of sensor nodes. A new robust and energy-efficient collaborative target tracking framework is proposed in this article. After a target is detected, only one active cluster is responsible for the tracking task at each time step. The tracking algorithm is distributed by passing the sensing and computation operations from one cluster to another. An event-driven cluster reforming scheme is also proposed for balancing energy consumption among nodes. Observations from three cluster members are chosen and a new class of particle filter termed cost-reference particle filter (CRPF) is introduced to estimate the target motion at the cluster head. This CRPF method is quite robust for wireless sensor network tracking applications because it drops the strong assumptions of knowing the probability distributions of the system process and observation noises. In simulation experiments, the performance of the proposed collaborative target tracking algorithm is evaluated by the metrics of tracking precision and network energy consumption.
In this paper, a network scenario of two-way relaying over orthogonal frequency division multiplexing (OFDM) is considered, in which two nodes intend to exchange information via a relay using physical-layer network coding (PLNC). Assuming that full channel knowledge is available, an optimization problem which maximizes the achievable sum rate under a sum-power constraint is investigated. It is shown that the optimization problem is non-convex, for which it is difficult to find the global optimum with acceptable computational complexity. In consequence, a low-complexity optimal power allocation scheme is proposed for practical implementation. A link capacity diagram is first employed for power allocation on each subcarrier. Subsequently, an equivalent relaxed optimization problem and the Karush-Kuhn-Tucker (KKT) conditions are developed for power allocation among the subcarriers. Simulation results demonstrate that substantial capacity gains are achieved by the proposed schemes with low computational effort.
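As a stand-in for the KKT-based per-subcarrier allocation (the paper's relaxed problem is not reproduced here), the classic water-filling solution can be sketched as follows; the per-subcarrier effective gains and the power budget are assumed values.

```python
# Water-filling sketch: the KKT solution p_k = (mu - 1/g_k)+ for allocating a
# sum power budget across subcarriers; the water level mu is found by bisection.
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    g = np.asarray(gains, float)
    lo, hi = 0.0, p_total + np.sum(1.0 / g)          # bracket the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / g, 0.0)            # KKT stationarity condition
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return np.maximum(0.5 * (lo + hi) - 1.0 / g, 0.0)

gains = [2.5, 0.4, 1.2, 0.1, 3.0, 0.8]               # assumed effective gains
p = water_filling(gains, p_total=6.0)
print(p, p.sum(), np.sum(np.log2(1 + np.asarray(gains) * p)))
```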
In order to improve the efficiency and fairness of radio resource utilization, a scheme of dynamic cooperative subcarrier and power allocation based on Nash bargaining solution (NBS-DCSPA) is proposed in the uplink of a three-node symmetric cooperative orthogonal frequency division multiple access (OFDMA) system. In the proposed NBS-DCSPA scheme, resource allocation problem is formulated as a two-person subcarrier and power allocation bargaining game (SPABG) to maximize the system utility, under the constraints of each user’s maximal power and minimal rate, while considering the fairness between the two users. Firstly, the equivalent direct channel gain of the relay link is introduced to decide the transmission mode of each subcarrier. Then, all subcarriers can be dynamically allocated to the two users in terms of their selected transmission mode. After that, the adaptive power allocation scheme combined with dynamic subcarrier allocation is optimized according to NBS. Finally, computer simulation is conducted to show the efficiency and fairness performance of the proposed NBS-DCSPA scheme.
In this paper a new automatic frequency control (AFC) scheme was proposed, which could be used for the receiver of low earth orbit (LEO) satellite communication system in continuous transmitting scenario. By employing the time varying characteristic of particle filter technique, the new scheme combined the preamble based estimating step and data based estimating step to provide initial probability density recursively. Theoretical analysis proved that the proposed AFC scheme could provide better performance than the two-step scheme. The same conclusion was achieved by computer simulations with the criteria of root-mean square (RMS) frequency estimating performance and bit error rate performance.
In multi-cell cooperative MIMO systems, base stations (BSs) can exchange and utilize the channel state information (CSI) of adjacent-cell users to manage co-channel interference. Users quantize the CSI of the desired channel and the interference channels using finite-rate feedback links, and the BS then generates cooperative block diagonalization (BD) precoding matrices from the quantized CSI at the transmitter to suppress co-channel interference. In this paper, a novel adaptive bit allocation scheme is proposed to minimize the rate loss due to imperfect CSI. We derive the closed-form expression of the rate loss caused by both channel delay and limited feedback. Based on the derived rate loss expression, the proposed scheme adaptively allocates more bits to quantize the better channels with smaller delays and fewer bits to the worse channels with larger delays. Simulation results show that the proposed scheme yields higher performance than other allocation schemes.
Distributed cloud architecture, which consists of many cloud computing-storage resources (CCSRs) distributed across a large geographic area, has been widely implemented and has received significant attention from academia. However, little effort has been made to examine the changes in operating cost structure brought by the distributed cloud scheme, or to explore how to reap economic benefits from its geo-diversity. To tackle this issue, this paper formulates cost optimizations for cloud platforms based on a generic expense model of the distributed cloud, taking into account the major components of operating cost. The best deployment schemes were obtained through numerical simulation. The optimal number of edge CCSRs and their corresponding placements were found to be determined by the ratio among the various overhead components. Both the model study and the numerical simulation shed light on the practical deployment of distributed clouds with high cost-effectiveness.
This paper investigates the power allocation issue for joint transmission in heterogeneous networks (HetNets), which are characterized by severe cross-tier interference. The optimization problem of maximizing the HetNet throughput is formulated. The original problem turns out to be non-convex, and its global optimum cannot be obtained by conventional optimization methods. This paper develops a novel method to achieve the global optimum by turning the original problem into a quasi-convex problem. In addition, this paper considers a constant power allocation scheme as a tradeoff between system throughput and computational complexity. Based on duality gap theory, the bound of the constant power allocation scheme is mathematically derived. Numerical results under different system parameters indicate that both proposed schemes outperform conventional interference coordination schemes.
Various cognitive network technologies are developing rapidly. In this article, power and spectrum allocation in a multi-hop cognitive radio network (CRN) with linear topology is investigated. The overall goal is to minimize the outage probability and promote spectrum utility, including total reward and fairness, while simultaneously meeting the limits on total transmit power and the interference threshold to the primary user. The problem is solved with convex optimization and the artificial bee colony (ABC) algorithm jointly. Simulation shows that the proposed scheme not only minimizes the outage probability, but also realizes a better use of the spectrum.
An amplify-and-forward (AF) based multi-relay network is studied. In order to minimize the system outage probability with a required transmission rate, a joint power allocation (PA) and multi-relay selection scheme is proposed under both total and individual power constraints (TIPC). In the proposed scheme, the idea of ordering is adopted to avoid exhaustive search without losing much system performance. Besides the channel quality, the ordering algorithm proposed in this article also takes the relays’ maximal output ability into consideration, which is usually ignored in traditional relay ordering algorithms. In addition, a simple power reallocation method is provided to avoid repetitive PA operations during the search over all possible relay subsets. Adopting the idea of ordering and using the proposed power reallocation method lead to a remarkable decrease in computational complexity, making the scheme easier and more feasible to implement in practical communication scenarios. Simulations show that the proposed multi-relay selection scheme provides performance similar to the optimal scheme with optimal PA and exhaustive search (OPAES), but with much lower complexity.
Multiple-input multiple-output (MIMO) interference broadcast channel (IBC) plays an important role in modern wireless communications. The upper bound of degrees of freedom (DoF) and its corresponding achievable schemes have been investigated. However, all the achievable schemes require perfect channel state information at transmitters (CSIT). In the absence of CSIT, the DoF value is still unknown. This article mainly focuses on the G-cell K-user MIMO IBC, where there are M antennas at each transmitter and N antennas at each receiver. The transmitters only know the channel coherence time intervals rather than the values of the channel coefficients. The users in the same cell are assumed to be able to share channel information. Based on a heterogeneous semi-staggered block fading model, a blind interference alignment (IA) scheme is proposed for this scenario. It is shown that when and , a total of DoF can be achieved. The inner bound is the same as the decomposition DoF upper bound. Since complexity is an important performance index for evaluating an achievable scheme, a quantitative analysis of the complexity is presented.
Focusing on the load balancing problem among multiple cells in long term evolution (LTE) networks with mixed users, a new multi-objective optimization modeling strategy, which integrates guaranteed bit rate (GBR) and best effort (BE) users, was proposed. In consideration of the quality of service (QoS) priorities of different users, a decomposition method was presented to solve the original model. By applying the Lagrange multiplier method, sub-optimal solutions for mixed users were deduced. Based on the derived solutions, including resource allocation schemes, a practical multi-objective load balancing algorithm jointly dealing with mixed users was given. Simulation shows a significant improvement of GBR users’ satisfaction level and BE users’ throughput in LTE networks by using the proposed algorithm.
The transmission antennas of cooperative systems are spatially distributed on multiple nodes, so the received signals can be asynchronous due to propagation delays. A receiving scheme for cooperative relay networks is proposed in which multiple asynchronous signals are reconstructed at the receiver by forward and backward interference cancellation. The scheme obtains cooperative transmission diversity gains under significant delay without requiring timing synchronization or orthogonal channelization between relays. Analysis and simulation show that the bit error rate (BER) of the proposed scheme is similar to that of the Alamouti code, and that the scheme achieves the diversity order of the orthogonal transmission scheme with only minimal BER losses. It is demonstrated that the performance can be further improved by adding an error correcting code (ECC).
With the development of green communication technologies, the energy efficiency of wireless networks has become increasingly important. However, in multi-relay systems, most related work is based on end-to-end performance, and the energy efficiency at the relay side has not received sufficient attention. In this paper, we consider an amplify-and-forward orthogonal frequency division multiplexing (OFDM) multi-relay system and, taking the energy efficiency and residual energy of each node as parameters, design a criterion to decide whether a relay participates in cooperation. An energy-efficiency-based asynchronous power iteration method is proposed, and the existence and convergence of the Nash equilibrium in this method are proved. Furthermore, a joint optimization algorithm for subcarrier pairing, relay selection and energy allocation is proposed, and a genetic algorithm and an iterative method are incorporated to further improve the convergence speed of the algorithm. Simulation results show that, under the constraint of a minimum data transmission rate, the proposed algorithm significantly improves the energy efficiency at the relay side, reduces power consumption and prolongs node lifetime.
Machine learning has powerful potential for performing template attacks (TA) on cryptographic devices. To improve the accuracy and reduce the time consumption of the electromagnetic template attack (ETA), a multi-class directed acyclic graph support vector machine (DAGSVM) method is proposed to predict the Hamming weight of the key. The method generates K(K−1)/2 binary support vector machine (SVM) classifiers and realizes K-class prediction using a rooted binary directed acyclic graph (DAG) testing model. Further, particle swarm optimization (PSO) is used for optimal selection of the DAGSVM model parameters to improve the performance of DAGSVM. By exploiting the electromagnetic emanations captured while a chip was implementing the RC4 algorithm in software, the computational complexity and performance of several multi-class machine learning methods, such as DAGSVM, one-versus-one (OVO) SVM, one-versus-all (OVA) SVM, probabilistic neural networks (PNN), K-means clustering and fuzzy neural network (FNN), are investigated. In the same scenario, the highest classification accuracy of the Hamming weight of the key reached 100%, 95.33%, 85%, 74%, 49.67% and 38% for DAGSVM, OVOSVM, OVASVM, PNN, K-means and FNN, respectively. The experimental results demonstrate that the proposed model achieves higher predictive accuracy and faster convergence.
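The pairwise-classifier and DAG-testing structure described above can be sketched as follows, assuming scikit-learn SVMs; the RBF kernel, synthetic traces and Hamming-weight labels are illustrative placeholders, and the PSO parameter tuning is omitted.

```python
import itertools
import numpy as np
from sklearn.svm import SVC

def train_dagsvm(X, y, classes, **svm_kw):
    """Train K(K-1)/2 pairwise SVMs, one per class pair."""
    models = {}
    for a, b in itertools.combinations(classes, 2):
        mask = np.isin(y, [a, b])
        models[(a, b)] = SVC(**svm_kw).fit(X[mask], y[mask])
    return models

def dag_predict(models, classes, x):
    """Rooted binary DAG test: keep a candidate list and eliminate one
    class per pairwise decision until a single class remains."""
    cand = list(classes)
    while len(cand) > 1:
        a, b = cand[0], cand[-1]
        key = (a, b) if (a, b) in models else (b, a)
        pred = models[key].predict(x.reshape(1, -1))[0]
        cand.remove(b if pred == a else a)
    return cand[0]

# Illustrative use with synthetic "traces" labelled by Hamming weight 0..8.
X = np.random.randn(900, 50)
y = np.random.randint(0, 9, size=900)
clf = train_dagsvm(X, y, classes=range(9), kernel="rbf", C=1.0, gamma="scale")
print(dag_predict(clf, range(9), X[0]))
```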
In vehicular ad-hoc networks (VANETs), beacon messages are designed to periodically broadcast status information (velocity, direction, etc.), which enables neighbor awareness and supports safety applications. However, under high-density scenarios, fixed-rate beaconing can cause severe congestion and lower the delivery rate of beacons and other kinds of messages. In this paper, we describe the beaconing rate control approach with a one-dimensional Markov model, and based on this, an optimized beacon rate control scheme is proposed which aims to mitigate the congestion and maximize the delivery efficiency of beaconing. Analytical and simulation results show that our proposed scheme achieves higher adaptability and beaconing efficiency compared with other schemes in various environments.
Coordinated multi-point (CoMP) joint transmission is considered in the 3rd generation partnership project (3GPP) long term evolution (LTE)-advanced as a key technique to mitigate inter-cell interference and improve the cell-edge performance. To effectively apply CoMP joint transmission, efficient frequency reuse schemes need to be designed to support resource management cooperation among coordinated cells. However, most of the existing frequency reuse schemes are not suitable for CoMP systems due to not considering multi-point joint transmission scenarios in their frequency reuse rules. In addition, the restrictions of frequency resources in those schemes result in a high blocking probability. To solve the above two problems, a multi-beam cooperative frequency reuse (MBCFR) scheme is proposed in this paper, which reuses all the available frequency resources in each sector and supports multi-beam joint transmission for cell-edge users. Besides, the blocking probability is proved to be efficiently reduced. Moreover, a frequency-segment-sequence based MBCFR scheme is introduced to further reduce the inter-cell interference. System level simulations demonstrate that the proposed scheme results in higher cell-edge average throughput and cell-average throughput with lower blocking probability.
In this paper we consider interference-aware uplink transmission schemes for multicell multiple-input multiple-output (MIMO) systems. Unlike conventional transmission schemes that ignore the interference potentially caused to other cells, we jointly optimize the transceiver beamforming vectors to maximize the desired signals while removing the intercell interference. Specifically, for a two-cell system where each transmitter is equipped with two antennas, we derive the closed-form expression for the transmission scheme called coordinated beamforming (CBF) via generalized-eigen analysis. Moreover, when asymmetric interference is considered, we give a balancing beamforming (BBF) scheme in which the interfering transmitter strikes a compromise between maximizing the desired signal and minimizing the generated interference. Simulation results show that both schemes perform better than conventional schemes under different scenarios.
This paper proposes a novel adaptive time division multiple access (TDMA) slot assignment protocol (ATSA) for vehicular ad-hoc networks. ATSA divides the time slots into different sets according to vehicles moving in opposite directions. When a node accesses the network, it chooses a frame length and competes for a slot based on its direction and location to communicate with the other nodes. Based on a binary tree algorithm, the frame length is dynamically doubled or shortened, and the ratio of the two slot sets is adjusted to decrease the probability of transmission collisions. Theoretical analysis proves that the ATSA protocol can reduce the time delay by at least 20% compared with the media access control protocol for vehicular ad-hoc networks (VeMAC) and by 30% compared with the ad-hoc MAC protocol. The simulation experiments show that ATSA has good scalability, collisions are reduced by about 50% compared with VeMAC, and channel utilization is significantly improved over several existing protocols.
Ring signatures enable members to sign anonymously without a manager, and have many online applications such as e-voting, e-money and whistle blowing. As a promising post-quantum candidate, lattice-based cryptography has attracted much attention recently. Several efficient lattice-based ring signatures have been naturally constructed from lattice basis delegation, but all of them have large verification key sizes. We observe that the split-SIS problem, a concept introduced by Nguyen et al. at PKC’15, is effective in reducing the public key sizes of lattice-based ring signature schemes built from basis delegation. In this research, we first define an extended concept called the extended split-SIS problem, and then prove that the extended problem is as hard as approximating the SIVP problem within a certain polynomial factor. Moreover, we present an improved ring signature and prove that it is anonymous and unforgeable against insider corruption. Finally, we give two other improved existing ring signature schemes from lattices and compare them with the original schemes in terms of verification key size. Our results illustrate that the public key sizes of the proposed schemes are reduced significantly.
In order to predict traffic flow more accurately and improve network performance, a new traffic prediction model named exo-LSTM, based on multifractal wavelet theory, is proposed. Exo refers to an exogenous sequence used to provide a detail sequence for the model, and LSTM refers to long short-term memory, used to predict unstable traffic flow. Applying multifractal traffic flow to the exo-LSTM model and other existing models, the experimental results show that the exo-LSTM prediction model achieves better prediction accuracy.
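A minimal sketch of the exogenous-plus-LSTM idea is shown below, assuming PyTorch; the model simply concatenates the traffic sample with an exogenous detail sample at each time step. The hidden size, window length and synthetic data are assumptions standing in for the multifractal-wavelet detail sequence described in the abstract.

```python
import torch
import torch.nn as nn

class ExoLSTM(nn.Module):
    """LSTM predictor whose input at each step is the traffic sample
    concatenated with an exogenous (detail) sample, as a rough analogue
    of feeding the model an auxiliary multifractal-wavelet sequence."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, traffic, exo):
        x = torch.stack([traffic, exo], dim=-1)      # (batch, T, 2)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)  # one-step-ahead forecast

# Minimal training-step example on synthetic sequences.
model = ExoLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
traffic = torch.randn(16, 64)   # 16 windows of length 64
exo = torch.randn(16, 64)
target = torch.randn(16)
loss = nn.functional.mse_loss(model(traffic, exo), target)
opt.zero_grad(); loss.backward(); opt.step()
```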
This article proposes a new space-time cooperative diversity scheme called the full feedback-based cooperative diversity scheme (FFBCD). In contrast to conventional adaptive space-time cooperative diversity schemes that utilize feedback from only the destination node, the new scheme utilizes feedback from both the destination node and the cooperation node. With the feedback from the destination node, an occasional successful reception at the destination node in the information distribution stage can be detected, thus avoiding unnecessary retransmissions in the information delivery stage. The feedback from the cooperation node indicates its receiving state in the information distribution stage, and the source node and the cooperation node will not perform cooperative retransmission during the information delivery stage unless the cooperation node received successfully in the information distribution stage. In this way the new scheme can reduce the number of transmission attempts and improve channel utilization. Expressions for the average number of transmission attempts are given. Numerical approximations and simulation results both show that the new scheme performs better than the non-cooperative scheme and the conventional adaptive space-time cooperative diversity scheme.
It is well known that complex orthogonal space-time block codes with full diversity and full rate cannot have more than two transmit antennas, while non-orthogonal designs lose the simplicity of maximum likelihood decoding at receivers. In this paper, we propose a new quasi-orthogonal space-time block code. The code is quasi-orthogonal and can reduce the decoding complexity significantly by employing zero-forcing and minimum mean squared error criteria. This paper also presents simulation results for two examples with three and four transmit antennas, respectively.
Introducing multiple-input multiple-output (MIMO) relay channel could offer significant capacity gain. And it is of great importance to develop effective power allocation strategies to achieve power efficiency and improve channel capacity in amplify-and-forward relay system. This article investigates a two-hop MIMO relay system with multiple antennas in relay node (RN) and receiver (RX). Maximizing capacity with antenna selection (MCAS) and maximizing capacity with eigen-decomposition (MCED) schemes are proposed to efficiently allocate power among antennas in RN under first and second hop limited scenarios. The analysis and simulation results show that both MCED and MCAS can improve the channel capacity compared with uniform power allocation (UPA) scheme in most of the studied areas. The MCAS bears comparison with MCED with an acceptable capacity loss, but lowers the complexity by saving channel state information (CSI) feedback to the transmitter (TX). Moreover, when the RN is close to RX, the performance of UPA is also close to the upper bound as the performance of first hop is limited.
By deducing the distribution of the normalized channel covariance matrix, a novel limited feedback scheme is proposed under multiple users (MU) multiple-input multiple-output (MIMO) broadcast channel (BC) system. The proposed scheme has advantages in three aspects. First, it has no constraints on the number of users or antennas. Second, each user’s feedback bits are independent of the number of receiving antennas. Third, the proposed scheme avoids the storage of large-size codebook on the transceivers. Simulation results show that the performance of the proposed scheme is close to the perfect channel state information (CSI) case and it just needs a small number of feedback bits.
This article presents a genetic algorithm (GA) as an autonomic approach for joint radio resource management (JRRM) among heterogeneous radio access technologies (RATs) in end-to-end reconfigurable systems. Joint session admission control (JOSAC) and bandwidth allocation are combined into a specific decision made by the operations of the genetic algorithm with certain advisable modifications. The proposed algorithm is triggered on the following two conditions. When a session is initiated, it is triggered so that the session camps on the most appropriate RAT and selects the most suitable bandwidth for the desired service. When a session terminates, it is also used to adjust the distribution of the ongoing sessions through handovers. This increases the adjustment frequency of the JRRM controller for the best system performance. Simulation results indicate that the proposed autonomic JRRM scheme not only effectively reduces the number of handovers, but also achieves a good trade-off between spectrum utility and blocking probability.
The authors focus on the K-user multiple-input multiple-output (MIMO) Gaussian interference channel (IC) with M transmitting antennas and N receiving antennas, in which and . The channel coefficients are variable, time varying or frequency selective, and drawn from a continuous distribution. Based on ergodic interference alignment (IA), an achievable scheme was proposed to achieve a total of degrees of freedom (DoF). The ergodic IA scheme can reach the optimal DoF value with simple linear beamforming and finite symbols. Furthermore, the achievable rate of the ergodic IA scheme was derived at any signal-to-noise ratio (SNR). The performance of the proposed scheme is evaluated through numerical simulation.
This article studies the downlink subcarrier assignment problem of maximizing the rate-sum capacity subject to total power and proportional rate constraints in orthogonal frequency division multiplexing (OFDM) systems. Previous algorithms assume that the initial power is equally distributed over all subcarriers; the presence of path loss renders this assumption invalid. This article proposes a novel subcarrier assignment algorithm which makes full use of path loss and rate proportionality information to improve the rate-sum capacity. The proposed algorithm determines an optimal initial power allocation according to the path losses and rate proportionalities of different users, assigns subcarriers to users in a greedy fashion, and then exchanges subcarriers between users to obtain a fairer rate distribution. Simulation results show that the proposed algorithm achieves approximately double the capacity of static assignment schemes, such as the fixed frequency band approach, and obtains better performance than previous subcarrier assignment algorithms in the presence of different path losses and proportional rate requirements.
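The greedy assignment step under proportional rate targets can be sketched as below; this is a generic sketch with equal per-subcarrier power, not the authors' path-loss-aware initial power allocation or the final subcarrier-exchange stage, and the gain matrix and proportion vector are assumed inputs.

```python
import numpy as np

def greedy_subcarrier_assignment(gains, proportions, p_per_sc=1.0):
    """gains: (K users, N subcarriers) channel-to-noise ratios.
    Each user first takes its best free subcarrier; each remaining
    subcarrier then goes to the user whose rate is furthest below
    its proportional target."""
    K, N = gains.shape
    rate = np.zeros(K)
    assign = -np.ones(N, dtype=int)
    free = set(range(N))
    for k in range(K):                       # initial round: best subcarrier per user
        n = max(free, key=lambda j: gains[k, j])
        assign[n] = k
        rate[k] += np.log2(1 + p_per_sc * gains[k, n])
        free.remove(n)
    while free:                              # give the next subcarrier to the neediest user
        k = int(np.argmin(rate / proportions))
        n = max(free, key=lambda j: gains[k, j])
        assign[n] = k
        rate[k] += np.log2(1 + p_per_sc * gains[k, n])
        free.remove(n)
    return assign, rate

gains = np.random.exponential(1.0, size=(3, 16))
assign, rate = greedy_subcarrier_assignment(gains, proportions=np.array([1.0, 2.0, 1.0]))
```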
A novel sequential Monte Carlo (SMC) algorithm is provided for the multiple maneuvering Ad-hoc network terminals direction of arrival (DOA) tracking. A nonlinear mobility and observation model is adopted, which can describe the motion features of the Ad-hoc network terminal more practically. The algorithm does not need any additional measurement equipment. Simulation result shows its significant tracking accuracy.
Lithium niobate (LiNbO3) is a useful photonic material for its electro-optic and nonlinear optical properties. In this paper, I will report developments of LiNbO3 based optical devices for fiber communication, including high-performance modulators and high efficiency wavelength converters.
According to the property rights model of cognitive radio, primary users who own the spectral resource have the right to lease or trade part of it to secondary users in exchange for appropriate profit. In this paper, an implementation of this framework is investigated, in which a primary link can lease its owned spectrum to secondary nodes in exchange for cooperation (relaying). A novel pricing model is proposed that enables trading between spectrum and cooperation. Based on the demand of the secondary nodes, the primary link attempts to maximize its quality of service (QoS) by setting the price of the spectrum. Taking the price asked by the primary link, the secondary nodes aim to maximize their profits by deciding the amount of spectrum to buy and then paying for it by cooperative transmission. The investigated model is conveniently cast in the framework of seller/buyer (Stackelberg) games. Analysis and numerical results show that our pricing model is effective and practical for spectrum leasing based on trading spectral resources for cooperation.
This paper considers a frequency-division duplex (FDD) two-way channel with channel estimation error, where the channel gains are independent of each other. It derives exact closed-form outage probability expressions for the FDD system with analog network coding (ANC) using probability theory. To provide more insight, an approximated version of the exact outage probability is also developed for the medium-to-high signal-to-noise ratio (SNR) region. The simulation results show that the derived exact outage probabilities match the results of Monte Carlo simulations in all SNR regions, while the approximated outage probabilities also approach the simulation results when the channel condition is good. Interestingly, computer simulation shows that ANC in the FDD two-way channel outperforms that in the time-division duplex (TDD) channel.
Most existing Ad-hoc routing protocols use the shortest path algorithm with a hop count metric to select paths. It is appropriate in single-rate wireless networks, but has a tendency to select paths containing long-distance links that have low data rates and reduced reliability in multi-rate networks. This article introduces a high throughput routing algorithm utilizing the multi-rate capability and some mesh characteristics in wireless fidelity (WiFi) mesh networks. It uses the medium access control (MAC) transmission time as the routing metric, which is estimated by the information passed up from the physical layer. When the proposed algorithm is adopted, the Ad-hoc on-demand distance vector (AODV) routing can be improved as high throughput AODV (HT-AODV). Simulation results show that HT-AODV is capable of establishing a route that has high data-rate, short end-to-end delay and great network throughput.
Energy saving and fast response of data gathering are two crucial factors for the performance of wireless sensor networks. A dynamic tree based energy equalizing routing scheme (DTEER) was proposed to gather data with low energy consumption and low time delay. DTEER introduces a dynamic multi-hop route selecting scheme based on weight-value and height-value to form a dynamic tree, and a mechanism similar to token passing to elect the root of the tree. DTEER can simply and rapidly organize all the nodes with low overhead and is robust to topology changes. Compared with power-efficient gathering in sensor information systems (PEGASIS) and the hybrid, energy-efficient, distributed clustering approach (HEED), the simulation results show that DTEER achieves its intention of consuming less energy, equalizing the energy consumption of all the nodes, alleviating the data gathering delay, and effectively extending the network lifetime.
Parallel interference cancellation (PIC) assisted by the recursive least squares (RLS) algorithm is proposed to cancel the interference due to carrier frequency offset (CFO) in orthogonal frequency division multiplexing (OFDM) systems. The proposed algorithm is composed of two stages, an RLS stage and a PIC stage. The RLS scheme compensates the frequency offset in the time domain in the first stage, and the interference induced by the residual frequency offset is canceled by the PIC scheme in the frequency domain in the second stage. The bit error rate (BER) results show that the cancellation performance remains robust even when the normalized frequency offset is as large as 0.45. The 16QAM constellation is also simulated to observe the improvements achieved by the proposed suppression schemes.
Cooperative diversity is a new technology to improve bit error rate (BER) performance in wireless communications. A new power allocation algorithm to improve BER performance in the cellular uplink is proposed in this paper. Some existing power allocation schemes were proposed for the purpose of maximizing the channel capacity or minimizing the outage probability. Different from these schemes, the proposed algorithm aims at minimizing the BER of the system under a constraint on the total transmission power. Besides this characteristic, the proposed algorithm can realize low-complexity real-time power allocation according to the fluctuation of the channels. Simulation results show that the proposed algorithm can effectively decrease the BER of the system.
In this paper, we propose a new modulation classification method based on the combination of clustering and a neural network, in which a new algorithm is introduced to extract key features. In order to recognize modulation types based on the constellation diagram, such as phase shift keying (PSK) and quadrature amplitude modulation (QAM), fuzzy C-means (FCM) clustering is adopted for recovering the constellation under different numbers of clusters. Then a cluster validity measure is applied to extract key features which discriminate between different modulation types. The features are sent to a neural network so that modulation types can be recognized. In order to overcome the disadvantages of the standard back propagation (BP) neural network, the conjugate gradient learning algorithm with the Polak-Ribiere update is employed to improve the speed of convergence and the performance of modulation recognition. Simulation results show that the classification rates of the proposed algorithm are much higher than those of the clustering algorithm.
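The constellation-recovery step can be illustrated with a plain NumPy implementation of the standard fuzzy C-means updates, as sketched below; the cluster-validity feature extraction and the neural classifier are omitted, and the QPSK example, fuzzifier m and iteration counts are assumptions rather than the paper's settings.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, eps=1e-6):
    """Standard fuzzy C-means on complex constellation samples X (n,).
    Returns complex cluster centers and the fuzzy membership matrix U (c, n)."""
    pts = np.column_stack([X.real, X.imag])
    n = pts.shape[0]
    U = np.random.dirichlet(np.ones(c), size=n).T            # (c, n), columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ pts) / Um.sum(axis=1, keepdims=True)  # weighted centroids
        d = np.linalg.norm(pts[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))                 # membership update
        U_new /= U_new.sum(axis=0, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers[:, 0] + 1j * centers[:, 1], U

# Example: noisy QPSK samples clustered with c = 4.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.random.randint(0, 4, 500)))
samples = qpsk + 0.1 * (np.random.randn(500) + 1j * np.random.randn(500))
centers, U = fuzzy_c_means(samples, c=4)
```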
Aiming at the estimation of the number of sources and the mixing matrix, and at the separation of mixed signals in the underdetermined case, this article puts forward a method of underdetermined blind source separation (UBSS) with an application to ultra-wideband (UWB) communication signals. The method is based on the sparsity of UWB communication signals in the time domain. Firstly, the single-source area is found by calculating the ratio of observed sampling points. Then an algorithm called the hough-windowed method is introduced to estimate the number of sources and the mixing matrix. Finally, the mixed signals are separated using a method based on amended subspace projection. The simulation results indicate that the proposed method can separate UWB communication signals successfully, estimate the mixing matrix with higher accuracy and separate the mixed signals with higher gain compared with other conventional algorithms. At the same time, the method exhibits higher stability and better noise immunity.
Multicarrier code division multiple access (MC-CDMA) has the ability to combat frequency selective fading, and an antenna array can enhance system performance. This paper proposes a novel joint spatial-frequency blind multiuser detection scheme for antenna-array MC-CDMA based on the linearly constrained constant modulus algorithm (LCCMA), which has robust performance and can ensure that the weight vectors converge to those of the desired user. Simulation indicates that the proposed algorithm has better bit error ratio (BER) performance than the traditional beamforming-based two-step algorithm.
This article investigates the performance of hybrid automatic repeat request (HARQ) with code combining over the ideally interleaved Nakagami-m fading channel. Two retransmission protocols with coherent equal gain code combining are adopted, where the entire frame and several selected portions of the frame are repeated in protocols I and II, respectively. Protocol II could be viewed as a generalization of the recently proposed reliability-based HARQ. To facilitate performance analysis, an approximation of the product of two independent Nakagami-m distributed random variables is first developed. Then the approximate analysis is utilized to obtain exact frame error probability (FEP) for protocol I, and the upper bound of the FEP for protocol II. Furthermore, the throughput performance of both two protocols is presented. Simulation results show the reliability of the theoretical analysis, where protocol II outperforms protocol I in the throughput performance due to the reduced amount of transmitted information.
Cooperative relaying techniques can greatly improve the capacity of the multiple input and multiple output (MIMO) wireless system. The transmit power allocation (TPA) strategies for various relaying protocols have become very important for improving the energy efficiency. This article proposes novel TPA schemes in the MIMO cooperative relaying system. Two different scenarios are considered. One is the hybrid decode-and-forward (HDF) protocol in which the zero-forcing (ZF) process is operated on relays, and the other is the decode-and-forward (DF) protocol with relay node and antenna selection strategies. The simulation results indicate that the proposed schemes can bring about significant capacity gain by exploiting the nature of the relay link. Additionally, the proposed TPA scheme in the HDF system can achieve the same capacity as the equal TPA with fewer relay nodes used. Finally, the capacity gain with the proposed schemes increases when the distribution range of relay nodes expands.
An optimal power allocation (OPA) method based on mean channel gains is proposed for a multinode amplify-and-forward cooperative communication system. Using M-PSK modulation, a closed-form symbol-error-rate (SER) formulation and the corresponding upper bound are first derived. Subsequently, the OPA method is utilized to minimize the SER. Comparing the SER of the proposed OPA method with that of the equal power allocation (EPA) method shows that the two methods perform almost the same, and close to optimally, when the relays are near the source, while OPA outperforms EPA when the relays are near the middle between the source and destination or near the destination. The proposed OPA method depends only on the ratio of the mean channel gains of the source-to-relay links to those of the relay-to-destination links. Extensive simulations are performed to validate the theoretical results.
A utility based resource allocation strategy in multi-cell orthogonal frequency-division multiplexing (OFDM) systems is vital for next generation mobile communication systems. Based on an analysis of risk-aversion utility functions, this paper proposes a system utility named the Customer Satisfaction (CS) utility. Compared with the Proportional Fairness (PF) utility, the CS utility reflects user demands better, and enables the system to adjust its resource allocation according to both the traffic requirements and the resource situation.
High-speed, high-resolution analog-to-digital (A/D) conversion, demanded by ultra wideband (UWB) signal processing, is a very challenging problem. This paper proposes a parallel random projection method for UWB signal acquisition. The proposed method can achieve a high sampling rate, high resolution and technical feasibility of hardware implementation. In the proposed method, an analog UWB signal is projected onto a set of random sign functions. Then low-rate high-resolution analog-to-digital converters (ADCs) are used to sample the projection coefficients. The signal can be reconstructed by a simple linear calculation with the sampling matrix, without relying on optimization algorithms or prior knowledge. Unlike other approaches that need an accurate time shift at extremely high frequency, a hybrid filter bank, specific basis functions, or signals with prior knowledge, the proposed method is a universal sampling approach that is easy to apply. The simulation results for signal-to-noise ratio (SNR) and spurious-free dynamic range (SFDR) validate the efficiency of the proposed method for UWB signal acquisition.
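A discrete-time toy version of the projection-and-linear-recovery idea is sketched below; the analog front-end is abstracted away, and the block length and number of random-sign branches are arbitrary illustrative values rather than the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128          # samples per signal block (discretized here purely for illustration)
M = 160          # number of parallel random-sign branches (M >= N allows linear recovery)

Phi = rng.choice([-1.0, 1.0], size=(M, N))   # random sign (projection) functions
x = rng.standard_normal(N)                   # stand-in for a UWB signal block
y = Phi @ x                                  # branch outputs sampled by low-rate ADCs

# Linear reconstruction from the known sampling matrix (no optimization needed).
x_hat = np.linalg.pinv(Phi) @ y
print(np.max(np.abs(x - x_hat)))             # ~0 up to numerical precision
```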
As one of the key use cases in the Long Term Evolution Self-Organizing Network (LTE SON), coverage and capacity optimization (CCO) provides optimal coverage and capacity performance to support high-data-rate services and to decrease the operator's capital expenditures (CAPEX) and operational expenditures (OPEX). In an LTE system, several factors (e.g. load, traffic type, user distribution, uplink power setting, inter-cell interference, etc.) limit the coverage and capacity performance. From the view of a single cell, each cell always pursues maximal coverage and capacity performance by optimizing the uplink power setting and intra-cell resource allocation, but this may degrade the performance of its neighbor cells. Therefore, the benefits of the cells conflict with each other. In order to trade off the benefit of every cell and maximize the performance of the whole network, this paper proposes a multi-cell uplink power allocation scheme based on non-cooperative games. The scheme aims to balance coverage and capacity performance through negotiation of the uplink power parameters among the cells. The performance of every cell can thus reach a Nash equilibrium, making it feasible to reduce inter-cell interference by setting appropriate uplink power parameters. Finally, the simulation shows that the proposed algorithm can effectively enhance the coverage and capacity performance of an LTE network.
Energy efficiency is a critical issue in wireless sensor networks (WSNs). In order to minimize energy consumption and balance energy dissipation throughout the whole network, a systematic energy-balanced cooperative transmission scheme in WSNs is proposed in this paper. This scheme studies energy efficiency from a systematic view. For the three main steps, namely node clustering, data aggregation and cooperative transmission, corresponding measures are put forward to save energy. These measures are well designed and tightly coupled to achieve optimal performance. A half-controlled dynamic clustering method is proposed to avoid the concentrated distribution of cluster heads caused by selecting cluster heads randomly and to obtain high spatial correlation between cluster nodes. Based on the clusters built, data aggregation, with the adoption of dynamic data compression, is performed by cluster heads to make better use of data correlation. Cooperative multiple input multiple output (CMIMO) with an energy-balanced cooperative cluster-head selection method is proposed to transmit data to the sink node. The system model of this scheme is also given in this paper. Simulation results show that, compared with other traditional schemes, the proposed scheme can efficiently distribute the energy dissipation evenly throughout the network and achieve higher energy efficiency, which leads to a longer network lifetime. By adopting orthogonal space time block codes (STBC), the optimal number of cooperative transmission nodes as a function of the percentage of cluster heads is also derived, which helps to improve energy efficiency by choosing the optimal number of cooperative nodes and making the most of CMIMO.
This paper presents a probabilistic greedy pursuit (PGP) algorithm for compressed wide-band spectrum sensing in the cognitive radio (CR) scenario. PGP relies on a streaming compressed sensing (CS) framework, which differs from the traditional CS processing paradigm that focuses only on fixed-length signal compressive sampling and reconstruction. It utilizes an analog-to-information converter (AIC) to perform sub-Nyquist rate signal acquisition at the radio front-end (RF) of the CR, with the measurement process carefully designed for the streaming framework. Since the sparsity of the wide-band spectrum is unavailable in practical situations, PGP introduces a probabilistic scheme by dynamically updating support confidence coefficients and utilizes greedy pursuit to perform streaming spectrum estimation, which progressively improves sensing performance. The proposed algorithm enables robust spectrum estimation without a priori sparsity knowledge, and simultaneously keeps the computational complexity low, which makes it more suitable for practical on-line applications. Various simulations and comparisons validate the effectiveness of our approach.
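For readers unfamiliar with the greedy-pursuit core that PGP builds on, the sketch below shows a plain orthogonal matching pursuit routine on a fixed-length block; it is only a stand-in, and the streaming measurement design and probabilistic support-confidence updates of PGP are not reproduced here.

```python
import numpy as np

def omp(A, y, max_atoms, tol=1e-6):
    """Orthogonal matching pursuit: greedily grow the support of a sparse
    spectrum estimate x from compressive measurements y = A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(max_atoms):
        idx = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef            # re-fit and update residual
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Example: recover a 5-sparse length-256 spectrum from 80 random measurements.
n, m, k = 256, 80, 5
A = np.random.randn(m, n) / np.sqrt(m)
x_true = np.zeros(n)
x_true[np.random.choice(n, k, replace=False)] = np.random.randn(k)
x_est = omp(A, A @ x_true, max_atoms=k)
```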
This paper studies the achievable rate of the three-node discrete memoryless relay channel. Specifically, in this model we exploit two generalized feedbacks simultaneously: the source node actively collects feedback signals from the channel, and at the same time, the destination node actively transmits feedback signals to the relay node. These two feedback signals, termed generalized feedback, are overheard from the channel and are likely to be noisy; they imply that all three nodes operate in full-duplex mode. The basic coding strategies of Cover and El Gamal are applied to the relay-source feedback transmission, with the source forwarding the compressions of the channel output sequences at the relay node to the destination, and are also applied to the destination-relay feedback transmission to improve the decoding ability at the relay. Based on Cover and El Gamal coding, a new coding scheme adopting rate splitting and four-block Markov superposition encoding is proposed and the corresponding achievable rate is derived. The proposed scheme is able to exploit the two feedbacks simultaneously, which can effectively eliminate the underlying transmission bottlenecks of the channel. The derived achievable rate generalizes several previously known results by including them as special cases.
In this paper, the joint resource allocation (RA) problem with quality of service (QoS) provisioning in downlink heterogeneous cellular networks (HCN) is studied. To fully exploit the network capacity, the HCN is modeled as a K-tier cellular network where the base stations (BSs) of each tier have different properties. However, deploying a large number of low power nodes (LPNs) which share the same frequency band with the macrocell generates severe inter-cell interference, and the enhancement of system capacity is restricted by this interference. Therefore, a feasible RA scheme has to be developed to fully exploit the resource efficiency. Under the constraint of inter-cell interference, we formulate the RA problem as a mixed integer programming problem. To solve the optimization problem, we develop a two-stage solution: an integer subchannel assignment algorithm and a Lagrangian-based power allocation algorithm are designed. In addition, the biasing factor is also considered and its influence on system capacity is evaluated. Simulation results show that the proposed algorithms achieve a good tradeoff between network capacity and interference. Moreover, the average network efficiency is greatly improved and the outage probability is decreased.
In order to reveal the intrinsic properties of scientific collaboration networks, a new local-world evolution model for scientific collaboration networks is proposed by analysing the network growth mechanism. The act degree is taken as the measure of preferential attachment, and the local-world information of nodes is taken into account. Analysis and simulation show that the node degree and node strength obey power-law distributions, with low average path length and high clustering coefficient. Experiments indicate that the model can efficiently depict the topological structure and statistical characteristics of real-life scientific collaboration networks.
With the rapid development of communication systems, high signal-to-noise ratio (SNR) systems are required. At high frequencies, high loss, low-Q inductors and high noise figure are significant challenges for on-chip monolithic microwave integrated circuits (MMICs). To overcome this problem, the characteristics of high-Q, low-loss transmission lines were analyzed. Compared with a lumped inductor of the same value, the transmission line has a higher Q value and lower loss at high frequency. A two-stage common-source low noise amplifier (LNA) was presented, which employs source inductor feedback technology and a high-Q, low-loss transmission-line matching network, achieving over 17.6 dB small signal gain and a 1.1 dB noise figure in 15 GHz–18 GHz. The LNA was fabricated in the WIN Semiconductors 0.15 μm gallium arsenide (GaAs) P high electron mobility transistor (P-HEMT) process. The total current is 15 mA, while the DC power consumption is only 45 mW.
A three-dimensional (3D) Von Mises-Fisher (VMF) distribution model was derived for a multiple-input and multiple-output (MIMO) antenna communication environment. The azimuth of arrival and elevation of arrival follow the VMF distribution instead of the uniform or other traditional distributions. In particular, MIMO uniform Y-shaped array (UYA) and uniform circular array (UCA) antenna topologies are considered at the mobile station and base station. The spatial fading correlation of the developed VMF model is determined by the concentration parameter, antenna spacing, mean azimuth of arrival and mean elevation of arrival. Using the channel model, the effects of the concentration parameter and the mean elevation angle on the capacity of MIMO antenna systems were analyzed. It is shown that the mean elevation of arrival must be taken into account in a 3D MIMO communication environment.
Fixed-point algorithms are widely used for independent component analysis (ICA) owing to their good convergence. However, most existing complex fixed-point ICA algorithms are limited to the case of circular sources and result in phase ambiguity, which restricts the practical applications of ICA. To solve these problems, this paper proposes a two-stage fixed-point ICA (TS-FPICA) algorithm which considers a complex signal model. In this algorithm, the complex signal model is converted into a new real signal model by utilizing the circularity coefficients contained in the pseudo-covariance matrix. The algorithm is thus valid for noncircular sources. Moreover, the ICA problem under the new model is formulated as a constrained optimization problem, and real fixed-point iteration is employed to solve it. In this way, the phase ambiguity caused by complex ICA is avoided. The computational complexity and convergence property of TS-FPICA are both analyzed theoretically. Simulation results show that the proposed algorithm has better separation performance, without phase ambiguity in the separated signals, compared with other algorithms. TS-FPICA converges nearly as fast as the other fixed-point algorithms, but far faster than joint diagonalization methods such as joint approximate diagonalization of eigenmatrices (JADE).
Routing is one of the most important supporting parts of wireless sensor network (WSN) applications and directly affects application efficiency. Routing time and energy consumption are two major factors used to evaluate WSN routing. This article proposes a minimum routing time and energy consumption (MiniTE) routing protocol, which ensures the feasibility of the routing protocol in terms of both routing time and energy consumption. Based on MiniTE, a WSN can be partitioned into different regions according to the received signal strength indication (RSSI). Messages are sent by nodes in a region to their parent node and relayed upward from parent to parent until they finally reach the sink node. Theoretical evaluation and simulation results are given to verify the features of the protocol.
A performance analysis is presented for multiple-input multiple-output (MIMO) relay channels employing transmit antenna diversity with orthogonal space-time block codes (OSTBCs), where the source and the destination are equipped with Ns and Nd antennas, and communicate with each other with the help of a multiple-antenna relay operating in decode-and-forward (DF) mode. Over independent, not necessarily identical Rayleigh fading channels, exact closed-form symbol error rate (SER) expressions are derived for various digital modulation formats for OSTBC transmission both with and without the direct link. The moment generating functions (MGFs) of the overall system signal-to-noise ratios (SNRs) are also derived, based on which a unified SER analysis is presented. The analysis shows that full spatial diversity order can be achieved for the DF MIMO relay channel by adopting OSTBC transmission and maximal ratio combining (MRC) reception. All the analytical results are confirmed through comparison with the results obtained from Monte Carlo simulations.
For cooperative relay multicast networks, the general cross-layer optimization approaches converge to the global optimal value slowly because of the large quantity of relay terminals. However, the mobility of relay terminals requires quick converging optimization strategies to refresh the relay links frequently. Based on the capacity analysis of multiple relay channels, an improved cross-layer optimization scheme is proposed to resolve this problem, in which the bound of the relay selecting region is determined as a pre-processing. Utilizing the primal-dual algorithm, a cross-layer framework with pre-processing optimizes both the relay terminal selection and power allocation with quick convergence. The simulation results prove the effectiveness of the proposed algorithm.
Coordinated Multi-Point (CoMP) transmission is a promising technique to improve both cell average and cell edge throughput for Long Term Evolution-Advanced (LTE-A). For CoMP-JT (Joint Transmission) in heterogeneous scenario, if JT users are firstly scheduled, other non-JT users will not be allocated sufficient resources, i.e., scheduling relevancy exists in the users under different cells in the same coordination cluster. However, the CoMP system throughput will decline remarkably, if the impact of scheduling relevancy is not considered. To address this issue, this paper proposes a novel scheduling scheme for CoMP in heterogeneous scenario. The principles of the proposed scheme include two aspects. Firstly, this scheme gives priority to user fairness, based on an extended proportional fairness (PF) scheduling algorithm. Secondly, the throughput of the coordination cluster should be maintained at a high level. By taking the non-CoMP system as a baseline, the proposed scheme is evaluated by comparing to random PF (RPF) and orthogonal PF (OPF) scheme. System-level simulation results indicate that, the proposed scheme can achieve considerable performance gain in both cell average and cell edge throughput.
This paper details the uplink scheduling algorithm for long term evolution advanced (LTE-A) systems with relays. While emulating quality of service (QoS)-aware services with different bit-rate and delay budget requirements for the upstream direction, a new QoS-aware scheduling algorithm for in-band relays is proposed. In this work, an improved scheduling metric calculation method and a bit-rate guarantee scheme are applied. Moreover, the algorithm provides an efficient scheme for backhaul link allocation which allows the information of the most backlogged users to be transmitted first. Finally, the paper concludes with simulation results that demonstrate how the proposed resource allocation strategy improves the performance of the system.
This paper investigates the performance of an underlay cognitive relay system where secondary users (SUs) suffer from a primary outage probability constraint and spectrum-sharing interference imposed by a primary user (PU). In particular, we consider a secondary multi-relay network operating in the selection decode-and-forward (SDF) mode and propose a best-relay selection criterion which takes into account the spectrum-sharing constraint and interference. Based on these assumptions, the closed-form expression of the outage probability of secondary transmissions is derived. We find that a floor of the outage probability occurs in high signal-to-noise ratio (SNR) regions due to the joint effect of the constraint and the interference from the PU. In addition, we propose a generalized definition of the diversity gain for such systems and show that a full diversity order is achieved. Simulation results verify our theoretical solutions.
Algebraic immunity is an important cryptographic property of Boolean functions. The notion of algebraic immunity of Boolean functions has been generalized in several ways to vector-valued functions over arbitrary finite fields. In this paper, the results of Ref. [25] are generalized to arbitrary finite fields. We obtain vector-valued functions over arbitrary finite fields whose algebraic immunities can reach the upper bounds. Furthermore, all the component functions, together with some of their nonzero linear combinations, of the vector-valued Boolean functions achieved by this construction have optimal algebraic immunities simultaneously.
This paper proposes a prediction-mode-based filtering mechanism (PMF) to solve the problem of transmission energy waste caused by time-redundant data in wireless sensor networks (WSN), according to the spatio-temporal correlations of sampling series in data collection. Prior works have suggested several approaches to decrease the energy cost of the data transmission process via data aggregation tree structures. Distinct from the methods in those studies, our proposed scheme mainly focuses on reducing the degree of temporal redundancy at the event source to achieve an energy-saving effect via a self-adaptive filtering structure. The framework of PMF for energy-efficient collection is composed of a prediction module for mining the change law of the time domain, a self-learning module for updating the model, and a driving module for controlling the data filtering operation. Combined with the design of the error driving rule and threshold distributing rule, which serve as the middleware in the above filtering mechanism, the transmission load in the network can be greatly reduced on the premise of quality of service (QoS) assurance, and energy consumption can be reduced consequently. Finally, the experimental results show that PMF significantly outperforms some classical data-collection algorithms in energy-saving effect and self-adaptability.
The discrete Fourier transform (DFT)-based codebook is employed in this paper to quantize channel state information so that the amount of feedback can be reduced in the multiple input multiple output (MIMO) downlink of a long term evolution (LTE) system. A novel beamforming (BF) scheme based on the proposed channel quality-to-interference (QIR) quantizing criterion is developed, which uses only the index of the optimal codeword for beamforming at the base station (BS), and dramatically reduces the amount of feedback. The proposed BF scheme jointly considers the influence of the quality of the quantized channels and the mutual interference among the sub-channels. Extensive simulation results verify that the throughput of the proposed BF scheme is better than that of random BF while requiring only a little feedback, and better than that of eigen-beamforming even in the low signal-to-noise ratio (SNR) scenario.
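A minimal sketch of DFT-codebook quantization with index feedback is given below; the QIR criterion is simplified here to plain channel alignment, and the antenna count, oversampling and feedback bit budget are illustrative assumptions.

```python
import numpy as np

def dft_codebook(n_tx, n_bits):
    """Columns of an oversampled DFT matrix used as beamforming codewords."""
    n_cw = 2 ** n_bits
    k = np.arange(n_tx)[:, None]
    l = np.arange(n_cw)[None, :]
    return np.exp(2j * np.pi * k * l / n_cw) / np.sqrt(n_tx)   # (n_tx, n_cw)

def feedback_index(h, codebook):
    """Index of the codeword best aligned with the channel vector h."""
    return int(np.argmax(np.abs(h.conj() @ codebook)))

n_tx = 4
C = dft_codebook(n_tx, n_bits=4)                 # 16 codewords -> 4 feedback bits
h = (np.random.randn(n_tx) + 1j * np.random.randn(n_tx)) / np.sqrt(2)
idx = feedback_index(h, C)                       # index fed back by the user
w = C[:, idx]                                    # beamforming vector applied at the BS
```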
In order to take advantage of the asynchronous sensing information, alleviate the sensing overhead of secondary users (SUs) and improve the detection performance, a sensor node-assisted asynchronous cooperative spectrum sensing (SN-ACSS) scheme for cognitive radio (CR) network (CRN) was proposed. In SN-ACSS, each SU is surrounded by sensor nodes (SNs), which asynchronously make hard decisions and soft decisions based on the Bayesian fusion rule instead of the SU. The SU combines these soft decisions and makes the local soft decision. Finally, the fusion center (FC) fuses the local soft decisions transmitted from SUs with different weight coefficients to attain the final soft decision. Besides, the impact of the statistics of licensed band occupancy on detection performance and the fact that different SNs have different sensing contributions are also considered in SN-ACSS scheme. Numerical results show that compared with the conventional synchronous cooperative spectrum sensing (SCSS) and the existing ACSS schemes, SN-ACSS algorithm achieves a better detection performance and lower cost with the same number of SNs.
A signal detection scheme is proposed for two-way relaying networks (TWRNs) using distributed differential space-time coding (DDSTC) under imperfect synchronization. Unlike most existing work, in which perfect synchronization is assumed, a relative delay between the signals transmitted from the two sources to the relay is considered. Since perfect channel state information (CSI) is difficult to acquire in fast fading scenarios, and the computational complexity increases especially when multiple relays are present, CSI is assumed unavailable at all nodes. Therefore, the article proposes a differential signal detection scheme based on estimating and cancelling the imperfect synchronization component in the received signal at the two source nodes, followed by a least squares (LS) decoder. Simulations over the Nakagami-m fading channel, chosen for its versatile statistical distribution, show that the proposed scheme is effective at both source nodes in suppressing the inter-symbol interference (ISI) caused by imperfect synchronization, while neither the source nodes nor the relay nodes have any knowledge of the CSI.
In order to reduce the computational overhead of proof of retrievability (POR) schemes, a new POR scheme based on low-density parity-check (LDPC) codes is proposed, denoted as LDPC-POR. In the POR model, the client preprocesses the data and sends it to an untrusted server for storage, keeping only some metadata. The client then sends a challenge to the server to prove that the data stored at the server has neither been tampered with nor deleted. In the setup phase of this scheme, the client uses an LDPC code to encode the data, and blinds the data with a permutation and a pseudo-random stream. In the challenge phase, the server generates the proof entirely based on exclusive OR (XOR) operations, after which the client makes use of the LDPC code to verify the validity of the proof. The theoretical analysis shows that this scheme not only reduces the computational overhead, but also saves storage space compared with the classical scheme. A security proof is also provided in this paper, showing that the scheme is feasible.
Multi-cell multi-user multiple-input multiple-output (MC-MU-MIMO) is a promising technique to eliminate inter-user interference and inter-cell cochannel interference in wireless telecommunication systems. Given the large number of users in the system and the limited number of simultaneously supportable users with MC-MU-MIMO, it is necessary to select a subset of users to maximize the total throughput. However, the fully centralized user selection algorithms used in single-cell systems are not suitable for MC-MU-MIMO systems, since they incur high complexity and backhaul load in multi-cell cooperative processing (MCP) systems. This article presents a two-stage cascaded user selection method for MCP systems with multi-cell block diagonalization. First, a locally optimal subset of users, which maximizes the local sum capacity, is chosen by a greedy method at every cooperative base station in parallel. Then, all the cooperative base stations report their locally optimal users to the central unit (CU). Finally, the globally optimal users, which maximize the global sum capacity of the MCP system, are selected from the aggregated locally optimal users at the CU. The simulation results show that the proposed method performs closely to the optimal and centralized algorithm, while the complexity and backhaul load are reduced dramatically.
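As an illustration of the cascaded selection described above, the following Python sketch implements a per-BS greedy stage followed by a central selection stage over the pooled candidates; the equal-power log-det sum-capacity surrogate and the helper names are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from itertools import combinations

def sum_capacity(H_list, noise_power=1.0):
    """Equal-power sum-capacity surrogate: sum of log2 det(I + H H^H / noise)."""
    cap = 0.0
    for H in H_list:
        G = np.eye(H.shape[0]) + H @ H.conj().T / noise_power
        cap += np.log2(np.linalg.det(G).real)
    return cap

def greedy_local_selection(channels, max_users):
    """Stage 1: a BS greedily adds the user that most increases its local sum capacity."""
    selected, remaining = [], list(channels)
    while remaining and len(selected) < max_users:
        best = max(remaining,
                   key=lambda u: sum_capacity([channels[k] for k in selected + [u]]))
        selected.append(best)
        remaining.remove(best)
    return selected

def global_selection(local_sets, channels, max_users):
    """Stage 2: the CU exhaustively searches the small pooled candidate set."""
    pool = list(dict.fromkeys(u for s in local_sets for u in s))
    best_set, best_cap = (), -np.inf
    for k in range(1, max_users + 1):
        for subset in combinations(pool, k):
            cap = sum_capacity([channels[u] for u in subset])
            if cap > best_cap:
                best_set, best_cap = subset, cap
    return best_set, best_cap
```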
Cooperative communication systems can effectively increase channel capacity and combat fading. Effective cooperation requires synchronization impairments such as multiple timing offsets and multiple carrier frequency offsets to be accurately estimated and mitigated. This paper seeks to address the joint estimation of synchronization impairments in multi-relay decode-and-forward (DF) cooperative networks. Firstly, a simple yet effective estimation method based on the devised training signals is presented for achieving synchronization. Then, an iterative algorithm is further derived in order to improve the performance associated with the estimation of synchronization impairments. Our proposed algorithm converts the difficult multiple parameter estimation problem into more tractable sub-problems of estimating many individual impairments pairs for the independent relays. Simulations indicate that, the proposed estimator can asymptotically achieve the mean square error (MSE) for the perfectly timing or frequency synchronized case.
A hybrid system of the fuzzy c-means (FCM) clustering algorithm and adaptive two-stage linear approximation is presented for nonlinear distortion cancellation of radio frequency (RF) power amplifiers (PAs). This mechanism can effectively eliminate noise, adaptively model the PA’s instantaneous change, and efficiently correct nonlinear distortion. This article puts forward the FCM clustering algorithm for clustering received signals to eliminate white noise, and then uses the adaptive two-stage linear approximation to fit the inverse function of the amplitude’s and phase’s nonlinear mapping during the training phase. Parameters of the linear function and similarity function are trained using the gradient-descent and minimum mean-square error criteria. The proposed approach’s training results are directly employed to eliminate the sampling signal’s nonlinear distortion. This hybrid method is easier to realize than multi-segment linear approximation and can reduce the received signal’s bit error rate (BER) more efficiently.
As known that the effective capacity theory offers a methodology for exploring the performance limits in delay constrained wireless networks, this article considered a spectrum sharing cognitive radio (CR) system in which CR users may access the spectrum allocated to primary users (PUs). Particularly, the channel between the CR transmitter (CR-T) and the primary receiver and the channel between the CR-T and the CR receiver (CR-R) may undergo different fading types and arbitrary link power gains. This is referred to as asymmetric fading. The authors investigated the capacity gains achievable under a given delay quality-of-service (QoS) constraint in asymmetric fading channels. The closed-form expression for the effective capacity under an average received interference power constraint is obtained. The main results indicate that the effective capacity is sensitive to the fading types and link power gains. The fading parameters of the interference channel play a vital role in effective capacity for the looser delay constraints. However, the fading parameters of the CR channel play a decisive role in effective capacity for the more stringent delay constraints. Also, the impact of multiple PUs on the capacity gains under delay constraints has also been explored.
As vehicles gain extensive popularity and face increasing demand, traffic accidents are among the most serious problems of modern transportation systems. In particular, crashes between cars and pedestrians cause plenty of injuries and even deaths. Diverting attention from walking to smartphones is one of the main reasons pedestrians are injured by vehicles. However, traditional measures protecting pedestrians from vehicles rely heavily on sound warnings, which are not effective for pedestrians focused on their smartphones. As smartphones become ubiquitous and intelligent, they have the capacity to alert pedestrians with the help of vehicle-to-pedestrian (V2P) communication. In this paper, an efficient vehicle-to-X (V2X) communication system is designed for vehicle and pedestrian communication to guarantee the safety of people. It implements both the IEEE 802.11p and WiFi protocols on the on-board unit (OBU) designed for vehicles. Extensive evaluation shows that the OBU can provide reliable communication for vehicle-to-vehicle (V2V) and V2P links in terms of packet delivery rate and average delay. Furthermore, two safety applications have been developed to protect vehicles and pedestrians based on the data transferred from the OBU. The first application shows the driving information and provides a collision forewarning alert on the tablet within the vehicle. The second application is developed for the smartphone to provide forewarning alert information to smartphone-distracted vulnerable pedestrians, with smartphone usage states exploited to select adaptive alert modes. Experimental results show that these applications are capable of warning of intersection accidents, and that pedestrians receive adaptive alerts according to their smartphone usage contexts.
For current leakage-resilient ciphertext-policy attribute-based encryption (CP-ABE) schemes, the size of the ciphertexts in most of them depends on the number of attributes, and overcoming this shortcoming is a challenging problem. Based on the Goldreich-Levin theorem and dual system encryption, an efficient CP-ABE scheme with constant-size ciphertexts is proposed in this paper. It can tolerate leakage on the master secret key and attribute-based secret keys with auxiliary inputs. Furthermore, the proposed scheme can achieve resilience against continual leakage if keys are periodically updated. Under some static assumptions instead of other strong assumptions, the introduced scheme achieves adaptive security in the standard model.
This article puts forward a partial channel state information (CSI) feedback scheme for fractional frequency reuse (FFR)-based orthogonal frequency division multiple access (OFDMA) systems. An efficient CSI feedback strategy plays an important role in opportunistic scheduling because the base station (BS) can employ adaptive modulation and coding (AMC) to adapt transmission rates to the CSI feedback, and therefore the spectrum efficiency can be improved significantly. On the other hand, FFR is a simple but effective technique to improve the throughput of users at the cell edge. To exploit opportunistic scheduling in FFR-based OFDMA systems, both users and spectrum are divided into multiple groups in this article, and a specific feedback pattern is designed for each user group on each spectrum sub-band. Simulation results show that the proposed algorithm can reduce the feedback load significantly, while maintaining nearly the same performance as the system with full feedback.
Two-hop relaying systems suffer spectral efficiency loss due to the half-duplex property of relays. This paper proposes an efficient relaying protocol which can recover the spectral efficiency loss but still work with half-duplex relays. However, there exists inter-relay interference which degrades the performance of the protocol. With this consideration, a power control policy is derived to suppress the interference using game theory, and then an algorithm is given to facilitate distributed implementation. Furthermore, impact of deploying more destination antennas on performance of the relaying protocol is investigated. Simulation results show that, with the power control policy, the proposed relaying protocol can achieve high spectral efficiency.
Characterizing the features of user churn is crucial to the sustainable development of peer-to-peer (P2P) systems, where peers join and leave at arbitrary times. This paper analyzes the user churn in a P2P downloading system named QQXuanfeng by using fine-grained log analysis over 60 days. It shows that the online and offline durations are related to the up (arrival) time and down (departure) time, respectively. A continuous ON/OFF process, which exhibits the diurnal patterns of users, is simulated using the churn model. In particular, a dynamic departure rate is proposed to give insight into the distribution of online duration. Furthermore, considering the heterogeneity of users, we cluster users based on the similarity of a redefined user availability. As an example application of this model, a high-availability overlay is constructed and evaluated based on the clustering.
In recent years, 10 Gbit/s Ethernet passive optical networks (10G EPON) have been gaining considerable interest because of their high bandwidth capability. To ensure a smooth transition from 1 Gbit/s to 10 Gbit/s equipment and to avoid a significant one-time investment in such a cost-sensitive market, coexistence of gigabit Ethernet passive optical network (GEPON) and 10G EPON systems is necessary. In this article, a coexistence system architecture and a novel bandwidth allocation algorithm called weight-optimized dynamic bandwidth allocation for coexistence EPON (WOCE-DBA) are proposed for the system. The simulation results show that this algorithm can guarantee fair bandwidth sharing among different optical network unit (ONU) groups, without ignoring inter-ONU and intra-ONU fairness. Most importantly, it can flexibly adapt to variations in the system composition and save the effort needed to modify the bandwidth scheduling mechanism during the migration from GEPON to 10G EPON.
In this article, the authors consider joint design of a linear precoder and power allocation for uplink multiuser multiple input multiple output (MIMO) communication systems with limited feedback to improve the bit error rate (BER) performance for all users. Precoder selection from the codebook set is directly based on the exact BER performance, instead of other suboptimal criteria, to achieve the optimal precoder matrix, but closed-form expressions may not exist in the view of power allocation based directly on the BER criterion. From this perspective, the authors propose the joint transmitter optimization algorithm for the consideration of precoder design, with total power constraint for asymptotic MBER (AMBER) criterion. In this AMBER criterion, a closed-form solution has been derived for power allocation with an optimal precoder. The simulation results show that the proposed joint design algorithm can achieve a much better performance than precoding with uniform power allocation and only consideration of power allocation.
This paper proposes a novel energy-aware geographical forwarding protocol utilizing adaptive sleeping, in which each node selects its relay based on a new criterion that combines its residual energy reserves and its geographical location to guarantee energy efficiency. In addition, this paper presents an adaptive sleep mechanism fully integrated into the new relay criterion, in which each node sleeps for a variable duration based on its residual energy reserves. Simulation results show that the proposed protocol significantly reduces the energy consumption of the network and improves its balance, especially under heavy traffic load in dense networks. Our protocol is 20 times better at balancing the energy consumption compared with the geographical random forwarding protocol.
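A minimal Python sketch of the kind of relay criterion and adaptive sleeping described above is given below; the weighting factor alpha, the normalizations, and the sleep-time bounds are illustrative assumptions rather than the paper's exact formulas.

```python
import math

def advance(src, cand, dest):
    """Geographic progress: reduction in distance to the destination."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(src, dest) - dist(cand, dest)

def select_relay(src, dest, neighbors, alpha=0.5):
    """Pick the neighbor maximizing a weighted sum of normalized residual energy
    and normalized geographic advance; alpha is a hypothetical weighting factor."""
    forward = [n for n in neighbors if advance(src, n["pos"], dest) > 0]
    if not forward:
        return None
    e_max = max(n["energy"] for n in forward)
    a_max = max(advance(src, n["pos"], dest) for n in forward)
    def score(n):
        return alpha * n["energy"] / e_max + (1 - alpha) * advance(src, n["pos"], dest) / a_max
    return max(forward, key=score)

def sleep_duration(node, e_max, t_min=0.1, t_max=2.0):
    """Adaptive sleeping: nodes with less residual energy sleep longer (seconds)."""
    frac = node["energy"] / e_max
    return t_min + (1.0 - frac) * (t_max - t_min)
```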
A novel weighted cooperative routing algorithm (WCRA) is proposed in this article, which is based on a weighted metric combining the maximal remaining energy (MRE) of the relays and the maximal received SNR (MRS) of the nodes. Moreover, a cooperative routing protocol is implemented on the basis of WCRA. Simulations are then carried out on the network simulator (NS-2) platform to compare the performances of MRS, MRE and WCRA with that of the noncooperative destination-sequenced distance-vector (DSDV) protocol. The simulation results show that WCRA obtains a performance tradeoff between MRE and MRS in terms of delivery ratio and network lifetime, and can effectively improve the network lifetime at an acceptable loss of delivery ratio.
A more accurate correlated multiple input and multiple output (MIMO) channel model for IEEE 802.16n is presented. On one hand, this MIMO channel model can obtain more precise antenna correlation, which is a key character for MIMO channel and important for the research of IEEE 802.16n and MIMO technologies. On the other hand, it maintains a low complexity of simulation.
This article puts forward a novel channel estimation and inter-carrier interference (ICI) suppression method for time-varying orthogonal frequency division multiplexing (OFDM) systems. The proposed adaptive finite impulse response (FIR) filter utilizes the time-domain correlation of the subcarriers. Based on a first-order auto-regressive (AR) model, a modified Kalman filter is exploited to track the channel variation. By solving the Yule-Walker equation, the coefficient of the AR process can be obtained from the zeroth-order Bessel function. However, when the channel is not perfectly known, this criterion becomes impractical; to deal with this, the coefficient of the AR process is taken as a constant, and a new state space model is then deduced. Simulation results show that the proposed method can work effectively at a speed of 144 km/h with a phase noise of −85 dBc/Hz at 100 kHz offset. The proposed method yields an improved bit error rate (BER) compared with three existing algorithms, at much lower complexity.
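The AR(1)-plus-Kalman tracking idea can be sketched in Python as follows, assuming a Jakes-type channel (AR coefficient J0(2*pi*fd*Ts)), known pilot symbols, and a scalar per-subcarrier model; this is a simplified stand-in for the modified Kalman filter described above.

```python
import numpy as np
from scipy.special import j0

def kalman_track_channel(y, pilots, fd, Ts, noise_var):
    """Scalar Kalman tracker for a channel tap modelled as a first-order AR process
    h[n] = a*h[n-1] + w[n], with a = J0(2*pi*fd*Ts) and process variance 1 - a^2
    (unit-power Jakes assumption); observation model y[n] = x[n]*h[n] + v[n]."""
    a = j0(2 * np.pi * fd * Ts)
    q = 1.0 - a ** 2                 # AR process noise variance
    h_est, P = 0.0 + 0.0j, 1.0       # state estimate and its covariance
    out = np.zeros(len(y), dtype=complex)
    for n, (yn, xn) in enumerate(zip(y, pilots)):
        # prediction step
        h_pred = a * h_est
        P_pred = a ** 2 * P + q
        # update step with the pilot observation
        S = np.abs(xn) ** 2 * P_pred + noise_var
        K = P_pred * np.conj(xn) / S
        h_est = h_pred + K * (yn - xn * h_pred)
        P = (1 - K * xn).real * P_pred
        out[n] = h_est
    return out
```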
Virtual network embedding (VNE) is an essential part of network virtualization, which is considered one of the most promising ways toward the future network. Its main objective is to efficiently assign the nodes and links of a virtual network (VN) to a shared substrate network (SN). The problem is NP-hard, and existing studies have put forward several heuristic algorithms. However, most of these algorithms only consider local node resources, such as CPU and bandwidth (BW), to decide the embedding, and ignore the significant impact of network attributes. Based on the attributes of the entire network, a model of the connectivity between each pair of nodes is formulated to measure the resource ranking of the nodes, and a new two-stage embedding algorithm is proposed, so that node mapping and link mapping can be jointly considered. Extensive simulation shows that the proposed algorithm improves the performance of VNE by increasing the revenue/cost ratio and the acceptance ratio of VN requests while reducing the runtime.
MIMO technology, proposed in recent years, can effectively combat the multipath fading of wireless channels and considerably enlarge the channel capacity, and it has been investigated widely by researchers. However, its performance analysis over correlated block-fading Rayleigh channels is still an open and challenging problem. In this article, an analytic expression of the bit error rate (BER) is presented for multiple phase shift keying (MPSK) space-time codes with differential detection over the correlated block-fading Rayleigh channel. Through theoretical analysis of the BER, it is found that the differential space-time scheme, without the need for channel state information (CSI) at the receiver, achieves a distinct performance gain compared with the traditional non-space-time system. System simulations are then conducted to verify this result, showing that the diversity system based on differential space-time block coding (DSTBC) outperforms the traditional non-space-time system in terms of BER thanks to its diversity gain. Furthermore, the numerical results also demonstrate that the error floor of the differential space-time system is much lower than that of the differential non-space-time system.
In urban environment with serious blocking of direct paths, the non-line-of-sight (NLOS) propagation influences the location estimation accuracy. In this article, a novel algorithm is developed, which can mitigate the NLOS errors in location estimation significantly. Utilizing multiantenna array, the information of scatterers that cause the NLOS propagation is obtained. Then, we combine the information with TOA/TDOA based location algorithm to estimate the location of mobile station (MS). The simulation results show that our method can mitigate NLOS errors and enhance the location accuracy greatly.
One of the main requirements of cognitive radio systems is the ability to detect the presence of the primary user quickly and accurately. To achieve that, a two-stage spectrum sensing scheme is suggested in this paper. More specifically, a fast spectrum sensing algorithm based on energy detection is introduced for the coarse detection stage, and a complementary fine spectrum sensing algorithm adopts first-order cyclostationary properties of the primary user’s signals in the time domain. Since the first-order feature detection is performed in the time domain, real-time operation and low computational complexity can be achieved. It also drastically reduces the hardware burden and power consumption compared with second-order feature detection. The sensing performance of the proposed method is studied and analytical performance results are given. The results indicate that better performance can be achieved by the proposed two-stage sensing scheme compared with the conventional energy detector.
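A rough Python sketch of such a two-stage detector is shown below; the threshold setting, the first-order cyclic statistic, and the way the two stages are combined are simplifying assumptions rather than the paper's exact design.

```python
import numpy as np
from scipy.stats import norm

def energy_detect(x, noise_var, pfa=0.01):
    """Coarse stage: energy detection; the threshold follows from the target
    false-alarm probability under a Gaussian approximation of the statistic."""
    N = len(x)
    stat = np.mean(np.abs(x) ** 2)
    threshold = noise_var * (1.0 + norm.ppf(1.0 - pfa) / np.sqrt(N))
    return stat > threshold

def first_order_cyclic_feature(x, alpha, fs):
    """Fine stage: first-order cyclic moment |(1/N) sum x[n] e^{-j2*pi*alpha*n/fs}|
    evaluated in the time domain at a known cycle frequency alpha."""
    n = np.arange(len(x))
    return np.abs(np.mean(x * np.exp(-2j * np.pi * alpha * n / fs)))

def two_stage_sense(x, noise_var, alpha, fs, fine_threshold):
    """Declare the primary user present if the coarse stage fires, otherwise
    fall back to the fine cyclostationary stage (a simplified combination)."""
    if energy_detect(x, noise_var):
        return True
    return first_order_cyclic_feature(x, alpha, fs) > fine_threshold
```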
To maximize throughput and to satisfy users’ requirements in cognitive radios, a cross-layer optimization problem combining adaptive modulation and power control at the physical layer and truncated automatic repeat request at the medium access control layer is proposed. Simulation results show the combination of power control, adaptive modulation, and truncated automatic repeat request can regulate transmitter powers and increase the total throughput effectively.
This study addresses the problem of jointly optimizing the transmit beamformers and power control in multi-user multiple-input multiple-output (MIMO) downlink. The objective is minimizing the total transmission power while satisfying the signal-to-noise plus interference ratio (SINR) requirement of each user. Before power control, it uses the maximum ratio transmission (MRT) scheme to determine the beamformers due to its attractive properties and the simplicity of handling. For power control it introduces a supermodular game approach and proposes an iterated strict dominance elimination algorithm. The algorithm is proved to converge to the Nash equilibrium. Simulation results indicate that this joint optimization method assures the improvement of performance.
The layered maximum a posteriori (L-MAP) algorithm has been proposed to detect signals under frequency selective fading multiple input multiple output (MIMO) channels. Compared to the optimum MAP detector, the L-MAP algorithm can efficiently identify signal bits, and the complexity grows linearly with the number of input antennas. The basic idea of L-MAP is to operate on each input sub-stream with an optimum MAP sequential detector separately by assuming the other streams are Gaussian noise. The soft output can also be forwarded to outer channel decoder for iterative decoding. Simulation results show that the proposed method can converge with a small number of iterations under different channel conditions and outperforms other sub-optimum detectors for rank-deficient channels.
This paper analyzed multi-user diversity performance for multiple input single output (MISO) amplify-and-forward (AF) relaying network with selection combiner, and the closed-form outage probabilities for variable gain relaying and fixed gain relaying network are derived. Based on these results, diversity order is presented for variable gain relaying network. Simulation results validate the derived theoretical results, and the diversity order of variable gain relaying network with available relays is in users’ scenario (where is the number of source transmitter antennas).
This paper proposes a Tomlinson-Harashima precoding (THP) transceiver for multiple-input multiple-output (MIMO) system, where the spatial correlation information at the transmitter is included in the channel state information (CSI) model. It derives the total mean square error (MSE) and its lower bound as a function of precoding matrix. Then, a precoding matrix and the closed-form expression of minimum MSE lower bound are obtained by use of optimization and matrix theory. By right-multiplying a proper unitary matrix to the above precoding matrix, the paper develops the optimal precoding matrix, thus the optimal transceiver matrices are achieved. Simulation results show that the total MSE performance of the proposed method outperforms the existing linear method and the naive THP method.
This paper studies the problem of effective resource allocation for multi-radio access technologies (Multi-RAT) nodes in heterogeneous cognitive wireless networks (HCWNs). End-to-end utility, which is defined as the delay of end-to-end communication, is taken into account in this paper. In the scenario of HCWNs, it is assumed that the cognitive radio nodes have the ability of Multi-RAT and can communicate with each other through different paths simultaneously by splitting the arrival packets. In this paper, the problem is formulated as the optimization of split ratio and power allocation of the source cognitive radio node to minimize the delay of end-to-end communication, and a low complexity step-by-step iterative algorithm is proposed. Numerical results show good performance of the proposed algorithm over two other conventional algorithms.
The performance of massive multiple-input multiple-output (MIMO) system is limited by pilot contamination. To reduce the pilot contamination, uplink and downlink precoding algorithms are put forward based on interference alignment criterion. In the uplink receiving processing, the target function aligns the pilot contamination and the interference signals to the same null space and acquires the maximal space degree of the desired signals. The uplink receiving precoding matrix is solved on maximal signal to interference plus noise ratio (SINR) criterion considering the impact of the pilot contamination on channel estimations. The uplink receiving precoding matrix is used as the downlink transmitting precoding matrix. Exploiting the channel reciprocity, it is proved that, if the uplink receiving precoding matrix achieves maximal SINR, the identical precoding matrix can be used in the downlink transmission and acquires maximal signal to leakage plus noise ratio (SLNR). Simulations show that the spectrum efficiency of the proposed algorithm can reach about 1.5 times higher than that of popular matched filtering (MF) precoding algorithm, and about 1.1 times higher than multi-cell minimum mean square error (MMSE) precoding algorithm. The performance of the proposed algorithm can be improved approximately linearly with the increasing of the number of antennas.
In the downlink multi-user multi-input multi-output (MU-MIMO) system, not every user (user equipment (UE)) can accurately calculate the signal to interference and noise ratio (SINR) without prior knowledge of the other users’ precoding vectors. To solve this problem, this article proposes a channel inversion precoding scheme using the lower bound of the SINR and the zero-forcing (ZF) algorithm. However, the mismatch between the lower-bound SINR and the actual SINR causes inaccurate adaptive modulation and coding (AMC) and, as a result, degrades performance. Simulation results show that channel inversion precoding provides lower throughput than single user multi-input multi-output (SU-MIMO) at high signal-to-noise ratio (SNR) (>14 dB) due to the SINR mismatch, although the sum-rate of channel inversion precoding is higher than that of SU-MIMO over the full SNR regime.
This article investigates transmitter design in Rayleigh fading multiple input multiple output (MIMO) channels with spatial correlation when there are channel uncertainties caused by a combined effect of channel estimation error and limited feedback. To overcome the high computational complexity of the optimal transmit power allocation, a simple and suboptimal allocation is proposed by exploiting the transmission constraint and differentiating a bound based on Jensen inequality on the channel capacity. The simulation results show that the mutual information corresponding to the proposed power allocation closely approaches the channel capacity corresponding to the optimal one and meanwhile the computational complexity is greatly reduced.
A complementary metal oxide semiconductor (CMOS) voltage-controlled ring oscillator for ultra high frequency (UHF) radio frequency identification (RFID) readers has been realized and characterized. Fabricated in a Chartered 0.35 μm CMOS process, the total chip size is 0.47×0.67 mm2, while the core area excluding the pads is only 0.15×0.2 mm2. At a supply voltage of 3.3 V, the measured power consumption is 66 mW including the output buffer for a 50 Ω testing load. The proposed voltage-controlled ring oscillator exhibits a low phase noise of −116 dBc/Hz at 10 MHz offset from the center frequency of 922.5 MHz and a low tuning gain through the use of coarse/fine frequency control.
In this paper, the dynamic control approaches for spectrum sensing are proposed, based on the theory that prediction is synonymous with data compression in computational learning. Firstly, a spectrum sensing sequence prediction scheme is proposed to reduce the spectrum sensing time and improve the throughput of secondary users. We use Ziv-Lempel data compression algorithm to design the prediction scheme, where spectrum band usage history is utilized. In addition, an iterative algorithm to find out the optimal number of spectrum bands allowed to sense is proposed, with the aim of maximizing the expected net reward of each secondary user in each time slot. Finally, extensive simulation results are shown to demonstrate the effectiveness of the proposed dynamic control approaches of spectrum sensing.
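The compression-based prediction idea above can be illustrated with an LZ78-style symbol predictor in Python; the per-band idle/busy alphabet and the phrase bookkeeping below are illustrative assumptions, not the authors' full control scheme.

```python
from collections import defaultdict

class LZPredictor:
    """LZ78-style predictor for a symbol stream (e.g. per-slot band occupancy).
    Phrases are collected by incremental parsing; the next symbol is predicted
    from counts of symbols that have followed the current phrase."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # phrase -> {next: count}
        self.phrase = ()                                      # currently growing phrase

    def update(self, symbol):
        self.counts[self.phrase][symbol] += 1
        candidate = self.phrase + (symbol,)
        # keep extending the phrase while it has been seen before, else start anew
        if candidate in self.counts:
            self.phrase = candidate
        else:
            self.counts[candidate]        # register the new phrase
            self.phrase = ()

    def predict(self):
        followers = self.counts.get(self.phrase, {})
        return max(followers, key=followers.get) if followers else None

# Usage: feed the observed idle(0)/busy(1) history of a band, query the next slot.
p = LZPredictor()
for s in [0, 1, 1, 0, 1, 1, 0, 1]:
    p.update(s)
print(p.predict())   # predicts 1 for this toy history
```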
This work investigates a novel semi-blind channel estimation method for multiple-input multiple-output (MIMO) space-time block coding (STBC) systems. A channel estimation algorithm based on the whitening-rotation (WR) decomposition, which provides combined quality and spatial scalability, is utilized. Using a space-time code-constrained input design, our approach exploits the orthogonality of the signal and noise subspaces in conjunction with the orthogonal Procrustes (OP) technique to obtain an accurate estimate of the unitary rotation matrix and, consequently, of the channel parameters. Unitary rotation matrices are parameterized by a much smaller number of parameters, and significant estimation gains can then be achieved by estimating such orthogonal matrices. Furthermore, the proposed semi-blind MIMO channel estimation approach is designed to reduce the complexity of system design when the number of receive antennas is no less than the number of transmit antennas. Computer simulations are conducted to corroborate the effectiveness of the proposed channel estimation, and they demonstrate improved performance compared to existing training-based estimation.
A scenario where one ‘dumb’ radio and multiple cognitive radios communicate simultaneously with a common receiver is considered. In this paper, we derive an achievable rate region of the multiple-user cognitive multiple-access channel (MUCMAC) under both the additive white Gaussian noise (AWGN) channel and the Rayleigh fading channel, by using a combination of multiple-user dirty paper coding (DPC) and superposition coding. Through cognition, it is assumed that the secondary users (SUs) are able to obtain the message of the primary user (PU) non-causally beforehand. Using this side information, the SUs can perform multiple-user DPC to avoid the interference from the PU. Besides, the SUs can also allocate part of their transmit power to aid the PU, using superposition coding. Therefore, the capacity region of the traditional multiple-access channel (MAC) can be enlarged. Moreover, some asymptotic results are shown as the number of SUs increases. In the AWGN case, it is illustrated that the maximum achievable rate of the PU grows logarithmically with the number of SUs, whereas in the Rayleigh case, we show that the cognitive gain increases with decreasing channel signal to noise ratio (SNR).
This paper discusses the optimal mode allocation for heterogeneous networks, in which the network can schedule users working in device-to-device (D2D) mode or cellular mode. D2D users are allowed to reuse the uplink resources of the cellular system, and the problem is formulated as a sum-capacity optimization with outage constraints for both cellular and D2D links. The optimal user proportion is proved to fall into three cases according to the total user density: when the total user density is small, the optimal allocation tends to have all users use one mode; when the total user density is large, the optimal allocation has all users choose D2D mode; and when the total user density lies in between, there is a unique optimal transmission mode proportion for the hybrid network that maximizes its sum-capacity. The simulation results demonstrate the validity of the conclusions in the analysis part.
It is known that social networks are an excellent source for gathering the emotions of people. Thousands of micro-blogs are posted every second, and every micro-blog may contain a variety of the user’s emotions. The users’ collective emotional behaviors have great impact on today’s societies, so it is useful to find groups for society management based on users’ emotional behavior. This article focuses on analyzing the multivariate emotional behavior of users in a social network, and the goal is to cluster the users from a completely new perspective: emotions. The following tasks are completed. Firstly, the multivariate emotion of Chinese micro-blogs is analyzed with vectors, and multivariate time series describing the users’ emotional behavior are constructed. Secondly, considering principal component analysis (PCA) similarity and distance similarity, the similarity of the multivariate emotion time series is measured. The contribution can be summarized as follows: groups of users with different emotions in the social network are discovered, and the emotional fluctuation and intensity of users are considered as well. Clustering experiments effectively illustrate the emotional behavior characteristics of the users in different groups.
This article mainly investigates combining schemes for hybrid automatic retransmission request (HARQ) protocols in multiple-input multiple-output (MIMO) wireless communication systems. A novel scheme that joins MIMO detection and HARQ combining, called mid-combining, is presented in this article. Based on the position of HARQ combining, we classify the HARQ combining schemes into three types, named pre-combining, mid-combining, and post-combining. The simulation results show that mid-combining can increase the system throughput for all SNRs.
There are correlations between the data of adjacent sensor nodes in wireless sensor networks (WSNs). Distributed source coding (DSC) improves the energy efficiency in WSNs by compressing the sensor data according to its correlation with other data. When utilizing DSC, the network architecture, which decides which nodes transmit the side information and which nodes compress according to the correlations, influences the compression efficiency significantly. In contrast to former schemes that have no adaptation, a dynamic clustering scheme is presented in this article, with which the network is partitioned into clusters adaptive to the topology and the degree of correlation. The simulation indicates that the proposed scheme has higher efficiency than static clustering schemes.
This article puts forward a new solution for bounding the outage probability and transmission capacity of ad-hoc networks. Because the proofs of the upper and lower bounds are too complex, a much easier way is introduced to obtain the same results, and by using a Taylor series, an asymptotic bound is derived. By comparing with the simulation results, we find that the asymptotic bound is sufficiently accurate when the network parameters are selected properly, and is tighter than the upper and lower bounds.
A hybrid system of cellular mode and device-to-device (D2D) mode is considered in this paper, where the cellular resource is reused by the D2D transmission. With the objective of capacity maximization, the power optimization of D2D sub-system is considered, taking into account quality of service (QoS) requirement. The power optimization problem is divided into two stages: The first stage is the admission control scheme design based on the QoS requirement of D2D users, and the second is power allocation to maximize aggregate throughput of admissible D2D users. For the D2D admission control problem, a heuristic sorting-based algorithm is proposed to index the admissible D2D links, where gain to Interference ratio (GIR) sorting criterion is used. Applying an approximate form of Shannon capacity, the power allocation problem can be solved by convex optimization and geometric programming tools efficiently. Based on the theoretical analysis, a practical algorithm is proposed. The precision can reach a trade-off between complexity and performance. Numerical simulation results confirm that combining with GIR sorting method, the proposed scheme can significantly improve the D2D system's capacity and fairness.
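The GIR sorting step of the admission control described above can be sketched in Python as follows, assuming each candidate link is summarized by a hypothetical direct-link gain `g_d2d` and an interference gain `g_interf`; the QoS feasibility check and the subsequent power allocation stage are omitted.

```python
def gir_admission(d2d_links, max_admitted):
    """Sort candidate D2D links by gain-to-interference ratio (direct-link gain
    over interference gain towards the reused cellular link) and admit the best.
    A full scheme would additionally verify each link's QoS (SINR) feasibility."""
    ranked = sorted(d2d_links, key=lambda l: l["g_d2d"] / l["g_interf"], reverse=True)
    return ranked[:max_admitted]

# Hypothetical candidate links and a two-link admission budget.
links = [
    {"id": 1, "g_d2d": 0.9, "g_interf": 0.05},
    {"id": 2, "g_d2d": 0.4, "g_interf": 0.20},
    {"id": 3, "g_d2d": 0.7, "g_interf": 0.02},
]
print([l["id"] for l in gir_admission(links, max_admitted=2)])  # [3, 1]
```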
In this paper, the design of linear leakage-based precoders is considered for multiple-input multiple-output (MIMO) downlinks. Our proposed scheme minimizes the total transmit power under each user’s signal-to-leakage-plus-noise ratio (SLNR) constraint. When the base station knows perfect channel state information (CSI), a suitable reformulation of the design problem allows the successful application of semidefinite relaxation (SDR) techniques. When the base station knows imperfect CSI with limited estimation errors, the design problem can be solved using a semidefinite program (SDP). At the same time, it can dynamically allocate each user’s SLNR threshold according to each user’s channel state, so it is more feasible than other similar SINR-based precoding methods. Simulation results show that with large SLNR thresholds, the proposed design has better bit error rate (BER) performance than the maximal-SLNR precoding method at high signal-to-noise ratio (SNR). Moreover, when the base station knows imperfect channel state information, the proposed precoder is robust to channel estimation errors, and has better BER performance than other similar SINR-based precoding methods.
This article puts forward a scalar-weighting information fusion (IF) smoother with a modified biased Kalman filter (BKF) and maximum likelihood estimation (MLE) to mitigate the ranging errors in ultra wide band (UWB) systems. The information fusion algorithm uses both time of arrival (TOA) and received signal strength (RSS) measurement data to improve the ranging accuracy. First, the ranging protocol of IEEE 802.15.4a is treated as a multi-sensor system with multi-scale sampling. Then the scalar-based IF smoother accurately estimates the range measurement in the line-of-sight (LOS) and non-line-of-sight (NLOS) conditions of the UWB sensor network, with particular attention paid to the effectiveness of the IF in mitigating errors during LOS/NLOS transitions. Simulation results show that the proposed hybrid TOA-RSS fusion approach achieves a performance improvement compared with the usual TOA-only and other IF methods, and the estimated ranging metrics can be used to achieve higher accuracy in location estimation and target tracking.
Multi-objective parameter adjustment plays an important role in improving the performance of cognitive radio (CR) systems. Current research focuses on the genetic algorithm (GA) to achieve parameter optimization in CR, while the general GA often falls into premature convergence. Therefore, this paper applies a linear scale transformation to the fitness of individual chromosomes, which can reduce the impact of extraordinary individuals existing in the early evolution iterations and ensure competition between individuals in the later evolution iterations. This paper also introduces an adaptive crossover and mutation probability algorithm into the parameter adjustment, which can ensure the diversity and convergence of the population. Two applications of parameter adjustment in CR are considered: one prefers the bit error rate and the other prefers the bandwidth. Simulation results show that the improved parameter adjustment algorithm can converge to the global optimal solution quickly without falling into premature convergence.
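For concreteness, one common form of linear fitness scaling and adaptive crossover/mutation rates is sketched below in Python; the scaling constant and the exact adaptation rule are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

def linear_scale(fitness, c=1.5):
    """Linear fitness scaling f' = a*f + b, chosen so the mean fitness is preserved
    and the best individual receives c times the mean; this damps the dominance of
    extraordinary individuals early on while keeping selection pressure later."""
    f = np.asarray(fitness, dtype=float)
    f_avg, f_max = f.mean(), f.max()
    if np.isclose(f_max, f_avg):
        return np.full_like(f, f_avg)
    a = (c - 1.0) * f_avg / (f_max - f_avg)
    b = (1.0 - a) * f_avg
    return np.clip(a * f + b, 0.0, None)   # avoid negative selection probabilities

def adaptive_rates(fit_i, f_avg, f_max, pc=(0.6, 0.9), pm=(0.001, 0.05)):
    """Adaptive crossover/mutation probabilities: above-average individuals get
    smaller rates (better preserved), below-average ones get the largest rates."""
    if fit_i >= f_avg and f_max > f_avg:
        frac = (f_max - fit_i) / (f_max - f_avg)
        return pc[0] + (pc[1] - pc[0]) * frac, pm[0] + (pm[1] - pm[0]) * frac
    return pc[1], pm[1]
```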
This paper focuses on the local stability of a classical congestion control model in the Internet, namely AVQ (Adaptive Virtual Queue) algorithm with feedback delay. Firstly, necessary and sufficient stability conditions in terms of key tuning parameters are given, which can provide exact guidelines for setting system parameters. Furthermore, by computing the rightmost characteristic root, the optimal parameter configuration for AVQ is derived, which can guarantee superior stability performance. Finally, some simulation examples are given to illustrate the correctness of the theoretical analysis.
Content-centric networking (CCN) is one of the most promising future network architectures. Current research on CCN routing schemes mainly focuses on finding the best single routing path, which may lead to low usage of the in-network caches. In order to overcome this problem, a reverse trace routing (RTR) scheme is proposed in this paper, in which the Interest packet is sent towards the edge cache along the reverse trace of the corresponding earlier Data packet. By doing this, the Interest packets have a better chance of being routed to promising in-network caches before reaching the source server, which increases the in-network hit rate while decreasing the server stress. The simulation results show clearly that the RTR scheme decreases the source server load, while reducing the mean hop count of the entire data retrieval process under certain circumstances.
This article introduces the classic locating method based on received signal strength in cognitive radio and puts forward a cognitive radio-received signal strength (CR-RSS) localization algorithm, which solves the problem of secondary users locating the primary user and succeeds in estimating the primary user’s location and transmission power. Through the establishment of a cognitive radio network, the effects of the number of secondary users, the sampling, and the environmental factors on the results of the CR-RSS approach are evaluated. The results show that this approach can effectively locate the primary user and that localization technology in cognitive radio can assist network optimization.
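One standard way to realize such RSS-based estimation is a nonlinear least-squares fit of a log-distance path loss model, sketched below in Python; the path loss exponent, noise level, and node placement are hypothetical, and the CR-RSS algorithm itself may differ in its details.

```python
import numpy as np
from scipy.optimize import least_squares

def rss_model(params, positions, path_loss_exp=3.0):
    """Log-distance model: RSS_i = Pt - 10*n*log10(d_i), with d_i in metres."""
    x, y, pt = params
    d = np.hypot(positions[:, 0] - x, positions[:, 1] - y)
    return pt - 10.0 * path_loss_exp * np.log10(np.maximum(d, 1.0))

def locate_primary_user(positions, rss, init=(0.0, 0.0, 20.0)):
    """Jointly estimate the primary user's position and transmit power from the
    RSS observed by the secondary users (nonlinear least squares)."""
    residual = lambda p: rss_model(p, positions) - rss
    return least_squares(residual, init).x

# Hypothetical example: four secondary users around a transmitter at (40, 60).
sus = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_params = np.array([40.0, 60.0, 23.0])
obs = rss_model(true_params, sus) + np.random.default_rng(1).normal(0, 0.5, 4)
print(locate_primary_user(sus, obs))   # approximately [40, 60, 23]
```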
In vehicular ad hoc network (VANET), misbehaviors of internal nodes, such as discarding packets, may lead to a rapid decline in packet delivery ratio. To solve this problem, an improvement of greedy perimeter stateless routing (GPSR) protocol is presented. In the new protocol, trustworthiness is considered in the route selection process. The trustworthiness is measured by an objective trust model based on the subjective trust model DyTrust. And the reputation value which reflects the trustworthiness of each node is calculated and broadcasted by the intersection nodes. Specially, besides resisting the packet-discarding behavior of selfish nodes, this protocol also includes a location detection process to resist the location-faking behavior of malicious nodes. As a result, the selfish nodes and the malicious nodes can be excluded from the network. In addition, compared with improved GPSR protocol, the presented one is able to resist one kind of reputation-faking attack and has better performance in simulation.
This paper proposes a novel wireless location algorithm based on distance geometry (DG) constraint filtering for the time of arrival (TOA) of the signal (named DG-TOA). Filtering and processing of the observed data, together with the mathematical formulation of the DG-TOA algorithm applied to location, play crucial roles. Simulation results show that the proposed DG-TOA algorithm can provide more valid observation data and is more precise than the least square estimate (LSE) algorithm in dense, multipath, indoor environments in the presence of ranging estimation errors.
As the traditional character-oriented frame synchronization methods are no longer applicable to byte-misaligned streams, and the efficiency of the bit-oriented method is hardly acceptable, a character-oriented bit-shift stream frame synchronization (COBS-FS) method is presented. In order to measure the performance of the proposed method, a bit-oriented frame synchronization method based on the Knuth-Morris-Pratt algorithm (KMP-FS) is used for comparison. It is proven in theory that COBS-FS has a much lower cost in frame header searching. Experiments show that the COBS-FS method performs better than the KMP-FS algorithm in both computational effort and execution time.
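For reference, the KMP-FS baseline amounts to scanning the bit stream once with the Knuth-Morris-Pratt automaton; a minimal Python sketch with a hypothetical 8-bit header pattern is given below.

```python
def kmp_failure(pattern):
    """Failure function: longest proper prefix that is also a suffix."""
    fail, k = [0] * len(pattern), 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_find_headers(bitstream, header):
    """Return the start index of every frame header occurrence, scanning the
    bit stream once with the KMP automaton."""
    fail, hits, k = kmp_failure(header), [], 0
    for i, b in enumerate(bitstream):
        while k and b != header[k]:
            k = fail[k - 1]
        if b == header[k]:
            k += 1
        if k == len(header):
            hits.append(i - len(header) + 1)
            k = fail[k - 1]
    return hits

# Usage: locate a hypothetical '01111110' header inside a misaligned bit stream.
bits = [0,1,0,1,1,1,1,1,1,0,0,1,0,1,1,1,1,1,1,0]
print(kmp_find_headers(bits, [0,1,1,1,1,1,1,0]))  # [2, 12]
```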
This paper proposes rate-maximized (MR) joint subcarrier pairing (SP) and power allocation (PA) (MR-SP&PA), a novel scheme for maximizing the weighted sum rate of the orthogonal-frequency-division multiplexing (OFDM) relaying system with a decode-and-forward (DF) relay. MR-SP&PA is based on the joint optimization of both SP and power allocation with total power constraint, and formulated as a mixed integer programming problem in the paper. The programming problem is then transformed to a convex optimization problem by using continuous relaxation, and solved in the Lagrangian dual domain. Simulation results show that MR-SP&PA can maximize the weighted sum rate under total power constraint and outperform equal power allocation (EPA) and proportion power allocation (PCG).
A parabolic equation method (PEM)-based discrete algorithm is proposed and is used to obtain the field distribution in the evaporation duct space. This method not only improves the computing speed, but also provides the flexibility to adjust the simulation accuracy. Numerical simulation of the wave propagation in the oceanic waveguide structure is carried out, and the initial field distribution and progressive steps are determined. The loss model in the waveguide is solved through the numerical solution. By comparing the characteristics of radio wave propagation in the duct with those in the normal atmospheric structure, we analyze over-the-horizon radio transmission and detection in the oceanic waveguide.
The particle filtering (PF) algorithm has a powerful potential for coping with difficult non-linear and non-Gaussian problems. Aiming at the non-linear, non-Gaussian and time-varying characteristics of the power line channel, a time-varying channel estimation scheme combining the PF algorithm with a decision feedback method is proposed. In the proposed scheme, the indoor power line channel is first measured using the pseudo-noise (PN) correlation method, and a first-order dynamic autoregressive (AR) model is set up to describe the measured channel; then, the channel states are estimated dynamically from the received signals by exploiting the proposed scheme. Meanwhile, due to the complex noise distribution of the power line channel, the performance of channel estimation based on the proposed scheme under the Middleton class A impulsive noise environment is analyzed. Comparisons are made with channel estimation schemes based on least squares (LS), Kalman filtering (KF) and the proposed algorithm, respectively. Simulation indicates that the performance of the PF algorithm in dealing with the difficult non-linear and non-Gaussian problems of power line channel estimation is superior to that of LS and KF, so the proposed scheme achieves higher estimation accuracy. Therefore, it is confirmed that the PF algorithm has its own unique advantage for power line channel estimation.
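A bootstrap particle filter for the first-order AR channel model can be sketched as follows in Python; Gaussian observation noise and multinomial resampling are simplifying assumptions (the paper targets Middleton class A impulsive noise and combines the filter with decision feedback).

```python
import numpy as np

def pf_channel_estimate(y, x, a, q, noise_std, n_particles=500, seed=0):
    """Bootstrap particle filter for a scalar channel gain following a first-order
    AR model h[n] = a*h[n-1] + w[n], observed through y[n] = h[n]*x[n] + v[n]."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    est = np.zeros(len(y))
    for n, (yn, xn) in enumerate(zip(y, x)):
        # propagate particles through the AR(1) state model
        particles = a * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # re-weight by the (Gaussian) observation likelihood
        weights *= np.exp(-0.5 * ((yn - particles * xn) / noise_std) ** 2) + 1e-300
        weights /= weights.sum()
        est[n] = np.sum(weights * particles)
        # multinomial resampling when the effective sample size drops too low
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return est
```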
It is well known that mobile stations will comprise a wide range of radio access technologies (RAT), providing users with flexible and efficient access to multi-media services and high data rate communications. Although much work has been done on coexistence analysis between the base stations (BSs) of different systems, most of it has not addressed the interference within a multi-mode terminal. Hence, to fill the gap, the authors present a coexistence study of the digital cellular system at 1 800 MHz (DCS1800) and a time division duplex long term evolution (TDD-LTE) network in a multi-mode terminal with multi-service provisioning. A new system model for coexistence is introduced, and how deterministic analysis can be done within the terminal is explained. The interference evaluation model is given based on the relation between reference sensitivity and signal-to-noise ratio (SNR), which is also deduced. The system simulation methodology is provided and the assumptions used in the simulation are given. Simulation results are shown for different system parameters. Numerical results indicate that the multi-mode terminal is mainly affected by local interference. The minimum antenna isolation required for healthy system operation is provided.
A resource allocation (RA) scheme is proposed for device-to-device (D2D) communication underlaying cellular networks from an end-to-end energy-efficient perspective, in which the end-to-end energy consumption is taken into account. Furthermore, to match practical situations and maximize the energy efficiency (EE), the resource units (RUs) are used in a complete-shared pattern. The energy-efficient RA problem is then formulated as a mixed-integer, non-convex optimization problem that is extremely difficult to solve. To obtain a desirable solution with a reasonable computation cost, the problem is dealt with in two steps. In the first step, the RU allocation policy is obtained via a greedy search method. In the second step, after obtaining the RU allocation, the power allocation strategy is developed through quantum-behaved particle swarm optimization (QPSO). Finally, simulations are presented to validate the effectiveness of the proposed RA scheme.
Mobile data traffic is going through an explosive growth recently as mobile smart devices become more and more ubiquitous, causing huge pressure on cellular network. Taking advantage of its low cost and easy-to-deploy feature, wireless local-area networks (WLAN) becomes increasingly popular to offload data streams from cellular network, followed by higher and higher density of its deployment. However, the high density of WLAN will cause more interference, which results in degradation of its performance. Therefore, in order to enhance the performance of the network, we aim to minimize the interference caused by high density of WLAN. In this paper, we propose a novel power control scheme to achieve the above aim. We use the quality of experience (QoE) evaluation to coordinate the power of each access point (AP) and finally realize the optimization of the entire network. According to the simulation results, our scheme improves the performance of the network significantly in many aspects, including throughput and QoE.
In this article, a novel transmit-reference (TR) signaling scheme is proposed for ultra-wideband (UWB) systems, where, by invoking m-sequence codes, the reference and data pulses can be transmitted side by side to increase the data rate. This structure enables demodulation with a simple and practical autocorrelation receiver despite the existence of severe inter-pulse interference (IPI). To evaluate the detection performance of the new design, a closed-form expression for the bit error probability (BEP) is theoretically derived, and simulation results confirm the analytical results. In addition, the proposed scheme can be extended to M-ary systems to further improve the data rate and power efficiency.
This paper investigates the relay selection and power allocation problem in multi-user cooperative networks, where intermediate relay nodes help the source forward information to the destination using the decode-and-forward (DF) relaying protocol. Specifically, we propose a novel multi-relay selection strategy taking both instantaneous channel state information (I-CSI) and residual energy into consideration, by which ‘emergence’ diversity gain can be achieved and the imbalance of resource utilization can be overcome. Besides, using Lagrangian dual-primal decomposition and a subgradient projection approach, an optimal power allocation algorithm at the source and cooperative relay nodes is presented under the constraints of each user’s individual quality of service (QoS) requirements and the system’s total transmit power. Theoretical analysis and simulation results demonstrate that the proposed scheme can significantly improve energy efficiency, while guaranteeing a good balance between achievable data rate and average network lifetime with relatively low implementation complexity.
Heterogeneous network for long term evolution advanced (LTE-A) creates severe interference. It is an urgent task to overcome the interference in macro cellular with low-power base stations (BSs), such as relay, pico, and femto called subnet nodes. In this paper, the cognitive interference model in interference zone (IZ) of the practical heterogeneous scenario is proposed. Based on investigation of interaction between the macro BS and subnet nodes in this model, the strategy framework of the cognitive critical ratio and power reward factor is set up for interference management aiming to get the maximum net saving power. The study of interference management is transformed into a multiple objective non-linear programming (MONLP) of the maximum saving power for the macro BS and subnet nodes. To facilitate the best compromise solution for both, the MONLP is changed into single objective programming and genetic algorithm (GA) is employed to obtain the global optimum solution. In addition, the practical implementation using the proposed algorithm in heterogeneous network for LTE-A is designed. Finally, numerical evaluation is used to test the applicability of the proposed algorithm, and system level simulation results demonstrate the effectiveness of the proposed interference management scheme.
Coordinated multiple point transmission/reception (CoMP) has been investigated recently as a promising technology to increase the cell-edge user performance of LTE-Advanced, and channel estimation is a crucial technology for CoMP systems. In this paper, we consider a reduced-complexity Minimum Mean Square Error (MMSE) channel estimator for CoMP systems. The estimator uses Space Alternating Generalized-EM (SAGE) algorithm to avoid the inverse operation of the joint MMSE estimator. In the proposed scheme, the Base Stations (BSs) in the CoMP system estimate the channels of all the coordinated users serially and iteratively. We derive the SAGE-based estimators and analyze complexity. Simulation results verify that the performance of the proposed algorithm is close to the joint MMSE estimation algorithm while reducing the complexity greatly.
Intruder detection and border surveillance are amongst the most promising applications of wireless sensor networks. Barrier coverage formulates these problems as constructing barriers in a long-thin region to detect intruders that cross the region. Existing studies on this topic are not only based on simplistic binary sensing model but also neglect the collaboration employed in many systems. In this paper, we propose a solution which exploits the collaboration of sensors to improve the performance of barrier coverage under probabilistic sensing model. First, the network width requirement, the sensor density and the number of barriers are derived under data fusion model when sensors are randomly distributed. Then, we present an efficient algorithm to construct barriers with a small number of sensors. The theoretical comparison shows that our solution can greatly improve barrier coverage via collaboration of sensors. We also conduct extensive simulations to demonstrate the effectiveness of our solution.
This paper focuses on the energy efficiency of cognitive relay (CR) networks with cooperative sensing; joint optimization of the sensing time and the signal-to-noise ratio (SNR) is studied to maximize the energy efficiency of the CR network. Theoretical analysis shows that there exist an optimal sensing time and an optimal SNR that maximize the energy efficiency under a detection probability constraint. Simulation results illustrate that the optimal fusion rule performs better than the OR rule and the AND rule in terms of energy efficiency. By properly designing the fusion rule threshold as well as the number of cooperative sensing users, the energy efficiency of CR networks can be further improved.
Power spectrum estimation is to use the limited length of data to estimate the power spectrum of the signal. In this paper, we study the recently proposed tunable high-resolution estimator (THREE), which is based on the best approximation to a given spectrum, with respect to different notions of distance between power spectral densities. We propose and demonstrate a different distance for the optimization part to estimate the multivariate spectrum. Its effectiveness is tested through Matlab simulation. Simulation shows that our approach constitutes a valid estimation procedure. And we also demonstrate the superiority of the method, which is more reliable and effective compared with the standard multivariate identification techniques.
With the wide application of radio frequency identification (RFID) technology, its security and privacy issues are becoming more and more prominent. An RFID mutual-authentication protocol with synchronously updated keys based on a Hash function is proposed to tackle those problems in this paper. An update mechanism for the dynamic tag keys is introduced and a self-synchronization scheme is designed in the protocol, which achieves a second verification for the tags. The security of the protocol is verified and analyzed by the Burrows-Abadi-Needham (BAN) logic and attack models, and it is compared with existing schemes in terms of security properties as well as storage and computational performance. The results show that the proposed protocol can considerably reduce the amount of computation between tags and the back-end database, and enhance the search efficiency of the whole system without additional cost on the tags. It can effectively satisfy the security requirements of RFID systems and also improve the authentication efficiency. The proposed protocol is more suitable for low-cost RFID systems.
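The general flow of hash-based mutual authentication with synchronized key update, including keeping the previous key for desynchronization recovery, can be sketched in Python as below; the message formats and the update function are illustrative assumptions, not the protocol's exact design.

```python
import hashlib, os

def H(*parts):
    """SHA-256 stand-in for the protocol's Hash function."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

class Tag:
    def __init__(self, key):
        self.key = key
    def respond(self, challenge):
        return H(self.key, challenge)
    def confirm_and_update(self, challenge, server_proof):
        if server_proof != H(self.key, challenge, b"server"):
            return False
        self.key = H(self.key, b"update")      # synchronous key update on the tag
        return True

class Backend:
    def __init__(self, key):
        self.key_cur = key                     # current shared key
        self.key_old = key                     # previous key, kept for desync recovery
    def authenticate(self, challenge, tag_response):
        for k in (self.key_cur, self.key_old):
            if tag_response == H(k, challenge):
                proof = H(k, challenge, b"server")
                self.key_old, self.key_cur = k, H(k, b"update")
                return proof
        return None

# One round: the reader relays a random challenge, both sides verify each other,
# then both roll the shared key forward in step.
key = os.urandom(16)
tag, db = Tag(key), Backend(key)
c = os.urandom(16)
proof = db.authenticate(c, tag.respond(c))
print(proof is not None and tag.confirm_and_update(c, proof))   # True
```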
Establishing a mutual trust relationship between users and clouds is the premise of accessing and controlling the cloud environment, and how to identify the credibility of user identity and behavior becomes the core problem. This paper proposes a user abnormal behavior analysis method based on neural network clustering to resolve the problems of over-fitting and flooding of feature information that exist in traditional clustering analysis and similarity calculation. Firstly, singular value decomposition (SVD) is applied to reduce dimensionality and de-noise the massive data, where Map-Reduce parallel processing is used to accelerate the computation and a neural network model is used for softening points. Secondly, information entropy is added to the hidden layer of the neural network model to calculate the weight of each attribute. Finally, the weight factor is used to calculate the similarity to make the clustering more accurate. For the problem of analyzing mobile cloud user behaviors, the experimental results show that the scheme has higher detection speed and clustering accuracy than traditional schemes. The proposed method is more suitable for the mobile cloud environment.
In orthogonal frequency division multiplexing (OFDM) systems, time and frequency synchronization are two critical elements for guaranteeing the orthogonality of OFDM subcarriers. Conventionally, with the employment of pseudonoise (PN) sequences in preamble design, the preamble information is not fully utilized in both symbol timing offset acquisition and carrier frequency offset estimation. In this article, a new synchronization algorithm is proposed for jointly optimizing the time and frequency synchronization. This algorithm uses polynomial sequences as synchronization preamble instead of PN sequences. Theoretical analysis and simulation results indicate that the proposed algorithm is much more accurate and reliable than other existing methods.
This paper proposes a sectorized distributed antenna system for the orthogonal frequency division multiplexing system in order to both reduce the co-channel interference introduced by frequency reuse and maintain high spectral efficiency. The proposed system is composed of many 120-degree sectorized antennas that are uniformly distributed throughout the whole coverage area. Three adjacent sectors from different antennas share the same frequency band, which can be reused on the adjacent antennas. The new structure provides downlink interference divergence that greatly reduces co-channel interference and improves system capacity compared with the traditional cell structure with a frequency reuse factor of 3. Multiple antennas broadcasting the same signal to a user on the same frequency, known as simulcasting, has been widely studied in distributed antenna systems. However, this paper demonstrates that simulcasting is not suitable for the closely interfering structure; multiple-input multiple-output distributed antenna systems might be a better choice.
The integration of the cellular network (CN) and the wireless local area network (WLAN) is the trend of next-generation mobile communication systems, and nodes will hand off between the two kinds of networks. The received signal strength (RSS) is the dominant factor considered when handoff occurs. In order to improve handoff efficiency, this study proposes an adaptive decision algorithm for vertical handoff on the basis of the fast Fourier transform (FFT). The algorithm makes the handoff decision after analyzing, through the FFT, the signal strength fluctuation caused by slow fading. Simulations show that, compared with traditional algorithms, the algorithm reduces the number of handoffs by 35%, shrinks the areas influenced by slow fading, and enables nodes to make full use of the WLAN in communication.
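As a generic illustration of classifying RSS fluctuation with an FFT before a handoff decision, a toy sketch is given below. The window length, sampling rate, cutoff frequency, energy-ratio threshold and RSS threshold are all illustrative assumptions; the paper's actual decision rule is not reproduced.

    import numpy as np

    def handoff_decision(rss_window, fs=10.0, cutoff_hz=1.0, ratio_threshold=0.7):
        """Toy FFT-based test: decide handoff only if the RSS fluctuation is
        dominated by low-frequency content (a genuine coverage change) rather
        than fast fading. rss_window: recent RSS samples in dBm at fs Hz."""
        x = np.asarray(rss_window, dtype=float)
        x = x - x.mean()                           # remove DC so only fluctuation remains
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        low = spectrum[freqs <= cutoff_hz].sum()
        total = spectrum.sum() + 1e-12
        slow_fading_dominant = (low / total) >= ratio_threshold
        # Hand off only when the mean RSS is weak AND the drop is a slow trend.
        return slow_fading_dominant and np.mean(rss_window) < -85.0

    # Example: a slowly decaying RSS trace with small fast fluctuations.
    t = np.arange(100) / 10.0
    trace = -80 - 1.2 * t + 0.5 * np.random.randn(100)
    print("handoff:", handoff_decision(trace))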
The scheduling algorithm based on the three-way handshaking scheme in the IEEE 802.16d-2004 standard suffers from high algorithmic complexity and low scheduling efficiency. To enhance the scheduling efficiency and improve the performance of multi-hop wireless mesh networks (WMNs), a distributed scheduling algorithm that maximizes spatial and temporal reuse with an interference-based network model is proposed. Compared with the graph-based network model, the proposed network model can achieve better throughput performance with maximal spatial reuse. Furthermore, the proposed scheduling algorithm also schedules all links fairly through a priority-based polling policy. Both the theoretical analysis and simulation results show that the proposed distributed scheduling algorithm is simple and efficient.
This article proposes a simple pilot-aided channel estimation method based on correlation in the time domain for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems. The pilot symbols on all transmit antennas are generated from different circular shifts of a single sequence. Through a single correlation, the receiver obtains the time-domain impulse responses of the fading channels from all transmit antennas to a given receive antenna, from which the frequency-domain channel estimates can be obtained. The beyond-3G time-division duplex (B3G-TDD) uplink is introduced, and the channel estimation method is applied to it. Theoretical analysis and simulation are both carried out. The mean square error (MSE) performance shows that the method provides precise estimation, complexity analysis shows that it requires very low complexity, and system simulation results show that it supports the B3G-TDD uplink very well.
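The idea of separating all transmit antennas' impulse responses with one correlation can be sketched as follows, assuming a Zadoff-Chu base sequence with near-ideal periodic autocorrelation; the sequence choice, shifts and channel lengths are illustrative assumptions rather than the paper's design.

    import numpy as np

    # Sketch of pilot separation by a single circular correlation.
    N, L, n_tx = 127, 8, 4                      # pilot length, max CIR length, Tx antennas
    n = np.arange(N)
    base = np.exp(-1j * np.pi * n * (n + 1) / N)       # Zadoff-Chu, near-ideal autocorrelation
    shifts = [i * (N // n_tx) for i in range(n_tx)]    # distinct circular shift per antenna
    pilots = [np.roll(base, s) for s in shifts]

    h_true = [(np.random.randn(L) + 1j * np.random.randn(L)) / np.sqrt(2 * L)
              for _ in range(n_tx)]

    def cconv(a, b):
        # circular convolution of length N via the FFT
        return np.fft.ifft(np.fft.fft(a, N) * np.fft.fft(b, N))

    # Received signal at one Rx antenna: each pilot circularly convolved with its CIR.
    y = sum(cconv(p, h) for p, h in zip(pilots, h_true))
    y += 0.01 * (np.random.randn(N) + 1j * np.random.randn(N))

    # A single circular correlation with the base sequence reveals all CIRs at their shifts.
    corr = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(base))) / N
    h_est = [corr[(s + np.arange(L)) % N] for s in shifts]
    for i in range(n_tx):
        err = np.linalg.norm(h_est[i] - h_true[i]) / np.linalg.norm(h_true[i])
        print("Tx %d relative CIR error: %.3f" % (i, err))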
In a transmit beamforming and receive combining (TBRC) MIMO system, a codebook-based feedback strategy is usually used to provide the transmitter with the beamforming vector. The adopted codebook affects the system performance considerably; therefore, codebook design is a key technology in TBRC MIMO systems. In this article, the unitary space vector quantization (USVQ) codebook design criterion is proposed to design optimal codebooks for various spatially correlated MIMO channels, and the unitary space K-means (USK) codebook generation algorithm is provided to generate USVQ codebooks. Simulations show that the capacities of feedback-based TBRC systems using USVQ codebooks are very close to those of the ideal cases.
This article proposes a new fault location mechanism for optical networks. In this mechanism, a network alarm packet format with a time-stamp is introduced to implement fast restoration. Existing fault location schemes are usually complex and impractical for the multi-failure location problem. For multiple failures, the proposed time-stamp based mechanism locates faults more efficiently and with lower computational complexity.
Cell search is an important aspect of 3G long-term evolution (LTE). This article deals with cell search in the time-division-synchronous code-division multiple access (TD-SCDMA) LTE system. On the basis of the synchronization channel (SCH) and cell-specific reference symbols (CSRSs), the proposed cell search procedure includes five stages: frame detection and coarse timing, coarse carrier frequency offset (CFO) estimation, fine timing, fine CFO estimation, and cell identification. The key features of the proposed method are as follows: first, the CSRSs of the three neighboring cells are frequency division multiplexed (FD) to mitigate inter-cell interference. Second, the frequency-domain differential cross-correlations computed from the CSRSs are maximum-ratio combined for cell identification. Finally, large-set Kasami sequences are quadrature phase shift keying (QPSK) modulated to form cell-specific sequences (CSSs), supporting a large number of target cells. Simulations show that the FD method outperforms the code division multiplexed (CD) method.
The IEEE 802.15.4 protocol has attracted much attention in the research and industrial communities as a candidate technology for wireless body area sensor networks (WBASNs). IEEE 802.15.4 supports exclusive use of the wireless channel through guaranteed time slots (GTSs). However, on one hand, bandwidth may be underutilized because of the mismatch between the guaranteed bandwidth and the arrival rate; on the other hand, the waiting time for transmitting an emergency notification grows as the GTSs assigned to the nodes increase in WBASNs. To solve these problems, this article proposes a new scheme that reduces the transmission delay of alarm notifications in emergency situations. Simulation results validate the efficiency of the proposed scheme by comparing it with the medium access control (MAC) protocol of IEEE 802.15.4.
A threshold setting scheme is proposed based on resource management and limited feedback theory for multiuser orthogonal frequency division multiplexing (OFDM) systems. In adaptive resource allocation, factors denoting quality of service (QoS) and fairness are both incorporated into the user weight. From the perspective of feedback outage probability, the proposed algorithm sets a threshold for each user related to its weight, which provides sufficient feedback to users with greater weight. Analysis and simulation results show that, compared with a threshold that ignores weights, the proposed scheme has a much lower feedback load while delivering better QoS.
This article considers a wireless multi-hop/mesh network where a single multi-antenna source-destination pair communicates through a relay subset chosen by simple relay selection under the constraint of a fixed number of relays. Compared with random selection, simple relay selection yields certain capacity advantages when a linear zero-forcing (ZF) receiver and a linear beamformer are used at the relays. For the matched-filter (MF) beamformer and the amplify-and-forward (AF) beamformer with a fixed number of relays, the capacities are given. Furthermore, we extend the simple selection methods to the relaying scheme with an orthogonal-triangular (QR) beamformer and investigate these linear beamformer schemes over spatially correlated multi-input multi-output (MIMO) links for both the backward and forward channels of the two-hop MIMO relay network.
This article addresses the multicast resource allocation problem with min-rate requirement constraints in orthogonal frequency division multiplexing (OFDM) systems. Because of the prohibitively high complexity of the nonlinear and combinatorial optimization, the original problem is relaxed and reformulated into a standard optimization problem. By theoretical derivation according to the Karush-Kuhn-Tucker (KKT) conditions, two propositions are presented as necessary criteria for optimality. Furthermore, a two-step resource allocation scheme, including subcarrier assignment and power allocation, is proposed on the basis of the propositions for practical implementation. With the min-rate based multicast group order, subcarriers are assigned in a greedy fashion to maximize the capacity. Once the subcarrier assignment is determined, the proposed power allocation achieves the optimal performance for min-rate constrained capacity maximization with acceptable complexity. Simulation results indicate that the proposed scheme approximates the optimal resource allocation obtained by exhaustive search with a negligible capacity gap and considerably outperforms equal power distribution. Meanwhile, multicast is remarkably beneficial to resource utilization in OFDM systems.
In a contention-based satellite communication system, collisions between data packets may occur because packets are sent at random times; a proper delay before each transmission can reduce the collision rate. As a classical random multiple access protocol, slotted ALOHA (S-ALOHA) reduces the data collision rate through time slot allocation and synchronization measures. To improve the stability and throughput of the satellite network, a backoff algorithm based on S-ALOHA is effective. A new adaptive backoff algorithm for S-ALOHA using grey system theory is proposed, which calculates the backoff time adaptively according to the network condition. The network condition is estimated by each user terminal from a prediction of the channel access success ratio using the GM(1,1) grey model. The proposed algorithm is compared with other known schemes such as binary exponential backoff (BEB) and multiple increase multiple decrease (MIMD) backoff. The performance of the proposed algorithm is simulated and analyzed. It is shown that the throughput of the system based on the proposed algorithm is better than that of systems based on BEB and MIMD backoff, and the delay performance is also improved compared with BEB. The proposed algorithm is especially effective for a large number of user terminals in satellite networks.
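A minimal sketch of the GM(1,1) prediction step mentioned above is given below; the mapping from the predicted success ratio to a backoff window, and all numbers, are illustrative assumptions rather than the paper's rule.

    import numpy as np

    def gm11_predict(x0):
        """Grey GM(1,1) one-step-ahead prediction of a short positive sequence x0."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                              # accumulated generating sequence
        z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
        B = np.column_stack((-z1, np.ones(len(z1))))
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
        k = len(x0)                                     # index of the next sample
        x1_next = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_curr = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return x1_next - x1_curr                        # inverse-accumulated (restored) value

    # Illustrative use: predict the next channel-access success ratio and map it
    # to a backoff window (this mapping is an assumption, not the paper's rule).
    history = [0.82, 0.78, 0.71, 0.66, 0.60]            # recent success ratios
    p_hat = float(np.clip(gm11_predict(history), 0.05, 1.0))
    base_slots = 8
    backoff_slots = int(np.ceil(base_slots / p_hat))    # fewer successes -> longer backoff
    print("predicted success ratio %.2f -> backoff window %d slots" % (p_hat, backoff_slots))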
This article investigates resource allocation in a multi-hop orthogonal frequency division multiplexing (OFDM) system with amplify-and-forward relaying to maximize the end-to-end capacity. Most existing methods for multi-hop systems address power allocation or subcarrier selection separately, and joint resource allocation is rarely considered because of the absence of effective interaction schemes. In this work, a novel joint resource allocation methodology is proposed based on the Partheno-genetic algorithm (PGA), which evolves subcarrier allocation sets (referred to as individuals in the PGA) toward higher capacity generation by generation. In addition, an adaptive power allocation is designed to evaluate the fitness in the PGA and further enhance the system capacity. Both theoretical analysis and simulation results show the effectiveness of the proposed joint strategy. It outperforms the traditional method by as much as a 40% capacity improvement for a 3-hop relaying system when the system power is high, and achieves an even larger percentage capacity enhancement at low system power.
A coded overlapped code division multiplexing system with a turbo product structure (TPC-OVCDM) is introduced, in which a trellis coded modulation (TCM) code is employed as the error-correcting code for the otherwise uncoded OVCDM system. In this scheme, the row code and column code are the TCM code and the OVCDM spreading code, respectively. Data bits are first encoded by TCM and arranged into a matrix; each column of this matrix is then permuted by a symbol interleaver before being encoded by the OVCDM spreading code. During the iterative decoding process at the receiver, the two constituent decoders use the symbol-by-symbol BCJR algorithm in the log domain. The decoding order of the two sub-codes cannot be exchanged arbitrarily and depends on the encoding order. The proportion between TCM coding and OVCDM coding is an important factor affecting system performance. For a fixed coding structure and symbol interleaver, the performance of TPC-OVCDM systems with different proportions over the additive white Gaussian noise (AWGN) channel has been simulated. Simulation results show that a TPC-OVCDM system with a reasonable proportion achieves significant coding gain over the uncoded OVCDM system at the same spectral efficiency and a bit error rate of 10^-5.
In OFDM-based systems such as long term evolution (LTE), the scheduling scheme plays an essential role not only in improving system capacity but also in guaranteeing fairness among the user equipments (UEs). However, most existing work on scheduling only considers the current throughput in the physical layer. In this paper, a cross-layer scheduling with fairness based on restless bandit (CSFRB) scheme with the 'indexability' property is proposed for the multi-user orthogonal frequency-division multiplexing (OFDM) system to minimize the distortion in the application layer, and to maximize the throughput and minimize the energy consumption in the physical layer. The scheduling problem is first established as a restless bandit problem, which is solved by the primal-dual index heuristic algorithm based on the first-order relaxation with low complexity, yielding the CSFRB scheme. Additionally, the scheme is divided into offline computation and online selection, where the main work is done in the former so as to further decrease complexity. Finally, extensive simulation results illustrate the significant performance improvement of the proposed CSFRB scheme compared with existing schemes in different scenarios.
In this paper, we present a non-transferable utility coalition graph game (NTU-CGG) based resource allocation scheme with relay selection for downlink orthogonal frequency division multiple access (OFDMA) based cognitive radio networks to maximize both system throughput and system fairness. In this algorithm, with the assistance of other secondary users (SUs), SUs with fewer available channels improve their throughput and fairness by forming a directed tree graph according to the spectrum availability and traffic demands of the SUs. The scheme can therefore effectively exploit both the space and frequency diversity of the system. Performance results show that NTU-CGG significantly improves the system fairness level without reducing throughput compared with other existing algorithms.
In this paper, the feedback load reduction problem in wireless systems based on orthogonal frequency division multiplexing (OFDM) is investigated and an opportunistic feedback scheme (OFS) is proposed. The key idea behind OFS is that only the key channel gains that significantly affect the system throughput are fed back to the base station (BS). Firstly, the key channel gains are proved to belong to a channel gain interval. Secondly, a statistical method is proposed to estimate this interval. Thirdly, the opportunistic feedback scheme is formulated and its feedback load is analyzed. The advantages of OFS are threefold: first, it works in both OFDM-based multicast and OFDM-based unicast systems; second, knowledge of the fading type of the BS-user link is not required, which is more realistic; third, OFS achieves a lower feedback load than other schemes while achieving almost the same throughput as the full feedback scheme.
Recently, Internet energy efficiency has received more and more attention, and new, more energy-efficient Internet architectures have been proposed to improve the scalability of energy consumption. Content-centric networking (CCN) offers a content-centric paradigm that has been shown to have higher energy efficiency. Based on an energy optimization model of CCN with in-network caching, the authors derive expressions for the tradeoff between caching energy and transport energy, and then design a new energy-efficient cache scheme based on virtual round-trip time (EV) in CCN. Simulation results show that the EV scheme outperforms the least recently used (LRU) and popularity-based cache policies in average network energy consumption, and its average hop count is also much better than that of the LRU policy.
The channel impulse response (CIR) can be estimated on the basis of cyclic correlation in the time domain for orthogonal frequency division multiplexing (OFDM) systems. This article proposes a generalized channel estimation method that reduces the estimation error by averaging different CIRs derived from different starting points of the cyclic correlation. In addition, an effective CIR length estimation algorithm is presented. The proposed methods are especially effective for OFDM systems with a longer cyclic prefix. The analysis and the simulation results verify that the mean square error performance is 45 dB better than that of the conventional schemes under the same conditions.
As system performance is markedly improved by introducing relays into traditional orthogonal frequency division multiple access (OFDMA) systems, resource scheduling in relay-enhanced OFDMA systems deserves careful study. To solve the optimization problem of achieving maximum throughput while satisfying quality of service (QoS) and guaranteeing user fairness, a novel resource scheduling scheme with QoS support is proposed for the downlink of two-hop relay-enhanced OFDMA systems. The proposed scheme operates both in the first time sub-slot, between direct-link users and relay stations, and in the second time sub-slot, among relay-link users, and takes QoS support into consideration together with system throughput and user fairness. Simulation results show that the proposed scheme performs well in maximizing system throughput while guaranteeing service delay and data loss rate performance.
This paper analyzes spectrum sensing performance over fading channels, in which a licensee and multiple unlicensed users coexist and operate in the licensed channel in a local area. The overall average probabilities of detection and false alarm are derived by jointly taking the fading and the locations of all secondary users into account, and a statistical model of the cumulative interference is constructed. Based on the cumulative interference, a closed-form expression of the outage probability at the primary user's receiver is obtained for a specific fading distribution. Finally, the sensing parameters that minimize the total spectrum sensing error and maximize the average opportunistic throughput are obtained. The overall average performance analysis and results here can serve as a benchmark for the design of specific spectrum sensing algorithms.
In this paper, to enhance robustness against link imbalance, a hybrid cooperative protocol is proposed for amplify-and-forward (AF) opportunistic cooperation, where opportunistic relaying and multi-hop cooperation with relay ordering (RO) are dynamically selected to maximize the end-to-end signal-to-noise ratio (SNR), and the power allocation coefficient is optimized accordingly under a total power constraint. Furthermore, a low-complexity suboptimal allocation scheme is proposed by employing the upper bound of the harmonic mean. Simulation results show that the proposed scheme outperforms conventional AF opportunistic cooperation in a variety of line-type topologies. Moreover, the efficiency of the proposed suboptimal allocation is also validated in the large-SNR region.
In vehicular ad-hoc networks (VANETs), many multi-hop broadcast schemes are employed to widely propagate warning messages among vehicles, and the key is to dynamically determine the optimal relay vehicle for retransmission. In order to achieve reliable and fast delivery of warning messages, this paper proposes a delay-aware and reliable broadcast protocol (DR-BP) based on transmit power control. First, a comprehensive model is derived to evaluate transmission in vehicle-to-vehicle communications; this model considers the wireless channel fading, transmission delay and retransmission characteristics of the physical/medium access control (PHY/MAC) layers. Then, a local optimal relay selection mechanism based on the above model is designed. In the DR-BP scheme, only the vehicles selected as optimal relays forward warning messages, and the transmit power is time-varying. Finally, extensive simulations verify the performance of DR-BP under different traffic scenarios. Simulation results show that DR-BP outperforms the traditional slotted 1-persistence (S1P) and flooding schemes in terms of packet delivery ratio and transmission delay.
This article proposes a new handover algorithm for beyond third generation (B3G) systems with an orthogonal frequency division multiple access (OFDMA) downlink. In the proposed algorithm, the handover mobile terminal (MT) chooses a subchannel set in the candidate cells by a subchannel booking rule, based on the terminal speed and the subchannels' channel state information (CSI). Moreover, the handover decision is made after determining whether at least one candidate cell can reserve the subchannel set for the handover user. Simulation results show that the algorithm reduces the number of handovers and guarantees the quality of service (QoS) for handover users, yielding better system performance in OFDMA systems.
Non-uniform transmission and the network topological structure are combined to investigate the spreading behavior of the susceptible-infected-susceptible (SIS) epidemic model. Based on mean-field theory, the analytical and numerical results indicate that the epidemic threshold is correlated with the topology of the underlying network as well as with the disease transmission mechanism. These findings can help further our understanding of virus propagation on communication networks.
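For uniform transmission on an uncorrelated network, the heterogeneous mean-field result gives the epidemic threshold lambda_c = <k>/<k^2>, which already shows the topology dependence the abstract refers to. The sketch below evaluates this standard formula for two assumed degree distributions; it is an illustration of the mean-field baseline, not the paper's non-uniform-transmission analysis.

    import numpy as np

    def sis_threshold(degrees):
        """Heterogeneous mean-field SIS epidemic threshold lambda_c = <k> / <k^2>
        for an uncorrelated network with the given degree sequence."""
        k = np.asarray(degrees, dtype=float)
        return k.mean() / (k ** 2).mean()

    rng = np.random.default_rng(0)
    homogeneous = rng.poisson(6, 10000) + 1                 # narrow degree distribution
    heavy_tailed = np.round(rng.pareto(1.5, 10000) + 1) + 2 # scale-free-like degrees
    print("threshold (homogeneous): %.3f" % sis_threshold(homogeneous))
    print("threshold (heavy-tailed): %.3f" % sis_threshold(heavy_tailed))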
This work investigates coordinated multiple-input multiple-output (MIMO) transmission schemes in an interference-limited cellular downlink. A novel block diagonalization (BD) coordinated transmission scheme combined with the zero-forcing (ZF) criterion is proposed. In this scheme, the BD technique suppresses multi-user interference, while the ZF technique mitigates interference among a user's spatial data streams. Based on the proposed coordinated scheme, an efficient power allocation is also put forward. The analysis shows that the ergodic capacity of the proposed coordinated scheme is that of the MIMO channel with the maximum transmit power at each transmitter. Computer simulations demonstrate the effectiveness of the proposed coordinated scheme and its corresponding power allocation.
In cognitive radio (CR) systems, efficient spectrum sensing enables the secondary user (SU) to successfully access spectrum holes. Typically, the detection problem has been considered separately from the optimization of the transmission strategy. However, in practice, due to non-zero probabilities of missed detection and false alarm, the sensing phase has an impact on the throughput of the SUs as well as on the transmission of the primary user (PU). In this paper, using energy detection, we maximize the total throughput of the SUs by jointly optimizing the detection threshold and the transmission strategy in multiband CR systems. A set of iteration-based algorithms is proposed to solve this mixed-integer programming problem; they show better performance than the uniform detection threshold selection suggested by the IEEE 802.22 standard.
A novel scheme for joint phase noise (PHN) correction and channel noise variance estimation for orthogonal frequency division multiplexing (OFDM) signals is proposed. The new scheme is based on the variational Bayes (VB) method and a discrete cosine transform (DCT) approximation. Compared with the least squares (LS) based scheme, the proposed scheme can overcome the over-fitting phenomenon and thus leads to improved performance. Computer simulations show that the proposed VB based scheme outperforms the existing LS based scheme.
In this paper, we investigate the performance of the downlink generalized distributed antenna system (GDAS). Under spatially correlated fading conditions, we derive a numerical expression for the correlation coefficients based on a series of Bessel functions and lift the range restriction on the mean angle of incidence. Moreover, the architecture of distributed generalized layered space-time codes (GLST) is considered in order to achieve both multiplexing gain and diversity gain, and we use basis vectors from the null space instead of an orthogonal set to obtain the same system performance with lower complexity. Furthermore, in order to maximize capacity, Gerschgorin-circle-based fast antenna selection algorithms are evaluated and their simulation results are discussed.
In co-channel deployments of a macro cell and pico cells, cell range extension (RE), a simple and typical cell association scheme, is introduced to achieve better load balancing and improve cell-edge performance. In this article, a novel dynamic and distributed bias setting scheme is proposed for the RE technique in macro-pico heterogeneous networks. In this strategy, the worst user throughput of each cell during an adjusting time interval T is obtained to change the bias values according to certain procedures, and an introduced indicator is used to block further increases of the bias value when needed. Furthermore, a silent state and a coarse control process are employed to achieve low overhead and computational complexity. Simulation results show that the proposed scheme can greatly improve cell-edge performance compared with static bias setting strategies, while maintaining the overall cell performance.
The final goal of quality of service (QoS) guarantees is to assure high quality of experience (QoE) for users. QoE cannot be controlled directly, but QoS control schemes at lower layers can keep it high; what is needed are the QoS parameter values that satisfy the required QoE at minimum cost. An enhanced method is proposed to obtain these necessary QoS parameters. It abstracts the problem of obtaining the necessary QoS parameters as solving linear regression equations with cost coefficients. The method establishes the equations using principal component analysis and multiple regression analysis on normalized data. This paper defines the cost of the necessary QoS and introduces the calculation of the minimum-cost necessary QoS parameters by inverse or generalized-inverse matrix operations. A numerical example shows that the normalization is effective and the calculated values are feasible.
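One way such a generalized-inverse calculation can look is sketched below: given a fitted linear QoE model and per-parameter cost coefficients, the weighted minimum-norm solution of the regression equations is computed with a pseudo-inverse. The matrix, costs and target values are illustrative assumptions, and the weighted-norm cost is one possible reading of "minimum cost", not necessarily the paper's exact formulation.

    import numpy as np

    # Sketch: QoE = A @ q (q = normalized QoS parameters), cost coefficients c.
    # Choose the q that meets a required QoE vector with minimum weighted cost
    # sum(c_i * q_i^2). All numbers below are illustrative assumptions.
    A = np.array([[0.6, 0.3, 0.5],     # rows: e.g. video QoE, voice QoE
                  [0.2, 0.7, 0.4]])    # cols: bandwidth, delay margin, loss margin
    c = np.array([3.0, 1.0, 2.0])      # relative cost of provisioning each QoS parameter
    y_required = np.array([0.8, 0.7])  # required (normalized) QoE levels

    W_inv = np.diag(1.0 / c)
    # Weighted minimum-norm solution of A q = y via a generalized inverse.
    q = W_inv @ A.T @ np.linalg.pinv(A @ W_inv @ A.T) @ y_required
    print("necessary QoS parameters:", np.round(q, 3))
    print("achieved QoE:", np.round(A @ q, 3))
    print("weighted cost: %.3f" % float(q @ (c * q)))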
Coordinated multi-point transmission/reception (CoMP) has recently been proposed as an effective technology to improve cell-edge throughput in next-generation wireless systems. Most existing work discusses clustering methods that maximize edge-user throughput while neglecting energy efficiency, for example algorithms that cluster base stations (BSs) with better channel conditions together with BSs with worse channel conditions. In addition, BSs usually increase the transmit power to achieve higher throughput without considering the interference caused to other users, which may result in wasted energy. The authors focus on the throughput maximization problem while fully considering energy saving in CoMP systems. A coefficient is defined to describe the fitness of clusters. Then a sub-carrier allocation algorithm with a clustering method is put forward for the CoMP downlink, which saves BS transmit power and increases throughput. Furthermore, a power allocation scheme is proposed based on a non-cooperative game, in which the BSs gradually decrease their transmit power to reach the Nash equilibrium (NE). Simulations show that the proposed sub-carrier allocation scheme and power allocation algorithm outperform existing ones in user throughput while consuming much less energy.
In this paper, a block mapping spatial modulation (BMSM) scheme is proposed to increase the transmit rate of multiple-input multiple-output (MIMO) wireless communication systems. In the BMSM scheme, the information to be transmitted is mapped into different combinations of transmit antenna indices and distinct constellation symbols at each time instant. Multiple transmit antennas are activated, which differs from the spatial modulation (SM) and generalised spatial modulation (GSM) techniques, and the information bits mapped to the digital constellation diagram exploit block mapping. A multiplexing gain is thus obtained and the transmit rate is increased. Simulation results for several channel state cases are presented, which verify the efficiency of the proposed BMSM scheme.
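To illustrate the general idea of splitting a bit block between antenna-index selection and constellation symbols, a toy mapper in the spirit of generalised spatial modulation is sketched below. The exact BMSM mapping is defined in the paper; the bit split, antenna counts and QPSK constellation here are illustrative assumptions.

    import itertools
    import numpy as np

    N_T, N_A = 4, 2                                   # transmit antennas, active antennas
    combos = list(itertools.combinations(range(N_T), N_A))
    n_combo_bits = int(np.floor(np.log2(len(combos))))    # bits carried by antenna indices
    QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
    n_sym_bits = 2 * N_A                                   # 2 bits per QPSK symbol

    def toy_map(bits):
        """Map one bit block to a transmit vector: first bits pick the active
        antenna combination, remaining bits pick one QPSK symbol per active antenna."""
        assert len(bits) == n_combo_bits + n_sym_bits
        combo_idx = int("".join(map(str, bits[:n_combo_bits])), 2)
        active = combos[combo_idx]
        x = np.zeros(N_T, dtype=complex)
        for i, ant in enumerate(active):
            b = bits[n_combo_bits + 2 * i: n_combo_bits + 2 * i + 2]
            x[ant] = QPSK[b[0] * 2 + b[1]]
        return x                                           # transmit vector for one time instant

    bits = [1, 0, 1, 1, 0, 1]                              # 2 antenna-index bits + 4 symbol bits
    print("transmit vector:", np.round(toy_map(bits), 3))
    print("rate: %d bits per channel use" % (n_combo_bits + n_sym_bits))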
To avoid the traffic congestion in long term evolution (LTE) networks, a min-max load balancing (LB) scheme is proposed to minimize the demanded radio resources of the maximum loaded cell. For the mixed multicast and unicast services, multicast services are transmitted by single frequency network (SFN) mode and unicast services are delivered with point-to-point (PTP) mode. The min-max LB takes into account point-to-multipoint (PTM) mode for multicast services and selects the proper transmission mode between SFN and PTM for each multicast service to minimize the demanded radio resources of the maximum loaded cell. Based on the solution of this minimization problem, if the maximum loaded cell does not overload, the min-max LB will change PTM mode into SFN mode for multicast services to achieve high quality of service (QoS). Simulation results show that the proposed min-max LB scheme requires less radio resources from the maximum loaded cell than SFN mode for all multicast services.
As the feature size of semiconductor technology shrinks and the number of intellectual property (IP) cores grows, on-chip interconnection network architectures have a great influence on the performance and area of system-on-chip (SoC) designs. Focusing on the trade-off among performance, cost and implementation, a regular network-on-chip (NoC) architecture, the mesh-connected rings (MCR) interconnection network, is proposed. The topology of MCR, which combines a mesh with rings, is simple, planar and architecturally scalable. A detailed theoretical analysis of MCR and the mesh is given, and a simulation analysis based on a virtual channel router with wormhole switching is also presented. Compared with the general mesh architecture, the results show that MCR has better performance, especially for local traffic and low loads, and lower cost.
Due to the constraint of single carrier frequency division multiple access (SC-FDMA) adopted in the long term evolution (LTE) uplink, the subcarriers allocated to a single user equipment (UE) must be contiguous. This contiguous allocation constraint limits resource allocation flexibility and makes the resource scheduling problem more complex. Most existing work cannot fully meet the UE's quality of service (QoS) requirements, because it tries to improve system performance mainly based on channel condition or buffer size. This paper proposes a novel resource scheduling scheme that considers channel condition, buffer size and packet delay when allocating frequency resources. Firstly, an optimization function is formulated that aims to minimize the weighted sum of bits still left in the UE buffers after each scheduling slot; QoS is the main concern here. Then, to obtain packet delay information, a delay estimation algorithm is proposed. A relay node (RN) is introduced to improve the overall channel condition, and a specific RN selection strategy is also described. Most importantly, a negotiation mechanism is included in the subcarrier allocation process, which improves overall system throughput while guaranteeing users' QoS requirements. Simulation results demonstrate that the scheme greatly enhances system performance in terms of delay, throughput and jitter.
Orthogonal frequency division multiplexing (OFDM) is currently used in the long term evolution (LTE) system, and time offset estimation (TOE) and frequency offset estimation (FOE) for OFDM are essential in mobile communication systems. Starting from the conventional cross-correlation TOE and FOE algorithms, a new cross-correlation computation is proposed to estimate the time offset and frequency offset for the LTE uplink, so that both offsets can be estimated simultaneously with low complexity. Compared with the conventional TOE and FOE algorithms, simulations show that the proposed method reduces complexity and improves FOE performance while maintaining good TOE performance in additive white Gaussian noise (AWGN) and multipath channels.
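A generic illustration of joint timing and frequency estimation from cross-correlations is sketched below, using a preamble made of two identical reference blocks: timing comes from the correlation peak and the frequency offset from the phase rotation between the two blocks. This is a standard textbook construction under the stated assumptions, not the paper's low-complexity computation; the preamble, lengths and offsets are all assumed values.

    import numpy as np

    N = 64
    ref = np.exp(1j * 2 * np.pi * np.random.rand(N))       # known reference block (assumed)
    preamble = np.concatenate([ref, ref])

    timing_true, eps_true = 23, 0.12                       # delay (samples), CFO (subcarrier units)
    tx = np.concatenate([np.zeros(timing_true), preamble, np.zeros(40)])
    n = np.arange(len(tx))
    rx = tx * np.exp(1j * 2 * np.pi * eps_true * n / N)
    rx += 0.05 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))

    # Timing: peak of the cross-correlation with the whole known preamble.
    metric = np.array([np.abs(np.vdot(preamble, rx[d:d + 2 * N]))
                       for d in range(len(rx) - 2 * N + 1)])
    d_hat = int(np.argmax(metric))

    # Frequency: phase of the correlation between the two received blocks, N samples apart.
    c1 = np.vdot(ref, rx[d_hat:d_hat + N])
    c2 = np.vdot(ref, rx[d_hat + N:d_hat + 2 * N])
    eps_hat = np.angle(c2 * np.conj(c1)) / (2 * np.pi)     # unambiguous for |eps| < 0.5
    print("timing estimate %d (true %d), CFO estimate %.3f (true %.2f)"
          % (d_hat, timing_true, eps_hat, eps_true))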
A high-accuracy frequency synchronization method is proposed for the 3rd generation partnership project (3GPP) long term evolution advanced (LTE-A) downlink receiver in time division duplexing (TDD) mode. In general, a cyclic prefix (CP) correlation based fractional frequency offset (FFO) estimation method and a primary synchronization signal (PSS) differential correlation based integer frequency offset (IFO) estimation method are applied for LTE-A frequency synchronization. However, the polarity of the CP based FFO estimate may be reversed when the system FFO is close to the edge of the estimation range because of noise, and PSS based IFO estimation degrades at low signal-to-noise ratio (SNR). We propose a polarity-detection aided CP based FFO estimation and a frequency-domain enhanced differential correlation based IFO estimation to obtain higher frequency synchronization accuracy. Computer simulations show that the proposed method greatly outperforms the conventional methods, especially in low-SNR scenarios.
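The conventional CP-correlation FFO estimator that the proposal builds on can be sketched as follows (the paper's polarity-detection aid is not reproduced). Symbol length, CP length and the true offset below are illustrative assumptions; the estimate is unambiguous only for |FFO| < 0.5 subcarrier spacings, which is exactly where the polarity-reversal problem mentioned above arises.

    import numpy as np

    N, Ncp = 128, 16
    ffo_true = 0.31                                        # fractional CFO in subcarrier spacings
    sym = np.fft.ifft(np.exp(1j * 2 * np.pi * np.random.rand(N)))   # random OFDM symbol
    tx = np.concatenate([sym[-Ncp:], sym])                 # prepend cyclic prefix
    n = np.arange(len(tx))
    rx = tx * np.exp(1j * 2 * np.pi * ffo_true * n / N)
    rx += 0.02 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx))) / np.sqrt(2)

    # Correlate the CP with the same samples N positions later; the phase of the
    # sum equals 2*pi*FFO.
    corr = np.vdot(rx[:Ncp], rx[N:N + Ncp])
    ffo_hat = np.angle(corr) / (2 * np.pi)
    print("FFO estimate %.3f (true %.2f)" % (ffo_hat, ffo_true))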
This article investigates how to send perfect space-time codes with a small feedback amount and symbol-by-symbol decoding for the X channel using precoders. It is assumed that there are two users, each equipped with two antennas, and two receivers. Each user employs a rate-2 space-time block code and follows a certain rule when sending codewords. The multi-user interference is eliminated by precoding at the transmitter and linear processing at the receiver. Compared with the existing scheme for the same scenario, the proposed scheme greatly reduces the feedback amount and improves transmission efficiency while keeping the same decoding complexity. Simulations demonstrate the validity of the proposed scheme.
Because it increases spectral efficiency by reducing the number of transmitted pilot tones, compressed sensing (CS) has been widely applied to pilot-aided sparse channel estimation in orthogonal frequency division multiplexing (OFDM) systems. Previous research usually assumes that the channel is strictly sparse and formulates channel estimation as a standard compressed sensing problem. However, the strict sparsity assumption does not hold for non-sample-spaced multipath channels. In this article, the authors propose a new compressed sensing based channel estimation method in which an over-complete dictionary with a finer delay grid is used to construct a sparse representation of the non-sample-spaced multipath channel. With the proposed dictionary, channel estimation is formulated as a model-based CS problem and a modified model-based compressive sampling matching pursuit (CoSaMP) algorithm is applied to reconstruct the discrete-time channel impulse response (CIR). Simulations indicate that the proposed method outperforms traditional standard CS-based methods in terms of mean square error (MSE) and bit error rate (BER).
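A minimal sketch of sparse channel estimation over an over-complete delay dictionary is given below, using plain orthogonal matching pursuit in place of the paper's model-based CoSaMP. Grid resolution, pilot pattern and path parameters are illustrative assumptions.

    import numpy as np

    N, P, K = 256, 32, 3                          # FFT size, pilot count, channel paths
    pilot_idx = np.sort(np.random.choice(N, P, replace=False))
    delays_true = np.array([2.3, 7.8, 15.4])      # fractional (non-sample-spaced) delays
    gains_true = (np.random.randn(K) + 1j * np.random.randn(K)) / np.sqrt(2 * K)
    H_pilot = sum(g * np.exp(-2j * np.pi * pilot_idx * d / N)
                  for g, d in zip(gains_true, delays_true))
    y = H_pilot + 0.01 * (np.random.randn(P) + 1j * np.random.randn(P))

    # Over-complete dictionary: delay grid 4x finer than the sample spacing.
    grid = np.arange(0, 32, 0.25)
    A = np.exp(-2j * np.pi * np.outer(pilot_idx, grid) / N)    # P x len(grid)

    def omp(A, y, k):
        """Plain orthogonal matching pursuit: pick k atoms greedily, re-fit by least squares."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.conj().T @ residual)))
            support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1], dtype=complex)
        x[support] = x_s
        return x

    x_hat = omp(A, y, K)
    print("estimated delays on the fine grid:", grid[np.abs(x_hat) > 1e-3])
    print("true delays:", delays_true)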
To address the security risks, poor performance and other problems of existing radio frequency identification (RFID) authentication protocols, an RFID security authentication protocol based on dynamic identification (ID) and key renewal is proposed. The security problems of recent Hash-based RFID authentication protocols are also sorted and analyzed. A security model for designing and analyzing RFID protocols is then built, and the correctness and security of the proposed protocol are proved using computational complexity arguments. Compared with other protocols in terms of security, storage overhead, computational overhead and other aspects, the proposed protocol is more efficient and withstands various attacks. The C# programming language is used to simulate the authentication process on the Visual Studio platform, which verifies the feasibility of the protocol.
This article proposes a novel antenna structure based on the planar inverted-F antenna (PIFA) using a slot loading method, fed by a coaxial probe. The antenna structure consists of a new and flexible slot array containing five slots. To reveal the mechanism by which slot loading creates new and broad frequency bands, the article discusses the performance of the five slots in detail. Simulations with Ansoft HFSS 10.0 indicate that the 10 dB relative impedance bandwidth can reach 17.2%, 8.9% and 12.1%, covering the 2.40/3.30/5.15/5.725 GHz wireless local area network (WLAN) and worldwide interoperability for microwave access (WiMAX) bands. Meanwhile, the antenna has a stable radiation pattern, high gain and a low profile. In particular, the slot array is flexible and can be realized in various application cases. The design achieves broadband and multi-band operation, miniaturization and high gain, and is easy to fabricate.
For practical considerations, it is essential to accelerate the convergence speed of the decoding algorithm used in an iterative decoding system. In this paper, replica versions of horizontal-shuffled decoding algorithms for low-density parity-check (LDPC) codes are proposed to improve the convergence speed of the original versions. The extrinsic information transfer (EXIT) chart technique is extended to the proposed algorithms to predict their convergence behavior. Both EXIT chart analysis and numerical results show that replica plain horizontal-shuffled (RPHS) decoding converges much faster than both plain horizontal-shuffled (PHS) decoding and the standard belief-propagation (BP) decoding. Furthermore, it is also revealed that replica group horizontal-shuffled (RGHS) decoding can increase the parallelism of RPHS decoding as well as preserve its high convergence speed if an equivalence condition is satisfied, and is thus suitable for hardware implementation.
A jointly optimal sensing-transmission time duration and power allocation scheme for a cooperative relay network is developed by maximizing the network energy efficiency. In particular, observing that the spectrum sensing and data transmission duration lies within a strict interval, the jointly optimal solutions of sensing-transmission duration and power allocation are obtained by sequential optimization. The superiority of the proposed scheme in relay-assisted transmission mode over non-relay transmission mode in terms of energy-efficiency has been verified by quantitative simulation results.
As a promising technology for indoor coverage and service offloading from conventional cellular networks, femtocells have attracted considerable attention in recent years. However, most previous work focuses on resource allocation during the access period, and backhaul-related resource allocation is largely ignored. The authors study backhaul resource allocation in wireless-backhaul based two-tier heterogeneous networks (HetNets), in which cross-tier interference control during the access period is jointly considered. Assuming that the macrocell base station (MBS) protects itself from interference by pricing the backhaul spectrum allocated to femtocells, a Stackelberg game is formulated to address the joint utility maximization of the macrocell and the femtocells subject to a maximum interference tolerance at the MBS. Closed-form expressions of the optimal strategies are obtained to characterize the Stackelberg equilibria of the proposed games, and a backhaul spectrum payment selection algorithm with guaranteed convergence is proposed to implement the backhaul resource allocation for femtocell base stations (FBSs). Simulations demonstrate that the Stackelberg equilibrium (SE) is reached by the proposed algorithm and that the proposed scheme is effective in backhaul resource allocation and macrocell protection in spectrum-sharing HetNets.
Inventory inaccuracy has a great influence on supply chain performance and has attracted the attention of many researchers. This paper studies the robust multi-period inventory inaccuracy problem when radio frequency identification (RFID) technology is introduced. In particular, the typical retailer-supplier flexible commitment (RSFC) problem is considered to cope with the uncertain environment. After modeling, robust optimization (RO) and the affinely adjustable robust counterpart (AARC) methodology are applied to solve the model. Finally, a numerical example is used to analyze how RFID technology can be exploited in the supply chain and the effect of demand uncertainty on the system. The results highlight the importance of inventory-availability related rates and variable uncertainty in determining the profitability of RFID adoption, which can provide managerial guidelines to supply chain firms.
This paper reviews multi-channel media access control (MAC) protocols based on IEEE 802.11 in wireless Mesh networks (WMNs). Several key issues in multi-channel IEEE 802.11-based WMNs are introduced and typical solutions proposed in recent years are classified and discussed in detail. The experiments are performed by network simulator version 2 (NS2) to evaluate four representative algorithms compared with traditional IEEE 802.11. Simulation results indicate that using multiple channels can substantially improve the performance of WMNs in single-hop scenario and each node equipped with multiple interfaces can substantially improve the performance of WMNs in multi-hop scenario.
This paper presents noncooperative and cooperative secure transmission schemes for the multiple-input multiple-output (MIMO) Gaussian wiretap channel with one helper, for arbitrary antenna configurations. In both schemes, the transmitter performs beamforming based on the generalized singular value decomposition (GSVD) with an appropriate power allocation algorithm, and the helper sends artificial noise to increase the secrecy rate. However, the transmission strategies for the artificial noise differ between the two schemes. In the first scheme, the helper adopts GSVD-based beamforming, but the impact of the artificial noise on the information signal at the receiver is not considered. To solve this problem, the helper performs space projection (SP) based beamforming in the second scheme, where suboptimal weighting factors are introduced to reduce the computational complexity and can be adapted to changes in channel quality. Theoretical analysis of the performance of the two proposed schemes is given. Simulation results indicate that the two presented schemes perform better than existing schemes without a helper, and that in the second scheme the suboptimal parameter setting is better than the equal parameter setting and quite close to the optimal parameter setting.
Energy efficiency (EE) can be enhanced by retransmissions and combining in hybrid automatic repeat request (HARQ) systems. However, it is difficult to optimize the transmit power of each retransmission when the exact number of retransmissions and future channel state information (CSI) cannot be obtained. This paper proposes a simple energy-efficient HARQ scheme for point-to-point wireless communication. In the proposed scheme, the conditional word error rate (WER) of each retransmission is fixed and the transmit power is adapted correspondingly. Three performance metrics are analyzed: average transmission number, throughput and EE. Compared with the conventional equal-power HARQ scheme, the proposed scheme can significantly improve the EE and the other two metrics under the same constraint on average transmit power or average energy consumption. Furthermore, it is found that selecting a conditional WER slightly smaller than the optimal one is sufficient for practical implementation.
This paper proposes the first lattice-based sequential aggregate signature (SAS) scheme with lazy verification that is provably secure in the random oracle model. As opposed to systems based on large integer factoring or discrete logarithms, the security of the construction relies on a worst-case lattice problem, namely the small integer solution (SIS) assumption. Generally speaking, SAS schemes enable any group of signers ordered in a chain to sequentially combine their signatures such that the size of the aggregate signature is much smaller than the total size of all individual signatures. Unlike prior proposals, the new scheme does not require a signer to retrieve the keys of other signers and verify the aggregate-so-far before adding its own signature; the signer can add its own signature to an unverified aggregate and forward it along immediately, postponing verification until load permits or the necessary public keys are obtained. Indeed, the new scheme does not even require a signer to know the public keys of other signers.
In many wireless sensor network applications, the inherent conflict between energy-efficient communication and the desired quality of service, such as real-time delivery and reliable transport, must be traded off. In this paper, a novel routing protocol named balanced energy-efficient and real-time with reliable communication (BERR) for wireless sensor networks (WSNs) is proposed, which jointly considers real-time performance, energy efficiency and reliability. In BERR, a node preparing to transmit data packets to the sink node estimates the energy cost, the hop count to the sink node and the reliability using local information gained from neighbor nodes. BERR considers not only each sender's energy level but also that of its neighbor nodes, so that the better a node's energy condition, the more likely it is to be chosen as the next relay node. To enhance real-time delivery, it chooses the node with the smaller hop count to the sink node as the candidate relay, and to improve reliability it adopts a retransmission mechanism. Simulation results show that BERR performs better in terms of energy consumption, network lifetime, reliability and transmission delay.
Interference between users is a significant issue for resource allocation in cognitive radio (CR) networks. The interference induced by the secondary users (SUs) to the primary users (PUs) derives from two aspects: 1) out-of-band leakage (OOBL) and 2) spectrum sensing errors (SSEs). Filter bank multicarrier (FBMC) has smaller out-of-band leakage and higher spectral efficiency than orthogonal frequency division multiplexing (OFDM). In this paper, a resource allocation algorithm that considers spectrum sensing errors in CR networks is proposed. The interference model is established first, and the proposed algorithm then proceeds in two steps, in which power is allocated to the SUs under both the interference constraints and the total power budget. Simulation results based on FBMC and OFDM systems show that the proposed algorithm causes less interference to the PUs than an algorithm that ignores OOBL, and that the interference and throughput performance of FBMC is better than that of OFDM in CR networks.
Hybrid automatic repeat request (HARQ) is a well-known technique for improving system throughput and link performance of wireless communication systems, including cooperative communication systems. The amplify-and-forward (AF) relaying method is one of the most attractive cooperative diversity schemes because of its low complexity. In this article, the end-to-end performance in terms of block error rate (BLER) and normalized throughput of AF relaying with HARQ transmission under the Rayleigh fading channel is analyzed. Numerical results validate the proposed analysis and demonstrate the gain of HARQ schemes in AF relaying systems. This analytical method can be extended to the systems with other HARQ protocols and other cooperative relaying schemes.
A novel cross layer scheduling algorithm is proposed for real-time (RT) traffic in multiuser downlink multiple-input multiple-output orthogonal frequency division multiple access (MIMO-OFDMA) wireless systems. The algorithm dynamically allocates resources in space, time and frequency domain based on channel state information (CSI), users’ quality of service (QoS) requirements and queue state information (QSI). To provide higher data rate and spectrum efficiency, adaptive modulation and coding (AMC) is employed. The proposed algorithm can improve cell throughput and increase the number of users that can be supported while guaranteeing users’ QoS requirements and fairness among all users. Simulation results indicate that the proposed algorithm can achieve superior performance.
Finding complex orthogonal designs with maximal rate and minimal delay is an open problem for space-time block codes: the maximal rate governs how efficiently symbols are carried in the space dimension, and the minimal delay is the least decoding delay in the time dimension. Many authors have observed that, for complex orthogonal designs for space-time block codes with n = 4k (k ∈ N) antennas, the minimal delay is the same as that for n = 4k - 1, but none has been able to prove it. In this paper, we use the properties of Hadamard matrices to prove this result and fill this gap.
Effective grid authentication plays a critical role in grid security, which has been recognized as a key issue in the design and extension of grid technologies. At present, public key infrastructure (PKI) is widely applied for grid authentication, and this article proposes a novel grid authentication mechanism based on combined public key (CPK) employing elliptic curve cryptography (ECC). The structure of the new grid authentication mechanism and its implementation procedure are described in detail. A property analysis of the new mechanism is also made in comparison with globus security infrastructure (GSI) authentication, which leads to the conclusion that CPK-based grid authentication may be applied as an optimized approach towards efficient and effective grid authentication.
Smart antenna technology is introduced into wireless mesh networks. The smart antennas based wider-range access medium access control protocol (SWAMP) is used as the MAC protocol for IEEE 802.11 mesh networks in this study. A calculation method for node throughput in chain and arbitrary topologies under a node fairness guarantee is proposed. Network scale and interference among nodes are key factors that influence node throughput, and the node distribution pattern near the gateway also affects it. Experiments based on the network simulator-2 (NS-2) platform compare node throughput between the smart antenna scenario and the omni-antenna scenario. As smart antenna technology shrinks the bottleneck collision domain, node throughput increases noticeably.
In this article, a method based on the maximum signal-to-interference-plus-noise ratio (SINR) criterion is proposed to mitigate inter-user interference for downlink multiuser spatial multiplexing multi-input multi-output (MIMO) systems. Unlike the zero-forcing (ZF) scheme, in which the SNR is decreased when the interference is eliminated completely, the max SINR method makes a compromise between noise and inter-user interference. When the number of substreams is larger than the difference between the number of base station antennas and the sum of interfering mobile station antennas, ZF is infeasible. An existing coordinated TX-RX block diagonalization (COOR BD) method uses preprocessing at the receiver to cancel the inter-user interference, but it cannot obtain more receive diversity gain because of the preprocessing. Analysis and simulation show that the max SINR scheme performs better than the ZF method. Moreover, when ZF is infeasible, the max SINR scheme can obtain more receive diversity gain than COOR BD in the two-user case.
Q-ary low-density parity-check (Q-LDPC) codes perform better than binary low-density parity-check (B-LDPC) codes at short and medium block lengths, but the Q-LDPC decoder is more complex. In this article, a new stopping criterion is proposed. By analyzing the changes of the maximum posterior probability of each variable node, the criterion decides whether the decoder iterations should be stopped. The simulation results show that the stopping criterion can effectively reduce the computational complexity of the Q-LDPC decoder with negligible performance loss.
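A generic illustration of an early-stopping check on per-node maximum posterior probabilities is sketched below; the thresholds and the combined saturation/stall condition are illustrative assumptions, not the paper's exact criterion.

    import numpy as np

    def should_stop(prev_probs, curr_probs, p_min=0.999, delta=1e-3):
        """Toy early-stopping check for an iterative Q-LDPC decoder: stop when every
        variable node's maximum posterior probability is either saturated or no
        longer changing between iterations. Inputs have shape (n_vars, q)."""
        prev_max = prev_probs.max(axis=1)
        curr_max = curr_probs.max(axis=1)
        saturated = np.all(curr_max > p_min)
        stalled = (np.all(np.abs(curr_max - prev_max) < delta)
                   and np.all(prev_probs.argmax(axis=1) == curr_probs.argmax(axis=1)))
        return saturated or stalled

    # Example with 4 variable nodes over GF(4): posteriors barely change -> stop.
    prev = np.array([[0.90, 0.05, 0.03, 0.02]] * 4)
    curr = np.array([[0.9004, 0.0497, 0.0299, 0.02]] * 4)
    print("stop iterating:", should_stop(prev, curr))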
In this article, the performance of a novel radio resource management (RRM) strategy for the voice over IP (VoIP) service is evaluated in a low chip rate (LCR) time division duplex high speed downlink packet access (TD-HSDPA) system. This RRM strategy is studied from the aspects of the scheduling algorithm and the time slot assignment algorithm. As VoIP is a delay-sensitive service, a delay based proportional fair (DBPF) scheduler is proposed. The smart antenna (SA) adopted in LCR TD-HSDPA can obtain spatial information from direction of arrival (DOA) estimation; to exploit this spatial information further, a dynamic time slot allocation (DTA) algorithm is introduced to cooperate with the DBPF scheduler. The performance of the round robin (RR) and proportional fair (PF) schedulers with the random time slot allocation (RTA) algorithm is also given to demonstrate the benefits of the DBPF-with-DTA RRM strategy. System-level simulation results show that the proposed RRM strategy performs best among all the evaluated RRM strategies.
To suppress the side-band interference caused by multiband orthogonal frequency division multiplexing (MB-OFDM) based cognitive radio systems, a mathematical expression for the side-band signal is derived. Based on this expression, constraints among the transmitted symbols that help suppress the interference are obtained. Combined with these constraints, a block turbo coded modulation scheme is proposed in which the side-band interference is attenuated quickly. Compared with other techniques, the interference suppression in this scheme is implemented more easily and more thoroughly, and the bit error rate (BER) performance is simultaneously improved. Theoretical analyses and simulation results show that the scheme is well suited to MB-OFDM-based cognitive radio systems subject to Rayleigh fading.
Electromagnetic Field and Microwave Technologies
Electromagnetic Field and Microwave Technologies
Electromagnetic Field and Microwave Technologies
Electromagnetic Field and Microwave Technologies
Optical Fiber Communication
Integrated Circuit Design
Electromagnetic Compatibility
Integrated Circuit Design
Electromagnetic Compatibility
Electromagnetic Field and Microwave Technologies
Microelectronics and Solid State Electronics
UWB Wireless Communications and CMOS RF IC Design
Microelectronics and Solid State Electronics
Distributed Network Computing
Telecommunication Economics and Services
Telecommunication Economics and Services
Computer Application and Information System