The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1). doi: 10.19682/j.cnki.1005-8885.2018.0001
A joint channel selection and power control scheme is developed for video streaming in device-to-device (D2D) communication based cognitive radio networks. In particular, physical-queue and virtual-queue models are built by applying ‘M/G/1 queue’ and ‘M/G/1 queue with vacations’ theory, respectively, to evaluate the delays experienced by different video traffic flows. These delays play a vital role in calculating the packet loss rate for video streaming, which in turn determines the video distortion. Based on the distortion model, a video distortion minimization problem is formulated, subject to the rate constraint, the maximum power constraint, the primary users’ interference tolerance constraint, and the secondary users’ minimum data rate constraint. The optimization problem turns out to be a mixed integer nonlinear program (MINLP), which is NP-hard in general. A Lagrange dual method is therefore employed to reformulate the video distortion minimization problem, and a sub-gradient algorithm is used to determine a relaxed solution. Thereafter, iterative user removal yields the optimal joint channel selection and power control solution to the original MINLP. Extensive simulations validate the proposed scheme and demonstrate that it significantly increases the peak signal-to-noise ratio (PSNR) compared with existing schemes.
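The delay evaluation above rests on standard M/G/1 results. As a minimal, self-contained sketch (not the paper's full vacation-queue model), the Pollaczek-Khinchine formula below gives the mean queueing delay that would feed such a packet-loss calculation; all numeric values are illustrative.

```python
# Mean waiting time of an M/G/1 queue via the Pollaczek-Khinchine formula.
# This is only the basic building block the abstract refers to, not the
# paper's virtual-queue ("M/G/1 with vacations") model.

def mg1_mean_wait(lam, es, es2):
    """lam: arrival rate (packets/s); es: E[S]; es2: E[S^2] of service time."""
    rho = lam * es                      # server utilization, must be < 1
    assert rho < 1, "queue is unstable"
    return lam * es2 / (2.0 * (1.0 - rho))

# Example: deterministic 1 ms service time (so E[S^2] = E[S]^2), 800 packets/s.
w = mg1_mean_wait(800.0, 1e-3, 1e-6)
print(w)  # mean queueing delay in seconds
```

A packet whose total sojourn time exceeds its playout deadline would then count toward the loss rate that drives the distortion model.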
The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1). doi: 10.19682/j.cnki.1005-8885.2018.0002
With the rapid growth of satellite traffic, the ability to forecast traffic loads becomes vital for improving data transmission efficiency and resource management in satellite networks. To precisely forecast short-term traffic loads in satellite networks, a forecasting algorithm based on principal component analysis and a generalized regression neural network (PCA-GRNN) is proposed. The PCA-GRNN algorithm exploits the hidden regularity of satellite networks and fully considers both the temporal and spatial correlations of satellite traffic. Specifically, it selects optimal time series of spatio-temporally correlated historical traffic from satellites as forecasting inputs and applies principal component analysis to reduce the input dimensions while preserving the main features of the data. Then, a generalized regression neural network is utilized to perform the final short-term load forecasting based on the obtained principal components. The PCA-GRNN algorithm is evaluated on real-world traffic traces, and the results show that it achieves higher forecasting accuracy, shorter training time and greater robustness than other state-of-the-art algorithms, even on incomplete traffic datasets. Therefore, the PCA-GRNN algorithm can be regarded as a preferred solution for real-time traffic forecasting in realistic satellite networks.
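The two-stage pipeline can be sketched compactly: PCA (here via SVD) compresses the spatio-temporal history vectors, and a GRNN, which is essentially Nadaraya-Watson kernel regression, forecasts the next load from the compressed features. The data, dimensions and bandwidth sigma below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA: return the mean and the top-k principal axes (rows)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, W):
    return (X - mu) @ W.T

def grnn_predict(Z_train, y_train, z, sigma=1.0):
    """GRNN = kernel-weighted average of training targets."""
    d2 = ((Z_train - z) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))    # pattern-layer activations
    return float(w @ y_train / w.sum())   # summation/output layers

rng = np.random.default_rng(0)
X = rng.random((50, 12))                  # 50 history windows, 12 features each
y = X[:, :3].sum(axis=1)                  # synthetic "next traffic load"
mu, W = pca_fit(X, k=4)                   # 12 -> 4 principal components
Z = pca_transform(X, mu, W)
print(grnn_predict(Z, y, pca_transform(X[:1], mu, W)[0]))
```

A GRNN needs no iterative weight training (the training set itself is the pattern layer), which is consistent with the short training times the abstract reports.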
The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1). doi: 10.19682/j.cnki.1005-8885.2018.0003
This article puts forward a resource allocation scheme aiming at maximizing system throughput for device-to-device (D2D) communications underlaying a cellular network. Firstly, user closeness is defined and calculated from social information, including friendship, interest similarity and communication strength, to represent a user's willingness to share spectrum resources with others. Then a social-aware resource allocation problem is formulated to maximize the system throughput while guaranteeing the quality of service (QoS) requirements of both the admissible D2D pairs and the cellular users (CUs), and the transmit power of both CUs and D2D pairs is efficiently allocated. Finally, CUs and D2D pairs are matched to reuse the spectrum resources in consideration of both user closeness and physical channel conditions. Simulation results certify the effectiveness of the proposed scheme, which significantly enhances the system throughput compared with existing algorithms.
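The final matching step can be illustrated as an assignment problem: each D2D pair is matched to the CU whose channel it reuses so that the total closeness-weighted utility is maximized. The weight matrix below is a made-up stand-in for the paper's closeness-times-rate terms, and brute force is used only because the example is tiny (a Hungarian-style solver would be used at scale).

```python
from itertools import permutations

# weight[i][j]: assumed utility of D2D pair i reusing CU j's channel
# (in the paper this would combine user closeness and achievable rate).
weight = [[4.0, 1.0, 3.0],
          [2.0, 5.0, 1.5],
          [3.5, 2.0, 2.5]]

def best_matching(weight):
    """Exhaustively find the one-to-one D2D-to-CU matching of maximum utility."""
    n = len(weight)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):       # perm[i] = CU matched to D2D pair i
        total = sum(weight[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

total, assignment = best_matching(weight)
print(total, assignment)
```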
The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1). doi: 10.19682/j.cnki.1005-8885.2018.0004
Face recognition has been a hot topic in the field of pattern recognition, where feature extraction and classification play important roles. However, a convolutional neural network (CNN) or local binary pattern (LBP) alone can only extract a single type of facial feature, and neither selects the optimal classifier. To deal with the problem of classifier parameter optimization, two structures based on a support vector machine (SVM) optimized by the artificial bee colony (ABC) algorithm are proposed to classify CNN and LBP features separately. To solve the single-feature problem, a fusion system based on CNN and LBP features is proposed: the global and local information of face images is extracted and fused so that facial features are better represented, with the fusion performed on the outputs of the two feature classifiers. Experimental results on the Olivetti Research Laboratory (ORL) and face recognition technology (FERET) databases show the superiority of the proposed approaches.
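Fusing classifier outputs can be sketched as score-level fusion: combine the per-class scores of the two classifiers (one on CNN features, one on LBP features) with a weighted sum and take the argmax. The scores and the weight alpha below are illustrative assumptions, not values from the paper.

```python
def fuse(scores_cnn, scores_lbp, alpha=0.6):
    """Weighted-sum score fusion; alpha weights the CNN-feature classifier.
    Returns the index of the winning class."""
    fused = [alpha * a + (1 - alpha) * b
             for a, b in zip(scores_cnn, scores_lbp)]
    return max(range(len(fused)), key=fused.__getitem__)

# Per-class scores for one probe face over 3 identities (assumed values):
# the CNN classifier favours identity 1, the LBP classifier identity 0.
print(fuse([0.2, 0.7, 0.1], [0.5, 0.3, 0.2]))
```

With this weighting the CNN evidence dominates and identity 1 wins; tuning alpha (or learning it) is exactly the kind of choice an ABC-style optimizer could make.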
The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1). doi: 10.19682/j.cnki.1005-8885.2018.0005
The local directional pattern (LDP), which is widely used for texture extraction in face regions, is insensitive to random noise. However, LDP does not encode the central pixel, so important information is lost. A new feature descriptor called the extended local directional pattern (ELDP) is therefore proposed for facial feature extraction. First, the mean value of the eight directional edge response values and the gray value of the center pixel are calculated. Second, the mean value is taken as the threshold. Then, the expression image is encoded using nine encoded values. In order to reduce redundant information and retain more effective information, Gabor filters are used to obtain multi-direction Gabor magnitude maps (GMMs), and the ELDP is then used to encode the GMMs. Finally, a support vector machine (SVM) is applied to classify and recognize the facial expression. The experimental results show that the feature dimensionality is greatly reduced and the facial expression recognition rate is improved.
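A hedged sketch of the encoding idea: compute the eight Kirsch directional edge responses of a 3x3 neighbourhood, take their mean as the threshold (as the abstract describes), and emit a 9-bit code covering the eight directions plus the centre pixel. The exact bit layout and the rule for the centre bit are assumptions for illustration, not the paper's precise ELDP definition.

```python
import numpy as np

# The eight standard Kirsch edge masks (E, NE, N, NW, W, SW, S, SE).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def eldp_code(patch):
    """patch: 3x3 grayscale neighbourhood; returns a 9-bit integer code."""
    responses = np.array([float((k * patch).sum()) for k in KIRSCH])
    thr = responses.mean()                  # mean response as the threshold
    bits = (responses >= thr).astype(int)   # one bit per direction
    centre_bit = int(patch[1, 1] >= patch.mean())  # assumed centre-pixel rule
    code = centre_bit                       # centre bit first, then 8 directions
    for b in bits:
        code = (code << 1) | int(b)
    return code

# A patch with a strong vertical edge on its right side.
patch = np.array([[10, 10, 200],
                  [10, 50, 200],
                  [10, 10, 200]], dtype=float)
print(eldp_code(patch))
```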
The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1). doi: 10.19682/j.cnki.1005-8885.2018.0006
Implementing face recognition efficiently on real-world large-scale datasets presents great challenges to existing approaches. The method in this paper learns an identity-distinguishable space for large-scale face recognition in the MSR-Bing image recognition challenge (IRC). Firstly, a deep convolutional neural network (CNN) is used to optimize a 128 B embedding for large-scale face retrieval. The embedding is trained on triplets of aligned face patches from the FaceScrub and CASIA-WebFace datasets. Secondly, the evaluation on the MSR-Bing IRC is conducted according to a cross-domain retrieval scheme, and real-time retrieval benefits from K-means clustering performed on the feature space of the training data. Furthermore, large scale similarity learning (LSSL) is applied to the relevant face images to learn a better identity space, and a novel method for selecting similar pairs is proposed for LSSL. Compared with many existing face recognition networks, the proposed model is lightweight, and the retrieval method is promising as well.
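Training on triplets typically means minimizing a triplet loss: pull the anchor embedding towards a positive (same identity) and push it away from a negative (different identity) by at least a margin. The tiny embeddings and margin below are illustrative, not the paper's configuration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on squared Euclidean distances."""
    d_ap = np.sum((anchor - positive) ** 2)   # anchor-positive distance
    d_an = np.sum((anchor - negative) ** 2)   # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)     # zero once the margin is satisfied

a = np.array([0.0, 1.0])   # anchor embedding (toy 2-D values)
p = np.array([0.1, 0.9])   # same identity, nearby
n = np.array([1.0, 0.0])   # different identity, far away
print(triplet_loss(a, p, n))   # satisfied triplet -> loss 0.0
```

During training, gradients of this loss reshape the embedding space so that identity determines proximity, which is what makes nearest-neighbour retrieval and K-means clustering effective afterwards.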
The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1).
One-bit compressed sensing (CS) reconstructs a sparse signal when the available measurements are reduced to only their sign bits. It is well known that CS reconstruction requires exact knowledge of the measurement matrix to obtain a correct result; however, the measurement matrix is perturbed in many practical scenarios. An iterative algorithm called perturbed binary iterative hard thresholding (PBIHT) is proposed to reconstruct the sparse signal from the binary (sign) measurements when the measurement matrix experiences a general perturbation. The proposed algorithm can reconstruct the original data without any prior knowledge about the perturbation. Specifically, using ideas from gradient descent, PBIHT iteratively estimates the signal and the perturbation until the estimates converge. Simulation results demonstrate that, under certain conditions, PBIHT improves the performance of signal reconstruction in the perturbation scenario.
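As background, the unperturbed ancestor of PBIHT can be sketched in a few lines: plain binary iterative hard thresholding (BIHT) applies a gradient-style correction driven by sign mismatches, then keeps only the K largest entries. The dimensions, step size and sparsity below are illustrative; the paper's PBIHT additionally estimates the matrix perturbation, which this sketch omits.

```python
import numpy as np

def biht(y, Phi, K, tau=1.0, iters=100):
    """Plain BIHT: recover a K-sparse direction from sign measurements y."""
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient-style step on the sign mismatches.
        x = x + (tau / m) * Phi.T @ (y - np.sign(Phi @ x))
        keep = np.argsort(np.abs(x))[-K:]       # hard threshold: keep K largest
        mask = np.zeros(n)
        mask[keep] = 1.0
        x = x * mask
    return x / (np.linalg.norm(x) or 1.0)       # 1-bit CS recovers direction only

rng = np.random.default_rng(1)
n, m, K = 64, 256, 4
x0 = np.zeros(n)
x0[[3, 17, 40, 55]] = rng.standard_normal(4)    # K-sparse ground truth
x0 /= np.linalg.norm(x0)
Phi = rng.standard_normal((m, n))               # known, unperturbed matrix here
x_hat = biht(np.sign(Phi @ x0), Phi, K)
print(np.abs(x_hat @ x0))                       # correlation with true direction
```

PBIHT's contribution is to interleave an analogous descent step for the unknown perturbation, so that both the signal and the matrix error estimates converge together.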
The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1). doi: 10.19682/j.cnki.1005-8885.2018.0008
Network video is now deployed everywhere, but its objective quality assessment remains a challenge because the video can be distorted by various factors, including transmission and compression. This paper proposes a new objective assessment methodology based on a Mamdani fuzzy inference system. Firstly, six quality parameters [initial buffering time (Tinit), mean re-buffering duration (Trebuf), re-buffering frequency (Frebuf), noise standard deviation (Nsd), blur degree (Bd), and block effect (Be)] are introduced, and all of them are used as inputs to a fuzzy logic controller. Secondly, its outputs are used as inputs to another fuzzy logic controller to obtain the objective quality of the network video. Lastly, the proposed method is tested on four videos under different network environments and compared with other methods. The experimental results show that the proposed method improves the agreement between subjective and objective assessment.
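A toy fuzzy inference step illustrates the kind of controller the paper cascades: triangular membership functions fuzzify one input, two rules fire with partial strengths, and a weighted combination defuzzifies the result. For brevity this uses singleton consequents (a simplification of full Mamdani centroid defuzzification over output sets); the membership shapes, rule base and units are all assumed.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assess(rebuf_freq):
    """Map re-buffering frequency (events per 10 s, an assumed unit)
    to a quality score on an assumed 1 (bad) .. 4 (good) scale."""
    # Rule 1: IF re-buffering frequency is LOW  THEN quality is GOOD (4)
    # Rule 2: IF re-buffering frequency is HIGH THEN quality is BAD  (1)
    w_low = tri(rebuf_freq, -0.5, 0.0, 0.5)
    w_high = tri(rebuf_freq, 0.0, 1.0, 1.5)
    # Weighted combination of the two rule consequents (defuzzification).
    return (w_low * 4.0 + w_high * 1.0) / (w_low + w_high)

print(assess(0.25))   # mild re-buffering -> score between bad and good
```

The paper's full system would do this with all six parameters feeding a first controller, whose fuzzy outputs feed the second.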
The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1). doi: 10.19682/j.cnki.1005-8885.2018.0009
A time-to-digital converter (TDC) design with high resolution and a fast conversion rate, based on a time amplifier (TA), is proposed. The pulse-train TA employs a two-step scheme. The input time interval is first amplified by an N-times TA, and the effective time is extracted from the pulse train using a time register. The resulting interval is then further amplified by a second pulse-train amplifier to obtain the final result. The two-step TA can thus achieve the large gain that is critical for a high-resolution TDC. Simulation results in a 1.2 V, 65 nm technology show that, for a 10 bit TDC, a resolution of 0.8 ps and a conversion rate of 150 MS/s are achieved while consuming 2.1 mW.
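The benefit of cascading two amplification stages is simple arithmetic: the effective TDC resolution is the raw delay-line resolution divided by the total TA gain, and two stages multiply their gains. The gain and raw-resolution values below are assumptions chosen only to illustrate the relationship, not figures from the paper.

```python
def effective_resolution_ps(raw_res_ps, gain_stage1, gain_stage2):
    """Effective TDC resolution after two-step time amplification."""
    return raw_res_ps / (gain_stage1 * gain_stage2)

# e.g. an assumed 20 ps delay-line step behind two assumed 5x TA stages:
print(effective_resolution_ps(20.0, 5, 5))  # -> 0.8 ps
```

This is why a large two-step gain is described as critical: a sub-picosecond result is reached without needing a sub-picosecond delay line.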
The Journal of China Universities of Posts and Telecommunications, 2018, 25 (1). doi: 10.19682/j.cnki.1005-8885.2018.0010
To solve the efficiency problem of batch anonymous authentication in vehicular ad-hoc networks (VANETs), an improved scheme is proposed using bilinear pairings on elliptic curves. The signature is jointly generated by the roadside unit (RSU) node and the vehicle, which reduces the burden on the VANET certification center, improves authentication efficiency, and makes it more difficult for an attacker to extract the key. Furthermore, a security proof is provided under the random oracle model (ROM). Analyses show that the proposed scheme provides anonymity, unforgeability, and forward and backward security, and resists attacks such as the man-in-the-middle (MITM) attack and the collusion attack, while the computational overhead is significantly reduced and the authentication efficiency is effectively improved. Therefore, the scheme has great theoretical significance and application value in computationally constrained Internet of things (IoT) environments.
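The paper's scheme relies on bilinear pairings, which require a pairing library; to stay self-contained, the sketch below instead illustrates the generic batch-verification idea on toy Schnorr-style signatures: many verification equations are combined into one check using random weights, so n signatures cost roughly one combined exponentiation check instead of n separate ones. The group parameters are tiny and deliberately insecure.

```python
import hashlib
import random

p, q, g = 23, 11, 4            # toy group: g has prime order q in Z_p*

def h(R, m):
    """Hash-to-challenge, reduced into the exponent group."""
    return int(hashlib.sha256(f"{R}|{m}".encode()).hexdigest(), 16) % q

def sign(x, m, rng):
    """Schnorr-style signature (R, s) on message m under secret key x."""
    k = rng.randrange(1, q)
    R = pow(g, k, p)
    return R, (k + h(R, m) * x) % q      # individually: g^s == R * X^h(R,m)

def batch_verify(X, sigs, msgs, rng):
    """One combined check with random weights z_i replaces n separate checks:
    g^(sum z_i s_i) == prod R_i^z_i * X^(z_i * c_i)."""
    z = [rng.randrange(1, q) for _ in sigs]
    lhs = pow(g, sum(zi * s for zi, (_, s) in zip(z, sigs)) % q, p)
    rhs = 1
    for zi, (R, _), m in zip(z, sigs, msgs):
        rhs = rhs * pow(R, zi, p) * pow(X, (zi * h(R, m)) % q, p) % p
    return lhs == rhs

rng = random.Random(7)
x = 5                                # secret key (toy value)
X = pow(g, x, p)                     # public key
msgs = ["v1", "v2", "v3"]            # e.g. three vehicles' beacon messages
sigs = [sign(x, m, rng) for m in msgs]
ok = batch_verify(X, sigs, msgs, rng)
print(ok)
```

The random weights ensure that a batch containing any invalid signature passes only with negligible probability (in a real-sized group); the pairing-based version in the paper applies the same principle to pairing equations.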