Article

Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector

Yangyu Fan, Jianshu Wang, Rui Du and Guoyun Lv
School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(6), 1815; https://doi.org/10.3390/s18061815
Submission received: 11 April 2018 / Revised: 21 May 2018 / Accepted: 30 May 2018 / Published: 4 June 2018
(This article belongs to the Section Physical Sensors)

Abstract

Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance with limited snapshots or from the difficulty of setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of the fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model and are approximately estimated in a simple way. A necessary condition regarding the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources $K$ must fulfill $K \le (M^4 - 2M^3 + 7M^2 - 6M)/8$. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability $O(M^4)$, where $M$ is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.

1. Introduction

Direction of arrival (DOA) estimation is a topic that has received a great deal of attention over the last several decades. It arises in many practical scenarios, such as radar, sonar, and communications [1,2,3,4]. For example, in radar and sonar systems, objects including airplanes, birds, missiles, submarines, fish, and torpedoes can be tracked by estimating their DOAs. In communication systems, the DOAs of the sources can be used to improve the communication quality.
Higher-order cumulants (HOCs)-based DOA estimation methods have been developed for non-Gaussian sources [1,5,6,7,8], mainly to overcome the limitations of second-order statistics-based DOA estimation methods such as MUltiple SIgnal Classification (MUSIC) [9] and Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) [10]. These limitations include the identifiable number of sources, colored Gaussian noise, and modeling errors. The famous 4-MUSIC method [5] uses fourth-order cumulants (FOCs) to form a MUSIC-like method. The $2q$-MUSIC [6] and rectangular $2q$-MUSIC [7] methods then extend 4-MUSIC by utilizing $2q$th-order cumulants; the rectangular $2q$-MUSIC method achieves a trade-off between performance and the maximal identifiable number of sources compared with $2q$-MUSIC. To further increase the degrees of freedom (DOFs) of FOCs-based methods, a kind of sparse linear array named the multiple level nested array and the corresponding spatial smoothing MUSIC (SS-MUSIC) method were developed, which increase the DOFs from $O(M^q)$ to $O(M^{2q})$ [8], where $M$ is the number of sensors in the array. However, HOCs-based methods always suffer from performance degradation when snapshots are limited, mainly because accurate estimation of HOCs requires a large number of snapshots.
Recently, a series of covariance vector-based DOA methods were proposed [11,12,13] which build a single measurement vector (SMV) model from the covariance vector and solve it by sparse methods such as L1-norm-based methods [14] and sparse Bayesian learning (SBL) [15]. These methods usually perform better than other covariance-based DOA methods such as MUSIC when snapshots are limited, mainly because the estimation errors of the covariance are utilized to build their sparse models. Moreover, the DOFs of these methods increase to $O(M^2)$, which is more than MUSIC. Analogously, the FOCs vector-based L1-norm method (4-L1) is applied in [16,17,18]. Compared with FOCs vector-based SS-MUSIC (4-SS-MUSIC) in [8], the 4-L1 method can utilize all the virtual sensors in the fourth-order difference co-array (FODCA) [8], which may give better performance than 4-SS-MUSIC. Moreover, the 4-L1 method can be applied to any array, while 4-SS-MUSIC only suits linear arrays. Nonetheless, the 4-L1 method is inconvenient to use in practice, since the allowable error bound parameter is difficult to choose and the solution of 4-L1 is sensitive to it; the allowable error bound usually has to be chosen manually through trial-and-error for each scenario, as in [16,17,18]. Furthermore, it has been demonstrated both theoretically and empirically that the SBL technique induces less structural error (biased global minimum) and convergence error (failure to achieve the global minimum) than L1 methods [11,19,20,21]. On the other hand, an advanced denoising procedure for the FOCs vector may further improve the performance of FOCs vector-based DOA estimation methods; an analogous procedure can be found in covariance vector-based DOA methods in [22].
In this paper, we establish a novel denoised SMV model from the FOCs vector and solve it by an off-grid sparse Bayesian inference (OGSBI) method [21]. By utilizing the concept of the fourth-order difference co-array, an advanced denoising or dimension reduction procedure for the FOCs vector is presented for any geometry. The estimation errors of the FOCs are integrated in the proposed SMV model and are approximately estimated in a simple way. A necessary condition regarding the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources $K$ must fulfill $K \le (M^4 - 2M^3 + 7M^2 - 6M)/8$. The proposed method suits any geometry, does not require prior knowledge of the number of sources, has maximum identifiability $O(M^4)$, and is insensitive to the associated parameters. The off-grid parameter estimation in our method yields superior performance in overdetermined cases when the number of snapshots is large and the signal-to-noise ratio (SNR) is high, and also ensures good performance when a relatively coarse grid is chosen. Numerical simulations illustrate the superior performance of the proposed method.

2. Data Model

One-dimensional DOA estimation is discussed in this paper. Consider an array with an arbitrary geometry consisting of $M$ identical, omnidirectional, unpolarized, and isotropic sensors. The sensor locations $\mathbf{n}_1, \mathbf{n}_2, \ldots, \mathbf{n}_M$ are measured in units of $d = \lambda/2$ and collected in the set $\mathbb{S} = \{\mathbf{n}_1, \mathbf{n}_2, \ldots, \mathbf{n}_M\} \subset \mathbb{R}^3$, where $\mathbf{n}_m = [n_{m,x}, n_{m,y}, n_{m,z}]^T$ is expressed in three dimensions for $m = 1, 2, \ldots, M$, $(\cdot)^T$ is the transpose operator, $\lambda$ is the wavelength, and $\mathbb{R}$ denotes the set of real numbers. The cardinality of $\mathbb{S}$ is denoted by $|\mathbb{S}|$, and $|\mathbb{S}| = M$. Each sensor receives the contribution of $K$ zero-mean, statistically independent, stationary, narrowband, non-Gaussian sources. We denote by $\mathbf{x}(t) = [x_1(t), x_2(t), \ldots, x_M(t)]^T$, $\mathbf{s}(t) = [s_1(t), s_2(t), \ldots, s_K(t)]^T$, and $\mathbf{v}(t) = [v_1(t), v_2(t), \ldots, v_M(t)]^T$ the observed sensor signals, the source signals, and the noise in the observed signals at the $t$th snapshot, respectively, where $t = 1, 2, \ldots$. The observed signals are modeled as:
$\mathbf{x}(t) = \mathbf{A}\mathbf{s}(t) + \mathbf{v}(t)$,    (1)
where $\mathbf{A} = [\mathbf{a}(\theta_1), \mathbf{a}(\theta_2), \ldots, \mathbf{a}(\theta_K)] \in \mathbb{C}^{M \times K}$ denotes the array manifold matrix, $\mathbf{a}(\theta_k) = [e^{j2\pi\mathbf{u}_k^T\mathbf{n}_1}, e^{j2\pi\mathbf{u}_k^T\mathbf{n}_2}, \ldots, e^{j2\pi\mathbf{u}_k^T\mathbf{n}_M}]^T$ is the steering vector for $\theta_k$, $j$ is the imaginary unit, $\mathbf{u}_k = d/\lambda \cdot [\cos(\theta_k), \sin(\theta_k), 0]^T$, $\theta_k$ denotes the azimuth angle of the $k$th source, and $\mathbb{C}$ is the set of complex numbers. The noises are assumed to be statistically independent, zero-mean, white Gaussian, and independent of each source.
Build the FOCs vector $\mathbf{c}_x \in \mathbb{C}^{M^4}$, whose $i$th entry is $[\mathbf{c}_x]_i = \mathrm{cum}\{x_{i_1}(t), x_{i_2}(t), x^*_{i_3}(t), x^*_{i_4}(t)\}$ with $i = (i_1-1)M^3 + (i_2-1)M^2 + (i_3-1)M + i_4$, where $\mathrm{cum}\{\cdot\}$ denotes the cumulant operator, $(\cdot)^*$ denotes the complex conjugate, and the indexes $i_\gamma$ and $\gamma$ are integers satisfying $1 \le i_\gamma \le M$ and $1 \le \gamma \le 4$, respectively. Due to model (1), the above assumptions on the sources and noises, and the properties of cumulants [23,24], we can obtain
$[\mathbf{c}_x]_i = \sum_{k=1}^{K} c_{s_k}\, a_{i_1}(\theta_k)\, a_{i_2}(\theta_k)\, a^*_{i_3}(\theta_k)\, a^*_{i_4}(\theta_k)$,    (2)
where $c_{s_k} = \mathrm{cum}\{s_{k_1}(t), s_{k_2}(t), s^*_{k_3}(t), s^*_{k_4}(t)\}$ with $k_\gamma = k$, $a_{i_\gamma}(\theta_k)$ is the $i_\gamma$th element of $\mathbf{a}_\gamma(\theta_k)$, and $\mathbf{a}_\gamma(\theta_k) = \mathbf{a}(\theta_k)$. Then, we have
$\mathbf{c}_x = \mathbf{B}\mathbf{c}_s \in \mathbb{C}^{M^4}$,    (3)
where $\mathbf{c}_s = [c_{s_1}, c_{s_2}, \ldots, c_{s_K}]^T$, $\mathbf{B} = [\mathbf{b}(\theta_1), \mathbf{b}(\theta_2), \ldots, \mathbf{b}(\theta_K)]$, $\mathbf{b}(\theta_k) = \mathbf{a}_1(\theta_k) \otimes \mathbf{a}_2(\theta_k) \otimes \mathbf{a}^*_3(\theta_k) \otimes \mathbf{a}^*_4(\theta_k)$, and $\otimes$ denotes the Kronecker product.
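To make the data model concrete, the following minimal NumPy sketch simulates model (1) for a sparse linear array with circular QPSK sources. It is our own illustration, not code from the paper; the geometry, DOAs, SNR, and all variable names are assumptions chosen only for the example.

```python
import numpy as np

def steering_vector(positions, theta_deg, d_over_lambda=0.5):
    # positions: (M, 3) sensor coordinates n_m in units of d = lambda/2
    # a(theta) = exp(j*2*pi*u^T n_m), with u = d/lambda * [cos(theta), sin(theta), 0]^T
    theta = np.deg2rad(theta_deg)
    u = d_over_lambda * np.array([np.cos(theta), np.sin(theta), 0.0])
    return np.exp(1j * 2 * np.pi * positions @ u)

# illustrative 5-sensor sparse linear array on the x-axis (positions in units of d)
positions = np.array([[1, 0, 0], [2, 0, 0], [4, 0, 0], [11, 0, 0], [25, 0, 0]], float)
doas = np.array([-10.0, 20.0])            # assumed DOAs in degrees
L = 1000                                  # number of snapshots
A = np.stack([steering_vector(positions, th) for th in doas], axis=1)   # M x K manifold

# unit-power QPSK sources (non-Gaussian, circular) and white Gaussian noise
rng = np.random.default_rng(0)
s = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, size=(len(doas), L)) + np.pi / 4))
snr_db = 10.0
noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
v = noise_std * (rng.standard_normal((len(positions), L))
                 + 1j * rng.standard_normal((len(positions), L)))
x = A @ s + v                             # model (1): M x L observed snapshots
```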

3. The Proposed Method

In this section, we present an FOCs vector-based method to perform DOA estimation. Firstly, we provide a dimension reduction method to build a novel data model by employing the concept of a fourth-order difference co-array [8]. This method can also be treated as a denoising procedure to reduce the estimation errors of FOCs. Next, according to the newly built data model, our sparse model is efficiently established and solved by the OGSBI method. Some information about the proposed method (e.g., identifiability, complexity, and comparison with the state-of-the-art methods) is also discussed.

3.1. Dimension Reduction or Denoising Procedure

In this subsection, the dimension of model (3) is reduced by utilizing a fourth-order difference co-array. Here we supplement several definitions and properties to facilitate dimension reduction.
Definition 1.
Fourth-order difference co-array (FODCA): Given an array with arbitrary geometry $\mathbb{S}$, the fourth-order difference co-array $\mathbb{D}$ is defined as the set of distinct vector elements of
$\mathbb{D} = \{\mathbf{n}_{i_1} + \mathbf{n}_{i_2} - \mathbf{n}_{i_3} - \mathbf{n}_{i_4} : \mathbf{n}_{i_1}, \mathbf{n}_{i_2}, \mathbf{n}_{i_3}, \mathbf{n}_{i_4} \in \mathbb{S}\}$,    (4)
where $i_1, i_2, i_3, i_4 = 1, 2, \ldots, M$, as described in Section 2.
Let $\mathbf{m}_1, \mathbf{m}_2, \ldots, \mathbf{m}_{|\mathbb{D}|}$ denote the distinct elements in $\mathbb{D}$, where $|\mathbb{D}|$ is the cardinality of $\mathbb{D}$. The following definition gives the concept of a transformation matrix, which describes the relation between $\mathbf{b}(\theta)$ and the steering vector of $\mathbb{D}$ (see Property 2 in this subsection).
Definition 2.
The transformation matrix $\mathbf{J}$: Given the array $\mathbb{S}$, the associated FODCA $\mathbb{D}$ can be obtained by Definition 1. The transformation matrix $\mathbf{J}$ is an $M^4 \times |\mathbb{D}|$ matrix, where the $\xi$th column corresponds to the vector $\mathbf{m}_\xi \in \mathbb{D}$, and the $(i, \xi)$th entry of $\mathbf{J}$ satisfies
$J_{i,\xi} = \begin{cases} 1, & \text{if } \mathbf{n}_{i_1} + \mathbf{n}_{i_2} - \mathbf{n}_{i_3} - \mathbf{n}_{i_4} = \mathbf{m}_\xi, \\ 0, & \text{otherwise}, \end{cases}$    (5)
where the index $\xi = 1, 2, \ldots, |\mathbb{D}|$, and recall that $i = (i_1-1)M^3 + (i_2-1)M^2 + (i_3-1)M + i_4$ from Section 2.
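As an illustration of Definitions 1 and 2, the sketch below (our own illustration under the stated assumptions, not the authors' implementation) enumerates the FODCA of a given sensor set and builds the 0/1 transformation matrix $\mathbf{J}$ of (5). Its pseudo-inverse is what performs the dimension reduction introduced later in this subsection.

```python
import numpy as np

def build_fodca_and_J(positions):
    """positions: (M, 3) array of sensor locations n_m in units of d.
    Returns the distinct FODCA elements (|D| x 3) and the M^4 x |D| matrix J."""
    M = positions.shape[0]
    columns = {}        # maps a (rounded) FODCA vector to its column index xi
    rows = []           # for row i, the column xi such that J[i, xi] = 1
    for i1 in range(M):
        for i2 in range(M):
            for i3 in range(M):
                for i4 in range(M):
                    m = positions[i1] + positions[i2] - positions[i3] - positions[i4]
                    key = tuple(np.round(m, 9))
                    if key not in columns:
                        columns[key] = len(columns)
                    rows.append(columns[key])
    J = np.zeros((M ** 4, len(columns)))
    J[np.arange(M ** 4), rows] = 1.0
    D = np.array(list(columns.keys()))
    return D, J

# example: the 3-sensor linear array {1, 2, 6} x d discussed in Section 4.1
pos = np.array([[1, 0, 0], [2, 0, 0], [6, 0, 0]], float)
D, J = build_fodca_and_J(pos)
print(D.shape[0])       # 19 distinct FODCA elements for this array
# the reduction step later in this subsection is then c_D = pinv(J) @ c_x
```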
Property 1.
The transformation matrix J has full column rank.
Property 2.
Let $\mathbf{a}_{\mathbb{D}}(\theta)$ be the steering vector of $\mathbb{D}$ (i.e., $\mathbf{a}_{\mathbb{D}}(\theta) = [e^{j2\pi\mathbf{u}^T\mathbf{m}_1}, e^{j2\pi\mathbf{u}^T\mathbf{m}_2}, \ldots, e^{j2\pi\mathbf{u}^T\mathbf{m}_{|\mathbb{D}|}}]^T$). Then we have
$\mathbf{b}(\theta) = \mathbf{J}\mathbf{a}_{\mathbb{D}}(\theta)$,    (6)
where $\mathbf{b}(\theta) = \mathbf{a}_1(\theta) \otimes \mathbf{a}_2(\theta) \otimes \mathbf{a}^*_3(\theta) \otimes \mathbf{a}^*_4(\theta)$, $\mathbf{a}_\gamma(\theta)$ is the steering vector for $\theta$, and $\gamma = 1, 2, 3, 4$.
Property 3.
Define $H = \{|\mathbb{D}| : \mathbb{D} = f(\mathbb{S}),\ \mathbb{S} = \{\mathbf{n}_1, \ldots, \mathbf{n}_M\},\ \mathbf{n}_1, \ldots, \mathbf{n}_M \in \mathbb{R}^3\}$, where $f(\cdot)$ is the function that maps $\mathbb{S}$ to $\mathbb{D}$ by Definition 1. Then the supremum of $H$ is
$\sup\{H\} = (M^4 - 2M^3 + 7M^2 - 6M + 4)/4$.    (7)
Properties 1, 2, and 3 are proved in Appendix A, Appendix B, and Appendix C, respectively.
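As a quick numerical check of Property 3: for $M = 3$, (7) gives $\sup\{H\} = (81 - 54 + 63 - 18 + 4)/4 = 76/4 = 19$, which matches the 19 distinct FODCA elements reported for the array $\{1, 2, 6\} \times d$ in Section 4.1.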
From Property 2, we have $\mathbf{B} = \mathbf{J}\mathbf{A}_{\mathbb{D}}$, where $\mathbf{A}_{\mathbb{D}} = [\mathbf{a}_{\mathbb{D}}(\theta_1), \mathbf{a}_{\mathbb{D}}(\theta_2), \ldots, \mathbf{a}_{\mathbb{D}}(\theta_K)]$. Substituting it into (3), we obtain $\mathbf{c}_x = \mathbf{J}\mathbf{A}_{\mathbb{D}}\mathbf{c}_s$. Due to Property 1, we have
$\mathbf{J}^{+}\mathbf{c}_x = \mathbf{A}_{\mathbb{D}}\mathbf{c}_s \in \mathbb{C}^{|\mathbb{D}|}$,    (8)
where $\mathbf{J}^{+} = (\mathbf{J}^H\mathbf{J})^{-1}\mathbf{J}^H$ is the Moore–Penrose inverse of $\mathbf{J}$. Comparing model (8) with (3), it is clear that the row dimension decreases from $M^4$ to $|\mathbb{D}|$.
Remark 1: An analogous procedure can be found in covariance vector-based DOA estimation methods [22]. Among HOCs-based DOA estimation methods, rectangular $2q$-MUSIC provides two strategies to reduce the dimension of a rectangular FOCs matrix (i.e., the selection strategy and the averaging strategy) by eliminating redundancies of the associated left and right virtual steering vectors [7]. From this perspective, the proposed dimension reduction procedure is essentially a realization of the averaging strategy applied to the FODCA, with the explicit formulation (8). If the computational burden is not taken into consideration, the averaging strategy is clearly better than the selection strategy, owing to the averaging of the noise in the associated observation components [7]; thus, the averaging strategy can also be treated as a denoising procedure. In addition, the FODCA has the same geometry as the fourth-order virtual array with parameters $(q, l) = (4, 2)$ [25], which was deduced from an eighth-order cumulants-based array processing problem. Table IX of [25] gives the maximum number of distinct virtual sensors in the fourth-order virtual array with parameters $(q, l) = (4, 2)$ and space diversity (without angular and polarization diversity), and, owing to the identical geometry, it equals (7) after simplification. The detailed derivation of (7) is provided in Appendix C, since it is not given in [25].
Remark 2: It is known that when $2K < M$, the limiting root mean square error (RMSE) of covariance vector-based methods is not necessarily zero [22,26], while the RMSE of the traditional MUSIC method converges to zero as the signal-to-noise ratio (SNR) approaches infinity. Since the FODCA can be regarded as two levels of the second-order difference co-array [8,18] used to construct covariance vector-based methods, it can be deduced that the FOCs vector-based DOA estimation methods, including [8,16,17,18], will be outperformed by traditional MUSIC in high-SNR regions when $2K < M$. For this reason, we only suggest using these methods (including the proposed method) in underdetermined cases ($K \ge M$).

3.2. Sparse Model

In this subsection we develop the sparse model from (8). Let $\tilde{\boldsymbol{\theta}} = [\tilde{\theta}_1, \ldots, \tilde{\theta}_I]$ be a fixed sampling grid covering the DOA range, where $I \gg |\mathbb{D}|$, and let $\tilde{\boldsymbol{\theta}}$ be a uniform grid with grid resolution $r = \tilde{\theta}_2 - \tilde{\theta}_1$. Suppose $\tilde{\theta}_{w_k}$ is the grid point nearest to $\theta_k$, where $w_k \in \{1, \ldots, I\}$. Then, the steering vector $\mathbf{a}_{\mathbb{D}}(\theta_k)$ can be approximated by its first-order Taylor expansion
$\mathbf{a}_{\mathbb{D}}(\theta_k) \approx \mathbf{a}_{\mathbb{D}}(\tilde{\theta}_{w_k}) + \mathbf{a}'_{\mathbb{D}}(\tilde{\theta}_{w_k})(\theta_k - \tilde{\theta}_{w_k})$.    (9)
Denote
$\tilde{\mathbf{A}}_{\mathbb{D}} = [\mathbf{a}_{\mathbb{D}}(\tilde{\theta}_1), \ldots, \mathbf{a}_{\mathbb{D}}(\tilde{\theta}_I)]$,    (10)
$\tilde{\mathbf{A}}'_{\mathbb{D}} = [\mathbf{a}'_{\mathbb{D}}(\tilde{\theta}_1), \ldots, \mathbf{a}'_{\mathbb{D}}(\tilde{\theta}_I)]$.    (11)
Denote $\boldsymbol{\beta} = [\beta_1, \ldots, \beta_I]^T \in [-r/2, r/2]^I$ and $\boldsymbol{\Phi} = \tilde{\mathbf{A}}_{\mathbb{D}} + \tilde{\mathbf{A}}'_{\mathbb{D}}\,\mathrm{diag}\{\boldsymbol{\beta}\}$, where, for $w = 1, \ldots, I$, $\beta_w = \theta_k - \tilde{\theta}_{w_k}$ if $w = w_k$ for some $k = 1, \ldots, K$ and $\beta_w = 0$ otherwise, and $\mathrm{diag}\{\cdot\}$ denotes the operator that forms a diagonal matrix from a vector or forms a vector from the diagonal entries of a matrix. Neglecting the approximation errors of (9), model (8) can be expressed as:
$\mathbf{J}^{+}\mathbf{c}_x = \boldsymbol{\Phi}\mathbf{z}$,    (12)
where $\mathbf{z} = [z_1, z_2, \ldots, z_I]^T$ is the zero-padded extension of $\mathbf{c}_s$ from $[\theta_1, \ldots, \theta_K]$ to $[\tilde{\theta}_1, \ldots, \tilde{\theta}_I]$.
Consider now the presence of estimation errors in the FOCs. Let $\hat{\mathbf{c}}_x = \mathbf{c}_x + \mathbf{e}$, where $\hat{\mathbf{c}}_x$ is the estimate of $\mathbf{c}_x$ and $\mathbf{e}$ is the vector of estimation errors. Then, model (12) becomes
$\mathbf{J}^{+}\hat{\mathbf{c}}_x = \boldsymbol{\Phi}\mathbf{z} + \mathbf{J}^{+}\mathbf{e}$.    (13)
Assume $\mathbf{e} \sim \mathrm{AsCN}(\mathbf{0}_{M^4}, \mathbf{Q})$ [1], where $\mathrm{AsCN}$ denotes the asymptotic complex normal distribution, $\mathbf{0}_{M^4}$ denotes the zero vector with $M^4$ elements, and $\mathbf{Q} \in \mathbb{C}^{M^4 \times M^4}$ is a positive definite (Hermitian) matrix. The estimation of the FOCs and of the matrix $\mathbf{Q}$ is discussed in Section 3.3. Under this assumption, we have $\mathbf{J}^{+}\mathbf{e} \sim \mathrm{AsCN}(\mathbf{0}_{|\mathbb{D}|}, \tilde{\mathbf{Q}})$, where $\tilde{\mathbf{Q}} = \mathbf{J}^{+}\mathbf{Q}(\mathbf{J}^{+})^{T}$. Furthermore, $\tilde{\mathbf{Q}}^{-\frac{1}{2}}\mathbf{J}^{+}\mathbf{e} \sim \mathrm{AsCN}(\mathbf{0}_{|\mathbb{D}|}, \mathbf{I}_{|\mathbb{D}| \times |\mathbb{D}|})$, where $\mathbf{I}_{|\mathbb{D}| \times |\mathbb{D}|}$ is the $|\mathbb{D}|$-dimensional identity matrix. We denote
$\tilde{\mathbf{c}}_x = \tilde{\mathbf{Q}}^{-\frac{1}{2}}\mathbf{J}^{+}\hat{\mathbf{c}}_x \in \mathbb{C}^{|\mathbb{D}|}$,    (14)
$\tilde{\boldsymbol{\Phi}} = \tilde{\mathbf{Q}}^{-\frac{1}{2}}\boldsymbol{\Phi} = \tilde{\mathbf{Q}}^{-\frac{1}{2}}(\tilde{\mathbf{A}}_{\mathbb{D}} + \tilde{\mathbf{A}}'_{\mathbb{D}}\,\mathrm{diag}\{\boldsymbol{\beta}\}) \in \mathbb{C}^{|\mathbb{D}| \times I}$,    (15)
$\tilde{\mathbf{e}} = \tilde{\mathbf{Q}}^{-\frac{1}{2}}\mathbf{J}^{+}\mathbf{e} \in \mathbb{C}^{|\mathbb{D}|}$.    (16)
Then, we obtain the following SMV off-grid sparse model
$\tilde{\mathbf{c}}_x = \tilde{\boldsymbol{\Phi}}\mathbf{z} + \tilde{\mathbf{e}}$,    (17)
with $\tilde{\mathbf{e}} \sim \mathrm{AsCN}(\mathbf{0}_{|\mathbb{D}|}, \mathbf{I}_{|\mathbb{D}| \times |\mathbb{D}|})$. Note that if $\beta_w = 0$ for $w = 1, \ldots, I$, model (17) reduces to an on-grid DOA model.
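For a linear FODCA, the grid dictionary (10), its derivative (11), and the off-grid dictionary of (12) can be formed as in the following sketch. This is a simplified illustration under our own assumptions (a grid in degrees, illustrative virtual lags, and our variable names); the whitening by $\tilde{\mathbf{Q}}^{-1/2}$ in (14)-(16) is omitted here.

```python
import numpy as np

def fodca_dictionaries(lags, grid_deg, d_over_lambda=0.5):
    """lags: 1-D array of FODCA positions (in units of d) for a linear array.
    Returns A_D (|D| x I) of (10) and its derivative dA_D (|D| x I) of (11)."""
    theta = np.deg2rad(grid_deg)                            # I grid angles
    phase = 2 * np.pi * d_over_lambda * np.outer(lags, np.cos(theta))
    A_D = np.exp(1j * phase)
    # derivative of exp(j*2*pi*d*lag*cos(theta)) w.r.t. theta, expressed per degree
    dA_D = A_D * (-1j * 2 * np.pi * d_over_lambda * np.outer(lags, np.sin(theta))) * (np.pi / 180)
    return A_D, dA_D

grid = np.arange(-90.0, 90.0, 0.5)       # grid resolution r = 0.5 degrees
lags = np.arange(-12, 13)                # illustrative hole-free set of virtual lags
A_D, dA_D = fodca_dictionaries(lags, grid)
beta = np.zeros(grid.size)               # off-grid offsets, each within [-r/2, r/2]
Phi = A_D + dA_D * beta                  # Phi of (12): A_D + A_D' diag(beta)
```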

3.3. Estimation of FOCs and the Matrix Q

To facilitate the implementation of the proposed method, in this subsection we present the estimation methods of FOCs and the matrix Q .
Let us introduce the following general notations: $m_{4,x}(i_1, i_2, i_3^*, i_4^*) = E\{x_{i_1}(t)x_{i_2}(t)x^*_{i_3}(t)x^*_{i_4}(t)\}$, $m_{2,x}(i_1, i_2^*) = E\{x_{i_1}(t)x^*_{i_2}(t)\}$, and $m_{2,x}(i_1, i_2) = E\{x_{i_1}(t)x_{i_2}(t)\}$. Their empirical estimators are $\hat{m}_{4,x}(i_1, i_2, i_3^*, i_4^*) = \frac{1}{L}\sum_{t=1}^{L} x_{i_1}(t)x_{i_2}(t)x^*_{i_3}(t)x^*_{i_4}(t)$, $\hat{m}_{2,x}(i_1, i_2^*) = \frac{1}{L}\sum_{t=1}^{L} x_{i_1}(t)x^*_{i_2}(t)$, and $\hat{m}_{2,x}(i_1, i_2) = \frac{1}{L}\sum_{t=1}^{L} x_{i_1}(t)x_{i_2}(t)$, respectively, where $L$ is the number of snapshots. Due to the assumptions on the sources and noises, $[\mathbf{c}_x]_i$ can be expressed as [25]:
$[\mathbf{c}_x]_i = \mathrm{cum}\{x_{i_1}(t), x_{i_2}(t), x^*_{i_3}(t), x^*_{i_4}(t)\} = m_{4,x}(i_1, i_2, i_3^*, i_4^*) - m_{2,x}(i_1, i_2)\,m^*_{2,x}(i_3, i_4) - m_{2,x}(i_1, i_3^*)\,m_{2,x}(i_2, i_4^*) - m_{2,x}(i_1, i_4^*)\,m_{2,x}(i_2, i_3^*)$.    (18)
Utilizing the corresponding empirical estimators of the terms in (18), $[\mathbf{c}_x]_i$ can be estimated by
$[\hat{\mathbf{c}}_x]_i = \hat{m}_{4,x}(i_1, i_2, i_3^*, i_4^*) - \hat{m}_{2,x}(i_1, i_2)\,\hat{m}^*_{2,x}(i_3, i_4) - \hat{m}_{2,x}(i_1, i_3^*)\,\hat{m}_{2,x}(i_2, i_4^*) - \hat{m}_{2,x}(i_1, i_4^*)\,\hat{m}_{2,x}(i_2, i_3^*)$.    (19)
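A direct, vectorized sketch of the empirical estimator (19) is given below. It is our own illustration (the index ordering follows $i = (i_1-1)M^3 + (i_2-1)M^2 + (i_3-1)M + i_4$ from Section 2, and the symmetries of $\mathbf{c}_x$ are not exploited).

```python
import numpy as np

def foc_vector_estimate(x):
    """Empirical FOCs vector of (19) for snapshots x of shape (M, L)."""
    M, L = x.shape
    xc = np.conj(x)
    R = x @ xc.T / L                       # m2_hat(i1, i2*) = E{x_i1 x_i2^*}
    T = x @ x.T / L                        # m2_hat(i1, i2)  = E{x_i1 x_i2}
    # fourth-order moment estimates m4_hat(i1, i2, i3*, i4*), arranged as an M^4 vector
    m4 = (np.einsum('il,jl,kl,ml->ijkm', x, x, xc, xc) / L).reshape(M ** 4)
    # the three second-order products subtracted in (19)
    term1 = np.einsum('ij,km->ijkm', T, np.conj(T)).reshape(M ** 4)
    term2 = np.einsum('ik,jm->ijkm', R, R).reshape(M ** 4)
    term3 = np.einsum('im,jk->ijkm', R, R).reshape(M ** 4)
    return m4 - term1 - term2 - term3
```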
Consider the complex variables $X(t) = x_{i_1}(t)x_{i_2}(t)x^*_{i_3}(t)x^*_{i_4}(t) - x_{i_1}(t)x_{i_2}(t)\sum_{t_0=1}^{L} x^*_{i_3}(t_0)x^*_{i_4}(t_0)/L - x_{i_1}(t)x^*_{i_3}(t)\sum_{t_1=1}^{L} x_{i_2}(t_1)x^*_{i_4}(t_1)/L - x_{i_1}(t)x^*_{i_4}(t)\sum_{t_2=1}^{L} x_{i_2}(t_2)x^*_{i_3}(t_2)/L$ for $t = 1, 2, \ldots, L$. It is obvious that $\lim_{L \to +\infty} E\{X(t)\} = [\mathbf{c}_x]_i$. According to the central limit theorem, as $L \to +\infty$, the sample average $\bar{X} = \frac{1}{L}\sum_{t=1}^{L} X(t) = [\hat{\mathbf{c}}_x]_i$ converges in distribution to a complex normal random variable. This conclusion theoretically supports the assumption $\mathbf{e} = \hat{\mathbf{c}}_x - \mathbf{c}_x \sim \mathrm{AsCN}(\mathbf{0}_{M^4}, \mathbf{Q})$ made in Section 3.2, and we aim for a robust estimation of $\mathbf{Q}$ in the following derivation.
Consider signals given by
$\mathbf{y}(t) = \mathbf{x}(t) \otimes \mathbf{x}(t) - E\{\mathbf{x}(t) \otimes \mathbf{x}(t)\}$.    (20)
It is obvious that $E\{\mathbf{y}(t)\} = \mathbf{0}_{M^2}$. Let $y_g(t)$ denote the $g$th element of $\mathbf{y}(t)$, $g_1 = (i_1-1)M + i_2$, and $g_2 = (i_3-1)M + i_4$; then we have
$m_{2,y}(g_1, g_2^*) = m_{4,x}(i_1, i_2, i_3^*, i_4^*) - m_{2,x}(i_1, i_2)\,m^*_{2,x}(i_3, i_4)$.    (21)
Rewrite (18) as:
$[\mathbf{c}_x]_i = m_{2,y}(g_1, g_2^*) - m_{2,x}(i_1, i_3^*)\,m_{2,x}(i_2, i_4^*) - m_{2,x}(i_1, i_4^*)\,m_{2,x}(i_2, i_3^*)$,    (22)
where y ( t ) is given by (20).
Lemma 1.
Given two jointly distributed complex-valued random variables $Y_t$ and $Z_t$ with snapshot index $t = 1, 2, \ldots, L$, where $Y_t$ and $Z_t$ are each independent from snapshot to snapshot. As $L$ approaches infinity, it holds that
$\mathrm{var}(\overline{Y_t Z_t}) / \mathrm{var}(\bar{Y}_t \bar{Z}_t) = O(L)$,    (23)
where $\mathrm{var}(\cdot)$ is the variance, and $\overline{Y_t Z_t}$, $\bar{Y}_t$, and $\bar{Z}_t$ are the sample averages of $Y_t Z_t$, $Y_t$, and $Z_t$ over $L$ snapshots, respectively.
Proof. See Appendix D.
Due to Lemma 1, from (19) and (22), it is easy to see that when the number of snapshots $L$ is large, the term $\hat{m}_{2,y}(g_1, g_2^*)$ has a much larger variance than the other two terms in (22). This implies that the estimation error of $[\hat{\mathbf{c}}_x]_i$ is mainly caused by the term $\hat{m}_{2,y}(g_1, g_2^*)$, giving rise to the following approximate expression for $e_i$:
$e_i = [\hat{\mathbf{c}}_x]_i - [\mathbf{c}_x]_i \approx \hat{m}_{2,y}(g_1, g_2^*) - m_{2,y}(g_1, g_2^*)$.    (24)
Since the signals are independent from snapshot to snapshot and $E\{\mathbf{y}(t)\} = \mathbf{0}_{M^2}$, we can obtain
$E\{e_i e_j^*\} \approx \frac{1}{L^2}\sum_{t_1=1}^{L}\sum_{t_2=1}^{L} E\{[y_{g_1}(t_1)y^*_{g_2}(t_1)][y_{h_1}(t_2)y^*_{h_2}(t_2)]^*\} - m_{2,y}(g_1, g_2^*)\,m^*_{2,y}(h_1, h_2^*) = \mathrm{cum}\{y_{g_1}(t), y^*_{g_2}(t), y^*_{h_1}(t), y_{h_2}(t)\}/L + m_{2,y}(g_1, h_1^*)\,m_{2,y}(g_2^*, h_2)/L + m_{2,y}(g_1, h_2)\,m_{2,y}(g_2^*, h_1^*)/L$,    (25)
where $h_1 = (j_1-1)M + j_2$, $h_2 = (j_3-1)M + j_4$, $j = (h_1-1)M^2 + h_2$, and $j_1, j_2, j_3, j_4 = 1, 2, \ldots, M$.
In (25), the term $\mathrm{cum}\{y_{g_1}(t), y^*_{g_2}(t), y^*_{h_1}(t), y_{h_2}(t)\}/L$ contains eighth-order statistics of $\mathbf{x}(t)$, which significantly aggravate the inaccuracy of the estimation if the number of snapshots is not large enough. In addition, as the number of snapshots approaches infinity, $E\{e_i e_j^*\} \to 0$. Therefore, we neglect this term to obtain a robust estimate of $\mathbf{Q}$, given by
$\hat{Q}_{i,j} = \hat{m}_{2,y}(g_1, h_1^*)\,\hat{m}_{2,y}(g_2^*, h_2)/L + \hat{m}_{2,y}(g_1, h_2)\,\hat{m}_{2,y}(g_2^*, h_1^*)/L$.    (26)
Rewrite (26) in matrix form
$\hat{\mathbf{Q}} = \hat{\mathbf{R}}_y \otimes \hat{\mathbf{R}}_y^T/L + [\hat{\mathbf{T}}_y \otimes \hat{\mathbf{T}}^*_{y,1}, \hat{\mathbf{T}}_y \otimes \hat{\mathbf{T}}^*_{y,2}, \ldots, \hat{\mathbf{T}}_y \otimes \hat{\mathbf{T}}^*_{y,M^2}]/L$,    (27)
where $\hat{\mathbf{R}}_y = \sum_{t=1}^{L}\mathbf{y}(t)\mathbf{y}^H(t)/L$, $\hat{\mathbf{T}}_y = \sum_{t=1}^{L}\mathbf{y}(t)\mathbf{y}^T(t)/L$, and $\hat{\mathbf{T}}_{y,h_1}$ is the $h_1$th column of $\hat{\mathbf{T}}_y$.
Note that the estimator $\hat{\mathbf{Q}}$ in (27) needs to be revised in certain cases. It is clear that, for the noise-free case, $\hat{\mathbf{R}}_y = (\mathbf{A} \otimes \mathbf{A})\sum_{t=1}^{L}\{[\mathbf{s}(t)\mathbf{s}^H(t)] \otimes [\mathbf{s}(t)\mathbf{s}^H(t)]\}(\mathbf{A} \otimes \mathbf{A})^H/L$ and $\hat{\mathbf{T}}_y = (\mathbf{A} \otimes \mathbf{A})\sum_{t=1}^{L}\{[\mathbf{s}(t)\mathbf{s}^T(t)] \otimes [\mathbf{s}(t)\mathbf{s}^T(t)]\}(\mathbf{A} \otimes \mathbf{A})^T/L$. When $K < M$, the ranks of both matrices $\hat{\mathbf{R}}_y$ and $\hat{\mathbf{T}}_y$ are $K^2$. As a result, the estimator (27) may be singular, which may lead to a singular matrix $\tilde{\mathbf{Q}}$; this is not allowed in model (17). Therefore, when $K < M$ and the SNR is high, to avoid the above problems, $\mathbf{Q}$ is estimated by
$\hat{\mathbf{Q}} = \hat{\mathbf{R}}_y \otimes \hat{\mathbf{R}}_y^T/L + [\hat{\mathbf{T}}_y \otimes \hat{\mathbf{T}}^*_{y,1}, \hat{\mathbf{T}}_y \otimes \hat{\mathbf{T}}^*_{y,2}, \ldots, \hat{\mathbf{T}}_y \otimes \hat{\mathbf{T}}^*_{y,M^2}]/L + \epsilon_1\mathbf{I}$,    (28)
where $\epsilon_1$ is a small positive number and $\mathbf{I}$ is the identity matrix of conforming dimension.
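The sketch below (our own illustration; variable names and the use of a sample mean for $E\{\mathbf{x}(t) \otimes \mathbf{x}(t)\}$ are assumptions) assembles the robust estimator (27), with the optional diagonal loading of (28). Because $\hat{\mathbf{Q}}$ is $M^4 \times M^4$, the example is only practical for small $M$.

```python
import numpy as np

def estimate_Q(x, eps=0.0):
    """Robust estimate of the FOCs error covariance, following (27)/(28).
    x: (M, L) snapshots; eps: small diagonal loading (epsilon_1 in (28))."""
    M, L = x.shape
    # y(t) = x(t) (kron) x(t) - E{x(t) (kron) x(t)}, as in (20), with a sample mean
    y = np.einsum('il,jl->ijl', x, x).reshape(M * M, L)
    y = y - y.mean(axis=1, keepdims=True)
    R_y = y @ np.conj(y).T / L                          # R_hat_y
    T_y = y @ y.T / L                                   # T_hat_y
    Q = np.kron(R_y, R_y.T) / L                         # first term of (27)
    # second term of (27): column blocks T_hat_y (kron) conj(T_hat_y[:, h1])
    for h1 in range(M * M):
        Q[:, h1 * M * M:(h1 + 1) * M * M] += np.kron(T_y, np.conj(T_y[:, [h1]])) / L
    return Q + eps * np.eye(M ** 4)
```

For circular sources, only the first term remains, which corresponds to the special case (30) given below.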
In particular, if the observed signals are circular, we have $m_{2,x}(l_1, l_2) = 0$ and $m_{4,x}(l_1, l_2, l_3, l_4) = 0$ for $l_1, l_2, l_3, l_4 = 1, \ldots, M$. Then, we obtain $\mathbf{y}(t) = \mathbf{x}(t) \otimes \mathbf{x}(t)$, and $[\mathbf{c}_x]_i$ can be estimated by
$[\hat{\mathbf{c}}_x]_i = \hat{m}_{4,x}(i_1, i_2, i_3^*, i_4^*) - \hat{m}_{2,x}(i_1, i_3^*)\,\hat{m}_{2,x}(i_2, i_4^*) - \hat{m}_{2,x}(i_1, i_4^*)\,\hat{m}_{2,x}(i_2, i_3^*)$.    (29)
Furthermore, it is obvious that $m_{2,y}(g_1, h_2)$ and $m_{2,y}(g_2^*, h_1^*)$ become zero in (25). Then, the estimator of $\mathbf{Q}$ in (27) becomes:
$\hat{\mathbf{Q}} = \hat{\mathbf{R}}_y \otimes \hat{\mathbf{R}}_y^T/L$.    (30)
Accordingly, when $K < M$ and the SNR is high, for circular sources $\mathbf{Q}$ is estimated by
$\hat{\mathbf{Q}} = \hat{\mathbf{R}}_y \otimes \hat{\mathbf{R}}_y^T/L + \epsilon_2\mathbf{I}$,    (31)
where $\epsilon_2$ is a small positive number.
Remark 3: Approximate expressions for the covariances of the FOCs that are more precise than (25) are derived in [5]. However, they are not easy to use in our method in practice, because several higher-order (greater than four) moments need to be estimated, which may make the estimation of $\mathbf{Q}$ less robust than that of the FOCs. For example, if the number of snapshots is large enough to generate precise estimates of the FOCs, but not large enough to generate precise estimates of the higher-order (greater than four) moments, the DOA estimation results may suffer from numerous outliers over independent experiments. In comparison, (26) contains at most fourth-order statistics, which means that it is more robust and has a lower computational cost. We will show in the numerical simulations that our estimators work well for the proposed method.

3.4. DOA Estimation by OGSBI Method

In this subsection, the sparse model (17) is solved by the off-grid sparse Bayesian inference (OGSBI) method to obtain the DOAs. Recall the sparse model (17):
$\tilde{\mathbf{c}}_x = \tilde{\boldsymbol{\Phi}}\mathbf{z} + \tilde{\mathbf{e}}$.    (32)
In Section 3.3, we showed that the estimation errors of the FOCs obey $\mathbf{e} = \hat{\mathbf{c}}_x - \mathbf{c}_x \sim \mathrm{AsCN}(\mathbf{0}_{M^4}, \mathbf{Q})$, and robust estimators of $\mathbf{Q}$ were also given. As $\tilde{\mathbf{e}} = \tilde{\mathbf{Q}}^{-\frac{1}{2}}\mathbf{J}^{+}\mathbf{e} \in \mathbb{C}^{|\mathbb{D}|}$, it is obvious that $\tilde{\mathbf{e}} \sim \mathrm{AsCN}(\mathbf{0}_{|\mathbb{D}|}, \mathbf{I}_{|\mathbb{D}| \times |\mathbb{D}|})$. Following the sparse Bayesian formulation in [21], we further assume $\tilde{\mathbf{e}} \sim \mathcal{CN}(\mathbf{0}_{|\mathbb{D}|}, \mathbf{I}_{|\mathbb{D}| \times |\mathbb{D}|})$ for simplicity. Then, we have
$p(\tilde{\mathbf{c}}_x \mid \mathbf{z}, \boldsymbol{\beta}) = \mathcal{CN}(\tilde{\boldsymbol{\Phi}}\mathbf{z}, \mathbf{I}_{|\mathbb{D}| \times |\mathbb{D}|})$.    (33)
Adopt the two-stage hierarchical prior for $\mathbf{z}$, $p(\mathbf{z}; \rho) = \int p(\mathbf{z} \mid \boldsymbol{\alpha})\,p(\boldsymbol{\alpha}; \rho)\,d\boldsymbol{\alpha}$, in which
$p(\mathbf{z} \mid \boldsymbol{\alpha}) = \mathcal{CN}(\mathbf{0}_I, \boldsymbol{\Lambda})$,    (34)
$p(\boldsymbol{\alpha}; \rho) = \prod_{w=1}^{I}\Gamma(\alpha_w; 1, \rho)$,    (35)
where $\rho > 0$, $\boldsymbol{\alpha} \in \mathbb{R}^I$, $\boldsymbol{\Lambda} = \mathrm{diag}\{\boldsymbol{\alpha}\}$, $\Gamma(\alpha_w; 1, \rho) = \rho e^{-\rho\alpha_w}$ is the probability density function (PDF) of the Gamma distribution with parameters $(1, \rho)$, and $\alpha_w$ is the $w$th entry of $\boldsymbol{\alpha}$. The Gamma hyperprior is assumed for $\alpha_w$ since it is a conjugate prior of the Gaussian distribution; it is widely used in SBL techniques and has demonstrated good performance and robustness [13,21,27,28].
Assume a uniform prior for $\boldsymbol{\beta}$: $\boldsymbol{\beta} \sim U([-r/2, r/2]^I)$. By combining the stages of the hierarchical Bayesian model, the joint PDF is
$p(\mathbf{z}, \tilde{\mathbf{c}}_x, \boldsymbol{\alpha}, \boldsymbol{\beta}) = p(\tilde{\mathbf{c}}_x \mid \mathbf{z}, \boldsymbol{\beta})\,p(\mathbf{z} \mid \boldsymbol{\alpha})\,p(\boldsymbol{\alpha})\,p(\boldsymbol{\beta})$.    (36)
The posterior distribution of $\mathbf{z}$ is then
$p(\mathbf{z} \mid \tilde{\mathbf{c}}_x, \boldsymbol{\Lambda}, \boldsymbol{\beta}) = \mathcal{CN}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$,    (37)
with
$\boldsymbol{\mu} = \boldsymbol{\Sigma}\tilde{\boldsymbol{\Phi}}^H\tilde{\mathbf{c}}_x$,    (38)
$\boldsymbol{\Sigma} = (\tilde{\boldsymbol{\Phi}}^H\tilde{\boldsymbol{\Phi}} + \boldsymbol{\Lambda}^{-1})^{-1} = \boldsymbol{\Lambda} - \boldsymbol{\Lambda}\tilde{\boldsymbol{\Phi}}^H(\mathbf{I} + \tilde{\boldsymbol{\Phi}}\boldsymbol{\Lambda}\tilde{\boldsymbol{\Phi}}^H)^{-1}\tilde{\boldsymbol{\Phi}}\boldsymbol{\Lambda}$.    (39)
As a single-snapshot case of the OGSBI method, we can obtain the following update of $\boldsymbol{\alpha}$ from the derivation in [21]:
$\alpha_w^{\mathrm{new}} = \left(\sqrt{1 + 4\rho E\{|z_w|^2\}} - 1\right)/(2\rho)$,    (40)
where $E\{|z_w|^2\} = |\mu_w|^2 + \Sigma_{ww}$, $\mu_w$ is the $w$th entry of $\boldsymbol{\mu}$, and $\Sigma_{ww}$ is the $(w, w)$th entry of $\boldsymbol{\Sigma}$. For $\boldsymbol{\beta}$, by maximizing $E\{\log p(\tilde{\mathbf{c}}_x \mid \mathbf{z}, \boldsymbol{\beta})\,p(\boldsymbol{\beta})\}$, we can obtain the update
$\boldsymbol{\beta}^{\mathrm{new}} = \arg\min_{\boldsymbol{\beta} \in [-r/2, r/2]^I}\{\boldsymbol{\beta}^T\mathbf{P}\boldsymbol{\beta} - 2\mathbf{q}^T\boldsymbol{\beta}\}$,    (41)
where $\mathbf{P} = \Re\{[(\tilde{\mathbf{A}}'_{\mathbb{D}})^H\tilde{\mathbf{A}}'_{\mathbb{D}}]^* \odot (\boldsymbol{\mu}\boldsymbol{\mu}^H + \boldsymbol{\Sigma})\}$, $\mathbf{q} = \Re\{\mathrm{diag}\{\boldsymbol{\mu}^*\}(\tilde{\mathbf{A}}'_{\mathbb{D}})^H(\tilde{\mathbf{c}}_x - \tilde{\mathbf{A}}_{\mathbb{D}}\boldsymbol{\mu})\} - \Re\{\mathrm{diag}\{(\tilde{\mathbf{A}}'_{\mathbb{D}})^H\tilde{\mathbf{A}}_{\mathbb{D}}\boldsymbol{\Sigma}\}\}$, $\Re(\cdot)$ takes the real part of a complex variable, and $\odot$ is the Hadamard product. An easier way to estimate $\boldsymbol{\beta}$ is provided in [21]. We use $\boldsymbol{\beta}$, $\mathbf{P}$, and $\mathbf{q}$ hereafter to denote their truncated versions for simplicity. By (41) and $\nabla_{\boldsymbol{\beta}}\{\boldsymbol{\beta}^T\mathbf{P}\boldsymbol{\beta} - 2\mathbf{q}^T\boldsymbol{\beta}\} = 2(\mathbf{P}\boldsymbol{\beta} - \mathbf{q})$, we have $\boldsymbol{\beta}^{\mathrm{new}} = \check{\boldsymbol{\beta}}$ if $\mathbf{P}$ is invertible and $\check{\boldsymbol{\beta}} = \mathbf{P}^{-1}\mathbf{q} \in [-r/2, r/2]^K$. Otherwise, we update $\boldsymbol{\beta}$ elementwise; that is, at each step we update one entry $\beta_k$ while fixing the other entries of $\boldsymbol{\beta}$. For $k = 1, \ldots, K$, we first let $\check{\beta}_k = \frac{q_k - (\mathbf{P}_k)^T_{-k}\boldsymbol{\beta}_{-k}}{P_{kk}}$, where $\mathbf{P}_k$ is the $k$th column of $\mathbf{P}$ and $\boldsymbol{\beta}_{-k}$ denotes $\boldsymbol{\beta}$ without the $k$th entry. Then, by constraining $\beta_k \in [-r/2, r/2]$, we have
$\beta_k^{\mathrm{new}} = \begin{cases}\check{\beta}_k, & \text{if } \check{\beta}_k \in [-r/2, r/2]; \\ -r/2, & \text{if } \check{\beta}_k < -r/2; \\ r/2, & \text{otherwise}.\end{cases}$    (42)
The OGSBI iteration is terminated when $\|\boldsymbol{\alpha}^{\kappa+1} - \boldsymbol{\alpha}^{\kappa}\|_2 / \|\boldsymbol{\alpha}^{\kappa}\|_2 < \tau$ or the maximum number of iterations is reached, where $\kappa$ is the iteration index and $\tau$ is a predefined tolerance. Then, $\boldsymbol{\alpha}$ can be treated as the pseudo-spectrum
$P_w = \alpha_w$.    (43)
Finding the grid indices of the highest $K$ peaks of $\mathbf{P}$, denoted by $\hat{p}_k$, $k = 1, \ldots, K$, we obtain the $K$ estimated DOAs as
$\hat{\theta}_k = \tilde{\theta}_{\hat{p}_k} + \hat{\beta}_{\hat{p}_k}$.    (44)
Note that the parameter $\rho$ in the method should be given in advance. According to the analysis in [21] and our own tests, the OGSBI method is insensitive to $\rho$ as long as $\rho$ is not too large. Furthermore, we have observed that with $\rho = 0.01$, the sparse Bayesian learning methods in [13,21,28] achieve good performance in a wide range of scenarios. By contrast, the 4-L1 method [16,17,18] is sensitive to the allowable error bound parameter, which is not easy to estimate (it was chosen to give the best results through trial-and-error for each scenario in the corresponding simulations in [16,17,18]). Therefore, compared with the 4-L1 method, the OGSBI method has the advantage of being convenient to parameterize.
Finally, we summarize our method in Algorithm 1.
Algorithm 1  The proposed sparse method using the denoised FOCs vector
1) Input: $\mathbf{x}$, $\mathbb{S}$, and $d$.
2) Initialization: $r$, $\boldsymbol{\alpha}$, $\rho$, $\epsilon_1$ (or $\epsilon_2$), and $\tau$.
3) According to the scenario, calculate $\hat{\mathbf{c}}_x$ by (19) or (29).
4) According to the scenario, calculate $\hat{\mathbf{Q}}$ by (27), (28), (30), or (31).
5) Calculate $\mathbf{J}$, $\tilde{\mathbf{A}}_{\mathbb{D}}$, $\tilde{\mathbf{A}}'_{\mathbb{D}}$, and $\tilde{\mathbf{c}}_x$ by Definition 2, (10), (11), and (14), respectively.
6) Repeat the following:
  a) Calculate $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ by (38) and (39), respectively.
  b) Update $\boldsymbol{\alpha}$ by (40).
  c) Update $\boldsymbol{\beta}$ by (42).
  d) If $\|\boldsymbol{\alpha}^{\kappa+1} - \boldsymbol{\alpha}^{\kappa}\|_2 / \|\boldsymbol{\alpha}^{\kappa}\|_2 < \tau$ or the maximum number of iterations is reached, terminate the iteration.
7) Find the grid indices of the highest peaks of (43), and output the DOAs by (44).
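The following sketch implements the core iteration of Algorithm 1 (steps 6a-6d) in NumPy. It is a simplified illustration under our own assumptions: the whitened quantities $\tilde{\mathbf{c}}_x$, $\tilde{\mathbf{A}}_{\mathbb{D}}$, and $\tilde{\mathbf{A}}'_{\mathbb{D}}$ are assumed to be already formed, the $\boldsymbol{\beta}$ update sweeps all grid points in a Jacobi-style pass instead of the truncated peak-only update described above, and all variable names are ours.

```python
import numpy as np

def ogsbi(c_tilde, A, dA, r, rho=0.01, tol=1e-4, max_iter=500):
    """Single-snapshot OGSBI updates (38)-(42) for model (17).
    c_tilde: (N,) whitened FOCs vector; A, dA: (N, I) dictionary and its derivative;
    r: grid resolution. Returns the pseudo-spectrum alpha and the offsets beta."""
    N, I = A.shape
    alpha = np.ones(I)
    beta = np.zeros(I)
    for _ in range(max_iter):
        Phi = A + dA * beta                                # whitened Phi of (15)
        # posterior moments (38)-(39), via the Woodbury form (invert an N x N matrix)
        C = np.eye(N) + (Phi * alpha) @ np.conj(Phi).T
        G = np.linalg.solve(C, Phi * alpha)
        Sigma = np.diag(alpha) - (alpha[:, None] * np.conj(Phi).T) @ G
        mu = Sigma @ (np.conj(Phi).T @ c_tilde)
        # alpha update (40)
        Ez2 = np.abs(mu) ** 2 + np.real(np.diag(Sigma))
        alpha_new = (np.sqrt(1 + 4 * rho * Ez2) - 1) / (2 * rho)
        # beta update (41)-(42), elementwise with clipping to [-r/2, r/2]
        P = np.real(np.conj(np.conj(dA).T @ dA) * (np.outer(mu, np.conj(mu)) + Sigma))
        q = np.real(np.conj(mu) * (np.conj(dA).T @ (c_tilde - A @ mu))) \
            - np.real(np.diag(np.conj(dA).T @ A @ Sigma))
        denom = np.where(np.diag(P) > 0, np.diag(P), 1.0)
        beta = np.clip((q - (P @ beta - np.diag(P) * beta)) / denom, -r / 2, r / 2)
        if np.linalg.norm(alpha_new - alpha) / np.linalg.norm(alpha) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return alpha, beta
```

The DOAs then follow (44): pick the $K$ largest entries of `alpha` and add the corresponding entries of `beta` to the grid angles.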

3.5. Identifiability of the Number of Sources

According to the theorems about parameter identifiability in [29,30,31] and the model (8), we have the following proposition.
Proposition 1.
Any $K$ sources can be uniquely identified from model (17) if and only if
$K < \mathrm{spark}\{\mathcal{B}_\theta\}/2$,    (45)
where $\mathrm{spark}\{\mathcal{B}_\theta\}$ is defined as the smallest number of elements in $\mathcal{B}_\theta$ that are linearly dependent [32], $\mathcal{B}_\theta = \{\mathbf{b}(\theta) : \theta \in \Theta\} \subset \mathbb{C}^{|\mathbb{D}|}$, and $\Theta$ is $[-\pi/2, \pi/2)$ for a linear array and $[-\pi, \pi)$ for a planar array.
It is generally difficult to compute $\mathrm{spark}\{\mathcal{B}_\theta\}$, except when $\mathbb{D}$ is a uniform linear array (ULA). In that case, $\mathbb{D}$ is hole-free, $\mathrm{spark}\{\mathcal{B}_\theta\} = |\mathbb{D}| + 1$, and $K < (|\mathbb{D}| + 1)/2$ sources can be identified. However, a hole-free FODCA with $O(M^4)$ DOFs has not been studied so far, to the best of our knowledge. Recently, [16,17,18] provided several methods to construct FODCAs with large ULA segments, with which $K < (|\mathbb{D}_{\mathrm{ULA}}| + 1)/2$ sources can be identified, where $\mathbb{D}_{\mathrm{ULA}}$ is defined as the largest ULA segment in $\mathbb{D}$.
For general arrays, because $\mathcal{B}_\theta \subset \mathbb{C}^{|\mathbb{D}|}$, it is obvious that
$\mathrm{spark}\{\mathcal{B}_\theta\} \le |\mathbb{D}| + 1$.    (46)
According to Property 3, (46), and Proposition 1, we can obtain a necessary condition regarding the identifiability of the number of sources.
Theorem 1.
Consider model (17); a necessary condition to uniquely identify all sources is that the number of sources $K$ fulfills
$K \le (M^4 - 2M^3 + 7M^2 - 6M)/8$.    (47)
Proof. See Appendix E.
In other words, more than $(M^4 - 2M^3 + 7M^2 - 6M)/8$ sources cannot be uniquely identified by model (17).
In addition, there are $O(M^4)$ distinct elements in $\mathbb{D}_{\mathrm{ULA}}$ for certain sparse linear arrays (SLAs), such as four-level nested arrays [8] and arrays constructed from an expanding and shift scheme consisting of coprime arrays or nested arrays [18]. Therefore, the proposed method can identify at most $O(M^4)$ sources if such arrays (but not only these) are used.

3.6. Complexity

Consider the worst case of complexity. For non-circular sources, the estimation of $\mathbf{c}_x$ using (19) needs $O(LM^4)$ multiplications, and the estimation of $\mathbf{Q}$ using (27) or (28) needs $O(LM^8)$ multiplications. Note that the symmetries of $\mathbf{c}_x$ and $\mathbf{Q}$ are not taken into account here, although this does not affect the order of complexity. Each iteration of (38) and (39) needs $O(I^2|\mathbb{D}|)$ multiplications, which is much more than each iteration of (40) and (42). Let $T$ denote the number of iterations, and consider the worst case $O(|\mathbb{D}|) = O(M^4)$. Combining all of the above processes, the complexity of the proposed method is $O(LM^8 + I^2M^4T)$.

3.7. Comparison with State-of-the-Art Methods

In this subsection, we compare the proposed method with state-of-the-art FOCs-based ones, in terms of required geometry, identifiability, prior knowledge of the number of sources, ability to handle correlated sources, numerical complexity, and sensitivity to parameters. Associated methods are listed as follows:
  • 4-MUSIC [5,6]: non-redundant version with averaging strategy [7], denoted by A-4-MUSIC;
  • 4-SS-MUSIC [8]: with averaging strategy, denoted by A-4-SS-MUSIC;
  • 4-L1 [16,17,18].
Associated comparison results are illustrated in Table 1.
In Table 1, $T_1$ and $T_2$ denote the numbers of iterations of the proposed method and 4-L1, respectively. All comparison results are obvious or were discussed in the previous subsections, except for the comparison of complexity. Recall that $M$ is the number of sensors, $L$ is the number of snapshots, and $I$ is the number of grid points. It should be noted that the complexities of the four methods are derived under the assumption $O(|\mathbb{D}|) = O(M^4)$, which gives the best identifiability and the highest complexity for all methods. Due to the condition $I \gg |\mathbb{D}|$ required to exploit sparsity in the proposed method and 4-L1, it is clear that $O(I) \ge O(M^4)$, which results in $O(I^3T_2) > O(IM^8)$ and $O(I^2M^4T_1) > O(IM^8)$. Consequently, the proposed method and 4-L1 always have higher complexity than A-4-MUSIC and A-4-SS-MUSIC.
Next, we compare the complexity of the proposed method with that of 4-L1. Firstly, if the number of snapshots $L$ is large enough that the terms $O(I^3T_2)$ and $O(I^2M^4T_1)$ become negligible, the proposed method has higher complexity than 4-L1 due to the term $O(LM^8)$ caused by the estimation of $\mathbf{Q}$. Secondly, if $L$ is small, we limit the comparison to $O(I^3T_2)$ versus $O(I^2M^4T_1)$. It is clear that $O(I^3) \ge O(I^2M^4)$, implying that the proposed method has lower complexity than 4-L1 per iteration. However, 4-L1 usually needs fewer iterations than the proposed method (i.e., $T_1 > T_2$), so it is not easy to draw a conclusion. In fact, to achieve super-resolution, 4-L1 needs a dense grid, while the proposed method only needs a coarser grid thanks to the off-grid parameter estimation, which favors our method in terms of complexity. It should be noted that the off-grid parameter estimation technique has been successfully applied to L1-norm minimization methods in [33,34]. However, to the best of our knowledge, an off-grid version of the 4-L1 method has not been published, and developing one is beyond the scope of this paper.

4. Numerical Simulations

Numerical simulations were carried out to demonstrate the superior performance of the proposed method. In the following simulations, the sampled analytic signals of statistically independent quaternary phase-shift keying (QPSK) signals were used as circular sources. The RMSE was chosen as the performance criterion, computed as
$\mathrm{RMSE} = \sqrt{\frac{1}{FK}\sum_{f=1}^{F}\sum_{k=1}^{K}|\hat{\theta}_{k,f} - \theta_k|^2}$,    (48)
where $F$ is the number of independent experiments and $\hat{\theta}_{k,f}$ is the estimate of $\theta_k$ in the $f$th independent experiment. The RMSE performance of the four methods in Table 1 was investigated with respect to the number of snapshots, the grid, adjacent sources, SNR, modelling errors, geometry, and the circularity of the sources. In the proposed method, we set $\rho = 0.01$ and $\tau = 10^{-4}$ for all scenarios, and set $\epsilon_1 = 10^{-5}$ or $\epsilon_2 = 10^{-5}$ for the cases with $K < M$ and $\mathrm{SNR} \ge 8$ dB. 4-L1 was solved by the SeDuMi toolbox [35], and the associated allowable error bound was chosen to give the best results through trial-and-error for each scenario, as done in [16,17,18]. Each simulation executed $F = 200$ independent experiments for all methods.
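A small helper matching (48) is shown below; it is our own convenience function (the pairing of estimated and true DOAs is assumed to be handled beforehand).

```python
import numpy as np

def rmse(theta_hat, theta_true):
    """RMSE of (48): theta_hat is (F, K) estimates over F trials, theta_true is (K,)."""
    err = theta_hat - theta_true[None, :]
    return np.sqrt(np.mean(np.abs(err) ** 2))
```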

4.1. Identifiability

Consider an SLA with three sensors located at $\mathbb{S} = \{1, 2, 6\} \times d$. It can be deduced that the associated FODCA has 19 distinct elements; that is, $|\mathbb{D}| = 19$, which equals $\sup\{H\}$ for $M = 3$. According to Theorem 1, no more than nine sources can be uniquely identified by the proposed method. Let us consider nine sources with DOAs $\{-60^\circ, -45^\circ, -30^\circ, -15^\circ, 0^\circ, 15^\circ, 30^\circ, 45^\circ, 60^\circ\}$. The number of snapshots and the SNR were set to $L = 10^4$ and $\mathrm{SNR} = 20$ dB, respectively, and the grid resolution was set to $r = 0.5^\circ$. Figure 1a,b shows the normalized spectra of the proposed method and 4-L1, respectively. As can be seen, all DOAs of the sources were identified successfully. Note that this is unachievable for A-4-MUSIC and A-4-SS-MUSIC: the virtual array of A-4-MUSIC contains seven distinct virtual sensors, which can identify six sources at most, and the largest ULA segment of the FODCA contains 13 distinct elements, giving a maximum identifiable number of six for A-4-SS-MUSIC.

4.2. Impact of the Grid Resolution

Consider an SLA $\{1, 2, 4, 11, 25\} \times d$, which is constructed from an expanding and shift scheme consisting of the two nested arrays $\{1, 2, 4\} \times d$ and $\{1, 2, 4\} \times d$ [18]. The number of distinct elements of the corresponding FODCA and of the associated largest ULA segment is $|\mathbb{D}| = 85$ and $|\mathbb{D}_{\mathrm{ULA}}| = 55$, respectively. Consider two independent sources with DOAs $\{0^\circ, 20^\circ\} + \zeta_1$, where $\zeta_1$ is a random variable drawn uniformly from $[-1^\circ, 1^\circ]$. The SNR was set to 3 dB.
Figure 2a,b shows the RMSE performance of the four methods with respect to the number of snapshots, with grid resolutions $r = 0.5^\circ$ and $r = 0.1^\circ$, respectively. In Figure 2a, when the number of snapshots was large, the RMSEs of A-4-MUSIC, A-4-SS-MUSIC, and 4-L1 did not decrease as the snapshots increased, mainly because of the mismatch between the grid and the true DOAs. Thanks to the off-grid parameter estimation, the proposed method outperformed the other three methods for most of the simulated snapshot numbers. In Figure 2b, the grid resolution is $r = 0.1^\circ$, and the RMSEs of A-4-MUSIC, A-4-SS-MUSIC, and 4-L1 reached the level of the proposed method when the number of snapshots was between 50 and 2000. Nonetheless, the proposed method still outperformed the other three methods when the number of snapshots was at least 2000, and it also had a smaller RMSE than A-4-MUSIC and 4-L1 when the number of snapshots was less than 100. The RMSEs of A-4-SS-MUSIC were slightly larger than those of the other three methods when the number of snapshots was between 50 and 5000, which may be caused by the fact that 4-SS-MUSIC does not utilize all the virtual sensors in the associated FODCA. In addition, the dashed line in Figure 2b gives the RMSEs of the proposed method for $r = 0.5^\circ$; it is clear that the grid resolution seldom affected the RMSEs of the proposed method when the number of snapshots was less than 2000, which implies that, when the number of snapshots is not too large, the proposed method can reduce the computational complexity by selecting a relatively coarse grid without a heavy loss of RMSE performance.

4.3. Identifiability for Adjacent Sources

Next, we investigated the identifiability of adjacent sources using the proposed method. The array and parameters were set as in Section 4.2, except that the grid resolution was set to $r = 0.1^\circ$ and the DOAs of the sources were set to $\{\zeta_2, \zeta_2 + \Delta\theta\}$, where $\zeta_2$ is a random variable drawn uniformly from $[-1^\circ, 1^\circ]$ and $\Delta\theta = 0.5^\circ, 1^\circ, \ldots, 5^\circ$ are the different DOA intervals. Figure 3a,b shows the RMSE performance of the four methods for the different DOA intervals, when the number of snapshots is 300 and the SNR is 3 dB, and when the number of snapshots is 1000 and the SNR is 10 dB, respectively. It is clear that the proposed method had the smallest RMSEs for most of the simulated DOA intervals in both simulations, demonstrating its superior identifiability of adjacent sources.

4.4. Impact of SNR

The array and parameters were set as in Section 4.2, except the grid resolution was set to r = 0.1 . Figure 4 plots the RMSEs of four methods as a function of SNR, where the number of snapshots was set to L = 500 . It is clear that the four methods had almost equal RMSEs when SNR was smaller than 4 dB. In the high SNR region, a relatively better performance of the proposed method appeared, benefiting from the off-grid parameter estimation as discussed in the large snapshot number region in Section 4.2.

4.5. Impact of Modelling Errors

We also studied the effect of modelling errors on the proposed method. We used the same array geometry, parameters, and way of generating the $K = 2$ DOAs as in Section 4.4. We perturbed the steering vectors by adding a modelling error vector $\mathbf{e}_a(\theta_k)$, giving the perturbed steering vector $\tilde{\mathbf{a}}(\theta_k) = \mathbf{a}(\theta_k) + \mathbf{e}_a(\theta_k)$, where $k = 1, \ldots, K$, and the $\mathbf{e}_a(\theta_k)$ are assumed to be zero-mean, statistically independent, and Gaussian distributed with $E\{\mathbf{e}_a(\theta_k)\mathbf{e}_a(\theta_k)^H\} = \sigma^2_{a}(\theta_k)\mathbf{I}$, where $\sigma^2_{a}(\theta_k) = 0.0025$. Figure 5 shows the RMSE performance of the four methods under such modelling errors. To facilitate comparison, the RMSEs of the proposed method with no modelling errors are also plotted as a dashed line. It can be observed that the RMSE performance of the four methods did not degrade heavily. However, when the number of snapshots exceeded 1000, the superior performance of the proposed method disappeared, suggesting that modelling errors do have a large impact on the off-grid parameter estimation of the proposed method.

4.6. Underdetermined Case

We investigated the performance of the proposed method in underdetermined cases. The same array $\{1, 2, 4, 11, 25\} \times d$ was used with grid resolution $r = 0.1^\circ$, and six independent sources with DOAs $\{-45^\circ, -30^\circ, -15^\circ, 0^\circ, 25^\circ, 40^\circ\}$ were set. Figure 6 shows the RMSE performance of the four methods. In Figure 6a, the number of snapshots varied from 100 to 10,000 and the SNR was fixed at 20 dB. It is clear that the proposed method had the smallest RMSEs for most of the simulated snapshot numbers. In Figure 6b, the SNR varied from −4 dB to 28 dB and the number of snapshots was fixed at 1000. It can be observed that the proposed method had smaller RMSEs when $\mathrm{SNR} > 4$ dB, but slightly larger values when $\mathrm{SNR} \le 4$ dB, compared with the other three methods. The RMSE performance of A-4-SS-MUSIC was clearly worse than that of the other three methods in the large-snapshot and high-SNR regions, because 4-SS-MUSIC does not utilize all the virtual sensors in the associated FODCA. Moreover, the RMSE of each method seemed to converge to a positive constant as the SNR increased; a similar phenomenon was observed and explained for covariance vector-based methods in the underdetermined case [22,26].

4.7. Non-Linear Array Case

We now give an example to show the effectiveness of the proposed method for a non-linear array. We randomly generated a planar array, shown in Figure 7a, where the minimum distance between sensors was half a wavelength. The corresponding FODCA geometry is also illustrated; it has $|\mathbb{D}| = 131$ distinct elements. Note that A-4-SS-MUSIC cannot be applied to the non-linear array case. We set three sources with DOAs $\{-60^\circ, 0^\circ, 40^\circ\} + \zeta_3$, where $\zeta_3$ is a random variable drawn uniformly from $[-1^\circ, 1^\circ]$. The SNR was set to 6 dB, and the RMSEs of A-4-MUSIC, 4-L1, and the proposed method are plotted as a function of the number of snapshots in Figure 7b. It can be seen that the proposed method had better RMSE performance when the number of snapshots exceeded 200.

4.8. Effectiveness for Non-Circular Sources

Finally, we consider the case of non-circular sources, using the same array $\{1, 2, 4, 11, 25\} \times d$ and six independent sources with DOAs $\{-45^\circ, -30^\circ, -15^\circ, 0^\circ, 25^\circ, 40^\circ\}$ as in Section 4.6. This time, we replaced the imaginary parts of the sources by their real parts; that is, $\mathbf{s}_{\mathrm{NC}}(t) = \Re\{\mathbf{s}(t)\} + j\Re\{\mathbf{s}(t)\}$, where $\mathbf{s}_{\mathrm{NC}}(t)$ denotes the non-circular sources to be simulated. We set $\mathrm{SNR} = 20$ dB and $r = 0.1^\circ$. We estimated the FOCs by (19) for all the methods, and estimated $\hat{\mathbf{Q}}$ by (27) for the proposed method. Figure 8 shows the RMSEs of the four methods versus the number of snapshots. Similar to Figure 6a, the proposed method had the smallest RMSEs for all simulated snapshot numbers. Clearly, all the methods remain effective for non-circular sources when the FOCs are estimated by (19).
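For completeness, the non-circular source construction used above can be written in a couple of lines; the snippet below is our own illustration with assumed dimensions.

```python
import numpy as np

# given circular QPSK sources s of shape (K, L), build the non-circular
# sources of Section 4.8: s_NC(t) = Re{s(t)} + j*Re{s(t)}
rng = np.random.default_rng(1)
s = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, size=(6, 1000)) + np.pi / 4))
s_nc = s.real + 1j * s.real
```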

5. Conclusion

In this paper, a novel SMV sparse model for non-Gaussian sources using a denoised FOCs vector is established to perform DOA estimation, and it is solved efficiently by the OGSBI method. An advanced denoising and dimension reduction procedure for the FOCs vector is provided. The estimation errors of the FOCs are integrated in the proposed SMV model and are approximately estimated in a simple way. The proposed method suits any geometry, does not need prior knowledge of the number of sources, has maximum identifiability $O(M^4)$, and is insensitive to the associated parameters. The off-grid parameter estimation in our method promotes superior performance when the number of snapshots is large and the SNR is high, and also ensures good performance when choosing a relatively coarse grid. Numerical simulations illustrate that the proposed method has superior identifiability or RMSE performance compared with other state-of-the-art methods in a variety of scenarios, covering grid resolution, adjacent sources, SNR, modelling errors, geometry, and non-circular sources.

Author Contributions

J.W. designed the method and wrote the paper; J.W. and Y.F. conceived and designed the simulations; R.D. performed the experiments; J.W., Y.F., and G.L. analyzed the simulation results; R.D. contributed SeDuMi toolbox in MATLAB.

Funding

This research was funded by State Commission of Science Technology of China (grant no. 17H863-05-2T-002-028-02) and Foundation of Key Laboratory of Underwater Acoustic Countermeasure (grant no. kmb5494).

Conflicts of Interest

The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.

Appendix A. The Proof of Property 1

Proof. 
Consider the linear equation $\sum_{\xi=1}^{|\mathbb{D}|}\upsilon_\xi\mathbf{J}_\xi = \mathbf{0}$, where $\mathbf{J}_\xi$ is the $\xi$th column of $\mathbf{J}$. For any $\mathbf{n}_{i_1}, \mathbf{n}_{i_2}, \mathbf{n}_{i_3}, \mathbf{n}_{i_4} \in \mathbb{S}$, it can be deduced that $0 = [\sum_{\xi=1}^{|\mathbb{D}|}\upsilon_\xi\mathbf{J}_\xi]_i = \sum_{\xi=1}^{|\mathbb{D}|}\upsilon_\xi J_{i,\xi} = \upsilon_{\xi_0}$, where the index $\xi_0$ satisfies $\mathbf{m}_{\xi_0} = \mathbf{n}_{i_1} + \mathbf{n}_{i_2} - \mathbf{n}_{i_3} - \mathbf{n}_{i_4}$. Due to the arbitrariness of $\mathbf{n}_{i_1}, \mathbf{n}_{i_2}, \mathbf{n}_{i_3}, \mathbf{n}_{i_4} \in \mathbb{S}$, the coefficients satisfy $\upsilon_\xi = 0$ for all $\xi = 1, 2, \ldots, |\mathbb{D}|$, implying that $\mathbf{J}$ has full column rank. The proof is completed. ☐

Appendix B. The Proof of Property 2

Proof. 
It is obvious that $\mathbf{J}\mathbf{a}_{\mathbb{D}}(\theta) = \sum_{\xi=1}^{|\mathbb{D}|}\mathbf{J}_\xi e^{j2\pi\mathbf{u}^T\mathbf{m}_\xi}$. According to Definition 2, we can obtain $[\sum_{\xi=1}^{|\mathbb{D}|}\mathbf{J}_\xi e^{j2\pi\mathbf{u}^T\mathbf{m}_\xi}]_i = e^{j2\pi\mathbf{u}^T(\mathbf{n}_{i_1} + \mathbf{n}_{i_2} - \mathbf{n}_{i_3} - \mathbf{n}_{i_4})}$. On the other hand, the $i$th entry of $\mathbf{b}(\theta)$ is $[\mathbf{b}(\theta)]_i = a_{i_1}(\theta)a_{i_2}(\theta)a^*_{i_3}(\theta)a^*_{i_4}(\theta) = e^{j2\pi\mathbf{u}^T(\mathbf{n}_{i_1} + \mathbf{n}_{i_2} - \mathbf{n}_{i_3} - \mathbf{n}_{i_4})}$. Thus, we have $\mathbf{b}(\theta) = \mathbf{J}\mathbf{a}_{\mathbb{D}}(\theta)$. The proof is completed. ☐

Appendix C. The Proof of Property 3

Proof. 
Consider Definition 1. For the groups $\{\mathbf{n}_{i_1}, \mathbf{n}_{i_2}\}$ and $\{\mathbf{n}_{i_3}, \mathbf{n}_{i_4}\}$, we have the following two cases.
Case 1: no element is shared between $\{\mathbf{n}_{i_1}, \mathbf{n}_{i_2}\}$ and $\{\mathbf{n}_{i_3}, \mathbf{n}_{i_4}\}$. If $\mathbf{n}_{i_1} = \mathbf{n}_{i_2}$ and $\mathbf{n}_{i_3} = \mathbf{n}_{i_4}$, there are at most $\binom{M}{1}\binom{M-1}{1}$ distinct elements; if $\mathbf{n}_{i_1} = \mathbf{n}_{i_2}$ and $\mathbf{n}_{i_3} \ne \mathbf{n}_{i_4}$, there are at most $\binom{M}{1}\binom{M-1}{2}$ distinct elements; if $\mathbf{n}_{i_1} \ne \mathbf{n}_{i_2}$ and $\mathbf{n}_{i_3} = \mathbf{n}_{i_4}$, there are at most $\binom{M}{2}\binom{M-2}{1}$ distinct elements; if $\mathbf{n}_{i_1} \ne \mathbf{n}_{i_2}$ and $\mathbf{n}_{i_3} \ne \mathbf{n}_{i_4}$, there are at most $\binom{M}{2}\binom{M-2}{2}$ distinct elements.
Case 2: an element is shared between $\{\mathbf{n}_{i_1}, \mathbf{n}_{i_2}\}$ and $\{\mathbf{n}_{i_3}, \mathbf{n}_{i_4}\}$. Without loss of generality, assume $\mathbf{n}_{i_2} = \mathbf{n}_{i_4}$. If $\mathbf{n}_{i_1} = \mathbf{n}_{i_3}$, there is one element, $\mathbf{0}$; if $\mathbf{n}_{i_1} \ne \mathbf{n}_{i_3}$, there are at most $M(M-1)$ distinct elements.
If there is no other redundancy, the maximum $|\mathbb{D}|$ is obtained by summing over all the cases, which gives
$|\mathbb{D}|_{\max}(M) = (M^4 - 2M^3 + 7M^2 - 6M + 4)/4$.
Next, we provide an example to show that the maximum $|\mathbb{D}|_{\max}(M)$ can be achieved for any $M$. Consider the linear array $\mathbb{S} = \{10^0, 10^1, \ldots, 10^{M-1}\}$. It is not hard to see that, in both of the above cases, all the maxima are achieved because of the different orders of magnitude of the elements of $\mathbb{S}$. Thus, $|\mathbb{D}|_{\max}(M)$ can be achieved for any $M$. Therefore, combining the definition of $H$, it is clear that the supremum of $H$ is $\sup\{H\} = (M^4 - 2M^3 + 7M^2 - 6M + 4)/4$ for given $M$. The proof is completed. ☐
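The counting argument above can be checked numerically. The short script below (our own check, not part of the paper) enumerates the FODCA of the powers-of-ten array used in the proof and compares its size with $(M^4 - 2M^3 + 7M^2 - 6M + 4)/4$; the two values coincide for the tested $M$.

```python
from itertools import product

def fodca_size(positions):
    """Number of distinct FODCA elements for a 1-D list of sensor positions."""
    return len({a + b - c - d for a, b, c, d in product(positions, repeat=4)})

for M in range(2, 7):
    S = [10 ** m for m in range(M)]                  # the array used in the proof
    bound = (M ** 4 - 2 * M ** 3 + 7 * M ** 2 - 6 * M + 4) // 4
    print(M, fodca_size(S), bound)                   # both numbers agree
```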

Appendix D. The Proof of Lemma 1

Proof. 
It is clear that $\mathrm{var}(\overline{Y_tZ_t}) = \mathrm{var}\big(\sum_{t=1}^{L}Y_tZ_t/L\big) = \mathrm{var}(Y_tZ_t)/L$, and $\mathrm{var}(\bar{Y}_t\bar{Z}_t) = \mathrm{var}\big(\sum_{t_1=1}^{L}Y_{t_1}\cdot\sum_{t_2=1}^{L}Z_{t_2}/L^2\big) = \mathrm{var}(Y_tZ_t)/L^3 + \mathrm{var}(Y_t)\,\mathrm{var}(Z_t)(L-1)/L^3$. Since $\mathrm{var}(Y_tZ_t)$ and $\mathrm{var}(Y_t)\,\mathrm{var}(Z_t)$ are independent of $L$, it can be deduced that, as $L$ approaches infinity, $\mathrm{var}(\overline{Y_tZ_t})/\mathrm{var}(\bar{Y}_t\bar{Z}_t) = O(L)$. The proof is completed. ☐

Appendix E. The Proof of Theorem 1

Proof. 
Due to Proposition 1, if any $K$ sources can be uniquely identified from model (8), we have $K < \mathrm{spark}\{\mathcal{B}_\theta\}/2$. From (46), we obtain $K < (|\mathbb{D}| + 1)/2$. Applying Property 3, we obtain $K < (M^4 - 2M^3 + 7M^2 - 6M)/8 + 1$. As $K$ is an integer, it follows that $K \le (M^4 - 2M^3 + 7M^2 - 6M)/8$. The proof is completed. ☐

References

  1. Liu, J.; Zhou, W.; Wang, X. Fourth-order cumulants-based sparse representation approach for DOA estimation in MIMO radar with unknown mutual coupling. Signal Process. 2016, 128, 123–130.
  2. Saucan, A.; Chonavel, T.; Sintes, C.; Le, J. CPHD-DOA tracking of multiple extended sonar targets in impulsive environments. IEEE Trans. Signal Process. 2016, 64, 1147–1160.
  3. Wan, L.; Han, G.; Jiang, J.; Zhu, C.; Shu, L. A DOA estimation approach for transmission performance guarantee in D2D communication. Mob. Netw. Appl. 2017, 22, 998–1009.
  4. Chang, J.; Chang, A. DOA estimation in the time-space CDMA system using modified particle swarm optimization. AEU Int. J. Electron. Commun. 2013, 67, 340–347.
  5. Porat, B.; Friedlander, B. Direction finding algorithms based on high-order statistics. IEEE Trans. Signal Process. 1991, 39, 2016–2024.
  6. Chevalier, P.; Ferreol, A.; Albera, L. High-resolution direction finding from higher order statistics: The 2q-MUSIC algorithm. IEEE Trans. Signal Process. 2006, 54, 2986–2997.
  7. Becker, H.; Chevalier, P.; Haardt, M. Higher order direction finding from rectangular cumulant matrices: The rectangular 2q-MUSIC algorithms. Signal Process. 2017, 133, 240–249.
  8. Pal, P.; Vaidyanathan, P. Multiple level nested array: An efficient geometry for 2qth order cumulant based array processing. IEEE Trans. Signal Process. 2012, 60, 1253–1269.
  9. Schmidt, R. Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 1986, 34, 276–280.
  10. Roy, R.; Kailath, T. ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 984–995.
  11. Liu, Z.; Huang, Z.; Zhou, Y. Sparsity-inducing direction finding for narrowband and wideband signals based on array covariance vectors. IEEE Trans. Wirel. Commun. 2013, 12, 1–12.
  12. He, Z.; Shi, Z.; Huang, L. Covariance sparsity-aware DOA estimation for nonuniform noise. Digit. Signal Process. 2014, 28, 75–81.
  13. Zhao, Y.; Zhang, L.; Gu, Y. Array covariance matrix-based sparse Bayesian learning for off-grid direction-of-arrival estimation. Electron. Lett. 2016, 52, 401–402.
  14. Malioutov, D.; Cetin, M.; Willsky, A. A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Process. 2005, 53, 3010–3022.
  15. Tipping, M. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244.
  16. Shen, Q.; Liu, W.; Cui, W.; Wu, S. Extension of co-prime arrays based on the fourth-order difference co-array concept. IEEE Signal Process. Lett. 2016, 23, 615–619.
  17. Shen, Q.; Liu, W.; Cui, W.; Wu, S. Extension of nested arrays with the fourth-order difference co-array enhancement. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Shanghai, China, 20–25 March 2016; pp. 2991–2995.
  18. Cai, J.; Liu, W.; Zong, R.; Shen, Q. An expanding and shift scheme for constructing fourth-order difference co-arrays. IEEE Signal Process. Lett. 2017, 24, 480–484.
  19. Wipf, D.; Rao, B. An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Process. 2007, 55, 3704–3716.
  20. Stoica, P.; Selen, Y. Model-order selection: A review of information criterion rules. IEEE Signal Process. Mag. 2004, 21, 36–47.
  21. Yang, Z.; Xie, L.; Zhang, C. Off-grid direction of arrival estimation using sparse Bayesian inference. IEEE Trans. Signal Process. 2013, 61, 38–43.
  22. Liu, C.; Vaidyanathan, P. Cramér–Rao bounds for coprime and other sparse arrays, which find more sources than sensors. Digit. Signal Process. 2016, 61, 43–61.
  23. Mendel, J.M. Tutorial on higher-order statistics (spectra) in signal processing and system theory: Theoretical results and some applications. Proc. IEEE 1991, 79, 278–305.
  24. Dogan, M.; Mendel, J. Applications of cumulants to array processing. I. Aperture extension and array calibration. IEEE Trans. Signal Process. 1995, 43, 1200–1216.
  25. Chevalier, P.; Albera, L.; Ferreol, A.; Comon, P. On the virtual array concept for higher order array processing. IEEE Trans. Signal Process. 2005, 53, 1254–1271.
  26. Wang, M.; Nehorai, A. Coarrays, MUSIC, and the Cramér–Rao bound. IEEE Trans. Signal Process. 2017, 65, 933–946.
  27. Babacan, S.; Molina, R.; Katsaggelos, A. Bayesian compressive sensing using Laplace priors. IEEE Trans. Image Process. 2010, 19, 53–63.
  28. Dai, J.; So, C. Sparse Bayesian learning approach for outlier-resistant direction-of-arrival estimation. IEEE Trans. Signal Process. 2018, 66, 744–756.
  29. Yang, Z.; Li, J.; Stoica, P.; Xie, L. Sparse methods for direction-of-arrival estimation. arXiv 2016, arXiv:1609.09596.
  30. Chen, J.; Huo, X. Theoretical results on sparse representations of multiple-measurement vectors. IEEE Trans. Signal Process. 2006, 54, 4634–4643.
  31. Davies, M.; Eldar, Y. Rank awareness in joint sparse recovery. IEEE Trans. Inf. Theory 2010, 58, 1135–1146.
  32. Kruskal, J. Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra Appl. 1977, 18, 95–138.
  33. Shen, Q.; Cui, W.; Liu, W.; Wu, S.; Zhang, D.; Amin, M. Underdetermined wideband DOA estimation of off-grid sources employing the difference co-array concept. Signal Process. 2017, 130, 299–304.
  34. Liu, Q.; So, H.; Gu, Y. Off-grid DOA estimation with nonconvex regularization via joint sparse representation. Signal Process. 2018, 142, 87–95.
  35. Sturm, J. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 1999, 11, 625–653.
Figure 1. Normalized spectra of (a) the proposed method and (b) 4-L1. $\mathbb{S} = \{1, 2, 6\} \times d$, $K = 9$, $L = 10^4$, signal-to-noise ratio (SNR) = 20 dB, and $r = 0.5^\circ$. DOA: direction of arrival.
Figure 2. Root mean square error (RMSE) performance of the four methods versus the number of snapshots. $\mathbb{S} = \{1, 2, 4, 11, 25\} \times d$, $K = 2$, $\mathrm{SNR} = 3$ dB, (a) $r = 0.5^\circ$ and (b) $r = 0.1^\circ$.
Figure 3. RMSE performance of the four methods with different DOA intervals. $\mathbb{S} = \{1, 2, 4, 11, 25\} \times d$, $K = 2$, $r = 0.1^\circ$, (a) $L = 300$ and $\mathrm{SNR} = 3$ dB, (b) $L = 1000$ and $\mathrm{SNR} = 10$ dB.
Figure 4. RMSE performance of the four methods versus SNR. $\mathbb{S} = \{1, 2, 4, 11, 25\} \times d$, $K = 2$, $r = 0.1^\circ$, $L = 500$.
Figure 5. RMSE performance of the four methods with modelling errors. $\mathbb{S} = \{1, 2, 4, 11, 25\} \times d$, $K = 2$, $r = 0.1^\circ$, $\mathrm{SNR} = 3$ dB.
Figure 6. RMSE performance of the four methods in the underdetermined case. $\mathbb{S} = \{1, 2, 4, 11, 25\} \times d$, $K = 6$, $r = 0.1^\circ$, (a) $\mathrm{SNR} = 20$ dB, (b) $L = 1000$.
Figure 7. RMSE performance of three methods in the non-linear array case. $M = 5$, $K = 3$, $r = 0.1^\circ$, (a) the array geometry and the corresponding FODCA geometry, (b) RMSE versus the number of snapshots, $\mathrm{SNR} = 6$ dB.
Figure 8. RMSE performance of the four methods with non-circular sources. $M = 5$, $K = 6$, $r = 0.1^\circ$, $\mathrm{SNR} = 6$ dB.
Table 1. Comparison with state-of-the-art methods. 4-L1: fourth-order cumulants (FOCs) vector-based L1-norm method; A-4-MUSIC: non-redundant version of 4-MUSIC with averaging strategy; A-4-SS-MUSIC: FOCs vector-based spatial smoothing (SS)-MUSIC with averaging strategy; FODCA: fourth-order difference co-array; ULA: uniform linear array.
  • Required geometry: any geometry (A-4-MUSIC); linear array whose FODCA contains a ULA segment (A-4-SS-MUSIC); any geometry (4-L1); any geometry (Proposed).
  • Maximum identifiability: $O(M^2)$ (A-4-MUSIC); $O(M^4)$ (A-4-SS-MUSIC); $O(M^4)$ (4-L1); $O(M^4)$ (Proposed).
  • Prior knowledge of the number of sources: needed (A-4-MUSIC); needed (A-4-SS-MUSIC); not needed (4-L1); not needed (Proposed).
  • Handles correlated sources: yes (A-4-MUSIC); no (A-4-SS-MUSIC); no (4-L1); no (Proposed).
  • Parameters and sensitivity: no parameters (A-4-MUSIC); no parameters (A-4-SS-MUSIC); sensitive parameters (4-L1); insensitive parameters (Proposed).
  • Complexity: $O(LM^4 + IM^4 + M^6)$ (A-4-MUSIC); $O(LM^4 + IM^8 + M^{12})$ (A-4-SS-MUSIC); $O(LM^4 + I^3T_2)$ (4-L1); $O(LM^8 + I^2M^4T_1)$ (Proposed).
