
IEEE TRANS. AUDIO SPEECH AND LANGUAGE PROCESSING


A Study of Inter-Speaker Variability in Speaker Verification

Patrick Kenny, Pierre Ouellet, Najim Dehak, Vishwa Gupta and Pierre Dumouchel

EDICS Category: SPE-SPKR

Abstract— We propose a new approach to the problem of estimating the hyperparameters which define the inter-speaker variability model in joint factor analysis. We tested the proposed estimation technique on the NIST 2006 speaker recognition evaluation data and obtained 10–15% reductions in error rates on the core condition and the extended data condition (as measured both by equal error rates and the NIST detection cost function). We show that when a large joint factor analysis model is trained in this way and tested on the core condition, the extended data condition and the cross-channel condition, it is capable of performing at least as well as fusions of multiple systems of other types. (The comparisons are based on the best results on these tasks that have been reported in the literature.) In the case of the cross-channel condition, a factor analysis model with 300 speaker factors and 200 channel factors can achieve equal error rates of less than 3.0%. This is a substantial improvement over the best results that have previously been reported on this task.

Index Terms— Speaker verification, Gaussian mixture model, speaker factors, channel factors

The authors are with the Centre de recherche informatique de Montréal (CRIM); email: [email protected], [email protected], [email protected], [email protected], [email protected]. This work was supported in part by the Natural Science and Engineering Research Council of Canada and by the Ministère du Développement Économique et Régional et de la Recherche du Gouvernement du Québec.

¹As in our previous work, we are following the usage of [2]. A different usage prevails in the general statistical literature: the columns of v would be referred to as speaker factors and the entries of y as factor loadings. The terminology is used in this way in [3] where factor analysis methods are applied to the face recognition problem.

I. INTRODUCTION

Factor analysis is a model of speaker and session variability in Gaussian mixture models (GMM's). This article is concerned with the speaker variability component of our version of factor analysis. In our approach to speaker recognition, the role of this component is to provide a prior distribution for target speaker models (we use the term prior distribution in the sense in which it is used in Bayesian statistics [1]). As such, it plays a key role in estimating target speaker models at enrollment time.

In order to formulate precisely the problem that we address, we begin by recapitulating the basic assumptions in factor analysis. Let C be the number of components in a Universal Background Model (UBM) and F the dimension of the acoustic feature vectors. We use the term supervector to refer to the CF-dimensional vector obtained by concatenating the F-dimensional mean vectors in the GMM corresponding to a given utterance. Our assumptions are as follows. Firstly, we assume that a speaker- and channel-dependent supervector M can be decomposed into a sum of two supervectors, a speaker supervector s and a channel supervector c:

M = s + c,    (1)

where s and c are statistically independent and normally distributed. Secondly, we assume that the distribution of s has a hidden variable description of the form

s = m + vy + dz    (2)

where m is a CF × 1 supervector; v is a rectangular matrix of low rank and y is a normally distributed random vector; d is a CF × CF diagonal matrix and z is a normally distributed CF-dimensional random vector. We will refer to the columns of v as eigenvoices and we will refer to the components of y as speaker factors.¹ Thirdly, we assume that the distribution of c has a hidden variable description of the form

c = ux,    (3)

where u is a rectangular matrix of low rank and x is a normally distributed random vector. We refer to the components of x as channel factors and we use the term eigenchannels to refer to the columns of u. Finally, we associate a diagonal covariance matrix Σc with each mixture component c whose role is to model the variability in the acoustic observation vectors which is not captured by either the speaker model (2) or the channel model (3). We denote by Σ the CF × CF supercovariance matrix whose diagonal is the concatenation of these covariance matrices.

Although most authors (e.g. [4], [5]) use the term factor analysis to refer to the channel model (3) alone, we have always used this term in a broader sense which includes the speaker model (2) as well. (Where it is necessary to make this distinction explicit we speak of joint factor analysis.)

Our concern in this article is with the way the hyperparameters v and d in (2) are estimated. These hyperparameters provide a prior distribution for maximum a posteriori (MAP) estimation of speaker-dependent GMM's at enrollment time, and they are critically important to the success of our approach to speaker recognition. (The MAP calculation is explained in Section III


of [6].) Since the assumption in (2) is equivalent to saying that s is normally distributed with mean m and covariance matrix d² + vv*, (2) is a model of inter-speaker variability. Our goal in this article is to show how improved inter-speaker variability modeling can lead to substantial gains in speaker recognition performance.

If v = 0 and u = 0, then the assumption in (2) is the same as in classical MAP [7]; on the other hand, if d = 0 and u = 0, the assumption is the same as in eigenvoice MAP [8]. Classical MAP adaptation (including relevance MAP [9]) is by far the most popular type of speaker modeling in text-independent speaker recognition, but our experience has been that MAP adaptation using eigenvoices is generally much more effective, at least in situations where limited amounts of enrollment data are available. Classical MAP adaptation can only adapt those Gaussians which are seen in the enrollment data but, if large amounts of enrollment data are available, it is arguably the best way of estimating speaker supervectors since it is asymptotically equivalent to maximum likelihood estimation. On the other hand, eigenvoice MAP is helpful if only small amounts of enrollment data are available, since only a small number of free parameters need to be estimated at enrollment time. The fact that the supervector covariance matrix is full rather than diagonal in this case ensures that MAP adaptation takes account of the correlations between the different Gaussians in a speaker supervector, so that all of the Gaussians are updated at enrollment time even if only a small fraction of them are observed.

An extreme example of the effectiveness of eigenvoices can be found in [10], which is concerned with the use of factor analysis to model syllable-level prosodic features. The number of feature vectors per conversation side is only about 400; it is unrealistic to expect classical MAP adaptation to be very effective in this situation.
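As a concrete illustration, the decomposition (1)–(3) can be written out numerically. The following sketch uses toy dimensions and randomly drawn hyperparameters (the sizes and variable names are ours, not the paper's; a real system would use C = 1024, F = 40 and hundreds of factors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: C mixture components, F-dimensional features,
# a few eigenvoices and eigenchannels.
C, F, R_v, R_u = 8, 4, 3, 2
CF = C * F

m = rng.normal(size=CF)          # speaker-independent supervector
v = rng.normal(size=(CF, R_v))   # eigenvoice matrix (low rank)
d = np.abs(rng.normal(size=CF))  # diagonal of d, stored as a vector
u = rng.normal(size=(CF, R_u))   # eigenchannel matrix (low rank)

y = rng.normal(size=R_v)         # speaker factors
z = rng.normal(size=CF)          # residual, CF-dimensional
x = rng.normal(size=R_u)         # channel factors

s = m + v @ y + d * z            # speaker supervector, eq. (2)
c = u @ x                        # channel supervector, eq. (3)
M = s + c                        # speaker- and channel-dependent supervector, eq. (1)

# Under (2), s is Gaussian with mean m and covariance d^2 + v v*:
cov_s = np.diag(d**2) + v @ v.T
```

Note that here c denotes only the channel supervector; the paper also uses c as a mixture-component index, which is a separate symbol.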
It should be possible to capitalize on the advantages of both classical MAP and eigenvoice MAP by including both terms vy and dz in (2) (this was first suggested in [11]). However, extensive experimentation in [12] showed that the term dz was only helpful on an extended data task where 15–20 minutes of enrollment data are available for each target speaker. (The term vy is helpful in all circumstances, even in the extended data task.) Since including the term dz is the source of most of the mathematical complication in [6], and the extended data task is of secondary interest to most researchers, this led us to wonder if we would not be better off suppressing the term dz altogether.

The reason why the term dz was not helpful in [12] is that in a typical factor analysis training scenario with, say, 1000 training speakers and 300 speaker factors, almost all of the speaker variability in the training set can be well accounted for by v alone (v has 300 times as many free parameters as d). Thus, if the maximum likelihood criterion is used to estimate d and v, what tends to happen is that d ends up playing no useful role unless very large amounts of enrollment data are available (as in the extended data task).

However, there is reason to doubt that the maximum likelihood criterion is appropriate for this type of estimation problem. Even if the linear/Gaussian assumptions in (2) are granted, there is no


reason to believe that (2) is a correct model of inter-speaker variability; it is just a compromise that is forced on us by the fact that a supervector covariance matrix of sufficiently high rank to be realistic would probably be impossible to estimate or to calculate with. (Impossible to calculate with because the rank of the covariance matrix would be too high; impossible to estimate because many more training speakers would be needed than are currently available.) This led us to explore another way of estimating v and d which we will explain in Section II and which we refer to as decoupled estimation.

In Section III, we show how decoupled estimation leads to 10–15% reductions in error rates (as measured both by equal error rates and the NIST detection cost function) on both the core condition and the extended data condition of the NIST 2006 speaker recognition evaluation data. In order to be able to turn around these experiments in a reasonable time, we used factor analysis models of relatively modest dimensions. Our final results, using a much larger factor analysis model, are presented in Section IV. These results show that a stand-alone joint factor analysis model is capable of performing at least as well as fusions of large numbers of systems of other types (based on comparisons with the best results that have been reported in the literature). The results on the NIST 2006 cross-channel condition are particularly impressive: equal error rates of less than 3% can be achieved without any special-purpose signal processing. Results of other tests are presented in [13]; these include cross-channel tests in which microphone speech is used for enrollment as well as for verification, and tests involving very short utterances at verification time.

II. ESTIMATING THE HYPERPARAMETERS

The supervector defined by a UBM can serve as an estimate of m, and the UBM covariance matrices are good first approximations to the residual covariance matrices Σc (c = 1, . . . , C). The problem of estimating v in the case where d = 0 was addressed in [8], and a very similar approach can be adopted for estimating d in the case where v = 0. We first summarize the estimation procedures for these two special cases and then explain how they can be combined to tackle the general case.

A. Baum-Welch statistics

Given a speaker s and acoustic feature vectors Y1, Y2, . . ., for each mixture component c we define the Baum-Welch statistics in the usual way:

Nc(s) = ∑t γt(c)

Fc(s) = ∑t γt(c) Yt

Sc(s) = diag( ∑t γt(c) Yt Yt* )

where, for each time t, γt (c) is the posterior probability of the event that the feature vector Yt is accounted for by the mixture component c. We calculate these posteriors using the UBM.
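In code, these statistics can be accumulated directly from the frame posteriors. The sketch below (the function name and data layout are ours) assumes the posteriors γt(c) have already been computed with the UBM and are supplied as a matrix:

```python
import numpy as np

def baum_welch_stats(Y, gamma):
    """Zeroth-, first- and second-order Baum-Welch statistics.

    Y     : (T, F) array of acoustic feature vectors Y_1, ..., Y_T.
    gamma : (T, C) array of UBM posteriors; gamma[t, c] is the posterior
            probability that Y_t is accounted for by mixture component c.
    Returns N of shape (C,), F_stats of shape (C, F) and S of shape
    (C, F), where S holds only the diagonal of sum_t gamma_t(c) Y_t Y_t*.
    """
    N = gamma.sum(axis=0)        # N_c(s) = sum_t gamma_t(c)
    F_stats = gamma.T @ Y        # F_c(s) = sum_t gamma_t(c) Y_t
    S = gamma.T @ (Y * Y)        # diagonal of sum_t gamma_t(c) Y_t Y_t*
    return N, F_stats, S
```

Since each row of gamma sums to one, the counts in N sum to the number of frames T.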



We denote the centralized first- and second-order Baum-Welch statistics by F̃c(s) and S̃c(s):

F̃c(s) = ∑t γt(c) (Yt − mc)

S̃c(s) = diag( ∑t γt(c) (Yt − mc)(Yt − mc)* )

where mc is the subvector of m corresponding to the mixture component c. In other words,

F̃c(s) = Fc(s) − Nc(s) mc

S̃c(s) = Sc(s) − diag( Fc(s) mc* + mc Fc(s)* − Nc(s) mc mc* ).

Let N(s) be the CF × CF diagonal matrix whose diagonal blocks are Nc(s)I (c = 1, . . . , C). Let F̃(s) be the CF × 1 supervector obtained by concatenating F̃c(s) (c = 1, . . . , C). Let S̃(s) be the CF × CF diagonal matrix whose diagonal blocks are S̃c(s) (c = 1, . . . , C).

B. Training an eigenvoice model

In this section we consider the problem of estimating m, v and Σ under the assumption that d = 0. We assume that initial estimates of the hyperparameters are given. (Random initialization of v works fine in practice.) The approach that we adopt is similar to Gales's cluster adaptive training [14] in that it does not rely on techniques such as MAP or MLLR to produce GMM's for the training speakers (all of the computation is done with Baum-Welch statistics rather than GMM's). It differs from the approach in [14] in that the hyperparameter estimation problem is formulated in terms of maximum likelihood II [15]. As such, our approach is very similar to the Probabilistic Principal Components Analysis (PPCA) of [16] (which is formally a special case of our procedure). Note that, as in PPCA, the terms eigenvector and eigenvalue do not appear in our formulation, but it is known that, unless the optimization gets stuck locally, PPCA does succeed in finding principal eigenvectors. Thus it is appropriate for us to speak of eigenvoices even though our estimation procedure is not formulated as an eigenvalue problem.

1) The posterior distribution of the hidden variables: For each speaker s, set l(s) = I + v*Σ⁻¹N(s)v. Then the posterior distribution of y(s) conditioned on the acoustic observations of the speaker is Gaussian with mean l⁻¹(s)v*Σ⁻¹F̃(s) and covariance matrix l⁻¹(s). (See [8], Proposition 1.) We will use the notation E[·] to indicate posterior expectations; thus, E[y(s)] denotes the posterior mean of y(s) and E[y(s)y*(s)] the posterior correlation matrix.

2) Maximum likelihood re-estimation: This entails accumulating the following statistics over the training set, where the posterior expectations are calculated using initial estimates of m, v, Σ and s ranges over the training speakers:

Nc = ∑s Nc(s)   (c = 1, . . . , C)

Ac = ∑s Nc(s) E[y(s)y*(s)]   (c = 1, . . . , C)

C = ∑s F̃(s) E[y*(s)]

N = ∑s N(s).

For each mixture component c = 1, . . . , C and for each f = 1, . . . , F, set i = (c − 1)F + f; let vi denote the ith row of v and Ci the ith row of C. Then v is updated by solving the equations

vi Ac = Ci   (i = 1, . . . , CF).

The update formula for Σ is

Σ = N⁻¹ ∑s ( S̃(s) − diag(Cv*) ).

(See [8], Proposition 3.)

3) Minimum divergence re-estimation: Given initial estimates m0 and v0, the update formulas for m and v are

m = m0 + v0 µy
v = v0 T*yy.

Here

µy = (1/S) ∑s E[y(s)],

Tyy is an upper triangular matrix such that

T*yy Tyy = (1/S) ∑s E[y(s)y*(s)] − µy µy*

(i.e. a Cholesky decomposition), S is the number of training speakers, and the sums extend over all speakers in the training set. (See [6], Theorem 7.)

This update formula leaves the range of the covariance matrix vv* unchanged. The only freedom it has is to rotate the eigenvoices and scale the corresponding eigenvalues. This type of hyperparameter estimation was introduced in [17]; its role is to get good estimates of the eigenvalues corresponding to the eigenvoices ([18], Section II-C). Thus it is useful for diagnostic purposes; for example, in comparing the eigenvalues of uu* with those of vv* as in Table VI below. Maximum likelihood estimation on its own produces eigenvalues which are difficult to interpret [6].
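The minimum divergence update can be sketched as follows. We use numpy's Cholesky factor, a lower triangular L with K = L L*, so the upper triangular Tyy of the text corresponds to L* (names are ours):

```python
import numpy as np

def min_divergence_update(m0, v0, Ey_list, Eyy_list):
    """Minimum-divergence re-estimation of (m, v) from posterior moments.

    Ey_list  : per-speaker posterior means E[y(s)], each of shape (R,)
    Eyy_list : per-speaker posterior correlations E[y(s) y*(s)], each (R, R)
    """
    S = len(Ey_list)
    mu_y = sum(Ey_list) / S
    # Second central moment of y over the training speakers:
    K = sum(Eyy_list) / S - np.outer(mu_y, mu_y)
    L = np.linalg.cholesky(K)    # K = L L*, so T_yy = L* is upper triangular
    m = m0 + v0 @ mu_y           # m <- m0 + v0 mu_y
    v = v0 @ L                   # v <- v0 T*_yy, and T*_yy = L
    return m, v
```

As the text notes, this leaves the range of vv* unchanged: the new columns of v are linear combinations of the old eigenvoices.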

C. Training a diagonal model

An analogous development can be used to estimate m, d and Σ if v is constrained to be 0.



1) The posterior distribution of the hidden variables: For each speaker s, set l(s) = I + d²Σ⁻¹N(s). Then the posterior distribution of z(s) conditioned on the acoustic observations of the speaker is Gaussian with mean l⁻¹(s)dΣ⁻¹F̃(s) and covariance matrix l⁻¹(s). (The derivation here is essentially the same as in Section II-B.1.) Again, we will use the notation E[·] to indicate posterior expectations; thus, E[z(s)] denotes the posterior mean of z(s) and E[z(s)z*(s)] the posterior correlation matrix.

It is straightforward to verify that, in the special case where d is assumed to satisfy

d² = (1/r) Σ,

this posterior calculation leads to the standard relevance MAP estimation formulas for speaker supervectors [9] (r is the relevance factor). The following two sections summarize data-driven procedures for estimating m, d and Σ which do not depend on the relevance MAP assumption. It can be shown that when these update formulas are applied iteratively, the values of a likelihood function analogous to that given in Proposition 2 of [8] increase on successive iterations.

2) Maximum likelihood re-estimation: This entails accumulating the following statistics over the training set, where the posterior expectations are calculated using initial estimates of m, d, Σ and s ranges over the training speakers:

Nc = ∑s Nc(s)   (c = 1, . . . , C)

a = ∑s diag( N(s) E[z(s)z*(s)] )

b = ∑s diag( F̃(s) E[z*(s)] )

N = ∑s N(s).

For i = 1, . . . , CF let di be the ith entry of d and similarly for ai and bi. Then d is updated by solving the equation

di ai = bi

for each i. The update formula for Σ is

Σ = N⁻¹ ∑s ( S̃(s) − diag(bd) ).

3) Minimum divergence re-estimation: Given initial estimates m0 and d0, the update formulas for m and d are

m = m0 + d0 µz
d = d0 Tzz

where

µz = (1/S) ∑s E[z(s)],

Tzz is a diagonal matrix such that

T²zz = diag( (1/S) ∑s E[z(s)z*(s)] − µz µz* ),

S is the number of training speakers, and the sums extend over all speakers in the training set.

We will need a variant of this update procedure which applies to the case where m is forced to be 0. In this case d is estimated from d0 by taking Tzz to be such that

T²zz = diag( (1/S) ∑s E[z(s)z*(s)] ).

D. Joint estimation of v and d

There is no difficulty in principle in extending the maximum likelihood and minimum divergence training procedures to handle a general factor analysis model in which both v and d are non-zero (Theorems 4 and 7 in [6]). We used this type of joint estimation in all of our previous work on factor analysis and to produce benchmarks for the experiments that we will report in this article. However, joint estimation of v and d is computationally demanding because, in a general factor analysis model, all of the hidden variables become correlated with each other in the posterior distributions. Our experience has been that, given the Baum-Welch statistics, training a diagonal model runs very quickly, and training a pure eigenvoice model can be made to run quickly (at the cost of some memory overhead) by suitably organizing the computation of the matrices l(s) in Section II-B.1. Unfortunately, no such computational shortcuts seem to be possible in the general case. Furthermore, even if the eigenvoice component v is carefully initialized, many iterations of joint estimation seem to be needed to estimate d properly and, because the contribution of d to the likelihood of the training data is minor compared with the contribution of v, it is difficult to judge when the training algorithm has effectively converged.
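Because l(s) is diagonal in the pure diagonal model, the updates of Section II-C reduce to elementwise vector operations. The following sketch (our own naming) also lets one check the relevance MAP connection noted in Section II-C.1: with d² = (1/r)Σ, the point estimate d E[z(s)] equals F̃(s)/(r + N(s)) entrywise:

```python
import numpy as np

def posterior_z(d, Sigma_inv, N_s, F_tilde_s):
    """Posterior of z(s) for the diagonal model; all arguments are (CF,) vectors.

    l(s) = I + d^2 Sigma^{-1} N(s) is diagonal, so it is stored as a vector.
    """
    l = 1.0 + d**2 * Sigma_inv * N_s
    mean = d * Sigma_inv * F_tilde_s / l   # l^{-1}(s) d Sigma^{-1} F~(s)
    var = 1.0 / l                          # diagonal of l^{-1}(s)
    return mean, var

def update_d(d, Sigma_inv, stats):
    """ML update of d: accumulate a and b, then solve d_i a_i = b_i entrywise."""
    a = np.zeros_like(d)
    b = np.zeros_like(d)
    for N_s, F_tilde_s in stats:
        Ez, var = posterior_z(d, Sigma_inv, N_s, F_tilde_s)
        Ezz = var + Ez**2             # diagonal of E[z(s) z*(s)]
        a += N_s * Ezz                # a = sum_s diag(N(s) E[z z*])
        b += F_tilde_s * Ez           # b = sum_s diag(F~(s) E[z*])
    return b / a
```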

E. Decoupled estimation of v and d

A much more serious problem with joint estimation is that it tends to produce estimates of d which are too small, so that almost all of the speaker variability in a factor analysis training set is accounted for by the term vy in (2) and very little of the variability is accounted for by the term dz. Thus, in practice, the term dz is of little use except in situations where large amounts of enrollment data are available (as we observed in [12]). It is probable that the reason why joint estimation behaves in this way is that it is a maximum likelihood estimation procedure and v has many more free parameters than d.

But as we mentioned in Section I, there is reason to doubt the appropriateness of maximum likelihood estimation in this situation, and there is a good argument which suggests that the term dz ought to be helpful in distinguishing between speakers. The term vy can only capture inter-speaker variability which is confined to a low-dimensional affine subspace of supervector space (namely the subspace containing m which is spanned by the eigenvoices). It is reasonable to believe that the orientation of this subspace reflects attributes which are common to all speakers. No such constraint is imposed on the term dz, so it ought to be capable of capturing attributes which



are unique to individual speakers. Similar considerations led the authors in [19] and [20] to construct speaker recognition systems which operate in the orthogonal complement of the principal components of a large training set. (This orthogonal complement is referred to as the "speaker unique subspace" in [20].)

This raises the question of how to produce a reasonable estimate of d which is not "too small". Since the term dz models residual inter-speaker variability which is not captured by a large set of eigenvoices, this can be achieved by withholding a subset of the training speaker population; the speakers that are withheld serve to estimate d but they play no role in estimating v. Thus, we split the factor analysis training set in two and use the larger of the two sets to estimate m and v and the smaller to estimate d and Σ. We first fit a pure eigenvoice model to the larger training set using the procedures described in Sections II-B.2 and II-B.3. Then, for each speaker s in the residual training set, we calculate the MAP estimate of y(s), namely E[y(s)], as in Section II-B.1. This gives us a preliminary estimate of the speaker's supervector s, namely

s = m + v E[y(s)].    (4)
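The preliminary estimate (4) and the re-centralization of the Baum-Welch statistics around it (the next step of the recipe) might be sketched as follows, with Σ stored as a vector of diagonal entries and names of our choosing:

```python
import numpy as np

def preliminary_supervector(m, v, Sigma_inv, N_s, F_tilde_s):
    """Preliminary estimate s = m + v E[y(s)] of eq. (4).

    F_tilde_s holds first-order statistics centralized with m.
    """
    R = v.shape[1]
    l = np.eye(R) + v.T @ ((Sigma_inv * N_s)[:, None] * v)
    Ey = np.linalg.solve(l, v.T @ (Sigma_inv * F_tilde_s))
    return m + v @ Ey

def recentralize(N_s, F_s, s):
    """Centralize the first-order statistics around s instead of m:
    F~_c(s) = F_c(s) - N_c(s) s_c, with everything flattened to (CF,)."""
    return F_s - N_s * s
```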

We centralize the speaker's Baum-Welch statistics by subtracting the speaker's supervector (that is, we apply the formulas in Section II-A with m replaced by s). Finally, we use these centralized statistics together with the procedures described in Sections II-C.2 and II-C.3 to estimate a pure diagonal model with m = 0. This gives us estimates of d and Σ. Since this training algorithm uses only the diagonal and eigenvoice estimation procedures, it converges rapidly.

III. EXPERIMENTS

A. Enrollment and test data

We used the core condition and the extended data condition (in which 8 conversation sides are available for enrolling each target speaker) of the NIST 2006 speaker recognition evaluation (SRE) for testing [21]. Although we will report results on male speakers as well as female, we used mostly the female trials in the 2006 SRE for our experiments.

B. Feature Extraction

We extracted 19 cepstral coefficients together with a log energy feature using a 25 ms Hamming window and a 10 ms frame advance. These were subjected to feature warping using a 3 s sliding window [22]. ∆ coefficients were then calculated using a 5 frame window, giving a total of 40 features.

C. Factor analysis training data

We trained two gender-dependent UBM's having 1024 Gaussians and gender-dependent factor analysis models having 0, 100 and 300 speaker factors. Except where otherwise indicated, the number of channel factors was fixed at 50.

For training UBM's we used Switchboard II, Phases 2 and 3; Switchboard Cellular, Parts 1 and 2; the Fisher English Corpus, Parts 1 and 2; the NIST 2003 Language Recognition

Evaluation data set; and the NIST 2004 SRE enrollment and test data. For training factor analysis models we used the LDC releases of Switchboard II, Phases 2 and 3; Switchboard Cellular, Parts 1 and 2; and the NIST 2004 SRE data. We used only those speakers for which five or more recordings were available. For decoupled estimation of v and d, we estimated v on the Switchboard data and d on the 2004 SRE data.

In order to ensure strict disjointness between the factor analysis training data and the NIST 2006 SRE data which we used for testing, we made no use of the 2005 SRE data. (For the extended data condition, some of the 2005 data was recycled in 2006. In [23] we reported how failing to keep the training and test sets disjoint could produce extremely misleading results.) Note also that, since the Switchboard corpora consist of English-only data and English is predominant in the NIST 2004 SRE data, the factor analysis training set is biased towards English speakers.

D. Implementation details

The first step in building factor analysis models is to train gender-dependent UBM's in the usual way. Baum-Welch statistics extracted with these UBM's are sufficient statistics for all subsequent processing: hyperparameter estimation, target speaker enrollment and likelihood calculations at verification time.

To estimate v and d we pooled all of the recordings of each speaker in the factor analysis training set and ignored channel effects as in [24]. (The rationale here is that channel effects can be averaged out if sufficiently many recordings are available for each speaker.) In implementing decoupled estimation of v and d, we ran the algorithms in Section II-B to convergence (seven iterations of maximum likelihood estimation and one of minimum divergence estimation) before calculating the speaker supervectors for each training speaker according to (4).
We decoupled the estimation of u from that of v and d as in [25], [24] (rather than using the maximum likelihood procedures in [6], [18]).

Recall that equation (2) can be interpreted as saying that speaker supervectors are normally distributed with mean m and covariance matrix d² + vv*. For the purposes of enrolling target speakers, we interpret this normal distribution as a prior distribution in the sense in which this term is used in Bayesian statistics. Given an enrollment utterance and the hyperparameters m, u, v and d, we enroll a target speaker by calculating the posterior distribution of the hidden variables x, y and z, using the maximum a posteriori estimate of m + vy + dz as a point estimate of the speaker's supervector. (We do not use the point estimate of x since the channel effects in the enrollment data are irrelevant.) As is generally the case with a Gaussian prior, the posterior is also Gaussian and can be calculated in closed form [1]. The case where d = 0 and u = 0 is treated in Section II-B.1; the case where u = 0 and v = 0 is treated in Section II-C.1; the case where d ≠ 0, v = 0, u ≠ 0 and there is a single enrollment utterance is treated in the Appendix to [26];


the case where d ≠ 0, v ≠ 0, u ≠ 0 and there is a single enrollment utterance is formally equivalent to this: one only has to replace u by the matrix [u v];

finally, the general case in which there are multiple enrollment utterances is treated in Section III of [6], but we have found that pooling the Baum-Welch statistics from the various utterances together (as if we had a single utterance for enrollment) works just as well as the complicated calculation described there. Note that our enrollment procedure results in a point estimate of each target speaker's supervector, rather than a posterior distribution as in the Bayesian approach that we originally attempted [18].

At verification time, likelihoods were evaluated according to (19) in [24]. (We did not use the correction (20) in [24]. This is a minor technical issue which is discussed at length in [12].) Thus, we account for channel effects in test utterances by integrating over the channel factors x in (3) rather than by using a point estimate of the channel factors for each test utterance as other authors do. If a test utterance is sufficiently long, the posterior distribution of the channel factors will be sharply peaked, and using a point estimate of the channel factors (either a MAP estimate or a maximum likelihood estimate) will give essentially the same result as integrating over the channel factors. But in the case of short test utterances (say 10 seconds of speech), integrating over channel factors seems to be the right thing to do. (Since the integral in question is Gaussian, there is no difficulty in evaluating it in closed form.) It was reported in [27], [28], [29] that channel factors are unhelpful for tasks involving short test utterances, but this does not agree with our experience. In [13] we present some good results on 10 second test conditions; we believe that our success can be traced to not attempting to obtain point estimates of channel factors under these conditions.
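For the d = 0 case, the enrollment point estimate can be sketched by stacking [u v], computing the joint posterior of (x, y) from the pooled statistics, and discarding the channel part. This is our own simplified rendering under that restriction, not the paper's general d ≠ 0 calculation:

```python
import numpy as np

def enroll(m, u, v, Sigma_inv, N_s, F_tilde_s):
    """MAP point estimate of a target speaker's supervector (d = 0 case).

    N_s and F_tilde_s are the (pooled) zeroth- and centralized first-order
    Baum-Welch statistics of all enrollment utterances, flattened to (CF,).
    """
    Rx, Ry = u.shape[1], v.shape[1]
    A = np.hstack([u, v])                  # stacked eigenchannels and eigenvoices
    l = np.eye(Rx + Ry) + A.T @ ((Sigma_inv * N_s)[:, None] * A)
    E = np.linalg.solve(l, A.T @ (Sigma_inv * F_tilde_s))
    Ey = E[Rx:]                            # keep speaker factors, discard x
    return m + v @ Ey
```

Pooling amounts to summing the Baum-Welch statistics of the individual enrollment utterances before calling this function.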
Finally, the denominator of the log likelihood ratio statistic used for verification was calculated in exactly the same way as the numerator, with the UBM supervector used in place of the hypothesized speaker's supervector.

E. Impostors

The verification decision scores obtained with the factor analysis models were normalized using zt-norm. As in [12], we used 283 t-norm speakers in the female case and 227 in the male case. We used 1000 z-norm utterances for each gender. The impostors were chosen at random from the factor analysis training data. The reasons for using such a large number of z-norm utterances are explained in [12].

F. Results

The results of our experiments on the female portion of the common subset of the core condition of the NIST 2006 SRE are summarized in Table I. (EER refers to the equal error rate and DCF to the minimum value of the NIST detection cost function. The common subset consists of English language trials only. All results in this paper were obtained using version


5 of the 2006 SRE answer key.) There are some blank entries in the table because decoupled estimation applies only in the case where both v and d are non-zero. The best result is obtained with 300 speaker factors and decoupled estimation. It is apparent that, contrary to our conclusion in [12], the term dz in (2) can play a useful role in restricted data tasks after all. There is an anomaly in the joint estimation column: in the 300 speaker factor case we obtained a better EER by setting d = 0 than by joint estimation of v and d. We attribute this to the convergence issue mentioned in Section II-D.

TABLE I
Results obtained on the core condition of the NIST 2006 SRE (female speakers, English language trials).

                                Joint Estimation      Decoupled
                                EER      DCF          EER      DCF
100 speaker factors, d ≠ 0      4.4%     0.027        3.9%     0.022
300 speaker factors, d ≠ 0      4.1%     0.024        3.6%     0.021
300 speaker factors, d = 0      3.9%     0.024        –        –
0 speaker factors, d ≠ 0        5.2%     0.027        –        –

Table II gives the corresponding results on all trials of the female portion of the core condition. Again, the best results are obtained with 300 speaker factors and decoupled estimation.

TABLE II
Results obtained on the core condition of the NIST 2006 SRE (female speakers, all trials).

                                Joint Estimation      Decoupled
                                EER      DCF          EER      DCF
100 speaker factors, d ≠ 0      5.9%     0.032        4.9%     0.027
300 speaker factors, d ≠ 0      5.6%     0.030        4.6%     0.025
300 speaker factors, d = 0      5.2%     0.028        –        –
0 speaker factors, d ≠ 0        7.2%     0.034        –        –

Comparing the second row of Table II with that of Table I

we see that, if d is estimated with decoupled estimation, then the term dz in (2) is particularly effective in modeling non-English speakers. This is to be expected, since we used a large amount of English-only data (namely the Switchboard corpora) to estimate v.

We replicated these experiments on the female trials of the extended data condition. The results are summarized in Tables III and IV. Patterns similar to those in Tables I and II are evident.

TABLE III
Results obtained on the extended data condition of the NIST 2006 SRE (female speakers, English language trials).

                                Joint Estimation      Decoupled
                                EER      DCF          EER      DCF
100 speaker factors, d ≠ 0      2.2%     0.012        2.1%     0.011
300 speaker factors, d ≠ 0      2.1%     0.014        1.9%     0.011
300 speaker factors, d = 0      2.1%     0.014        –        –
0 speaker factors, d ≠ 0        3.1%     0.017        –        –

We report results on male speakers in Table V. Note that these results are much better than the results we obtained for female speakers.

TABLE IV
Results obtained on the extended data condition of the NIST 2006 SRE (female speakers, all trials).

                              Joint Estimation        Decoupled
                              EER      DCF            EER      DCF
100 speaker factors, d ≠ 0    2.5%     0.012          2.3%     0.014
300 speaker factors, d ≠ 0    2.7%     0.014          2.3%     0.012
300 speaker factors, d = 0    2.7%     0.015          –        –
0 speaker factors, d ≠ 0      3.6%     0.016          –        –

TABLE V
Results obtained on the core condition and the extended data condition of the NIST 2006 SRE for male speakers (50 channel factors, 300 speaker factors, d ≠ 0, decoupled estimation).

                              EER      DCF
Core condition, English       2.1%     0.013
Core condition, all trials    4.2%     0.020
Extended data, English        1.4%     0.006
Extended data, all trials     1.7%     0.008

G. Note on Baum-Welch statistics The results we have obtained using speaker factors are clearly much better than those obtained using d alone, but the reader may have noticed that the figures presented in the fourth rows of Tables I and II are not quite as good as the best results that have been reported with comparable standalone GMM/UBM systems as in [30], [5], [31]. These systems are comparable because they use relevance MAP for speaker enrollment and channel factors to compensate for intersession variability. As we mentioned in Section II-C.1, relevance MAP is essentially a special type of diagonal factor analysis model. The reason for the discrepancy in performance is that we use the UBM to extract Baum-Welch statistics in our system rather than speaker-dependent GMM’s. It turns out that, in the case of a diagonal factor analysis model, using speaker-dependent GMM’s does indeed produce better results. For example, on the English language trials in the core condition, a diagonal model with 100 channel factors produces an EER of 2.8% for male speakers, which is similar to the results presented in [30], [5], [31] (but not as good as the result in the first line of Table V). However, for a factor analysis model with 300 speaker factors, using speaker-dependent GMM’s (estimated with speaker factors) to extract Baum-Welch statistics turns out to be harmful. For example, on the English language trials in the core condition, a factor analysis model with 300 speaker factors and 100 channel factors produces an EER of 4.2% for male speakers if the Baum-Welch statistics are extracted with speaker-dependent GMM’s; on the other hand, an EER of 1.4% is obtained if the UBM is used for this purpose. This is the reason why we have always used the UBM to extract Baum-Welch statistics in our work on factor analysis. (The extraordinarily low error rate of 1.4% is attributable to using 100 channel factors rather than 50 as in Table V.)

IV. RESULTS OBTAINED WITH A LARGE FACTOR ANALYSIS MODEL
In this section we report results obtained on the NIST 2006 SRE test set by increasing the dimensions of the male and female factor analysis models. We increased the number of Gaussians from 1024 to 2048, we increased the dimension of the acoustic feature vectors from 40 to 60 by appending ∆∆ coefficients, and we increased the number of channel factors from 50 to 100. We kept the number of speaker factors at 300 because previous experience has shown that using larger numbers of speaker factors is not helpful [12]. In presenting the results that we obtained with the large factor analysis models, we will break them out by gender because (as we saw in the previous section) there are large differences in performance between males and females. Some insight into this phenomenon can be gained by inspecting the way the male and female factor analysis models fit the training data. Table VI gives the traces of the matrices vv∗, d² and uu∗ for the two gender-dependent factor analysis models. It is clear that, as measured both by the trace of vv∗ and by the trace of d², there is substantially greater variability among male speakers than among female speakers. Thus, distinguishing between male speakers seems to be intrinsically easier than distinguishing between female speakers, at least if the feature set consists of cepstral coefficients. (In our earlier work we used 12 cepstral coefficients. We increased the number of cepstral coefficients to 19 in the present work in the hope that this would narrow the gender gap.) Perhaps even more surprisingly, we observe that the trace of uu∗ is substantially larger for the male model than for the female model, which seems to indicate that the channel model (3) gives a better fit in the case of male speech.
However, although the figures in the fourth row of Table VI are not really comparable with the others, they suggest that most of the variability in the data is not captured by either the speaker model (2) or the channel model (3), and that this residual variability is larger in the case of males than in the case of females.

TABLE VI
Speaker and channel variability in male and female factor analysis models.

            Male speakers    Female speakers
tr(vv∗)     1075             975
tr(d²)      334              306
tr(uu∗)     501              433
tr(Σ)       25,840           23,535

A. Core condition

The results we obtained on the core condition of the NIST 2006 SRE with the large factor analysis model just described are summarized in Table VII. Even though they were obtained with a stand-alone system, these results compare very favorably with the best results on the 2006 core condition that have been reported in the literature, namely those obtained by STBU [32], SRI [33] and MIT/IBM [4]. The STBU system achieved an EER of 2.3% (English language trials only, results pooled over male and female speakers) by fusing 10 subsystems (cepstral and MLLR); SRI achieved an EER of 2.6% by fusing 8 subsystems (cepstral, MLLR and higher level); and MIT/IBM achieved an EER of 2.7% by fusing 9 subsystems (cepstral, MLLR and higher level).

TABLE VII
Results obtained with a large factor analysis model on the core condition of the NIST 2006 SRE.

                  Male speakers        Female speakers
                  EER      DCF         EER      DCF
English trials    1.5%     0.011       2.7%     0.017
All trials        3.0%     0.017       3.3%     0.020

In comparing our results with those of STBU, it should be noted that the individual subsystems of the STBU system were trained on pre-2005 data but the fusion parameters were estimated using the data made available for the 2005 NIST SRE. (Robust fusion was a key ingredient in the success of the STBU system in the 2006 SRE.) On the other hand, our reason for excluding the 2005 data from the factor analysis training set was simply to enable us to experiment properly with the extended data condition in the 2006 evaluation set (as we explained in Section III-C). Had we included the 2005 data in factor analysis training, the proportion of Mixer data in the factor analysis training set would have increased from 20% to 50%, and this would presumably have resulted in even better performance on the 2006 evaluation set.

B. Extended data condition

Table VIII summarizes the results obtained with the large factor analysis model on the extended data condition of the 2006 NIST SRE.

TABLE VIII
Results obtained with a large factor analysis model on the extended data condition of the NIST 2006 SRE.

                  Male speakers        Female speakers
                  EER      DCF         EER      DCF
English trials    0.8%     0.004       1.6%     0.009
All trials        1.1%     0.007       1.8%     0.009

The best results on this task in the literature

are those reported by MIT/IBM [4], where EER's of 1.5% (English language trials, male and female results pooled) and 2.6% (all trials) were obtained by fusing 9 subsystems (cepstral, MLLR and higher level). It is interesting to note that, although the extended data task was intended to encourage research into higher level systems, and higher level systems (including an MLLR system) play an important role in reducing the error rates in [4], we were able to obtain better results using cepstral features alone.

C. Cross-channel condition

In the cross-channel condition, the enrollment data for each target speaker consists of a conversation side extracted from a recording of a telephone conversation but the test data consists of recordings made using one of 8 different microphones. (The
identity of the microphone is not given. The cross-channel task is described in detail in [34].) We used the development data provided by NIST to estimate 100 eigenchannels to model the effects of the various microphones and we appended these eigenchannels to the 100 eigenchannels that we had previously estimated on telephone speech. Thus, the factor analysis model that we used at recognition time had 200 channel factors rather than 100. Since the enrollment data in this task consists of telephone speech, we did not have to make any change to the factor analysis model used at enrollment time. The only other modification that we made was to choose the z-norm utterances from the cross-channel development data rather than from the factor analysis training data described in Section III-C. The results we obtained on the cross-channel test data are summarized in Table IX.

TABLE IX
Results obtained with a large factor analysis model on the cross-channel condition of the NIST 2006 SRE.

                  Male speakers        Female speakers
                  EER      DCF         EER      DCF
English trials    2.6%     0.011       2.9%     0.014
All trials        2.5%     0.011       3.3%     0.015

These results are a good deal

better than the best results that have been reported on this task, namely an EER of 4.0% obtained by MIT [34]. The MIT results were obtained by fusing two cepstral systems (a support vector machine with nuisance attribute projection and a GMM/UBM system with channel factors), and speech enhancement played an important role in reducing error rates. Our system makes use of no special-purpose signal processing; it relies solely on a large number of channel factors to compensate for transducer effects. The results of other auxiliary microphone tests (where microphone speech is used at enrollment time as well as at verification time) can be found in [13].

V. CONCLUSION

We have shown how careful modeling of inter-speaker variability enables a stand-alone joint factor analysis system to perform as well as fusions of large numbers of systems of other types (which typically include models of inter-session variability but not of inter-speaker variability). Of course this achievement comes at a cost: the implementation is more complicated and painstaking experimentation is needed. But our approach beats the state of the art on the NIST 2006 extended data task (without using higher level features) and on the cross-channel task (without using speech enhancement). The principal departure from our earlier work is that we abandoned (at least partially) the maximum likelihood principle for estimating the hyperparameters which define a joint factor analysis model. This decision was driven by the results in [12], which led us to conclude that the maximum likelihood principle could not produce useful estimates of the hyperparameter d in (2), apparently because (2) is not a realistic model of inter-speaker variability. There is an obvious
parallel here with speech recognition: Hidden Markov Models are not realistic models of acoustic-phonetic phenomena, and maximum likelihood estimation does not perform as well as other estimation criteria (such as Maximum Mutual Information or Minimum Phone Error). This raises the question of whether similar discriminative training criteria can be used to estimate factor analysis hyperparameters. First steps in this direction have been taken in [35], [36], but the results suggest that getting this type of approach to work in speaker recognition may require a large effort, just as it has in speech recognition.

REFERENCES

[1] A. O'Hagan and J. Forster, Kendall's Advanced Theory of Statistics, Vol. 2B. London, UK: Arnold, 2004.
[2] J. A. Bilmes, "Graphical models and automatic speech recognition," in Mathematical Foundations of Speech and Language Processing, M. Johnson, S. P. Khudanpur, M. Ostendorf, and R. Rosenfeld, Eds. New York, NY: Springer-Verlag, 2004, pp. 191–246.
[3] S. J. D. Prince and J. H. Elder, "Probabilistic linear discriminant analysis for inferences about identity," in Proc. ICCV 2007, Rio de Janeiro, Brazil, Oct. 2007.
[4] W. M. Campbell, D. E. Sturim, W. Shen, D. A. Reynolds, and J. Navratil, "The MIT-LL/IBM 2006 speaker recognition system: High-performance reduced-complexity recognition," in Proc. ICASSP 2007, Honolulu, HI, Apr. 2007, pp. IV-217–IV-220.
[5] F. Castaldo, D. Colibro, E. Dalmasso, P. Laface, and C. Vair, "Compensation of nuisance factors for speaker and language recognition," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 1969–1978, 2007.
[6] P. Kenny, "Joint factor analysis of speaker and session variability: Theory and algorithms," Tech. Report CRIM-06/08-13, 2005. [Online]. Available: http://www.crim.ca/perso/patrick.kenny
[7] J.-L. Gauvain and C.-H. Lee, "Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains," IEEE Trans. Speech Audio Processing, vol. 2, pp. 291–298, Apr. 1994.
[8] P. Kenny, G. Boulianne, and P. Dumouchel, "Eigenvoice modeling with sparse training data," IEEE Trans. Speech Audio Processing, vol. 13, no. 3, pp. 345–359, May 2005.
[9] D. Reynolds, T. Quatieri, and R. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, pp. 19–41, 2000.
[10] N. Dehak, P. Kenny, and P. Dumouchel, "Modeling prosodic features with joint factor analysis for speaker verification," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 2095–2103, Sept. 2007.
[11] S. Lucey and T. Chen, "Improved speaker verification through probabilistic subspace adaptation," in Proc. Eurospeech, Geneva, Switzerland, Sept. 2003, pp. 2021–2024.
[12] P. Kenny, N. Dehak, R. Dehak, V. Gupta, and P. Dumouchel, "The role of speaker factors in the NIST extended data task," in Proc. IEEE Odyssey Workshop, Stellenbosch, South Africa, Jan. 2008.
[13] P. Kenny, N. Dehak, P. Ouellet, V. Gupta, and P. Dumouchel, "Development of the primary CRIM system for the NIST 2008 speaker recognition evaluation," in Proc. Interspeech, Brisbane, Australia, Sept. 2008.
[14] M. Gales, "Cluster adaptive training of hidden Markov models," IEEE Trans. Speech Audio Processing, vol. 8, pp. 417–428, 2000.
[15] D. J. C. MacKay, "Comparison of approximate methods for handling hyperparameters," Neural Computation, vol. 11, no. 5, pp. 1035–1068, 1999.
[16] M. Tipping and C. Bishop, "Mixtures of probabilistic principal component analysers," Neural Computation, vol. 11, pp. 435–474, 1999.
[17] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, "Speaker adaptation using an eigenphone basis," IEEE Trans. Speech Audio Processing, vol. 12, no. 6, pp. 579–589, Nov. 2004.
[18] ——, "Speaker and session variability in GMM-based speaker verification," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 4, pp. 1448–1460, May 2007.
[19] S. Kajarekar, "Four weightings and a fusion," in Proc. IEEE Workshop on Automatic Speech Recognition and Understanding, 2005, pp. 17–22.
[20] H. Aronowitz, "Speaker recognition using Kernel-PCA and inter-session variability modeling," in Proc. Interspeech 2007, Antwerp, Belgium, Aug. 2007.
[21] (2006) The NIST year 2006 speaker recognition evaluation plan. [Online]. Available: http://www.nist.gov/speech/tests/spk/2006/index.htm
[22] J. Pelecanos and S. Sridharan, "Feature warping for robust speaker verification," in Proc. Speaker Odyssey, Crete, Greece, June 2001, pp. 213–218.
[23] P. Kenny and P. Dumouchel, "Disentangling speaker and channel effects in speaker verification," in Proc. ICASSP, Montreal, Canada, May 2004, pp. I-37–I-40. [Online]. Available: http://www.crim.ca/perso/patrick.kenny
[24] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, "Joint factor analysis versus eigenchannels in speaker recognition," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 4, pp. 1435–1447, May 2007.
[25] ——, "Factor analysis simplified," in Proc. ICASSP 2005, Philadelphia, PA, Mar. 2005, pp. 637–640. [Online]. Available: http://www.crim.ca/perso/patrick.kenny
[26] S.-C. Yin, R. Rose, and P. Kenny, "A joint factor analysis approach to progressive model adaptation in text independent speaker verification," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 1999–2010, Sept. 2007.
[27] B. Fauve, N. Evans, N. Pearson, J.-F. Bonastre, and J. Mason, "Influence of task duration in text-independent speaker verification," in Proc. Interspeech 2007, Antwerp, Belgium, Aug. 2007.
[28] B. Fauve, N. Evans, and J. Mason, "Improving the performance of text-independent short-duration SVM- and GMM-based speaker verification," in Proc. IEEE Odyssey Workshop, Stellenbosch, South Africa, Jan. 2008.
[29] R. Vogt, C. Lustri, and S. Sridharan, "Factor analysis modeling for speaker verification with short utterances," in Proc. IEEE Odyssey Workshop, Stellenbosch, South Africa, Jan. 2008.
[30] L. Burget, P. Matejka, O. Glembek, P. Schwarz, and J. Cernocky, "Analysis of feature extraction and channel compensation in GMM speaker recognition system," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 1979–1986, Sept. 2007.
[31] B. G. B. Fauve, D. Matrouf, N. Scheffer, J.-F. Bonastre, and J. S. D. Mason, "State-of-the-art performance in text-independent speaker verification through open-source software," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 1960–1968, 2007.
[32] N. Brummer, L. Burget, J. Cernocky, O. Glembek, F. Grezl, M. Karafiat, D. van Leeuwen, P. Matejka, P. Schwarz, and A. Strasheim, "Fusion of heterogeneous speaker recognition systems in the STBU submission for the NIST speaker recognition evaluation 2006," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 2072–2084, Sept. 2007.
[33] A. Stolcke, E. Shriberg, L. Ferrer, S. Kajarekar, K. Sonmez, and G. Tur, "Speech recognition as feature extraction for speaker recognition," in Proc. SAFE 2007: Workshop on Signal Processing Applications for Public Security and Forensics, Washington, D.C., 2007, pp. 39–43.
[34] D. E. Sturim, W. M. Campbell, D. A. Reynolds, R. B. Dunn, and T. F. Quatieri, "Robust speaker recognition with cross-channel data: MIT-LL results on the 2006 NIST SRE auxiliary microphone task," in Proc. ICASSP 2007, Honolulu, HI, Apr. 2007, pp. IV-49–IV-52.
[35] A. Solomonoff, W. Campbell, and I. Boardman, "Advances in channel compensation for SVM speaker recognition," in Proc. ICASSP 2005, Philadelphia, PA, Mar. 2005.
[36] R. Vogt, S. Kajarekar, and S. Sridharan, "Discriminant NAP for SVM speaker recognition," in Proc. IEEE Odyssey Workshop, Stellenbosch, South Africa, Jan. 2008.

The authors are with the Centre de recherche informatique de Montréal (CRIM); email: [email protected], [email protected], [email protected], [email protected], [email protected]. This work was supported in part by the Natural Science and Engineering Research Council of Canada and by the Ministère du Développement Économique et Régional et de la Recherche du Gouvernement du Québec.

I. INTRODUCTION

Factor analysis is a model of speaker and session variability in Gaussian mixture models (GMM's). This article is concerned with the speaker variability component of our version of factor analysis. In our approach to speaker recognition, the role of this component is to provide a prior distribution for target speaker models (we use the term prior distribution in the sense in which it is used in Bayesian statistics [1]). As such, it plays a key role in estimating target speaker models at enrollment time. In order to formulate precisely the problem that we address, we begin by recapitulating the basic assumptions in factor analysis. Let C be the number of components in a Universal Background Model (UBM) and F the dimension of the acoustic feature vectors. We use the term supervector to refer to the CF-dimensional vector obtained by concatenating the F-dimensional mean vectors in the GMM corresponding to a given utterance. Our assumptions are as follows.

Firstly, we assume that a speaker- and channel-dependent supervector M can be decomposed into a sum of two supervectors, a speaker supervector s and a channel supervector c:

    M = s + c,    (1)

where s and c are statistically independent and normally distributed.

Secondly, we assume that the distribution of s has a hidden variable description of the form

    s = m + vy + dz    (2)

where m is a CF × 1 supervector; v is a rectangular matrix of low rank and y is a normally distributed random vector; d is a CF × CF diagonal matrix and z is a normally distributed CF-dimensional random vector. We will refer to the columns of v as eigenvoices and we will refer to the components of y as speaker factors.¹

¹ As in our previous work, we are following the usage of [2]. A different usage prevails in the general statistical literature: the columns of v would be referred to as speaker factors and the entries of y as factor loadings. The terminology is used in this way in [3] where factor analysis methods are applied to the face recognition problem.

Thirdly, we assume that the distribution of c has a hidden variable description of the form

    c = ux,    (3)

where u is a rectangular matrix of low rank and x is a normally distributed random vector. We refer to the components of x as channel factors and we use the term eigenchannels to refer to the columns of u. Finally, we associate a diagonal covariance matrix Σc with each mixture component c whose role is to model the variability in the acoustic observation vectors which is not captured by either the speaker model (2) or the channel model (3). We denote by Σ the CF × CF supercovariance matrix whose diagonal is the concatenation of these covariance matrices.

Although most authors (e.g. [4], [5]) use the term factor analysis to refer to the channel model (3) alone, we have always used this term in a broader sense which includes the speaker model (2) as well. (Where it is necessary to make this distinction explicitly we speak of joint factor analysis.) Our concern in this article is with the way the hyperparameters v and d in (2) are estimated. These hyperparameters provide a prior distribution for maximum a posteriori (MAP) estimation of speaker-dependent GMM's at enrollment time, and they are critically important to the success of our approach to speaker recognition. (The MAP calculation is explained in Section III

of [6].) Since the assumption in (2) is equivalent to saying that s is normally distributed with mean m and covariance matrix d2 + vv ∗ , (2) is a model of inter-speaker variability. Our goal in this article is to show how improved inter-speaker variability modeling can lead to substantial gains in speaker recognition performance. If v = 0 and u = 0, then the assumption in (2) is the same as in classical MAP [7]; on the other hand, if d = 0 and u = 0, the assumption is the same as in eigenvoice MAP [8]. Classical MAP adaptation (including relevance MAP [9]) is by far the most popular type of speaker modeling in text-independent speaker recognition, but our experience has been that MAP adaptation using eigenvoices is generally much more effective, at least in situations where limited amounts of enrollment data are available. Classical MAP adaptation can only adapt those Gaussians which are seen in the enrollment data but, if large amounts of enrollment data are available, it is arguably the best way of estimating speaker supervectors since it is asymptotically equivalent to maximum likelihood estimation. On the other hand, eigenvoice MAP is helpful if only small amounts of enrollment data are available, since only a small number of free parameters need to be estimated at enrollment time. The fact that the supervector covariance matrix is full rather than diagonal in this case ensures that MAP adaptation takes account of the correlations between the different Gaussians in a speaker supervector so that all of the Gaussians are updated at enrollment time even if only a small fraction of them are observed. An extreme example of the effectiveness of eigenvoices can be found in [10], which is concerned with the use of factor analysis to model syllable-level prosodic features. The number of feature vectors per conversation side is only about 400; it is unrealistic to expect classical MAP adaptation to be very effective in this situation. 
It should be possible to capitalize on the advantages of both classical MAP and eigenvoice MAP by including both terms vy and dz in (2) (this was first suggested in [11]). However, extensive experimentation in [12] showed that the term dz was only helpful on an extended data task where 15–20 minutes of enrollment data are available for each target speaker. (The term vy is helpful in all circumstances, even in the extended data task.) Since including the term dz is the source of most of the mathematical complication in [6], and the extended data task is of secondary interest to most researchers, this led us to wonder if we would not be better off suppressing the term dz altogether. The reason why the term dz was not helpful in [12] is that in a typical factor analysis training scenario with, say, 1000 training speakers and 300 speaker factors, almost all of the speaker variability in the training set can be well accounted for by v alone (v has 300 times as many free parameters as d). Thus, if the maximum likelihood criterion is used to estimate d and v, what tends to happen is that d ends up playing no useful role unless very large amounts of enrollment data are available (as in the extended data task). However, there is reason to doubt that the maximum likelihood criterion is appropriate for this type of estimation problem. Even if the linear/Gaussian assumptions in (2) are granted, there is no
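The contrast drawn above between classical/relevance MAP (only Gaussians observed in the enrollment data are adapted) and eigenvoice MAP (every Gaussian moves through the correlations encoded in v) can be seen in a toy numerical sketch. The dimensions and values below are hypothetical, with Σ taken to be the identity and the UBM mean already subtracted from the first-order statistics.

```python
import numpy as np

C, Fdim, r = 3, 2, 16.0
N = np.array([10.0, 4.0, 0.0])                       # component 3 unseen
F_tilde = np.array([2.0, 1.0, -1.0, 0.5, 0.0, 0.0])  # centralized F_c(s)

# relevance MAP offset: N_c/(N_c + r) * (F~_c(s)/N_c); unseen components
# get a zero offset, i.e. they stay at the UBM mean
alpha = np.repeat(N / (N + r), Fdim)
rel_offset = alpha * np.divide(F_tilde, np.repeat(np.maximum(N, 1e-9), Fdim))

# eigenvoice MAP with a single eigenvoice v (R = 1) and Sigma = I:
# every supervector entry is adapted through the shared factor y(s)
v = np.ones((C * Fdim, 1))
l = 1.0 + (v * np.repeat(N, Fdim)[:, None]).T @ v
y = np.linalg.solve(l, v.T @ F_tilde)                # posterior mean of y(s)
eig_offset = (v @ y).ravel()
```

Here `rel_offset` is zero on the unseen component while `eig_offset` is nonzero everywhere, which is the behavior the text describes for limited enrollment data.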

reason to believe that (2) is a correct model of inter-speaker variability; it is just a compromise that is forced on us by the fact that a supervector covariance matrix of sufficiently high rank to be realistic would probably be impossible to estimate or to calculate with. (Impossible to calculate with because the rank of the covariance matrix would be too high; impossible to estimate because many more training speakers would be needed than are currently available.) This led us to explore another way of estimating v and d which we will explain in Section II and which we refer to as decoupled estimation. In Section III, we show how decoupled estimation leads to 10–15% reductions in error rates (as measured both by equal error rates and the NIST detection cost function) on both the core condition and the extended data condition of the NIST 2006 speaker recognition evaluation data. In order to be able to turn around these experiments in a reasonable time, we used factor analysis models of relatively modest dimensions. Our final results, using a much larger factor analysis model, are presented in Section IV. These results show that a stand-alone joint factor analysis model is capable of performing at least as well as fusions of large numbers of systems of other types (based on comparisons with the best results that have been reported in the literature). The results on the NIST 2006 cross-channel condition are particularly impressive: equal error rates of less than 3% can be achieved without any special-purpose signal processing. Results of other tests are presented in [13]; these include cross-channel tests in which microphone speech is used for enrollment as well as for verification and tests involving very short utterances at verification time.

II. ESTIMATING THE HYPERPARAMETERS

The supervector defined by a UBM can serve as an estimate of m, and the UBM covariance matrices are good first approximations to the residual covariance matrices Σc (c = 1, . . . , C). The problem of estimating v in the case where d = 0 was addressed in [8] and a very similar approach can be adopted for estimating d in the case where v = 0. We first summarize the estimation procedures for these two special cases and then explain how they can be combined to tackle the general case.

A. Baum-Welch statistics

Given a speaker s and acoustic feature vectors Y1, Y2, . . ., for each mixture component c we define the Baum-Welch statistics in the usual way:

    Nc(s) = ∑_t γt(c)
    Fc(s) = ∑_t γt(c) Yt
    Sc(s) = diag( ∑_t γt(c) Yt Yt∗ )

where, for each time t, γt(c) is the posterior probability of the event that the feature vector Yt is accounted for by the mixture component c. We calculate these posteriors using the UBM.
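As an illustration, these statistics can be accumulated as follows. This is a minimal NumPy sketch, not the authors' code: it assumes a diagonal-covariance UBM, and the function and variable names are our own.

```python
import numpy as np

def baum_welch_stats(Y, ubm_means, ubm_vars, ubm_weights):
    """Zeroth-, first- and (diagonal) second-order Baum-Welch statistics
    N_c(s), F_c(s), diag(S_c(s)) for one speaker.
    Y: T x F frames; ubm_means, ubm_vars: C x F; ubm_weights: C."""
    T, F = Y.shape
    C = ubm_means.shape[0]
    log_like = np.empty((T, C))
    for c in range(C):
        diff = Y - ubm_means[c]
        log_like[:, c] = (np.log(ubm_weights[c])
                          - 0.5 * np.sum(np.log(2.0 * np.pi * ubm_vars[c]))
                          - 0.5 * np.sum(diff * diff / ubm_vars[c], axis=1))
    # normalize to frame posteriors gamma_t(c), log-sum-exp for stability
    mx = log_like.max(axis=1, keepdims=True)
    gamma = np.exp(log_like - mx)
    gamma /= gamma.sum(axis=1, keepdims=True)
    N = gamma.sum(axis=0)       # N_c(s), shape (C,)
    Fs = gamma.T @ Y            # F_c(s), shape (C, F)
    Ss = gamma.T @ (Y * Y)      # diagonal of S_c(s), shape (C, F)
    return N, Fs, Ss
```

Since the posteriors sum to one over components at each frame, the occupancies N_c(s) sum to the number of frames T, which gives a convenient sanity check.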

We denote the centralized first- and second order BaumWelch statistics by F˜c (s) and S˜c (s): F˜c (s) =

X

m, v, Σ and s ranges over the training speakers: X Nc = Nc (s) (c = 1, . . . , C) s

γt (c)(Yt − mc )

Ac

t

S˜c (s) =

diag

X

γt (c)(Yt − mc )(Yt − mc )∗

t

!

Fc (s) − Nc (s)mc Sc (s) − diag (Fc (s)m∗c + mc Fc (s)∗ − Nc (s)mc m∗c ) .

Let N (s) be the CF × CF diagonal matrix whose diagonal ˜ (s) be the CF × 1 blocks are Nc (s)I (c = 1, . . . , C). Let F ˜ supervector obtained by concatenating Fc (s) (c = 1, . . . , C). ˜ Let S(s) be the CF × CF diagonal matrix whose diagonal blocks are S˜c (s) (c = 1, . . . , C).

Nc (s)E [y(s)y ∗ (s)] (c = 1, . . . , C)

s

C =

˜ (s)E [y ∗ (s)] F

X s

N

where mc is the subvector of m corresponding to the mixture component c. In other words, F˜c (s) = S˜c (s) =

=

X

=

X

N (s).

s

B. Training an eigenvoice model

In this section we consider the problem of estimating m, v and Σ under the assumption that d = 0. We assume that initial estimates of the hyperparameters are given. (Random initialization of v works fine in practice.) The approach that we adopt is similar to Gales's cluster adaptive training [14] in that it does not rely on techniques such as MAP or MLLR to produce GMM's for the training speakers (all of the computation is done with Baum-Welch statistics rather than GMM's). It differs from the approach in [14] in that the hyperparameter estimation problem is formulated in terms of maximum likelihood II [15]. As such, our approach is very similar to the Probabilistic Principal Components Analysis (PPCA) of [16] (which is formally a special case of our procedure). Note that, as in PPCA, the terms eigenvector and eigenvalue do not appear in our formulation, but it is known that, unless the optimization gets stuck locally, PPCA does succeed in finding principal eigenvectors. Thus it is appropriate for us to speak of eigenvoices even though our estimation procedure is not formulated as an eigenvalue problem.

1) The posterior distribution of the hidden variables: For each speaker s, set

    l(s) = I + v* Σ^{-1} N(s) v.

Then the posterior distribution of y(s) conditioned on the acoustic observations of the speaker is Gaussian with mean l^{-1}(s) v* Σ^{-1} F̃(s) and covariance matrix l^{-1}(s). (See [8], Proposition 1.) We will use the notation E[·] to indicate posterior expectations; thus E[y(s)] denotes the posterior mean of y(s) and E[y(s)y*(s)] the posterior correlation matrix.

2) Maximum likelihood re-estimation: This entails accumulating the following statistics over the training set, where the posterior expectations are calculated using initial estimates of m, v and Σ and s ranges over the training speakers:

    N_c = ∑_s N_c(s)    (c = 1, ..., C)
    A_c = ∑_s N_c(s) E[y(s)y*(s)]    (c = 1, ..., C)
    C = ∑_s F̃(s) E[y*(s)]
    N = ∑_s N(s).

For each mixture component c = 1, ..., C and for each f = 1, ..., F, set i = (c − 1)F + f; let v_i denote the ith row of v and C_i the ith row of C. Then v is updated by solving the equations

    v_i A_c = C_i    (i = 1, ..., CF)

and the update formula for Σ is

    Σ = N^{-1} ( ∑_s S̃(s) − diag(C v*) ).

(See [8], Proposition 3.)

3) Minimum divergence re-estimation: Given initial estimates m0 and v0, the update formulas for m and v are

    m = m0 + v0 µ_y
    v = v0 T*_yy.

Here

    µ_y = (1/S) ∑_s E[y(s)],

T_yy is an upper triangular matrix such that

    T*_yy T_yy = (1/S) ∑_s E[y(s)y*(s)] − µ_y µ*_y

(i.e. the Cholesky decomposition), S is the number of training speakers, and the sums extend over all speakers in the training set. (See [6], Theorem 7.)

This update formula leaves the range of the covariance matrix vv* unchanged. The only freedom it has is to rotate the eigenvoices and scale the corresponding eigenvalues. This type of hyperparameter estimation was introduced in [17]; its role is to get good estimates of the eigenvalues corresponding to the eigenvoices ([18], Section II-C). This is useful for diagnostic purposes; for example, in comparing the eigenvalues of uu* with those of vv* as in Table VI below. Maximum likelihood estimation on its own produces eigenvalues which are difficult to interpret [6].
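To make the training loop concrete, here is a toy NumPy sketch of one pass of eigenvoice training: maximum likelihood re-estimation of v followed by the minimum divergence update. The Baum-Welch statistics are random stand-ins, Σ is stored as a vector (diagonal), and all variable names are illustrative rather than taken from any published implementation; a real implementation would recompute the posteriors before the minimum divergence step.

```python
import numpy as np

rng = np.random.default_rng(0)
C_mix, F_dim, R, S = 4, 3, 2, 20        # Gaussians, feature dim, speaker factors, speakers
CF = C_mix * F_dim

v = 0.1 * rng.standard_normal((CF, R))  # eigenvoice matrix
Sigma = np.ones(CF)                     # diagonal covariance, stored as a vector

# Synthetic per-speaker Baum-Welch statistics (stand-ins for the real ones).
N_spk = [rng.uniform(5, 20, C_mix) for _ in range(S)]   # N_c(s)
F_spk = [rng.standard_normal(CF) for _ in range(S)]     # centralized F~(s)

def posterior_y(N_c, F_tilde):
    """l(s) = I + v* Sigma^-1 N(s) v; posterior mean and covariance of y(s)."""
    N_diag = np.repeat(N_c, F_dim)
    l = np.eye(R) + v.T @ ((N_diag / Sigma)[:, None] * v)
    cov = np.linalg.inv(l)
    return cov @ (v.T @ (F_tilde / Sigma)), cov

# --- Maximum likelihood re-estimation (Section II-B.2) ---
A = np.zeros((C_mix, R, R))             # A_c = sum_s N_c(s) E[y y*]
Cstat = np.zeros((CF, R))               # C   = sum_s F~(s) E[y*]
Ey_all, Eyy_all = [], []
for N_c, F_tilde in zip(N_spk, F_spk):
    Ey, cov = posterior_y(N_c, F_tilde)
    Eyy = cov + np.outer(Ey, Ey)        # posterior correlation matrix
    A += N_c[:, None, None] * Eyy
    Cstat += np.outer(F_tilde, Ey)
    Ey_all.append(Ey)
    Eyy_all.append(Eyy)
for c in range(C_mix):                  # solve v_i A_c = C_i, block of rows per Gaussian
    rows = slice(c * F_dim, (c + 1) * F_dim)
    v[rows] = np.linalg.solve(A[c].T, Cstat[rows].T).T

# --- Minimum divergence re-estimation (Section II-B.3) ---
mu_y = np.mean(Ey_all, axis=0)          # (1/S) sum_s E[y(s)]
second = np.mean(Eyy_all, axis=0) - np.outer(mu_y, mu_y)
T_yy = np.linalg.cholesky(second).T     # upper triangular, T* T = second
v = v @ T_yy.T                          # v <- v T*_yy (the m-update is omitted here)
```

Note how the minimum divergence step only multiplies v on the right by an invertible matrix, which is exactly why the range of vv* is left unchanged.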

C. Training a diagonal model

An analogous development can be used to estimate m, d and Σ if v is constrained to be 0.


1) The posterior distribution of the hidden variables: For each speaker s, set

    l(s) = I + d² Σ^{-1} N(s).

Then the posterior distribution of z(s) conditioned on the acoustic observations of the speaker is Gaussian with mean l^{-1}(s) d Σ^{-1} F̃(s) and covariance matrix l^{-1}(s). (The derivation here is essentially the same as in Section II-B.1.) Again, we will use the notation E[·] to indicate posterior expectations; thus E[z(s)] denotes the posterior mean of z(s) and E[z(s)z*(s)] the posterior correlation matrix.

It is straightforward to verify that, in the special case where d is assumed to satisfy

    d² = (1/r) Σ,

this posterior calculation leads to the standard relevance MAP estimation formulas for speaker supervectors [9] (r is the relevance factor). The following two sections summarize data-driven procedures for estimating m, d and Σ which do not depend on the relevance MAP assumption. It can be shown that when these update formulas are applied iteratively, the values of a likelihood function analogous to that given in Proposition 2 of [8] increase on successive iterations.

2) Maximum likelihood re-estimation: This entails accumulating the following statistics over the training set, where the posterior expectations are calculated using initial estimates of m, d, Σ and s ranges over the training speakers:

    N_c = ∑_s N_c(s)    (c = 1, ..., C)
    a = ∑_s diag( N(s) E[z(s)z*(s)] )
    b = ∑_s diag( F̃(s) E[z*(s)] )
    N = ∑_s N(s).

For i = 1, ..., CF let d_i be the ith entry of d and similarly for a_i and b_i. Then d is updated by solving the equation

    d_i a_i = b_i

for each i. The update formula for Σ is

    Σ = N^{-1} ( ∑_s S̃(s) − diag(b d) ).

3) Minimum divergence re-estimation: Given initial estimates m0 and d0, the update formulas for m and d are

    m = m0 + d0 µ_z
    d = d0 T_zz

where

    µ_z = (1/S) ∑_s E[z(s)],

T_zz is a diagonal matrix such that

    T²_zz = diag( (1/S) ∑_s E[z(s)z*(s)] − µ_z µ*_z ),

S is the number of training speakers, and the sums extend over all speakers in the training set.

We will need a variant of this update procedure which applies to the case where m is forced to be 0. In this case d is estimated from d0 by taking T_zz to be such that

    T²_zz = diag( (1/S) ∑_s E[z(s)z*(s)] ).

D. Joint estimation of v and d

There is no difficulty in principle in extending the maximum likelihood and minimum divergence training procedures to handle a general factor analysis model in which both v and d are non-zero (Theorems 4 and 7 in [6]). We used this type of joint estimation in all of our previous work on factor analysis and to produce benchmarks for the experiments that we will report in this article. However, joint estimation of v and d is computationally demanding because, in a general factor analysis model, all of the hidden variables become correlated with each other in the posterior distributions. Our experience has been that, given the Baum-Welch statistics, training a diagonal model runs very quickly and training a pure eigenvoice model can be made to run quickly (at the cost of some memory overhead) by suitably organizing the computation of the matrices l(s) in Section II-B.1. Unfortunately, no such computational shortcuts seem to be possible in the general case. Furthermore, even if the eigenvoice component v is carefully initialized, many iterations of joint estimation seem to be needed to estimate d properly and, because the contribution of d to the likelihood of the training data is minor compared with the contribution of v, it is difficult to judge when the training algorithm has effectively converged.
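The claimed equivalence with relevance MAP can be verified numerically. The following sketch (toy dimensions, everything diagonal and stored as vectors, illustrative names) checks that with d² = (1/r)Σ the point estimate m + d E[z(s)] coincides with the classical relevance MAP adaptation of the supervector.

```python
import numpy as np

# Check of the claim in Section II-C.1: with d^2 = Sigma / r, the MAP point
# estimate m + d E[z] reduces to relevance MAP with relevance factor r.
rng = np.random.default_rng(2)
CF, r = 8, 16.0

Sigma = rng.uniform(0.5, 2.0, CF)        # diagonal covariance
m = rng.standard_normal(CF)              # UBM supervector
N = np.repeat(rng.uniform(1, 100, 4), 2) # occupation counts, repeated per feature
F_raw = rng.standard_normal(CF) * N      # first-order statistics
F_tilde = F_raw - N * m                  # centralized first-order statistics

d = np.sqrt(Sigma / r)                   # the relevance-MAP choice of d
l = 1.0 + d**2 * N / Sigma               # l(s) = I + d^2 Sigma^-1 N(s), diagonal
Ez = (d * F_tilde / Sigma) / l           # posterior mean of z(s)
s_map = m + d * Ez                       # factor-analysis point estimate

s_relevance = m + F_tilde / (N + r)      # classical relevance MAP [9]
assert np.allclose(s_map, s_relevance)
```

Algebraically, d E[z] = (Σ/r) F̃ / (Σ (1 + N/r)) = F̃ / (N + r), which is the familiar relevance MAP shift.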

E. Decoupled estimation of v and d

A much more serious problem with joint estimation is that it tends to produce estimates of d which are too small, so that almost all of the speaker variability in a factor analysis training set is accounted for by the term vy in (2) and very little of the variability is accounted for by the term dz. Thus, in practice, the term dz is of little use except in situations where large amounts of enrollment data are available (as we observed in [12]).

It is probable that the reason why joint estimation behaves in this way is that it is a maximum likelihood estimation procedure and v has many more free parameters than d. But as we mentioned in Section I, there is reason to doubt the appropriateness of maximum likelihood estimation in this situation, and there is a good argument which suggests that the term dz ought to be helpful in distinguishing between speakers. The term vy can only capture inter-speaker variability which is confined to a low-dimensional affine subspace of supervector space (namely the subspace containing m which is spanned by the eigenvoices). It is reasonable to believe that the orientation of this subspace reflects attributes which are common to all speakers. No such constraint is imposed on the term dz, so it ought to be capable of capturing attributes which


are unique to individual speakers. Similar considerations led the authors in [19] and [20] to construct speaker recognition systems which operate in the orthogonal complement of the principal components of a large training set. (This orthogonal complement is referred to as the "speaker unique subspace" in [20].)

This raises the question of how to produce a reasonable estimate of d which is not "too small". Since the term dz models residual inter-speaker variability which is not captured by a large set of eigenvoices, this can be achieved by withholding a subset of the training speaker population; the speakers that are withheld serve to estimate d but they play no role in estimating v. Thus, we split the factor analysis training set in two and use the larger of the two sets to estimate m and v and the smaller to estimate d and Σ. We first fit a pure eigenvoice model to the larger training set using the procedures described in Sections II-B.2 and II-B.3. Then, for each speaker s in the residual training set, we calculate the MAP estimate of y(s), namely E[y(s)], as in Section II-B.1. This gives us a preliminary estimate of the speaker's supervector s, namely

    s = m + v E[y(s)].    (4)
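A toy sketch of this decoupled step: computing E[y(s)] under the fitted eigenvoice model, forming the supervector estimate (4), and centralizing the speaker's first-order statistics around it. The centering formula F̃ = F − N s assumed here is the standard one (the precise formulas are those of Section II-A); dimensions and names are illustrative.

```python
import numpy as np

# Decoupled d-estimation step (Section II-E): an eigenvoice model (m, v) has
# been fitted on one subset; each held-out speaker's statistics are then
# centralized around s = m + v E[y(s)] before a pure diagonal model (m = 0)
# is trained on them.
rng = np.random.default_rng(3)
CF, R = 6, 2
m = rng.standard_normal(CF)
v = rng.standard_normal((CF, R))
Sigma = np.ones(CF)

def centralized_stats(N_s, F_raw):
    """Centralize first-order statistics around the eigenvoice supervector."""
    F_tilde = F_raw - N_s * m                       # first, centralize around m
    l = np.eye(R) + v.T @ ((N_s / Sigma)[:, None] * v)
    Ey = np.linalg.solve(l, v.T @ (F_tilde / Sigma))  # MAP estimate of y(s)
    s_hat = m + v @ Ey                              # equation (4)
    return F_raw - N_s * s_hat                      # statistics centered on s

N_s = np.repeat(rng.uniform(5, 20, 3), 2)           # toy occupation counts
F_raw = rng.standard_normal(CF) * N_s               # toy first-order statistics
F_resid = centralized_stats(N_s, F_raw)             # input to diagonal training
```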

We centralize the speaker's Baum-Welch statistics by subtracting the speaker's supervector (that is, we apply the formulas in Section II-A with m replaced by s). Finally, we use these centralized statistics together with the procedures described in Sections II-C.2 and II-C.3 to estimate a pure diagonal model with m = 0. This gives us estimates of d and Σ. Since this training algorithm uses only the diagonal and eigenvoice estimation procedures, it converges rapidly.

III. EXPERIMENTS

A. Enrollment and test data

We used the core condition and the extended data condition (in which 8 conversation sides are available for enrolling each target speaker) of the NIST 2006 speaker recognition evaluation (SRE) for testing [21]. Although we will report results on male speakers as well as female, we used mostly the female trials in the 2006 SRE for our experiments.

B. Feature Extraction

We extracted 19 cepstral coefficients together with a log energy feature using a 25 ms Hamming window and a 10 ms frame advance. These were subjected to feature warping using a 3 s sliding window [22]. ∆ coefficients were then calculated using a 5 frame window, giving a total of 40 features.

C. Factor analysis training data

We trained two gender-dependent UBM's having 1024 Gaussians and gender-dependent factor analysis models having 0, 100 and 300 speaker factors. Except where otherwise indicated, the number of channel factors was fixed at 50.

For training UBM's we used Switchboard II, Phases 2 and 3; Switchboard Cellular, Parts 1 and 2; the Fisher English Corpus, Parts 1 and 2; the NIST 2003 Language Recognition

Evaluation data set; and the NIST 2004 SRE enrollment and test data. For training factor analysis models we used the LDC releases of Switchboard II, Phases 2 and 3; Switchboard Cellular, Parts 1 and 2; and the NIST 2004 SRE data. We used only those speakers for which five or more recordings were available. For decoupled estimation of v and d, we estimated v on the Switchboard data and d on the 2004 SRE data.

In order to ensure strict disjointness between the factor analysis training data and the NIST 2006 SRE data which we used for testing, we made no use of the 2005 SRE data. (For the extended data condition, some of the 2005 data was recycled in 2006. In [23] we reported how failing to keep the training and test sets disjoint could produce extremely misleading results.) Note also that, since the Switchboard corpora consist of English-only data and English is predominant in the NIST 2004 SRE data, the factor analysis training set is biased towards English speakers.

D. Implementation details

The first step in building factor analysis models is to train gender-dependent UBM's in the usual way. Baum-Welch statistics extracted with these UBM's are sufficient statistics for all subsequent processing: hyperparameter estimation, target speaker enrollment and likelihood calculations at verification time.

To estimate v and d we pooled all of the recordings of each speaker in the factor analysis training set and ignored channel effects as in [24]. (The rationale here is that channel effects can be averaged out if sufficiently many recordings are available for each speaker.) In implementing decoupled estimation of v and d, we ran the algorithms in Section II-B to convergence (seven iterations of maximum likelihood estimation and one of minimum divergence estimation) before calculating the speaker supervectors for each training speaker according to (4).
We decoupled the estimation of u from that of v and d as in [25], [24] (rather than using the maximum likelihood procedures in [6], [18]).

Recall that equation (2) can be interpreted as saying that speaker supervectors are normally distributed with mean m and covariance matrix d² + vv*. For the purposes of enrolling target speakers, we interpret this normal distribution as a prior distribution in the sense in which this term is used in Bayesian statistics. Given an enrollment utterance and the hyperparameters m, u, v and d, we enroll a target speaker by calculating the posterior distribution of the hidden variables x, y and z, using the maximum a posteriori estimate of m + vy + dz as a point estimate of the speaker's supervector. (We do not use the point estimate of x since the channel effects in the enrollment data are irrelevant.) As is generally the case with a Gaussian prior, the posterior is also Gaussian and can be calculated in closed form [1]. The case where d = 0 and u = 0 is treated in Section II-B.1; the case where u = 0 and v = 0 is treated in Section II-C.1; the case where d ≠ 0, v = 0, u = 0 and there is a single enrollment utterance is treated in the Appendix to [26];


the case where d ≠ 0, v ≠ 0, u = 0 and there is a single enrollment utterance is formally equivalent to this – one only has to replace u by the matrix [u v];

finally, the general case in which there are multiple enrollment utterances is treated in Section III of [6], but we have found that pooling the Baum-Welch statistics from the various utterances together (as if we had a single utterance for enrollment) works just as well as the complicated calculation described there. Note that our enrollment procedure results in a point estimate of each target speaker's supervector, rather than a posterior distribution as in the Bayesian approach that we originally attempted [18].

At verification time, likelihoods were evaluated according to (19) in [24]. (We did not use the correction (20) in [24]. This is a minor technical issue which is discussed at length in [12].) Thus, we account for channel effects in test utterances by integrating over the channel factors x in (3) rather than by using a point estimate of the channel factors for each test utterance as other authors do. If a test utterance is sufficiently long, the posterior distribution of the channel factors will be sharply peaked and using a point estimate of the channel factors (either a MAP estimate or a maximum likelihood estimate) will give essentially the same result as integrating over the channel factors. But in the case of short test utterances (say 10 seconds of speech), integrating over channel factors seems to be the right thing to do. (Since the integral in question is Gaussian there is no difficulty in evaluating it in closed form.) It was reported in [27], [28], [29] that channel factors are unhelpful for tasks involving short test utterances but this does not agree with our experience. In [13] we present some good results on 10 second test conditions; we believe that our success can be traced to not attempting to obtain point estimates of channel factors under these conditions.
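A sketch of this enrollment computation, simplified by taking u = 0 (the channel factor point estimate is not used anyway): stack the speaker loadings as [v d], pool the Baum-Welch statistics of all enrollment utterances, and take the posterior mean of the stacked hidden vector. Toy dimensions; all names are illustrative.

```python
import numpy as np

# Target-speaker enrollment as MAP point estimation (Section III-D), u = 0:
# with V = [v d] and w = [y; z] having a standard normal prior, the speaker's
# supervector is estimated as m + V E[w] from the pooled enrollment statistics.
rng = np.random.default_rng(4)
CF, R = 6, 2
m = rng.standard_normal(CF)
v = rng.standard_normal((CF, R))
d = 0.3 * np.abs(rng.standard_normal(CF))     # diagonal of d, stored as a vector
Sigma = np.ones(CF)

V = np.hstack([v, np.diag(d)])                # stacked loading matrix [v d]

# Pool the Baum-Welch statistics of four enrollment utterances.
N_utt = [np.repeat(rng.uniform(3, 15, 3), 2) for _ in range(4)]
F_utt = [rng.standard_normal(CF) * n for n in N_utt]
N_s, F_s = sum(N_utt), sum(F_utt)
F_tilde = F_s - N_s * m                       # centralized first-order statistics

l = np.eye(R + CF) + V.T @ ((N_s / Sigma)[:, None] * V)
w_map = np.linalg.solve(l, V.T @ (F_tilde / Sigma))  # posterior mean of [y; z]
s_target = m + V @ w_map                      # point estimate of the supervector
```

Pooling amounts to nothing more than summing the per-utterance statistics before the posterior calculation, which is the simplification the text describes.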
Finally, the denominator of the log likelihood ratio statistic used for verification was calculated in exactly the same way as the numerator, with the UBM supervector used in place of the hypothesized speaker's supervector.

E. Imposters

The verification decision scores obtained with the factor analysis models were normalized using zt-norm. As in [12], we used 283 t-norm speakers in the female case and 227 in the male case. We used 1000 z-norm utterances for each gender. The imposters were chosen at random from the factor analysis training data. The reasons for using such a large number of z-norm utterances are explained in [12].

F. Results

The results of our experiments on the female portion of the common subset of the core condition of the NIST 2006 SRE are summarized in Table I. (EER refers to the equal error rate and DCF to the minimum value of the NIST detection cost function. The common subset consists of English language trials only. All results in this paper were obtained using version


5 of the 2006 SRE answer key.) There are some blank entries in the table because decoupled estimation applies only in the case where both v and d are non-zero.

TABLE I
Results obtained on the core condition of the NIST 2006 SRE (female speakers, English language trials).

                              Joint Estimation    Decoupled
                              EER     DCF         EER     DCF
100 speaker factors, d ≠ 0    4.4%    0.027       3.9%    0.022
300 speaker factors, d ≠ 0    4.1%    0.024       3.6%    0.021
300 speaker factors, d = 0    3.9%    0.024       –       –
0 speaker factors, d ≠ 0      5.2%    0.027       –       –

The best result is

obtained with 300 speaker factors and decoupled estimation. It is apparent that, contrary to our conclusion in [12], the term dz in (2) can play a useful role in restricted data tasks after all. There is an anomaly in the joint estimation column: in the 300 speaker factor case we obtained a better EER by setting d = 0 than by joint estimation of v and d. We attribute this to the convergence issue mentioned in Section II-D.

Table II gives the corresponding results on all trials of the female portion of the core condition. Again, the best results are obtained with 300 speaker factors and decoupled estimation.

TABLE II
Results obtained on the core condition of the NIST 2006 SRE (female speakers, all trials).

                              Joint Estimation    Decoupled
                              EER     DCF         EER     DCF
100 speaker factors, d ≠ 0    5.9%    0.032       4.9%    0.027
300 speaker factors, d ≠ 0    5.6%    0.030       4.6%    0.025
300 speaker factors, d = 0    5.2%    0.028       –       –
0 speaker factors, d ≠ 0      7.2%    0.034       –       –

Comparing the second row of Table II with that of Table I

we see that, if d is estimated with decoupled estimation, then the term dz in (2) is particularly effective in modeling non-English speakers. This is to be expected since we used a large amount of English-only data (namely the Switchboard corpora) to estimate v.

We replicated these experiments on the female trials of the extended data condition. The results are summarized in Tables III and IV. Patterns similar to those in Tables I and II are evident.

TABLE III
Results obtained on the extended data condition of the NIST 2006 SRE (female speakers, English language trials).

                              Joint Estimation    Decoupled
                              EER     DCF         EER     DCF
100 speaker factors, d ≠ 0    2.2%    0.012       2.1%    0.011
300 speaker factors, d ≠ 0    2.1%    0.014       1.9%    0.011
300 speaker factors, d = 0    2.1%    0.014       –       –
0 speaker factors, d ≠ 0      3.1%    0.017       –       –

We report results on male speakers in Table V. Note that these results are much better than the results we obtained for female speakers.

TABLE IV
Results obtained on the extended data condition of the NIST 2006 SRE (female speakers, all trials).

                              Joint Estimation    Decoupled
                              EER     DCF         EER     DCF
100 speaker factors, d ≠ 0    2.5%    0.012       2.3%    0.014
300 speaker factors, d ≠ 0    2.7%    0.014       2.3%    0.012
300 speaker factors, d = 0    2.7%    0.015       –       –
0 speaker factors, d ≠ 0      3.6%    0.016       –       –

TABLE V
Results obtained on the core condition and the extended data condition of the NIST 2006 SRE for male speakers (50 channel factors, 300 speaker factors, d ≠ 0, decoupled estimation).

                              EER     DCF
Core condition, English       2.1%    0.013
Core condition, all trials    4.2%    0.020
Extended data, English        1.4%    0.006
Extended data, all trials     1.7%    0.008

G. Note on Baum-Welch statistics

The results we have obtained using speaker factors are clearly much better than those obtained using d alone, but the reader may have noticed that the figures presented in the fourth rows of Tables I and II are not quite as good as the best results that have been reported with comparable stand-alone GMM/UBM systems as in [30], [5], [31]. These systems are comparable because they use relevance MAP for speaker enrollment and channel factors to compensate for intersession variability. As we mentioned in Section II-C.1, relevance MAP is essentially a special type of diagonal factor analysis model. The reason for the discrepancy in performance is that we use the UBM to extract Baum-Welch statistics in our system rather than speaker-dependent GMM's. It turns out that, in the case of a diagonal factor analysis model, using speaker-dependent GMM's does indeed produce better results. For example, on the English language trials in the core condition, a diagonal model with 100 channel factors produces an EER of 2.8% for male speakers, which is similar to the results presented in [30], [5], [31] (but not as good as the result in the first line of Table V).

However, for a factor analysis model with 300 speaker factors, using speaker-dependent GMM's (estimated with speaker factors) to extract Baum-Welch statistics turns out to be harmful. For example, on the English language trials in the core condition, a factor analysis model with 300 speaker factors and 100 channel factors produces an EER of 4.2% for male speakers if the Baum-Welch statistics are extracted with speaker-dependent GMM's; on the other hand, an EER of 1.4% is obtained if the UBM is used for this purpose. This is the reason why we have always used the UBM to extract Baum-Welch statistics in our work on factor analysis. (The extraordinarily low error rate of 1.4% is attributable to using 100 channel factors rather than 50 as in Table V.)
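The Baum-Welch statistics themselves are straightforward to extract once the UBM is in hand. A minimal sketch with a toy diagonal-covariance GMM standing in for the UBM (all names illustrative): the zeroth-order statistic N_c is the summed posterior occupancy of mixture component c, and F_c is the posterior-weighted sum of the frames.

```python
import numpy as np

rng = np.random.default_rng(5)
C_mix, F_dim, T = 4, 3, 200

weights = np.full(C_mix, 1.0 / C_mix)        # toy UBM parameters
means = rng.standard_normal((C_mix, F_dim))
variances = np.ones((C_mix, F_dim))
X = rng.standard_normal((T, F_dim))          # one utterance of feature vectors

# Per-frame Gaussian log-likelihoods, then posteriors (responsibilities).
log_p = -0.5 * (((X[:, None, :] - means) ** 2) / variances
                + np.log(2 * np.pi * variances)).sum(axis=2) + np.log(weights)
log_p -= log_p.max(axis=1, keepdims=True)    # stabilize before exponentiating
gamma = np.exp(log_p)
gamma /= gamma.sum(axis=1, keepdims=True)    # gamma[t, c] = P(c | x_t)

N_c = gamma.sum(axis=0)                      # zeroth-order statistics N_c
F_c = gamma.T @ X                            # first-order statistics F_c
```

Replacing the UBM parameters above with a speaker-dependent GMM changes only where `means`, `variances` and `weights` come from, which is exactly the design choice this subsection discusses.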

IV. RESULTS OBTAINED WITH A LARGE FACTOR ANALYSIS MODEL

In this section we report results obtained on the NIST 2006 SRE test set by increasing the dimensions of the male and female factor analysis models. We increased the number of Gaussians from 1024 to 2048, we increased the dimension of the acoustic feature vectors from 40 to 60 by appending ∆∆ coefficients, and we increased the number of channel factors from 50 to 100. We kept the number of speaker factors at 300 because previous experience has shown that using larger numbers of speaker factors is not helpful [12].

In presenting the results that we obtained with the large factor analysis models, we will break them out by gender because (as we saw in the previous section) there are large differences in performance between males and females. Some insight into this phenomenon can be gained by inspecting the way the male and female factor analysis models fit the training data. Table VI gives the traces of the matrices vv*, d² and uu* for the two gender-dependent factor analysis models. It is clear that, as measured both by the trace of vv* and by the trace of d², there is substantially greater variability among male speakers than among female speakers. Thus, distinguishing between male speakers seems to be intrinsically easier than distinguishing between female speakers, at least if the feature set consists of cepstral coefficients. (In our earlier work we used 12 cepstral coefficients. We increased the number of cepstral coefficients to 19 in the present work in the hope that this would narrow the gender gap.) Perhaps even more surprisingly, we observe that the trace of uu* is substantially larger for the male model than for the female model, which seems to indicate that the channel model (3) gives a better fit in the case of male speech.
However, although the figures in the fourth row of Table VI are not really comparable with the others, they suggest that most of the variability in the data is not captured by either the speaker model (2) or the channel model (3), and this residual variability is larger in the case of males than in the case of females.

TABLE VI
Speaker and channel variability in male and female factor analysis models.

            Male speakers    Female speakers
tr(vv*)     1075             975
tr(d²)      334              306
tr(uu*)     501              433
tr(Σ)       25,840           23,535
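Since tr(vv*) is just the squared Frobenius norm of v, diagnostics like those of Table VI can be computed directly from the hyperparameters without forming any CF × CF matrix. A toy sketch with random hyperparameters (illustrative names and sizes):

```python
import numpy as np

rng = np.random.default_rng(7)
CF = 10
v = rng.standard_normal((CF, 3))   # eigenvoice matrix
u = rng.standard_normal((CF, 2))   # eigenchannel matrix
d = np.abs(rng.standard_normal(CF))  # diagonal of d, stored as a vector
Sigma = np.ones(CF)                # diagonal covariance, stored as a vector

tr_vv = np.sum(v * v)        # tr(v v*): total eigenvoice variability
tr_d2 = np.sum(d * d)        # tr(d^2): residual speaker variability
tr_uu = np.sum(u * u)        # tr(u u*): channel variability
tr_Sigma = np.sum(Sigma)     # tr(Sigma): frame-level residual variability
```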

A. Core condition

The results we obtained on the core condition of the NIST 2006 SRE with the large factor analysis model just described are summarized in Table VII. Even though they were obtained with a stand-alone system, these results compare very favorably with the best results on the 2006 core condition that have been reported in the literature, namely those obtained by STBU [32], SRI [33] and MIT/IBM [4]. The STBU system achieved an EER of 2.3% (English language trials only, results pooled over male and female speakers) by fusing 10 subsystems

TABLE VII
Results obtained with a large factor analysis model on the core condition of the NIST 2006 SRE.

                  Male speakers    Female speakers
                  EER     DCF      EER     DCF
English trials    1.5%    0.011    2.7%    0.017
All trials        3.0%    0.017    3.3%    0.020

(cepstral and MLLR); SRI achieved an EER of 2.6% by fusing 8 subsystems (cepstral, MLLR and higher level); and MIT/IBM achieved an EER of 2.7% by fusing 9 subsystems (cepstral, MLLR and higher level). In comparing our results with those of STBU, it should be noted that the individual subsystems of the STBU system were trained on pre-2005 data but the fusion parameters were estimated using the data made available for the 2005 NIST SRE. (Robust fusion was a key ingredient in the success of the STBU system in the 2006 SRE.) On the other hand, our reason for excluding the 2005 data from the factor analysis training set was simply to enable us to experiment properly with the extended data condition in the 2006 evaluation set (as we explained in Section III-C). Had we included the 2005 data in factor analysis training, the proportion of Mixer data in the factor analysis training set would have increased from 20% to 50% and this would presumably have resulted in even better performance on the 2006 evaluation set.

B. Extended data condition

Table VIII summarizes the results obtained with the large factor analysis model on the extended data condition of the 2006 NIST SRE.

TABLE VIII
Results obtained with a large factor analysis model on the extended data condition of the NIST 2006 SRE.

                  Male speakers    Female speakers
                  EER     DCF      EER     DCF
English trials    0.8%    0.004    1.6%    0.009
All trials        1.1%    0.007    1.8%    0.009

The best results on this task in the literature

are those reported by MIT/IBM [4], where EER's of 1.5% (English language trials, male and female results pooled) and 2.6% (all trials) were obtained by fusing 9 subsystems (cepstral, MLLR and higher level). It is interesting to note that, although the extended data task was intended to encourage research into higher level systems, and higher level systems (including an MLLR system) play an important role in reducing the error rates in [4], we were able to obtain better results using cepstral features alone.

C. Cross-channel condition

In the cross-channel condition, the enrollment data for each target speaker consists of a conversation side extracted from a recording of a telephone conversation but the test data consists of recordings made using one of 8 different microphones. (The

identity of the microphone is not given. The cross-channel task is described in detail in [34].) We used the development data provided by NIST to estimate 100 eigenchannels to model the effects of the various microphones and we appended these eigenchannels to the 100 eigenchannels that we had previously estimated on telephone speech. Thus, the factor analysis model that we used at recognition time had 200 channel factors rather than 100. Since the enrollment data in this task consists of telephone speech, we did not have to make any change to the factor analysis model used at enrollment time. The only other modification that we made was to choose the z-norm utterances from the cross-channel development data rather than from the factor analysis training data described in Section III-C.

The results we obtained on the cross-channel test data are summarized in Table IX.

TABLE IX
Results obtained with a large factor analysis model on the cross-channel condition of the NIST 2006 SRE.

                  Male speakers    Female speakers
                  EER     DCF      EER     DCF
English trials    2.6%    0.011    2.9%    0.014
All trials        2.5%    0.011    3.3%    0.015

These results are a good deal

better than the best results that have been reported on this task, namely an EER of 4.0% obtained by MIT [34]. The MIT results were obtained by fusing two cepstral systems (a support vector machine with nuisance attribute projection and a GMM/UBM system with channel factors) and speech enhancement played an important role in reducing error rates. Our system makes use of no special-purpose signal processing; it relies solely on a large number of channel factors to compensate for transducer effects. The results of other auxiliary microphone tests (where microphone speech is used at enrollment time as well as at verification time) can be found in [13].

V. CONCLUSION

We have shown how careful modeling of inter-speaker variability enables a stand-alone joint factor analysis system to perform as well as fusions of large numbers of systems of other types (which typically include models of inter-session variability but not of inter-speaker variability). Of course this achievement comes at a cost: the implementation is more complicated and painstaking experimentation is needed. But our approach beats the state of the art on the NIST 2006 extended data task (without using higher level features) and on the cross-channel task (without using speech enhancement).

The principal departure from our earlier work is that we abandoned (at least partially) the maximum likelihood principle for estimating the hyperparameters which define a joint factor analysis model. This decision was driven by the results in [12], which led us to conclude that the maximum likelihood principle could not produce useful estimates of the hyperparameter d in (2), apparently because (2) is not a realistic model of inter-speaker variability. There is an obvious


parallel here with speech recognition: Hidden Markov Models are not realistic models of acoustic-phonetic phenomena, and maximum likelihood estimation does not perform as well as other estimation criteria (such as Maximum Mutual Information or Minimum Phone Error). This raises the question of whether similar discriminative training criteria can be used to estimate factor analysis hyperparameters. First steps in this direction have been taken in [35], [36], but the results suggest that getting this type of approach to work in speaker recognition may require a large effort, just as it has in speech recognition.

REFERENCES

[1] A. O'Hagan and J. Forster, Kendall's Advanced Theory of Statistics, Vol. 2B. London, UK: Arnold, 2004.
[2] J. A. Bilmes, "Graphical models and automatic speech recognition," in Mathematical Foundations of Speech and Language Processing, M. Johnson, S. P. Khudanpur, M. Ostendorf, and R. Rosenfeld, Eds. New York, NY: Springer-Verlag, 2004, pp. 191–246.
[3] S. J. D. Prince and J. H. Elder, "Probabilistic linear discriminant analysis for inferences about identity," in Proc. ICCV 2007, Rio de Janeiro, Brazil, Oct. 2007.
[4] W. M. Campbell, D. E. Sturim, W. Shen, D. A. Reynolds, and J. Navratil, "The MIT-LL/IBM 2006 speaker recognition system: High-performance reduced-complexity recognition," in Proc. ICASSP 2007, Honolulu, HI, Apr. 2007, pp. IV-217–IV-220.
[5] F. Castaldo, D. Colibro, E. Dalmasso, P. Laface, and C. Vair, "Compensation of nuisance factors for speaker and language recognition," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 1969–1978, 2007.
[6] P. Kenny, "Joint factor analysis of speaker and session variability: Theory and algorithms," Tech. Report CRIM-06/08-13, 2005. [Online]. Available: http://www.crim.ca/perso/patrick.kenny
[7] J.-L. Gauvain and C.-H. Lee, "Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains," IEEE Trans. Speech Audio Processing, vol. 2, pp. 291–298, Apr. 1994.
[8] P. Kenny, G. Boulianne, and P. Dumouchel, "Eigenvoice modeling with sparse training data," IEEE Trans. Speech Audio Processing, vol. 13, no. 3, pp. 345–359, May 2005.
[9] D. Reynolds, T. Quatieri, and R. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, pp. 19–41, 2000.
[10] N. Dehak, P. Kenny, and P. Dumouchel, "Modeling prosodic features with joint factor analysis for speaker verification," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 2095–2103, Sept. 2007.
[11] S. Lucey and T. Chen, "Improved speaker verification through probabilistic subspace adaptation," in Proc. Eurospeech, Geneva, Switzerland, Sept. 2003, pp. 2021–2024.
[12] P. Kenny, N. Dehak, R. Dehak, V. Gupta, and P. Dumouchel, "The role of speaker factors in the NIST extended data task," in Proc. IEEE Odyssey Workshop, Stellenbosch, South Africa, Jan. 2008.
[13] P. Kenny, N. Dehak, P. Ouellet, V. Gupta, and P. Dumouchel, "Development of the primary CRIM system for the NIST 2008 speaker recognition evaluation," in Proc. Interspeech, Brisbane, Australia, Sept. 2008.
[14] M. Gales, "Cluster adaptive training of hidden Markov models," IEEE Trans. Speech Audio Processing, vol. 8, pp. 417–428, 2000.
[15] D. J. C. MacKay, "Comparison of approximate methods for handling hyperparameters," Neural Computation, vol. 11, no. 5, pp. 1035–1068, 1999.
[16] M. Tipping and C. Bishop, "Mixtures of probabilistic principal component analysers," Neural Computation, vol. 11, pp. 435–474, 1999.
[17] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, "Speaker adaptation using an eigenphone basis," IEEE Trans. Speech Audio Processing, vol. 12, no. 6, pp. 579–589, Nov. 2004.
[18] ——, "Speaker and session variability in GMM-based speaker verification," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 4, pp. 1448–1460, May 2007.
[19] S. Kajarekar, "Four weightings and a fusion," in Proc. IEEE Workshop on Automatic Speech Recognition and Understanding, 2005, pp. 17–22.
[20] H. Aronowitz, "Speaker recognition using Kernel-PCA and inter-session variability modeling," in Proc. Interspeech 2007, Antwerp, Belgium, Aug. 2007.
[21] (2006) The NIST year 2006 speaker recognition evaluation plan. [Online]. Available: http://www.nist.gov/speech/tests/spk/2006/index.htm
[22] J. Pelecanos and S. Sridharan, "Feature warping for robust speaker verification," in Proc. Speaker Odyssey, Crete, Greece, June 2001, pp. 213–218.
[23] P. Kenny and P. Dumouchel, "Disentangling speaker and channel effects in speaker verification," in Proc. ICASSP, Montreal, Canada, May 2004, pp. I-37–I-40. [Online]. Available: http://www.crim.ca/perso/patrick.kenny
[24] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, "Joint factor analysis versus eigenchannels in speaker recognition," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 4, pp. 1435–1447, May 2007.
[25] ——, "Factor analysis simplified," in Proc. ICASSP 2005, Philadelphia, PA, Mar. 2005, pp. 637–640. [Online]. Available: http://www.crim.ca/perso/patrick.kenny
[26] S.-C. Yin, R. Rose, and P. Kenny, "A joint factor analysis approach to progressive model adaptation in text-independent speaker verification," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 1999–2010, Sept. 2007.
[27] B. Fauve, N. Evans, N. Pearson, J.-F. Bonastre, and J. Mason, "Influence of task duration in text-independent speaker verification," in Proc. Interspeech 2007, Antwerp, Belgium, Aug. 2007.
[28] B. Fauve, N. Evans, and J. Mason, "Improving the performance of text-independent short-duration SVM- and GMM-based speaker verification," in Proc. IEEE Odyssey Workshop, Stellenbosch, South Africa, Jan. 2008.
[29] R. Vogt, C. Lustri, and S. Sridharan, "Factor analysis modeling for speaker verification with short utterances," in Proc. IEEE Odyssey Workshop, Stellenbosch, South Africa, Jan. 2008.
[30] L. Burget, P. Matejka, O. Glembek, P. Schwarz, and J. Cernocky, "Analysis of feature extraction and channel compensation in GMM speaker recognition system," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 1979–1986, Sept. 2007.
[31] B. G. B. Fauve, D. Matrouf, N. Scheffer, J.-F. Bonastre, and J. S. D. Mason, "State-of-the-art performance in text-independent speaker verification through open-source software," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 1960–1968, 2007.
[32] N. Brummer, L. Burget, J. Cernocky, O. Glembek, F. Grezl, M. Karafiat, D. van Leeuwen, P. Matejka, P. Schwarz, and A. Strasheim, "Fusion of heterogeneous speaker recognition systems in the STBU submission for the NIST speaker recognition evaluation 2006," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 7, pp. 2072–2084, Sept. 2007.
[33] A. Stolcke, E. Shriberg, L. Ferrer, S. Kajarekar, K. Sonmez, and G. Tur, "Speech recognition as feature extraction for speaker recognition," in Proc. SAFE 2007: Workshop on Signal Processing Applications for Public Security and Forensics, Washington, D.C., 2007, pp. 39–43.
[34] D. E. Sturim, W. M. Campbell, D. A. Reynolds, R. B. Dunn, and T. F. Quatieri, "Robust speaker recognition with cross-channel data: MIT-LL results on the 2006 NIST SRE auxiliary microphone task," in Proc. ICASSP 2007, Honolulu, HI, Apr. 2007, pp. IV-49–IV-52.
[35] A. Solomonoff, W. Campbell, and I. Boardman, "Advances in channel compensation for SVM speaker recognition," in Proc. ICASSP 2005, Philadelphia, PA, Mar. 2005.
[36] R. Vogt, S. Kajarekar, and S. Sridharan, "Discriminant NAP for SVM speaker recognition," in Proc. IEEE Odyssey Workshop, Stellenbosch, South Africa, Jan. 2008.