t-distributed stochastic neighbor embedding
Technique for dimensionality reduction

t-distributed stochastic neighbor embedding (t-SNE) is a statistical technique for nonlinear dimensionality reduction that visualizes high-dimensional data by mapping each point into two or three dimensions so that similar objects appear close together. It is based on stochastic neighbor embedding (SNE), originally developed by Geoffrey Hinton and Sam Roweis; Laurens van der Maaten and Hinton later proposed the t-distributed variant. t-SNE models probabilities over pairs of high-dimensional objects and seeks to minimize the Kullback–Leibler divergence between this distribution and a corresponding distribution over pairs of points in the low-dimensional map. It commonly uses the Euclidean distance but can be adapted to other metrics. Applications span genomics, natural language processing, and cancer research. The algorithm requires $O(n^2)$ time and space for a dataset of $n$ elements.


Details

Given a set of $N$ high-dimensional objects $\mathbf{x}_1, \dots, \mathbf{x}_N$, t-SNE first computes probabilities $p_{ij}$ that are proportional to the similarity of objects $\mathbf{x}_i$ and $\mathbf{x}_j$, as follows.

For $i \neq j$, define

$$p_{j\mid i} = \frac{\exp(-\lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert \mathbf{x}_i - \mathbf{x}_k \rVert^2 / 2\sigma_i^2)}$$

and set $p_{i\mid i} = 0$. Note that the denominator above ensures $\sum_j p_{j\mid i} = 1$ for all $i$.
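
Concretely, this step can be sketched in a few lines of NumPy. This is a minimal, unoptimized sketch; the function and variable names are mine, and the bandwidths $\sigma_i$ are taken as given here, with their selection described below:

```python
import numpy as np

def conditional_probabilities(X, sigma):
    """Row-normalized Gaussian affinities p_{j|i}.

    X:     (N, D) array of high-dimensional points.
    sigma: (N,) array of per-point Gaussian bandwidths sigma_i.
    """
    # Squared Euclidean distances ||x_i - x_j||^2 for all pairs.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Row i uses the Gaussian kernel with bandwidth sigma_i.
    affinities = np.exp(-sq_dists / (2.0 * sigma[:, None] ** 2))
    np.fill_diagonal(affinities, 0.0)  # p_{i|i} = 0 by definition
    # Normalize each row so that sum_j p_{j|i} = 1.
    return affinities / affinities.sum(axis=1, keepdims=True)
```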

As van der Maaten and Hinton explained: "The similarity of datapoint $x_j$ to datapoint $x_i$ is the conditional probability, $p_{j|i}$, that $x_i$ would pick $x_j$ as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at $x_i$."[12]

Now define

$$p_{ij} = \frac{p_{j\mid i} + p_{i\mid j}}{2N}$$

This symmetrization is motivated as follows: since each of the $N$ samples is equally likely, the marginal probabilities are estimated as $p_i = p_j = 1/N$, so the conditional probabilities can be written as $p_{i\mid j} = N p_{ij}$ and $p_{j\mid i} = N p_{ji}$. Since $p_{ij} = p_{ji}$ by construction, averaging the two conditionals and dividing by $N$ yields the formula above.

Also note that $p_{ii} = 0$ and $\sum_{i,j} p_{ij} = 1$.
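
A minimal sketch of the symmetrization step, continuing the sketch above (the function name is mine; `P_cond[i, j]` holds $p_{j\mid i}$):

```python
def joint_probabilities(P_cond):
    """Symmetrized joint probabilities p_ij = (p_{j|i} + p_{i|j}) / (2N)."""
    N = P_cond.shape[0]
    P = (P_cond + P_cond.T) / (2.0 * N)
    # By construction p_ii = 0 and the entries of P sum to 1.
    return P
```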

The bandwidth of the Gaussian kernels $\sigma_i$ is set, via the bisection method, so that the perplexity of the conditional distribution equals a predefined value. As a result, the bandwidth is adapted to the density of the data: smaller values of $\sigma_i$ are used in denser parts of the data space. The perplexity of the conditional distribution $P_i$ increases with its entropy; the two are related by

$$\mathrm{Perp}(P_i) = 2^{H(P_i)}$$

where $H(P_i)$ is the Shannon entropy

$$H(P_i) = -\sum_j p_{j|i} \log_2 p_{j|i}.$$

The perplexity is a hand-chosen parameter of t-SNE, and as the authors state, "perplexity can be interpreted as a smooth measure of the effective number of neighbors. The performance of SNE is fairly robust to changes in the perplexity, and typical values are between 5 and 50."[13]
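
A minimal sketch of this bandwidth search, under my own choice of bracket endpoints and iteration count (production implementations typically search over the precision $\beta_i = 1/2\sigma_i^2$ instead):

```python
import numpy as np

def perplexity(p):
    """Perp(P_i) = 2^{H(P_i)}, with H the Shannon entropy in bits."""
    p = p[p > 0.0]
    return 2.0 ** (-np.sum(p * np.log2(p)))

def find_sigma(sq_dists_i, target, lo=1e-10, hi=1e4, n_iter=64):
    """Bisect for the sigma_i whose conditional distribution has the
    target perplexity; sq_dists_i holds ||x_i - x_j||^2 for all j != i."""
    for _ in range(n_iter):
        sigma = 0.5 * (lo + hi)
        # Shift by the min distance for numerical stability; the
        # normalized probabilities are unchanged by this shift.
        p = np.exp(-(sq_dists_i - sq_dists_i.min()) / (2.0 * sigma ** 2))
        p /= p.sum()
        if perplexity(p) > target:   # perplexity grows with sigma
            hi = sigma
        else:
            lo = sigma
    return 0.5 * (lo + hi)
```

The bisection works because perplexity is monotonically increasing in $\sigma_i$: a wider kernel spreads probability over more neighbors, raising the entropy.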

Since the Gaussian kernel uses the Euclidean distance $\lVert x_i - x_j \rVert$, it is affected by the curse of dimensionality: in high-dimensional data, when distances lose the ability to discriminate, the $p_{ij}$ become too similar (asymptotically, they would converge to a constant). It has been proposed to alleviate this by adjusting the distances with a power transform based on the intrinsic dimension of each point.[14]

t-SNE aims to learn a $d$-dimensional map $\mathbf{y}_1, \dots, \mathbf{y}_N$ (with $\mathbf{y}_i \in \mathbb{R}^d$ and $d$ typically chosen as 2 or 3) that reflects the similarities $p_{ij}$ as well as possible. To this end, it measures similarities $q_{ij}$ between two points in the map, $\mathbf{y}_i$ and $\mathbf{y}_j$, using a very similar approach. Specifically, for $i \neq j$, define $q_{ij}$ as

$$q_{ij} = \frac{(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2)^{-1}}{\sum_k \sum_{l \neq k} (1 + \lVert \mathbf{y}_k - \mathbf{y}_l \rVert^2)^{-1}}$$

and set $q_{ii} = 0$. Here a heavy-tailed Student t-distribution (with one degree of freedom, which is the same as a Cauchy distribution) is used to measure similarities between low-dimensional points, in order to allow dissimilar objects to be modeled far apart in the map.
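
This definition can likewise be sketched directly (names are mine):

```python
import numpy as np

def low_dim_affinities(Y):
    """Student-t (Cauchy) similarities q_ij for map points Y of shape (N, d)."""
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)   # heavy-tailed kernel (1 + ||y_i - y_j||^2)^{-1}
    np.fill_diagonal(inv, 0.0)     # q_ii = 0
    return inv / inv.sum()         # normalize over all ordered pairs k != l
```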

The locations of the points $\mathbf{y}_i$ in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution $P$ from the distribution $Q$, that is:

$$\mathrm{KL}(P \parallel Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}$$

The minimization of the Kullback–Leibler divergence with respect to the points y i {\displaystyle \mathbf {y} _{i}} is performed using gradient descent. The result of this optimization is a map that reflects the similarities between the high-dimensional inputs.
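
The gradient has the closed form given in the original paper, $\partial C / \partial \mathbf{y}_i = 4 \sum_j (p_{ij} - q_{ij})(\mathbf{y}_i - \mathbf{y}_j)(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2)^{-1}$. A minimal plain-gradient-descent sketch follows (names and hyperparameters are mine; real implementations add momentum and an "early exaggeration" phase):

```python
import numpy as np

def kl_gradient(P, Y):
    """Gradient of KL(P || Q) with respect to the map points Y (shape (N, d))."""
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()
    W = (P - Q) * inv                       # weight on each pair (i, j)
    # sum_j W_ij (y_i - y_j), vectorized over all i.
    return 4.0 * (W.sum(axis=1)[:, None] * Y - W @ Y)

def tsne_descent(P, d=2, lr=100.0, n_steps=500, seed=0):
    """Plain gradient descent on randomly initialized map points."""
    rng = np.random.default_rng(seed)
    Y = 1e-4 * rng.standard_normal((P.shape[0], d))  # small random init
    for _ in range(n_steps):
        Y -= lr * kl_gradient(P, Y)
    return Y
```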

Output

While t-SNE plots often seem to display clusters, the visual clusters can be strongly influenced by the chosen parameterization (especially the perplexity), so a good understanding of the parameters of t-SNE is needed. Such "clusters" can be shown to appear even in structured data with no clear clustering,[15] and so may be false findings. Similarly, the size of clusters produced by t-SNE is not informative, and neither is the distance between clusters.[16] Thus, interactive exploration may be needed to choose parameters and validate results.[17][18] It has been shown that t-SNE can often recover well-separated clusters, and that, with special parameter choices, it approximates a simple form of spectral clustering.[19]
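
As a minimal sketch of such exploration with scikit-learn (see Software below), one can embed the same data at several perplexities and compare the results; the data and parameter values here are illustrative only:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 50))   # toy data with no cluster structure

for perplexity in (5, 30, 50):
    Y = TSNE(n_components=2, perplexity=perplexity,
             random_state=0).fit_transform(X)
    # Plot or inspect Y for each setting; apparent "clusters" that change
    # drastically with perplexity should not be over-interpreted.
```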

Software

  • A C++ implementation of Barnes-Hut t-SNE is available on the GitHub account of one of the original authors.
  • The R package Rtsne implements t-SNE in R.
  • ELKI contains t-SNE, also with the Barnes-Hut approximation.
  • scikit-learn, a popular machine learning library in Python, implements t-SNE with both exact solutions and the Barnes-Hut approximation (see the usage sketch after this list).
  • TensorBoard, the visualization kit associated with TensorFlow, also implements t-SNE (online version).
  • The Julia package TSne implements t-SNE.
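
A minimal usage sketch for the scikit-learn entry above (the parameter values are illustrative, not recommendations):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)              # 1797 samples, 64 features
Y = TSNE(n_components=2, perplexity=30.0, init="pca",
         random_state=42).fit_transform(X)
print(Y.shape)                                   # (1797, 2)
```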

References

  1. Hinton, Geoffrey; Roweis, Sam (January 2002). Stochastic neighbor embedding (PDF). Neural Information Processing Systems. https://cs.nyu.edu/~roweis/papers/sne_final.pdf

  2. van der Maaten, L.J.P.; Hinton, G.E. (Nov 2008). "Visualizing Data Using t-SNE" (PDF). Journal of Machine Learning Research. 9: 2579–2605. http://jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf

  3. Gashi, I.; Stankovic, V.; Leita, C.; Thonnard, O. (2009). "An Experimental Study of Diversity with Off-the-shelf AntiVirus Engines". Proceedings of the IEEE International Symposium on Network Computing and Applications: 4–11.

  4. Hamel, P.; Eck, D. (2010). "Learning Features from Music Audio with Deep Belief Networks". Proceedings of the International Society for Music Information Retrieval Conference: 339–344.

  5. Jamieson, A.R.; Giger, M.L.; Drukker, K.; Lui, H.; Yuan, Y.; Bhooshan, N. (2010). "Exploring Nonlinear Feature Space Dimension Reduction and Data Representation in Breast CADx with Laplacian Eigenmaps and t-SNE". Medical Physics. 37 (1): 339–351. doi:10.1118/1.3267037. PMC 2807447. PMID 20175497. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2807447

  6. Wallach, I.; Liliean, R. (2009). "The Protein-Small-Molecule Database, A Non-Redundant Structural Resource for the Analysis of Protein-Ligand Binding". Bioinformatics. 25 (5): 615–620. doi:10.1093/bioinformatics/btp035. PMID 19153135. https://doi.org/10.1093%2Fbioinformatics%2Fbtp035

  7. Balamurali, Mehala; Silversides, Katherine L.; Melkumyan, Arman (2019-04-01). "A comparison of t-SNE, SOM and SPADE for identifying material type domains in geological data". Computers & Geosciences. 125: 78–89. Bibcode:2019CG....125...78B. doi:10.1016/j.cageo.2019.01.011. ISSN 0098-3004. S2CID 67926902. https://www.sciencedirect.com/science/article/pii/S0098300418306010

  8. Balamurali, Mehala; Melkumyan, Arman (2016). "t-SNE Based Visualisation and Clustering of Geological Domain". In Hirose, Akira; Ozawa, Seiichi; Doya, Kenji; Ikeda, Kazushi; Lee, Minho; Liu, Derong (eds.). Neural Information Processing. Lecture Notes in Computer Science. Vol. 9950. Cham: Springer International Publishing. pp. 565–572. doi:10.1007/978-3-319-46681-1_67. ISBN 978-3-319-46681-1.

  9. Leung, Raymond; Balamurali, Mehala; Melkumyan, Arman (2021-01-01). "Sample Truncation Strategies for Outlier Removal in Geochemical Data: The MCD Robust Distance Approach Versus t-SNE Ensemble Clustering". Mathematical Geosciences. 53 (1): 105–130. Bibcode:2021MatGe..53..105L. doi:10.1007/s11004-019-09839-z. ISSN 1874-8953. S2CID 208329378. https://doi.org/10.1007/s11004-019-09839-z

  10. Birjandtalab, J.; Pouyan, M. B.; Nourani, M. (2016-02-01). "Nonlinear dimension reduction for EEG-based epileptic seizure detection". 2016 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI). pp. 595–598. doi:10.1109/BHI.2016.7455968. ISBN 978-1-5090-2455-1. S2CID 8074617.

  11. Pezzotti, Nicola (2015). "Approximated and User Steerable tSNE for Progressive Visual Analytics". arXiv:1512.01655 [cs.CV].

  12. van der Maaten, L.J.P.; Hinton, G.E. (Nov 2008). "Visualizing Data Using t-SNE" (PDF). Journal of Machine Learning Research. 9: 2579–2605. http://jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf

  13. van der Maaten, L.J.P.; Hinton, G.E. (Nov 2008). "Visualizing Data Using t-SNE" (PDF). Journal of Machine Learning Research. 9: 2579–2605. http://jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf

  14. Schubert, Erich; Gertz, Michael (2017-10-04). Intrinsic t-Stochastic Neighbor Embedding for Visualization and Outlier Detection. SISAP 2017 – 10th International Conference on Similarity Search and Applications. pp. 188–203. doi:10.1007/978-3-319-68474-1_13.

  15. "K-means clustering on the output of t-SNE". Cross Validated. Retrieved 2018-04-16. https://stats.stackexchange.com/a/264647

  16. Wattenberg, Martin; Viégas, Fernanda; Johnson, Ian (2016-10-13). "How to Use t-SNE Effectively". Distill. 1 (10): e2. doi:10.23915/distill.00002. ISSN 2476-0757. http://distill.pub/2016/misread-tsne

  17. Pezzotti, Nicola; Lelieveldt, Boudewijn P. F.; Maaten, Laurens van der; Hollt, Thomas; Eisemann, Elmar; Vilanova, Anna (2017-07-01). "Approximated and User Steerable tSNE for Progressive Visual Analytics". IEEE Transactions on Visualization and Computer Graphics. 23 (7): 1739–1752. arXiv:1512.01655. doi:10.1109/tvcg.2016.2570755. ISSN 1077-2626. PMID 28113434. S2CID 353336.

  18. Wattenberg, Martin; Viégas, Fernanda; Johnson, Ian (2016-10-13). "How to Use t-SNE Effectively". Distill. 1 (10). doi:10.23915/distill.00002. Retrieved 4 December 2017. https://distill.pub/2016/misread-tsne/

  19. Linderman, George C.; Steinerberger, Stefan (2017-06-08). "Clustering with t-SNE, provably". arXiv:1706.02582 [cs.LG].