Transforming LLMs into Cross-modal and Cross-lingual Retrieval Systems

Frank Palma Gomez\({}^{1}\) Ramon Sanabria\({}^{2}\) Yun-hsuan Sung\({}^{4}\)
Daniel Cer\({}^{4}\) Siddharth Dalmia\({}^{3{\ddagger}}\) Gustavo Hernandez Abrego\({}^{4{\ddagger}}\)
\(^1\)Boston University \(^2\)The University of Edinburgh \(^3\)Google DeepMind
\(^4\)Google Research
fpg@bu.edu


Abstract

Large language models (LLMs) are trained on text-only data that go far beyond the languages with paired speech and text data. At the same time, Dual Encoder (DE) based retrieval systems project queries and documents into the same embedding space and have demonstrated their success in retrieval and bi-text mining. To match speech and text in many languages, we propose using LLMs to initialize multi-modal DE retrieval systems. Unlike traditional methods, our system doesn’t require speech data during LLM pre-training and can exploit LLM’s multilingual text understanding capabilities to match speech and text in languages unseen during retrieval training. Our multi-modal LLM-based retrieval system is capable of matching speech and text in 102 languages despite only training on 21 languages. Our system outperforms previous systems trained explicitly on all 102 languages. We achieve a 10% absolute improvement in Recall@1 averaged across these languages. Additionally, our model demonstrates cross-lingual speech and text matching, which is further enhanced by readily available machine translation data.

1 Introduction

LLMs have demonstrated their effectiveness in modelling textual sequences to tackle various downstream tasks [1][3]. This effectiveness has led to the development of powerful LLMs capable of modelling text in a wide range of languages. The abundance of textual data in different languages across the internet has fueled the progress of multi-lingual models [4][6]. On the other hand, speech technologies are prevalent in smartphones and personal assistants, but their language availability is relatively limited compared to the languages that LLMs support [7], [8].

Figure 1: Our dual encoder architecture and training pipeline. We expand the embedding layer of our backbone LLM to support the additional discretized speech tokens, which are extracted from a pre-trained speech encoder. At the same time, we tokenize the corresponding transcripts with the LLM tokenizer. We encode the speech tokens and transcripts separately and train the model with a contrastive loss over the dot product between speech and transcript embeddings.

Various efforts have explored solutions to the speech-text data scarcity problem [9][11]. Works such as SpeechMatrix [12] use separate speech and text encoders to mine semantically similar utterances that are neighbors in an embedding space. However, these approaches are limited because they require speech and text encoders with aligned representation spaces.

We posit that we can retrieve speech and text utterances by aligning both modalities within the embedding space built from a single pre-trained LLM. We take inspiration from previous works that use pre-trained LLMs to perform automatic speech recognition (ASR) and automatic speech translation (AST) [13][15]. Our intuition is that we can perform speech and text alignment by leveraging the capabilities of text-only LLMs, without requiring two separate models.

In this paper, we propose converting LLMs into speech and text DE retrieval systems without requiring speech pre-training, and we outperform previous methods with significantly less data. By discretizing speech into acoustic units [16], we extend our LLM's embedding layer and treat the acoustic units as ordinary text tokens. Consequently, we transform our LLM into a retrieval system via a contrastive loss, allowing us to match speech and text utterances in various languages. Our contributions are the following:

  1. We build a speech-to-text symmetric DE from a pre-trained LLM. We show that our retrieval system is effective at matching speech and text across the 102 languages of FLEURS [17] despite only training on 21 languages.

  2. We show that our model exhibits cross-lingual speech and text matching without training on this type of data. At the same time, we find that cross-lingual speech and text matching is further improved by training on readily available machine translation data.

2 Method

We train a transformer-based DE model that encodes speech and text given a dataset \(\emph{D} = \{(x_i, y_i)\}\), where \(x_i\) is a speech utterance and \(y_i\) is its transcription. We denote the speech and text embeddings as \(\boldsymbol{x_i} = E(x_i)\) and \(\boldsymbol{y_i} = E(y_i)\), respectively, where \(E\) is a transformer-based DE that encodes speech and text.
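As an illustrative sketch only, the snippet below shows the dual-encoder interface implied by this formulation: a single shared encoder maps both the speech token sequence and the text token sequence to vectors, which are compared by dot product. The `E` function here is a hypothetical stand-in rather than the actual transformer; the token ids are taken from the example in Table 3.

```python
# Minimal sketch of the shared dual-encoder interface; `E` is a hypothetical
# stand-in for the transformer encoder described in Section 4.
import numpy as np

def E(token_ids: list[int], dim: int = 768) -> np.ndarray:
    """Placeholder encoder: a deterministic pseudo-embedding for a token sequence."""
    rng = np.random.default_rng(abs(hash(tuple(token_ids))) % (2**32))
    return rng.normal(size=dim)

speech_tokens = [32050, 32210, 32245]  # offset audio token ids (see Table 3)
text_tokens = [59, 294, 691]           # LLM-tokenized transcript ids (see Table 3)
x_i, y_i = E(speech_tokens), E(text_tokens)  # both modalities share one encoder
score = float(x_i @ y_i)                     # dot-product similarity sim(x_i, y_i)
```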

2.1 Generating Audio Tokens

We convert raw speech into discrete tokens using the process in [18], [19]. The process converts a speech query \(x_i\) into an embedding using a pre-trained speech encoder. The output embedding is then discretized into a set of tokens using k-means clustering. We refer to the resulting tokens as audio tokens. We use the 2B variant of the Universal Speech Model (USM) encoder [20] as the speech encoder and take the middle layer as the embedding for \(x_i\). Additionally, we generate audio tokens at 25Hz using k-means clustering, resulting in a set of 1024 possible audio tokens. We will refer to this as our audio token vocabulary.
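The following sketch illustrates this discretization step under stated assumptions: the frame embeddings are random stand-ins for USM middle-layer outputs, and scikit-learn's MiniBatchKMeans plays the role of the k-means step; only the 1024-token codebook size comes from the text.

```python
# Illustrative sketch of audio-token generation, not the USM pipeline itself.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

N_AUDIO_TOKENS = 1024  # size of the audio token vocabulary described above

def fit_codebook(frame_embeddings: np.ndarray) -> MiniBatchKMeans:
    """Fit a k-means codebook on encoder frame embeddings of shape [num_frames, dim]."""
    return MiniBatchKMeans(n_clusters=N_AUDIO_TOKENS, random_state=0).fit(frame_embeddings)

def to_audio_tokens(frame_embeddings: np.ndarray, codebook: MiniBatchKMeans) -> np.ndarray:
    """Assign each 25Hz frame embedding to its nearest centroid id in [0, 1023]."""
    return codebook.predict(frame_embeddings)

# Toy usage with random stand-ins for the speech encoder's frame embeddings.
rng = np.random.default_rng(0)
fake_frames = rng.normal(size=(5_000, 64))
codebook = fit_codebook(fake_frames)
print(to_audio_tokens(fake_frames[:10], codebook))
```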

2.2 Supporting Text and Audio Tokens

To support text and audio tokens in our LLM, we follow the formulation of [13]. We extend the embedding layer of a transformer decoder by \(a\) tokens, where \(a\) is the size of our audio token vocabulary. This modification leads to an embedding layer of size \((t + a) \times m\), where \(t\) is the number of tokens in the text vocabulary and \(m\) is the dimension of the embedding vectors. In our implementation, the first \(t\) tokens represent text and the remaining \(a\) tokens are reserved for audio. We initialize the embedding layer from scratch when training our model.
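Below is a minimal sketch of the extended embedding table. The sizes are illustrative: the text vocabulary is set to 32,000 only so that the audio offset matches the example ids in Table 3, and the embedding dimension is an assumption, not the actual PaLM 2 configuration.

```python
# Sketch of the (t + a) x m embedding table with illustrative sizes.
import numpy as np

T_TEXT, A_AUDIO, M_DIM = 32_000, 1024, 1536  # assumed sizes; offset matches Table 3
rng = np.random.default_rng(0)

# Rows [0, T_TEXT) hold text tokens; rows [T_TEXT, T_TEXT + A_AUDIO) hold audio tokens.
embedding_table = rng.normal(scale=0.02, size=(T_TEXT + A_AUDIO, M_DIM))

def offset_audio_ids(audio_ids: np.ndarray) -> np.ndarray:
    """Shift raw audio token ids in [0, 1023] into the reserved range [T_TEXT, T_TEXT + A_AUDIO)."""
    return audio_ids + T_TEXT

def embed(token_ids: np.ndarray) -> np.ndarray:
    """Look up embeddings for a mixed sequence of text ids and offset audio ids."""
    return embedding_table[token_ids]
```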

3 Data and Tasks

Appendix 9.3 details our training and evaluation datasets along with the number of languages in each dataset, the split we used, and the size of each dataset. We focus on the following retrieval tasks:

3.0.0.1 Speech-to-Text Retrieval (S2T)

involves retrieving the corresponding transcription from a database given a speech sample. In S2T, we train on CoVoST-2 [21] speech utterances and their transcriptions. CoVoST-2 is a large multilingual speech corpus derived from Common Voice, spanning 21 languages and providing translations to and from English. We use FLEURS [17] to evaluate S2T performance on 102 languages. FLEURS is an \(n\)-way parallel dataset containing speech utterances from the FLoRES-101 [22] human translations. To evaluate S2T, we report the recall at 1 (\(R@1\)) rate for retrieving the correct transcription for every speech sample, as well as word error rate (WER).
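For reference, a minimal sketch of the \(R@1\) computation we report: each speech embedding is scored against all candidate transcription embeddings, and retrieval counts as correct when the true transcription ranks first. The embeddings here are assumed to be precomputed.

```python
# Sketch of R@1 for S2T over aligned speech/transcription embedding arrays.
import numpy as np

def recall_at_1(speech_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Rows of the two [N, d] arrays are aligned speech/transcription pairs."""
    scores = speech_emb @ text_emb.T              # similarity of every speech/text pair
    top1 = scores.argmax(axis=1)                  # best-scoring transcription per sample
    return float((top1 == np.arange(len(scores))).mean())
```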

3.0.0.2 Speech-to-Text Translation Retrieval (S2TT)

attempts to retrieve the corresponding text translation of a speech sample. We use S2TT to measure the cross-lingual capabilities of our multi-modal DE retrieval system. We evaluate this capability zero-shot on the X \(\to\) En S2TT data of FLEURS and explore whether we can further improve it by training on readily available machine translation data from WikiMatrix [23]. We pick French, German, Dutch, and Polish to English, the pairs common to WikiMatrix and FLEURS, and discuss the amount of machine translation data used in Appendix 9.3. For S2TT, we report 4-gram corpus BLEU [24].
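A small, hedged example of the corpus-level BLEU computation, using the sacrebleu package that implements [24]; the hypotheses and references below are placeholders rather than FLEURS data.

```python
# Corpus BLEU (4-gram by default) with sacrebleu on placeholder sentences.
import sacrebleu

hypotheses = ["The cat sat on the mat.", "It is raining today."]
references = [["The cat sat on the mat.", "It rains a lot today."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```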

4 Model

Figure 1 shows an illustration of our model. We initialize our dual encoder from PaLM 2 XXS [25] and append a linear projection layer after pooling the outputs along the sequence length dimension. The embedding and linear projection layers are initialized randomly. After initializing our model from PaLM 2, we train it with a contrastive loss [26]. Appendix 9.1 includes more details on our training setup. We refer to our proposed model as PaLM 2 DE.
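The sketch below illustrates the pooling and projection head described above. Mean pooling and the layer sizes are assumptions for illustration; the paper only states that outputs are pooled along the sequence length dimension and then projected.

```python
# Sketch of the pooling + linear projection head, with assumed sizes.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, PROJ_DIM = 1536, 768  # assumed sizes, not the actual PaLM 2 XXS configuration
projection = rng.normal(scale=0.02, size=(HIDDEN, PROJ_DIM))  # randomly initialized head

def sequence_embedding(decoder_outputs: np.ndarray) -> np.ndarray:
    """Mean-pool over the sequence dimension, then apply the linear projection."""
    pooled = decoder_outputs.mean(axis=1)  # [batch, HIDDEN]
    return pooled @ projection             # [batch, PROJ_DIM]

# Toy usage on random decoder outputs of shape [batch, seq_len, HIDDEN].
print(sequence_embedding(rng.normal(size=(2, 5, HIDDEN))).shape)  # (2, 768)
```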

5 Experiments

Table 1: PaLM 2 DE results for R@1 and WER compared against the mSLAM DE on 102 languages from FLEURS for speech-to-text retrieval (S2T).
Model R@1 \(\uparrow\) WER \(\downarrow\)
mSLAM DE [17] 76.9 14.6
PaLM 2 DE (Proposed Model) 86.15 13.85

We train our DE model to perform S2T, where the task is to retrieve the corresponding transcription given a speech sample. We train on the 21 languages from CoVoST-2 and evaluate our model using the S2T portion of FLEURS in 102 languages.

5.1 Speech-to-Text Retrieval

Table 1 shows the average \(R@1\) and WER for S2T on the 102 languages of FLEURS. We compare against the mSLAM DE model from [17], a model trained on 426k hours of S2T data in 51 languages and fine-tuned on the FLEURS training data. Our model significantly outperforms the mSLAM DE baseline in both \(R@1\) and WER despite being trained with only 1/10 of the data and having been initialized from a text-only LLM. More importantly, our model was only trained on the 21 languages in CoVoST-2 and never fine-tuned on the FLEURS training data.

5.1.1 Seen-Unseen Breakdown

Figure 2: \(R@1\) transcription retrieval for seen and unseen languages in the training set.

In Figure 2 we break down the \(R@1\) scores by languages seen and unseen during training. We find that our model performs best on the 20 training languages that also appear in the evaluation data, but still performs remarkably well on the remaining 82 unseen languages. We hypothesize this is due to the vast multilingual textual data our backbone LLM has seen during pre-training.

5.1.2 Language Group Breakdown

Table 2: FLEURS S2T (R@1) performance broken down by language group. Bold represents better performance. Numbers in parentheses represent the number of languages within the language group.
R@1 \(\uparrow\)
Language Group (#) mSLAM DE [17] PaLM 2 DE (Proposed Model)
Afro-Asiatic (7) 73.67 90.82
Atlantic-Congo (15) 86.77 79.47
Austro-Asiatic (3) 47.90 41.71
Austronesian (6) 75.50 85.74
Dravidian (4) 65.70 90.46
Indo-European (51) 84.62 92.38
Japonic (1) 5.80 63.23
Kartvelian (1) 70.50 74.57
Koreanic (1) 5.20 45.81
Kra-Dai (1) 3.20 37.12
Mongolic (1) 70.70 98.10
Nilo-Saharan (1) 91.00 94.53
Sino-Tibetan (3) 3.40 61.91
Turkic (4) 81.28 91.89
Uralic (3) 91.40 93.38

Table 2 shows the \(R@1\) language group breakdown for S2T on FLEURS. We find that although we only trained on 21 languages, our model significantly outperforms mSLAM DE in 13 of the 15 language groups. These results are consistent with the experiments in [15] which explore the effect of initializing speech language models from pre-trained LLMs.

5.2 Evaluating on Cross-Modal and Cross-Lingual Tasks

Figure 3: BLEU scores for FLEURS zero-shot S2TT when training on Transcripts or Transcripts + Translations for PaLM 2 DE. Combining transcripts and translation data improves zero-shot S2TT retrieval.

We evaluate on S2TT to gauge the cross-modal and cross-lingual capabilities of our model. We show that we can further improve S2TT by simply training on a mixture of S2T and translation data without using any S2TT training data.

5.2.1 Zero-Shot S2TT

Given the multi-lingual capabilities of our backbone language model, we explore whether these capabilities transfer after training our model contrastively on the S2T task. We hypothesize that our model should showcase cross-lingual and cross-modal capabilities due to the cross-modal training task and the cross-lingual capabilities of the backbone LLM. We evaluate S2TT in a zero-shot setting to assess our model's performance at retrieving English translations given a speech sample in another language. Using the FLEURS S2TT portion, we evaluate S2TT X \(\to\) En in 4 languages: German, Polish, French, and Dutch.

Figure 3 shows S2TT BLEU performance when training only on the 21-language CoVoST-2 S2T data; we call this setup Transcripts in Figure 3. Our results demonstrate that even when training our model only on speech and transcriptions, we achieve some zero-shot S2TT performance. We also find that S2TT BLEU scores are considerably higher for languages present in the S2T training data; for example, Polish was not in the S2T training data and therefore has the lowest BLEU score.

5.2.2 Improving S2TT with MT Data

To further improve our model's cross-lingual performance, we add readily available translation data from [23]. For each batch, we combine 25% translation and 75% S2T data, as sketched below. Figure 3 shows a comparison of training only on S2T (Transcripts) and combining S2T and translation data (Transcripts + Translations). We find that combining S2T and translation data significantly improves the S2TT BLEU scores in all 4 languages without training on any S2TT data. This finding demonstrates that we can improve our model's cross-lingual performance with highly accessible translation data, without needing scarce and often expensive speech-to-text translation training data.
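A minimal sketch of this batch mixture, assuming two hypothetical example iterators; the only numbers taken from the text are the 25%/75% split and the batch size of 1024 (reduced in the toy usage).

```python
# Sketch of per-batch mixing of S2T and machine translation examples.
import itertools
import random

def mixed_batch(s2t_examples, translation_examples, batch_size=1024, mt_fraction=0.25):
    """Draw a batch with ~25% machine translation pairs and ~75% S2T pairs."""
    n_mt = int(batch_size * mt_fraction)
    batch = [next(translation_examples) for _ in range(n_mt)]
    batch += [next(s2t_examples) for _ in range(batch_size - n_mt)]
    random.shuffle(batch)  # avoid ordering effects within the batch
    return batch

# Toy usage with placeholder (input, target) pairs.
s2t = itertools.cycle([("[English Speech] ...", "[English Text] Hello world.")])
mt = itertools.cycle([("[German Text] Hallo Welt.", "[English Text] Hello world.")])
print(len(mixed_batch(s2t, mt, batch_size=8)))  # 2 MT pairs + 6 S2T pairs
```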

6 Related Work

The success of pre-trained LLMs has motivated the application of these models to other modalities. [18] transformed speech into pseudo-text units to introduce the task of generative spoken language modeling. [19] introduced a framework to generate audio with long-term consistency. Consequently, [15] showed that SpeechLMs benefit from being initialized from pre-trained LLMs, while [13] demonstrated that pre-trained LLMs can be adapted to various tasks that require text and speech understanding.

On the other hand, several works aim to build joint speech and text representations. [27] introduced w2v-BERT, which combines masked language modeling and contrastive learning to create speech representations. [28] jointly pre-trains on unsupervised speech and text data. Recently, [29] employed separate speech and text encoders to generate embeddings in over 200 languages. Nevertheless, it remains unclear whether joint speech and text representations can be built from a single encoder. We fill this gap by training a pre-trained LLM jointly on speech samples and their transcriptions, showing that our approach is capable of speech-text matching in 102 languages.

7 Conclusion

We present an effective approach to building a speech-to-text DE from a text-only LLM. Our findings suggest that by using a text-only LLM as a backbone model, we can drastically outperform previous approaches while using considerably less speech-to-text training data. Additionally, we find that we can improve zero-shot speech translation by simply combining readily available translation data with S2T data. We showcase our findings in 102 languages for S2T and 4 languages for S2TT, opening up the possibility of using speech-to-text DEs in different cross-modal and cross-lingual settings.

8 Acknowledgements

We would like to thank Shankar Kumar and Ankur Bapna for their valuable feedback on the draft of the paper; Chris Tar, Mario Guajardo-Céspedes, and Jason Riesa for the early experiment discussions and feedback; and Christian Frank, Duc Dung Nguyen, Alex Tudor, and Dalia El Badawy for helping answer questions about AudioPaLM.

9 Appendix

Table 3: Example of the speech and transcript inputs given to our model. The speech input is composed of a prefix containing the language and the input modality. Text is tokenized using the LLM's tokenizer, and an offset is applied to the audio tokens to map them to the ids reserved in the audio token vocabulary. Bold numbers represent the audio tokens before tokenization and after the offset is applied.
Input Type Before Tokenization Input Ids
Speech [English Speech] 50,210,245, \(\ldots\) 240, 503, 32050, 32210, 32245, \(\ldots\)
Transcription [English Text] Hello World . 59, 294, 691, \(\ldots\)

9.1 Training Setup

[30] showed that applying a contrastive loss to sentence encoders leads to improved retrieval performance in downstream tasks. After initializing our model from PaLM 2, we use the contrastive loss [26]:

\[L = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{\text{sim}(\boldsymbol{x}_{i}, \boldsymbol{y}_{i})}}{\sum_{j=1}^{N} e^{\text{sim}(\boldsymbol{x}_{i}, \boldsymbol{y}_{j})}}\label{eq:loss}\tag{1}\]

Using Equation 1, our multi-modal DE learns from paired speech and text embeddings \((\boldsymbol{x}_i, \boldsymbol{y}_i)\), where \(\boldsymbol{y}_{i}\) is considered a positive example for \(\boldsymbol{x}_i\) while all other \(\boldsymbol{y}_j\) with \(j \neq i\) are negatives. The model should learn to bring the positive transcription closer to the corresponding speech sample while pushing away all the other, negative, transcriptions. In our training, the positive and negative distinction is made within the training batch; hence, we apply an in-batch softmax as part of our loss computation. Lastly, sim() is a similarity function formulated as the dot product between the speech and transcription embeddings.

To train our model, we use the sum of the contrastive loss and a spreadout loss [31] applied to both the speech and text embeddings. We calculate the contrastive loss [32] in a bidirectional way, adding the loss in the speech-to-text and the text-to-speech directions, as sketched below.
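For concreteness, the sketch below implements the bidirectional in-batch softmax loss from Equation 1 with numpy; the spreadout regularizer [31] is omitted, and the random embeddings are placeholders for encoder outputs.

```python
# Bidirectional in-batch softmax contrastive loss (spreadout term omitted).
import numpy as np
from scipy.special import logsumexp

def bidirectional_contrastive_loss(speech_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """speech_emb, text_emb: [N, d] arrays where row i of each forms a positive pair."""
    scores = speech_emb @ text_emb.T          # sim(x_i, y_j) for all pairs
    diag = np.arange(len(scores))
    # Speech-to-text direction: softmax over transcriptions (rows).
    s2t = scores[diag, diag] - logsumexp(scores, axis=1)
    # Text-to-speech direction: softmax over speech samples (columns).
    t2s = scores[diag, diag] - logsumexp(scores, axis=0)
    return float(-(s2t.mean() + t2s.mean()))

# Toy usage with random stand-ins for encoder outputs.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(bidirectional_contrastive_loss(x, y))
```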

We use the Adam [33] optimizer with a learning rate of \(1.0 \times 10^{-3}\) and a linear-ramp cosine-decay scheduler with 2.5k warm-up steps. We use a dropout probability of \(0.1\) and train for 100k steps with a batch size of 1024.
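A sketch of this learning-rate schedule, assuming the cosine portion decays to zero (the final value is not stated in the text); the peak rate, warm-up steps, and total steps follow the numbers above.

```python
# Linear warm-up followed by cosine decay, assuming decay to zero.
import math

PEAK_LR, WARMUP_STEPS, TOTAL_STEPS = 1e-3, 2_500, 100_000

def learning_rate(step: int) -> float:
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS           # linear ramp to the peak rate
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

print(learning_rate(0), learning_rate(2_500), learning_rate(100_000))
```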

9.2 Expressing Tasks

For training and inference, we found that using a prefix improves speech-to-text retrieval performance. Therefore, we prepend a prefix containing the language and modality, as shown in Table 3. In the case of a speech utterance, the prefix is tokenized with the LLM's tokenizer and the remaining speech is converted to audio tokens.
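The snippet below sketches this input formatting, consistent with Table 3: the prefix is tokenized with the LLM tokenizer (a toy stand-in here), and audio token ids are shifted by the text-vocabulary offset of 32,000 shown in the table. The `llm_tokenize` helper is hypothetical.

```python
# Sketch of the prefixed input formatting from Table 3.
T_TEXT = 32_000  # offset consistent with Table 3: audio token k maps to id T_TEXT + k

def llm_tokenize(text: str) -> list[int]:
    """Toy stand-in for the LLM tokenizer (hashes words into the text-id range)."""
    return [hash(word) % T_TEXT for word in text.split()]

def format_speech_input(language: str, audio_tokens: list[int]) -> list[int]:
    """Tokenize the '[<language> Speech]' prefix, then append offset audio token ids."""
    return llm_tokenize(f"[{language} Speech]") + [T_TEXT + tok for tok in audio_tokens]

def format_text_input(language: str, transcript: str) -> list[int]:
    """Transcripts use the '[<language> Text]' prefix and the LLM tokenizer throughout."""
    return llm_tokenize(f"[{language} Text] {transcript}")

print(format_speech_input("English", [50, 210, 245])[-3:])  # -> [32050, 32210, 32245]
```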

9.3 Data

Table 4: Training and evaluation datasets. CoVoST-2 is used for speech-to-text retrieval (S2T), Wikimatrix is for machine translation retrieval (MT), and FLEURS is for evaluating X \(\to\) En speech-to-text translation retrieval (S2TT) and also speech-to-text retrieval (S2T).
Dataset Type Task Langs. Split Size
CoVoST-2 Speech S2T 21 Train 900 h.
FLEURS Speech S2T 102 Test 283 h.
FLEURS Speech S2TT 102 Test 283 h.
Wikimatrix Text MT 4 Train 9M sents.
Table 5: Number of parallel sentences used in the machine translation mixture from Wikimatrix corpus.
X \(\to\) En # Sents.
German (de) 6.2M
Polish (pl) 2.1M
French (fr) 705k
Dutch (nl) 570k

Table 4 shows the training and evaluation datasets we used throughout our experiments. We used the 21-language CoVoST-2 corpus to train our model on speech-to-text retrieval, which amounts to approximately 900 hours of speech. To evaluate our model's speech-to-text retrieval capabilities, we evaluate on the FLEURS speech-to-text test split in 102 languages. We use the FLEURS speech-to-text translation test split to evaluate our model's abilities on tasks that require cross-lingual and cross-modal knowledge. We evaluate on 4 different languages: German, Polish, French, and Dutch.

We find that combining speech-to-text retrieval data and readily available translation data improves our model's cross-lingual and cross-modal abilities. Table 5 shows the number of parallel sentences we used during training for each X \(\to\) En pair.

References

[1]
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
[2]
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556.
[3]
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113.
[4]
Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351.
[5]
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.
[6]
Aditya Siddhant, Ankur Bapna, Orhan Firat, Yuan Cao, Mia Xu Chen, Isaac Caswell, and Xavier Garcia. 2022. Towards the next 1000 languages in multilingual machine translation: Exploring the synergy between supervised and self-supervised learning. arXiv preprint arXiv:2201.03110.
[7]
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449–12460.
[8]
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pages 28492–28518. PMLR.
[9]
Paul-Ambroise Duquenne, Hongyu Gong, and Holger Schwenk. 2021. Multimodal and multilingual embeddings for large-scale speech mining. Advances in Neural Information Processing Systems, 34:15748–15761.
[10]
Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M Tyers, and Gregor Weber. 2019. Common voice: A massively-multilingual speech corpus. arXiv preprint arXiv:1912.06670.
[11]
Changhan Wang, Anne Wu, and Juan Pino. 2020. Covost 2 and massively multilingual speech-to-text translation. arXiv preprint arXiv:2007.10310.
[12]
Paul-Ambroise Duquenne, Hongyu Gong, Ning Dong, Jingfei Du, Ann Lee, Vedanuj Goswani, Changhan Wang, Juan Pino, Benoı̂t Sagot, and Holger Schwenk. 2022. Speechmatrix: A large-scale mined corpus of multilingual speech-to-speech translations. arXiv preprint arXiv:2211.04508.
[13]
Paul K Rubenstein, Chulayuth Asawaroengchai, Duc Dung Nguyen, Ankur Bapna, Zalán Borsos, Félix de Chaumont Quitry, Peter Chen, Dalia El Badawy, Wei Han, Eugene Kharitonov, et al. 2023. Audiopalm: A large language model that can speak and listen. arXiv preprint arXiv:2306.12925.
[14]
Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. 2023. Neural codec language models are zero-shot text to speech synthesizers. arXiv preprint arXiv:2301.02111.
[15]
Michael Hassid, Tal Remez, Tu Anh Nguyen, Itai Gat, Alexis Conneau, Felix Kreuk, Jade Copet, Alexandre Defossez, Gabriel Synnaeve, Emmanuel Dupoux, et al. 2023. Textually pretrained speech language models. arXiv preprint arXiv:2305.13009.
[16]
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–3460.
[17]
Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod, Siddharth Dalmia, Jason Riesa, Clara Rivera, and Ankur Bapna. 2023. Fleurs: Few-shot learning evaluation of universal representations of speech. In 2022 IEEE Spoken Language Technology Workshop (SLT), pages 798–805. IEEE.
[18]
Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336–1354.
[19]
Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al. 2023. Audiolm: a language modeling approach to audio generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing.
[20]
Yu Zhang, Wei Han, James Qin, Yongqiang Wang, Ankur Bapna, Zhehuai Chen, Nanxin Chen, Bo Li, Vera Axelrod, Gary Wang, et al. 2023. Google usm: Scaling automatic speech recognition beyond 100 languages. arXiv preprint arXiv:2303.01037.
[21]
Changhan Wang, Anne Wu, Jiatao Gu, and Juan Miguel Pino. 2021. CoVoST 2 and massively multilingual speech translation. In Interspeech.
[22]
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjan Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2021. The Flores-101 evaluation benchmark for low-resource and multilingual machine translation. Transactions of the Association for Computational Linguistics, 10:522–538.
[23]
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2019. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. arXiv preprint arXiv:1907.05791.
[24]
Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–191, Brussels, Belgium. Association for Computational Linguistics.
[25]
Google, Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403.
[26]
Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735–1742.
[27]
Yu-An Chung, Yu Zhang, Wei Han, Chung-Cheng Chiu, James Qin, Ruoming Pang, and Yonghui Wu. 2021. W2v-bert: Combining contrastive learning and masked language modeling for self-supervised speech pre-training. In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 244–250. IEEE.
[28]
Ankur Bapna, Colin Cherry, Yu Zhang, Ye Jia, Melvin Johnson, Yong Cheng, Simran Khanuja, Jason Riesa, and Alexis Conneau. 2022. mslam: Massively multilingual joint pre-training for speech and text. arXiv preprint arXiv:2202.01374.
[29]
Paul-Ambroise Duquenne, Holger Schwenk, and Benoı̂t Sagot. 2023. Sentence-level multimodal and language-agnostic representations. arXiv preprint arXiv:2308.11466.
[30]
Jianmo Ni, Gustavo Hernandez Abrego, Noah Constant, Ji Ma, Keith Hall, Daniel Cer, and Yinfei Yang. 2022. Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1864–1874, Dublin, Ireland. Association for Computational Linguistics.
[31]
Xu Zhang, Felix X. Yu, Sanjiv Kumar, and Shih-Fu Chang. 2017. Learning spread-out local feature descriptors. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 4605–4613.
[32]
Yinfei Yang, Gustavo Hernandez Abrego, Steve Yuan, Mandy Guo, Qinlan Shen, Daniel Cer, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax. arXiv preprint arXiv:1902.08564.
[33]
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.

  1. Work done by Frank and Ramon during their internships at Google Research and Google DeepMind, respectively.

  2. \({}^{\ddagger}\) Equal Advising Contributions.