Brian Thompson

<firstname>@alumni.caltech.edu
<firstname>jt@amazon.com

About Me

I am currently a Senior Applied Scientist in Amazon's Artificial General Intelligence (AGI) org, where I work on large language model (LLM) training. Previously, I worked at Apple, Johns Hopkins University (where I also completed my PhD), MIT Lincoln Laboratory, and Rincon Research Corporation. My work has spanned machine translation (MT), automatic dubbing, text-to-speech (TTS), data curation and filtering, MT evaluation, multilingual modeling, paraphrasing, cross-language information retrieval, domain adaptation, and digital signal processing.

My recent work exploring the impact of machine translation on the web has been covered by Politico, The Atlantic, Slator, Vice, TechInsider, Futurism, and others.

Open Source Projects

I developed Vecalign for the ParaCrawl parallel data acquisition project. Vecalign is an accurate sentence alignment algorithm, based on multilingual sentence embeddings, whose runtime is linear in the number of sentences being aligned. In conjunction with a multilingual sentence embedding model like LASER or LaBSE, Vecalign makes it easy to perform sentence alignment in about 100 languages (i.e. roughly 100^2 language pairs), without the need for a machine translation system or lexicon. At the time of writing, Vecalign has the best reported performance on the test set released with Bleualign.
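Vecalign's actual coarse-to-fine algorithm is more involved than this, but the core idea of embedding-based alignment can be sketched with a toy monotone dynamic program that matches source and target sentences by the cosine similarity of their embeddings. The function names and the two-dimensional "embeddings" below are my own illustrative stand-ins, not Vecalign's API:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def align(src_embs, tgt_embs):
    """Toy monotone 1-1 alignment maximizing total cosine similarity.

    Each DP step either matches the next source and target sentence
    (scoring their similarity) or skips one sentence on either side.
    Returns the list of matched (src_index, tgt_index) pairs.
    """
    n, m = len(src_embs), len(tgt_embs)
    NEG = float("-inf")
    score = [[NEG] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    score[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if score[i][j] == NEG:
                continue
            if i < n and j < m:  # match src[i] with tgt[j]
                s = score[i][j] + cosine(src_embs[i], tgt_embs[j])
                if s > score[i + 1][j + 1]:
                    score[i + 1][j + 1] = s
                    back[i + 1][j + 1] = (i, j, True)
            if i < n and score[i][j] > score[i + 1][j]:  # skip src[i]
                score[i + 1][j] = score[i][j]
                back[i + 1][j] = (i, j, False)
            if j < m and score[i][j] > score[i][j + 1]:  # skip tgt[j]
                score[i][j + 1] = score[i][j]
                back[i][j + 1] = (i, j, False)
    pairs, i, j = [], n, m  # trace back the matched pairs
    while back[i][j] is not None:
        pi, pj, matched = back[i][j]
        if matched:
            pairs.append((pi, pj))
        i, j = pi, pj
    return list(reversed(pairs))
```

This full DP is quadratic; Vecalign gets to linear time with a coarse-to-fine approximation that only searches a band around a downsampled alignment, and it also scores merged blocks of sentences so that 1-many and many-many alignments are possible.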

I also developed Prism, an automatic MT metric that uses a sequence-to-sequence paraphraser to score MT system outputs conditioned on their respective human references. Prism uses a multilingual neural MT model as a zero-shot paraphraser, which eliminates the need for synthetic paraphrase data and results in a single model that works in many languages (we release a model covering 39 languages). At the time of publication, Prism outperformed or statistically tied with all metrics submitted to the WMT 2019 metrics shared task in segment-level human correlation. I developed bitext filtering code to preprocess the data used to train Prism; the code is general enough to use for any MT training and is released here.
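The scoring idea can be sketched abstractly: score the hypothesis by its per-token log-probability under the paraphraser conditioned on the reference, do the same in the reverse direction, and average. The sketch below is my own simplification, not Prism's code, and `toy_logprob` is a hypothetical stand-in for the real multilingual NMT model's sequence log-probability:

```python
import math

def prism_style_score(hyp_tokens, ref_tokens, seq_logprob):
    """Symmetric paraphrase-based score for one segment.

    seq_logprob(tokens, cond=...) should return the log-probability of
    `tokens` under a paraphrase model conditioned on `cond`. The score
    averages the per-token log-prob in both directions; higher is better.
    """
    fwd = seq_logprob(hyp_tokens, cond=ref_tokens) / len(hyp_tokens)
    bwd = seq_logprob(ref_tokens, cond=hyp_tokens) / len(ref_tokens)
    return (fwd + bwd) / 2.0

def toy_logprob(tokens, cond):
    """Hypothetical stand-in paraphraser: assigns high probability to
    tokens that also appear in the conditioning text, low otherwise."""
    cond_set = set(cond)
    return sum(math.log(0.9) if t in cond_set else math.log(0.1)
               for t in tokens)
```

With this stand-in, a hypothesis close to the reference scores higher than an unrelated one, which is the behavior the real model provides with far more nuance (fluency, word order, synonymy, and so on).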

Education

The Johns Hopkins University

California Institute of Technology

Rose-Hulman Institute of Technology

Publications

Note: Google Scholar may be more up-to-date.