A Careful Examination of Large Language Model Performance on Grade School Arithmetic (2024)

Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao, Pranav Raja, Dylan Slack, Qin Lyu, Sean Hendryx, Russell Kaplan, Michele (Mike) Lunati, Summer Yue

Correspondence to hugh.zhang@scale.com; equal senior authorship.


Large language models (LLMs) have achieved impressive success on many benchmarks for mathematical reasoning. However, there is growing concern that some of this performance actually reflects dataset contamination, where data closely resembling benchmark questions leaks into the training data, instead of true reasoning ability. To investigate this claim rigorously, we commission Grade School Math 1000 (GSM1k). GSM1k is designed to mirror the style and complexity of the established GSM8k benchmark, the gold standard for measuring elementary mathematical reasoning. We ensure that the two benchmarks are comparable across important metrics such as human solve rates, number of steps in solution, answer magnitude, and more. When evaluating leading open- and closed-source LLMs on GSM1k, we observe accuracy drops of up to 13%, with several families of models (e.g. Phi and Mistral) showing evidence of systematic overfitting across almost all model sizes. At the same time, many models, especially those on the frontier (e.g. Gemini/GPT/Claude), show minimal signs of overfitting. Further analysis suggests a positive relationship (Spearman's r² = 0.32) between a model's probability of generating an example from GSM8k and its performance gap between GSM8k and GSM1k, suggesting that many models may have partially memorized GSM8k.

Scale AI

1 Introduction


Improving reasoning in large language models (LLMs) is one of the most important directions of current research. As such, proper benchmarking of current LLM abilities is paramount for ensuring progress continues in the correct direction. Currently, the field typically relies on public benchmarks (e.g. GSM8k, MATH, MBPP, HumanEval, SWEBench; Cobbe et al. (2021); Hendrycks et al. (2021b); Chen et al. (2021); Jimenez et al. (2024)). However, because LLMs are trained on large corpora of data scraped from the Internet, there are major concerns that the training data may inadvertently include examples closely resembling the questions found in these benchmarks. Such contamination may result in models having weaker reasoning capabilities than otherwise believed, because a model can simply repeat correct answers it previously encountered during pre- or post-training. To properly investigate the reasoning abilities of models, we commission GSM1k, a newly constructed collection of 1250 grade-school-level math problems designed to mirror GSM8k. We made extensive efforts to ensure that GSM1k has a difficulty distribution similar to GSM8k's, so that the comparison is apples-to-apples. These efforts are described in Section 3, alongside a detailed description of the data creation process. To mitigate worries about data contamination, we created GSM1k solely with human annotators, without assistance from any LLM or other synthetic data source.

GSM8k: James writes a 3-page letter to 2 different friends twice a week. How many pages does he write a year?
GSM1k (ours): Lee bought 6 shares of Delta stock at $40 per share. If he wants to make $24 from this trade, how much should Delta stock be per share when he sells?

We benchmark leading open-source and closed-source LLMs on GSM1k, including GPT-4 OpenAI et al. (2024), Gemini Team et al. (2024), Claude, Mistral Jiang et al. (2024, 2023), Llama Touvron et al. (2023a, b), Phi Gunasekar et al. (2023); Abdin et al. (2024), and many more. Our analysis confirms the widespread suspicion in the field that many models are contaminated by benchmark data, with the worst model performing 13% worse on GSM1k than on GSM8k. Additionally, our results suggest that several families of models, most notably Mistral and Phi, show consistent evidence of overfitting for nearly all model versions and sizes. Further analysis finds a positive relationship (Spearman's r² = 0.32) between a model's probability of generating examples from GSM8k and its performance gap between GSM8k and GSM1k, strongly suggesting that one important component of this overfitting is that models have partially memorized examples from GSM8k. Nevertheless, all frontier models, as well as all sizes of the Llama2 family, show minimal signs of overfitting. Additionally, we find that all models, including the most overfit ones, are still capable of successfully generalizing to new grade school math problems, albeit occasionally at lower rates than their benchmark numbers would suggest.

We do not intend to release GSM1k publicly at this time, to prevent a similar problem of data contamination from occurring in the future. However, we plan to run recurring evaluations of all major open- and closed-source releases and to continually update our results. We will also open-source our entire evaluation code so that the public version of our results can be reproduced. Additionally, we commit to open-sourcing the entire benchmark either 1) when the top open-source models score over 95% on GSM1k or 2) at the end of 2025, whichever comes earlier. See Section 3 for the precise release criteria.

2 Related Work

A major inspiration for this work was the celebrated study of overfitting in ImageNet classifiers Recht et al. (2019), which measured overfitting by creating new versions of CIFAR10 and ImageNet and measuring the performance gap between the public test sets and the newly constructed ones. In this work, we perform a similar analysis on GSM8k, one of the leading benchmarks for elementary mathematical reasoning. GSM1k is modelled primarily after the GSM8k dataset Cobbe et al. (2021), released by OpenAI in 2021, which consists of 8.5k grade school math problems. Each problem is designed to be solvable using only basic arithmetic operations (+, −, ×, ÷) at a difficulty level appropriate for grade school students. As of April 2024, top models report benchmark accuracies of over 95% Team et al. (2024). Other popular benchmarks for reasoning include MATH Hendrycks et al. (2021b), MMLU Hendrycks et al. (2021a), and GPQA Rein et al. (2023).

2.1 Data Contamination

Because data contamination is a well-known issue in the field Balloccu et al. (2024); Magar and Schwartz (2022); Sainz et al. (2023); Jacovi et al. (2023), model builders frequently take great pains to minimize its likelihood. For example, it is common to remove all training data with too high an n-gram overlap with benchmark data Brown et al. (2020). Embedding-similarity methods go further, attempting to remove all data that is too close to the benchmark in embedding space Shi et al. (2024). More recently, Srivastava et al. (2024) propose functional evaluations, where benchmarks are written as functions that can generate an infinite number of specific evaluation datapoints, each with slightly different numbers. In this setup, whenever a language model is evaluated, the functional evaluation generates a specific problem instance, which is then never used again. This reduces the worry of data contamination by ensuring that no datapoint is ever used twice. Like ours, their results indicate that LLMs may be severely overfit on benchmark data. The main advantage of our approach over a purely function-based evaluation is coverage: functional evaluations can only generate a tiny portion of the full problem space, since they produce variations of the same problem with slightly different numerical values.
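For concreteness, the n-gram filtering described above can be sketched as follows (a minimal illustration, not any particular lab's actual pipeline; the default n = 13 follows the GPT-3-era convention of Brown et al. (2020)):

```python
def ngrams(text: str, n: int):
    """Set of word-level n-grams in a text."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(doc: str, benchmark_docs, n: int = 13) -> bool:
    """Flag a training document that shares any word-level n-gram with a
    benchmark item. Real pipelines also normalize casing and punctuation
    and tune n; this is a bare-bones sketch."""
    bench = set()
    for b in benchmark_docs:
        bench |= ngrams(b, n)
    return bool(ngrams(doc, n) & bench)
```

A smaller n makes the filter more aggressive at the cost of more false positives; embedding-based filters catch paraphrases that no fixed n-gram size can.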

3 GSM1k


GSM1k consists of 1250 problems requiring only elementary mathematical reasoning to solve. We created GSM1k using human annotators sourced by Scale AI. Annotators were prompted with 3 example GSM8k problems and asked to produce novel problems of a similar difficulty level. The precise instructions and UI given to the annotators are available in Appendix A. All problem annotators were instructed to create problems solvable with only basic arithmetic (addition, subtraction, multiplication, and division) and requiring no advanced math concepts. As is the case with GSM8k, all problem solutions are positive integers (GSM8k has a few problems, likely errors, for which this is not the case). No language models were used in the process of constructing this dataset.

To prevent data contamination concerns with GSM1k, we will not be releasing the dataset publicly at this time. However, we commit to releasing the full GSM1k dataset when at least one of the following two conditions is met, whichever comes first: 1) three open-source models with different pre-trained foundational model lineages reach 95% accuracy on GSM1k, or 2) the end of 2025. At that point, we believe that grade school mathematics will likely no longer be difficult enough to materially benchmark model releases, and we commit to releasing all data into the public domain under the MIT license. Additionally, evaluating proprietary models required sending over the dataset via API. Our belief is that model providers typically do not use such datapoints for model training. Nevertheless, in case GSM1k data is leaked through such means, we also hold out a small number of data points that passed all quality checks but do not appear in the final GSM1k dataset. This data will also be released alongside GSM1k upon final release. We encourage future benchmarks to follow a similar pattern: not released publicly lest they be gamed, but precommitted to release at a future date or upon a future condition. As part of this release, we will also open-source our evaluation framework, which is based on a fork of the LM Evaluation Harness by EleutherAI Gao et al. (2023a).

Finally, while we undertook extensive efforts to ensure maximum similarity between GSM8k and GSM1k, these results only approximate an ideal world in which the GSM8k test set had never been publicly released and used for evaluations. We recommend reading all results with the understanding that, despite our efforts below, GSM8k and GSM1k are highly similar but not identically distributed.

3.1 Quality Checks

All questions passed through a total of 3 review layers. After initial creation, each task was manually reviewed by a subset of trusted annotators selected for strong past performance. These reviewers checked for correctness, for problems containing only grade school level math, and for proper formatting. To ensure that questions were answered correctly, we added a second review layer in which an independent set of data annotators solved each question without seeing the intended solution. If this second solve produced a different answer from the initial one, we discarded the problem. Finally, all problems were reviewed by a special team within Scale responsible for conducting general quality audits for data production. Out of 2108 initial problems, 1419 passed the second-solve stage and 1375 passed the general quality audit.

3.2 Matching the Difficulty Distribution of GSM8k

One important axis of recreating a benchmark is ensuring that new problems have a comparable difficulty to the original benchmark. To construct problems of difficulty N, we asked annotators to construct problems with N required resolution steps and prompted them with 3 examples from GSM8k with estimated difficulty N. The distribution of difficulties requested from annotators matched the estimated distribution in GSM8k. Difficulty is tricky to measure precisely, so we estimated it by the number of operations needed to solve the problem, extracted programmatically by counting the number of "calculator" tags in the problem solution. However, as not all problem solutions were formatted consistently, this is only a rough proxy for actual difficulty. Additionally, the number of resolution steps in a problem does not necessarily correlate directly with the true level of problem difficulty.
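Since GSM8k solutions annotate each arithmetic step with a calculator tag of the form `<<expression=result>>`, the difficulty estimate described above can be approximated as follows (a minimal sketch; the paper's exact extraction code is not given):

```python
import re

def estimated_difficulty(solution: str) -> int:
    """Estimate problem difficulty as the number of arithmetic steps,
    approximated by counting calculator annotations of the form
    <<expression=result>> in a GSM8k-style solution string."""
    return len(re.findall(r"<<[^>]*>>", solution))

# Example GSM8k-style solution with two annotated steps:
sol = ("He writes 3*2=<<3*2=6>>6 pages per session. "
       "Over 104 sessions that is 6*104=<<6*104=624>>624 pages.")
assert estimated_difficulty(sol) == 2
```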

Past work has found that LLMs struggle with problems involving larger numbers Gao et al. (2023b), even when they can solve otherwise identical problems with smaller numbers. To remove this potential confounding variable, our final processing step discards candidate problems from GSM1k so that the answer magnitude distributions of GSM8k and GSM1k are as similar as possible. This selection process is described in Figure 4. GSM1k consists of the 1250 problems that survive this final winnowing.
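One simple way to implement such winnowing is to bucket answers by order of magnitude and subsample candidates to match the reference proportions. This is our sketch under that bucketing assumption; the paper does not specify its exact matching procedure:

```python
import math
import random
from collections import Counter

def order_of_magnitude(answer: int) -> int:
    """Bucket an answer by order of magnitude (0 for 1-9, 1 for 10-99, ...)."""
    return int(math.log10(max(answer, 1)))

def match_magnitude_distribution(reference_answers, candidates, k, seed=0):
    """Subsample k candidate (problem, answer) pairs so that the answer
    magnitude distribution approximates that of the reference benchmark.
    Illustrative sketch only."""
    ref = Counter(order_of_magnitude(a) for a in reference_answers)
    total = sum(ref.values())
    rng = random.Random(seed)
    chosen = []
    for bucket, count in ref.items():
        pool = [c for c in candidates if order_of_magnitude(c[1]) == bucket]
        quota = round(k * count / total)
        chosen.extend(rng.sample(pool, min(quota, len(pool))))
    return chosen
```

In practice one would also verify the resulting distributions with a two-sample test rather than trust the bucket quotas alone.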


Additionally, we run several checks to ensure that our efforts to match benchmark difficulty were successful.

3.2.1 Human Differentiation Rates

The first test we ran was human distinguishability. We presented human crowdworkers with a set of five questions, four randomly selected from the original GSM8k dataset and one from the newly created GSM1k dataset, and rewarded them for finding the odd one out. In an audit conducted using 19 crowdworkers who were not involved in the problem creation process, crowdworkers correctly identified the lone GSM1k example 21.83% of the time over 1205 attempts (pure chance is 20%). Separately, we also tested several paper authors who had not yet seen the data; they were likewise unable to perform much better than random. This suggests minimal differences between GSM8k and GSM1k, at least as measured by the human eye.
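For intuition, a quick significance check of the 21.83% identification rate against the 20% chance level can be sketched with a normal approximation to a one-sided binomial test (our illustration; the paper does not report a formal test):

```python
import math

def one_sided_binom_p(successes: int, n: int, p0: float) -> float:
    """Normal approximation to a one-sided binomial test: probability of
    seeing at least `successes` hits in `n` trials if the true rate is p0."""
    mean = n * p0
    sd = math.sqrt(n * p0 * (1 - p0))
    z = (successes - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))  # standard normal survival fn

# 21.83% of 1205 attempts is about 263 correct identifications; chance is 1/5
p = one_sided_binom_p(263, 1205, 0.20)
```

The resulting p-value is on the order of 0.06, consistent with the reading that annotators could not reliably tell the two datasets apart.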

3.2.2 Human Solve Rates

To ensure similar solve rates, we also asked crowdworkers to solve questions under time pressure. 14 crowdworkers who had not participated in the problem creation process attempted to solve as many GSM8k problems as they could in 15 minutes and were rewarded according to the number of correct solutions. We repeated this exact setup for GSM1k. Crowdworkers solved an average of 4.07 ± 0.93 problems on GSM8k and 4.36 ± 1.11 problems on GSM1k, where the error terms are standard deviations. This suggests that GSM1k is comparable in difficulty to GSM8k, and perhaps even slightly easier. As such, substantial decreases in model accuracy on GSM1k compared to GSM8k are likely not explainable by differences in dataset difficulty.

3.2.3 LLM Solve Rates

Finally, we sanity-check our results by measuring the solve rates of several models known not to be contaminated by GSM8k because they were released before the publication of the GSM8k dataset. Due to the relative scarcity of LLMs trained only on pre-2021 data, we evaluate only GPT-NeoX-20B Black et al. (2022) and GPT-2 Radford et al. (2019). For these two language models, we find minimal difference between their solve rates on GSM8k and GSM1k (Figure 14).


4 Results


To evaluate models, we use a fork of EleutherAI's LM Evaluation Harness with the default settings. Both GSM8k and GSM1k questions are run with the same prompt, which uses 5 randomly drawn examples from the GSM8k train set, as is standard in the field. The full prompt is provided in Appendix B. All open-source models are evaluated at temperature 0 for reproducibility. The LM Evaluation Harness extracts the last numeric answer in the response and compares it to the correct answer; model responses that produce the "correct" answer in a format that does not match the examples are thus marked incorrect. For open-source models, we use vLLM to speed up inference when a model is compatible with the library; otherwise, we default to inference using standard HuggingFace libraries. Closed-source models were queried through the LiteLLM library, which unifies the API call format for all proprietary models evaluated. All API model results are from queries made between April 16 and April 28, 2024, using default settings.
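The last-number extraction convention can be sketched as follows (illustrative only; the harness's actual answer-matching regex may differ):

```python
import re

def extract_final_answer(response: str):
    """Take the last numeric token in a model response as its answer,
    mirroring the harness convention described above. Returns None if
    the response contains no number."""
    matches = re.findall(r"-?\d[\d,]*(?:\.\d+)?", response)
    if not matches:
        return None
    return matches[-1].replace(",", "")  # strip thousands separators

resp = "He writes 6 pages twice a week, 12 per week, so the answer is 624."
assert extract_final_answer(resp) == "624"
```

This convention is why a correct answer embedded in an unexpected format (for example, followed by further numbers in a postscript) can be scored as incorrect.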

As model benchmark performance is highly dependent on the choice of prompt and evaluation setting, our reported GSM8k numbers may occasionally fall below reported model benchmark numbers, since we use a standardized setting for all models rather than the prompt that maximizes each individual model's performance. For completeness, we also report results with an alternative prompting format that uses non-GSM8k examples as the n-shot examples in Appendix C. Nevertheless, since we focus primarily on the difference between a model's performance on GSM1k and GSM8k while holding the evaluation strategy fixed, we believe the above setup is a fair comparison for all models. We will release the full evaluation code for reproducibility.


We selected models to evaluate based on popularity. Additionally, we evaluated several lesser-known models that sit near the top of the OpenLLM Leaderboard and discovered evidence of Goodhart's law: many of these models perform substantially worse on GSM1k, suggesting that they are primarily gaming the GSM8k benchmark rather than improving model reasoning capabilities. The full set of results, including the performance table for all models, can be found in Appendix D.

5 Analysis


The interpretation of evaluation results, like the interpretation of dreams, is often a very subjective endeavor. While we report our objective results in Section 4 and Appendix D, here we describe four major takeaways from interpreting the results in a more subjective manner.

5.1 Lesson 1: Some Model Families are Systematically Overfit

While it is often difficult to draw conclusions from single data points or model releases, examining a family of models and observing a pattern of overfitting lets us make more definitive statements. Several families of models, most notably the Phi and Mistral families, show a systematic tendency to perform stronger on GSM8k than on GSM1k at almost every release and scale. Other model families, such as Yi, Xwin, Gemma, and CodeLlama, also show this pattern to a lesser extent.

5.2 Lesson 2: Other Models, Especially Frontier Models, Show No Signs of Overfitting

Nevertheless, we find that many models, across all regions of performance, show minimal signs of being overfit. In particular, all frontier or close-to-frontier models (including the proprietary Mistral Large) appear to perform similarly on both GSM8k and GSM1k. We posit two potential hypotheses: 1) frontier models have sufficiently advanced reasoning capability that they can generalize to new problems even if GSM8k problems appear in their training sets; 2) frontier model builders may be more careful about data contamination.

While it is impossible to know for certain without inspecting each model's training set, one piece of evidence in favor of the former is that Mistral Large is the only model in the Mistral family to show no signs of overfitting. Since it seems unlikely that Mistral took unique care to keep only its largest model free from data contamination, we lean instead towards the hypothesis that sufficiently strong LLMs learn elementary reasoning ability during training. If a model learns strong enough reasoning capabilities to solve problems of a given difficulty, it will be able to generalize to new problems even if GSM8k has appeared in its training set.

5.3 Lesson 3: Overfit Models Are Still Capable of Reasoning

One worry about model overfitting is that models are incapable of reasoning and are merely memorizing answers seen in the training data. Our results do not support this conjecture. The fact that a model is overfit does not mean it is poor at reasoning, merely that it is not as good as the benchmarks suggest. In fact, we find that many of the most overfit models are still capable of reasoning and solving novel problems. For example, while Phi-3 shows an almost 10% drop in accuracy between GSM8k and GSM1k, it still correctly solves over 68% of GSM1k problems, which are certain not to have appeared in its training distribution. This performance is similar to that of much larger models such as dbrx-instruct, which has almost 35x as many parameters. Similarly, Mistral models remain some of the strongest open-source models even after accounting for their overfitting. This provides additional evidence for our lesson that sufficiently strong models learn elementary reasoning, even if benchmark data accidentally leaked into the training distribution, as is likely the case for the most overfit models.

5.4 Lesson 4: Data Contamination Is Likely Not The Full Explanation for Overfitting

A priori, a natural hypothesis is that the primary cause of overfitting is data contamination, e.g. that the test set was leaked during the pre-training or instruction fine-tuning stages of model creation. Previous work has suggested that models assign higher log-likelihoods to data they have seen during training Carlini et al. (2023). We test the hypothesis that data contamination causes overfitting by measuring each model's probability of generating an example from the GSM8k test set and comparing it to how overfit the model is on GSM8k relative to GSM1k, under the assumption that a model's probability of generating the GSM8k test set is a proxy for whether the sequence appeared in its training set. We normalize by c, the number of characters in the sequence, to make the log-likelihood calculations comparable across sequences and across models with different tokenizers. Formally, we compute:

(1/c) ∑_i log p(x_i | x_<i)    (1)

with c being the number of characters in the sequence. Figure 8 plots this quantity against the gap between GSM8k and GSM1k performance, and we indeed find a positive relationship between the two values. We observe a Spearman's rank correlation of 0.32 between the per-character log-likelihood of generating GSM8k and the performance gap between GSM8k and GSM1k (p = 0.03), and the fit suggests that every percentage point of difference between GSM8k and GSM1k performance is associated with an increase of 7.9 × 10⁻³ in per-character log-likelihood. This result suggests that some of the overfitting is due to partial memorization of the test set. For completeness, we also report the standard Pearson r² = 0.15 and Kendall's τ of 0.28, but note that Pearson r² is not the ideal metric here because the curve of best fit does not appear linear.
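Equation 1 is straightforward to compute from per-token log-probabilities; the sketch below uses hypothetical token probabilities for a toy string (any causal-LM scoring API would supply the real values):

```python
import math

def per_char_loglikelihood(token_logprobs, text: str) -> float:
    """Eq. 1: total sequence log-likelihood, sum_i log p(x_i | x_<i),
    normalized by the character count c = len(text) so that scores are
    comparable across models with different tokenizers."""
    return sum(token_logprobs) / len(text)

# Toy example: a 10-character string tokenized into two tokens that some
# model assigned probabilities 0.5 and 0.25 (hypothetical values).
score = per_char_loglikelihood([math.log(0.5), math.log(0.25)], "ten chars!")
```

Normalizing by characters rather than tokens matters because two models can tokenize the same string into very different numbers of tokens, which would otherwise make their per-token averages incomparable.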

Nevertheless, data contamination is likely not the full story. We observe this via the presence of several outliers, which cause the r² = 0.32 value to be relatively low. Examining these outliers carefully reveals that the model with the lowest per-character log-likelihood (Mixtral-8x22b) and the model with the highest per-character log-likelihood (Mixtral-8x22b-Instruct) are not only variations of the same model but also have similar levels of overfit Jiang et al. (2024). Perhaps more intriguingly, the most overfit model we discovered (Math-Shepherd-Mistral-7B-RL Yu et al. (2023)) had a relatively low per-character log-likelihood. Math-Shepherd trains a reward model on process-level data using synthetic data. We therefore hypothesize that the reward modelling process may have leaked information about correct reasoning chains for GSM8k even if the problems themselves never appeared in the dataset. Finally, we observe that the Llemma models Azerbayev et al. (2024) have both high log-likelihoods and minimal overfit. Since these models are open-source and their training data is known, several GSM8k problem instances are known to appear in the training corpus, as the authors describe in their paper. Nevertheless, the authors find (and our study supports) that these few instances do not lead to serious overfitting. The existence of these outliers suggests that overfitting on GSM8k is not purely due to data contamination, but may instead arise through other, indirect means, such as model builders collecting training data similar in nature to benchmarks, or selecting final model checkpoints based on benchmark performance, even if the model never saw the GSM8k dataset during training. Conversely, the reverse is also true: small amounts of data contamination do not necessarily lead to overfitting.

6 Discussion

We create GSM1k, a novel dataset designed to measure LLM overfitting on GSM8k. When benchmarking leading open- and closed-source models, we find substantial evidence that many models have been contaminated by benchmark data, with performance drops of up to 13% accuracy. Additionally, we find that several families of models, most notably the Mistral and Phi families, show consistent overfitting across almost all model sizes and versions. An extended analysis reveals a positive relationship between a model's likelihood of generating data points from GSM8k and its performance difference between GSM8k and GSM1k, suggesting data contamination as one of the underlying causes. Nevertheless, frontier models exhibit little to no evidence of overfitting, and many models, even in the most heavily overfit families, show strong signs of generalizable mathematical reasoning.

7 Acknowledgements

We would like to thank Dan Hendrycks, Adi Ganesh, Akilesh Praveen, Andrea Jaba, Charlotte Zhuang, Will Zhou and Celia Chen for their helpful comments and suggestions.


  • Abdin etal. [2024]Marah Abdin, SamAde Jacobs, AmmarAhmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio CésarTeodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, Allie DelGiorno, Gustavo deRosa, Matthew Dixon, Ronen Eldan, Dan Iter, Amit Garg, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, RussellJ. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, JamesR. Lee, YinTat Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, sh*tal Shah, Ning Shang, Hiteshi Sharma, Xia Song, Masahiro Tanaka, Xin Wang, RachelWard, Guanhua Wang, Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, Ziyi Yang, Donghan Yu, Chengruidong Zhang, Cyril Zhang, Jianwen Zhang, LiLyna Zhang, YiZhang, Yue Zhang, Yunan Zhang, and Xiren Zhou.Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone, April 2024.URL http://arxiv.org/abs/2404.14219.arXiv:2404.14219 [cs].
  • Azerbayev etal. [2024]Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, MarcoDos Santos, Stephen McAleer, AlbertQ. Jiang, Jia Deng, Stella Biderman, and Sean Welleck.Llemma: An Open Language Model For Mathematics, March 2024.URL http://arxiv.org/abs/2310.10631.arXiv:2310.10631 [cs].
  • Balloccu etal. [2024]Simone Balloccu, Patrícia Schmidtová, Mateusz Lango, and Ondřej Dušek.Leak, Cheat, Repeat: Data Contamination and Evaluation Malpractices in Closed-Source LLMs, February 2024.URL http://arxiv.org/abs/2402.03927.arXiv:2402.03927 [cs].
  • Black etal. [2022]Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSNSai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach.GPT-NeoX-20B: An Open-Source Autoregressive Language Model, April 2022.URL http://arxiv.org/abs/2204.06745.arXiv:2204.06745 [cs].
  • Brown etal. [2020]Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, JaredD Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.Language models are few-shot learners.In H.Larochelle, M.Ranzato, R.Hadsell, M.F. Balcan, and H.Lin, editors, Advances in Neural Information Processing Systems, volume33, pages 1877–1901. Curran Associates, Inc., 2020.URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
  • Carlini etal. [2023]Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang.Quantifying Memorization Across Neural Language Models, March 2023.URL http://arxiv.org/abs/2202.07646.arXiv:2202.07646 [cs].
  • Chen etal. [2021]Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde deOliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, FelipePetroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, WilliamHebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, AndrewN. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba.Evaluating Large Language Models Trained on Code, July 2021.URL http://arxiv.org/abs/2107.03374.arXiv:2107.03374 [cs].
  • Cobbe etal. [2021]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman.Training Verifiers to Solve Math Word Problems, November 2021.URL http://arxiv.org/abs/2110.14168.arXiv:2110.14168 [cs].
  • Gao etal. [2023a]Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain LeNoac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou.A framework for few-shot language model evaluation, December 2023a.URL https://zenodo.org/records/10256836.tex.version: v0.4.0.
  • Gao etal. [2023b]Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig.PAL: Program-aided Language Models, January 2023b.URL http://arxiv.org/abs/2211.10435.arXiv:2211.10435 [cs].
  • Gunasekar etal. [2023]Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li.Textbooks Are All You Need, October 2023.URL http://arxiv.org/abs/2306.11644.arXiv:2306.11644 [cs].
  • Hendrycks etal. [2021a]Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt.Measuring Massive Multitask Language Understanding, January 2021a.URL http://arxiv.org/abs/2009.03300.arXiv:2009.03300 [cs].
  • Hendrycks etal. [2021b]Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt.Measuring Mathematical Problem Solving with the MATH Dataset.NeurIPS, 2021b.
  • Jacovi etal. [2023]Alon Jacovi, Avi Caciularu, Omer Goldman, and Yoav Goldberg.Stop Uploading Test Data in Plain Text: Practical Strategies for Mitigating Data Contamination by Evaluation Benchmarks.In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5075–5084, Singapore, December 2023. Association for Computational Linguistics.doi: 10.18653/v1/2023.emnlp-main.308.URL https://aclanthology.org/2023.emnlp-main.308.
  • Jiang etal. [2023]AlbertQ. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, DevendraSingh Chaplot, Diego delas Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, LélioRenard Lavaud, Marie-Anne Lachaux, Pierre Stock, TevenLe Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and WilliamEl Sayed.Mistral 7B, October 2023.URL http://arxiv.org/abs/2310.06825.arXiv:2310.06825 [cs].
  • Jiang etal. [2024]AlbertQ. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, DevendraSingh Chaplot, Diego delas Casas, EmmaBou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, LélioRenard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, TevenLe Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and WilliamEl Sayed.Mixtral of Experts, January 2024.URL http://arxiv.org/abs/2401.04088.arXiv:2401.04088 [cs].
  • Jimenez etal. [2024]CarlosE. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik Narasimhan.SWE-bench: Can Language Models Resolve Real-World GitHub Issues?, April 2024.URL http://arxiv.org/abs/2310.06770.arXiv:2310.06770 [cs].
  • Magar and Schwartz [2022]Inbal Magar and Roy Schwartz.Data Contamination: From Memorization to Exploitation.In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 157–165, Dublin, Ireland, May 2022. Association for Computational Linguistics.doi: 10.18653/v1/2022.acl-short.18.URL https://aclanthology.org/2022.acl-short.18.
  • OpenAI etal. [2024]OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, FlorenciaLeoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, HyungWon Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, SimónPosada Fishman, Juston Forte, Isabella Fulford, LeoGao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, ShixiangShane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, NitishShirish Keskar, Tabarak Khan, Logan Kilpatrick, JongWook Kim, Christina Kim, Yongjik Kim, JanHendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, ChakMing Li, Rachel Lim, Molly 
Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, RyanLowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, ScottMayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de AvilaBelbute Peres, Michael Petrov, Henrique Ponde deOliveira Pinto, Michael, Pokorny, Michelle Pokrass, VitchyrH. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez,Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, FelipePetroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, MadeleineB. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan FelipeCerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, JustinJay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, C.J. 
Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, ShengjiaZhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph.GPT-4 Technical Report, March 2024.URL http://arxiv.org/abs/2303.08774.arXiv:2303.08774 [cs].
  • Radford etal. [2019]Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.Language Models are Unsupervised Multitask Learners.OpenAI Blog, 2019.
  • Recht etal. [2019]Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar.Do ImageNet Classifiers Generalize to ImageNet?, June 2019.URL http://arxiv.org/abs/1902.10811.arXiv:1902.10811 [cs, stat].
  • Rein etal. [2023]David Rein, BettyLi Hou, AsaCooper Stickland, Jackson Petty, RichardYuanzhe Pang, Julien Dirani, Julian Michael, and SamuelR. Bowman.GPQA: A Graduate-Level Google-Proof Q&A Benchmark, November 2023.URL https://arxiv.org/abs/2311.12022v1.
  • Sainz etal. [2023]Oscar Sainz, Jon Campos, Iker García-Ferrero, Julen Etxaniz, OierLopez deLacalle, and Eneko Agirre.NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark.In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10776–10787, Singapore, December 2023. Association for Computational Linguistics.doi: 10.18653/v1/2023.findings-emnlp.722.URL https://aclanthology.org/2023.findings-emnlp.722.
  • Shi etal. [2024]Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer.Detecting Pretraining Data from Large Language Models, March 2024.URL http://arxiv.org/abs/2310.16789.arXiv:2310.16789 [cs].
  • Srivastava etal. [2024]Saurabh Srivastava, AnnaroseM. B, Anto PV, Shashank Menon, Ajay Sukumar, AdwaithSamod T, Alan Philipose, Stevin Prince, and Sooraj Thomas.Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap, February 2024.URL http://arxiv.org/abs/2402.19450.arXiv:2402.19450 [cs].
  • Team etal. [2024]Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, AndrewM. Dai, Anja Hauth, Katie Millican, David Silver, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy Lillicrap, Angeliki Lazaridou, Orhan Firat, James Molloy, Michael Isard, PaulR. Barham, Tom Hennigan, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, Ryan Doherty, Eli Collins, Clemens Meyer, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Jack Krawczyk, Cosmo Du, EdChi, Heng-Tze Cheng, Eric Ni, Purvi Shah, Patrick Kane, Betty Chan, Manaal Faruqui, Aliaksei Severyn, Hanzhao Lin, YaGuang Li, Yong Cheng, Abe Ittycheriah, Mahdis Mahdieh, Mia Chen, Pei Sun, Dustin Tran, Sumit Bagri, Balaji Lakshminarayanan, Jeremiah Liu, Andras Orban, Fabian Güra, Hao Zhou, Xinying Song, Aurelien Boffy, Harish Ganapathy, Steven Zheng, HyunJeong Choe, Ágoston Weisz, Tao Zhu, Yifeng Lu, Siddharth Gopal, Jarrod Kahn, Maciej Kula, JeffPitman, Rushin Shah, Emanuel Taropa, MajdAl Merey, Martin Baeuml, Zhifeng Chen, LaurentEl Shafey, Yujing Zhang, Olcan Sercinoglu, George Tucker, Enrique Piqueras, Maxim Krikun, Iain Barr, Nikolay Savinov, Ivo Danihelka, Becca Roelofs, Anaïs White, Anders Andreassen, Tamara von Glehn, Lakshman Yagati, Mehran Kazemi, Lucas Gonzalez, Misha Khalman, Jakub Sygnowski, Alexandre Frechette, Charlotte Smith, Laura Culp, Lev Proleev, YiLuan, XiChen, James Lottes, Nathan Schucher, Federico Lebron, Alban Rrustemi, Natalie Clay, Phil Crone, Tomas Kocisky, Jeffrey Zhao, Bartek Perz, Dian Yu, Heidi Howard, Adam Bloniarz, JackW. 
Rae, Han Lu, Laurent Sifre, Marcello Maggioni, Fred Alcober, Dan Garrette, Megan Barnes, Shantanu Thakoor, Jacob Austin, Gabriel Barth-Maron, William Wong, Rishabh Joshi, Rahma Chaabouni, Deeni Fatiha, Arun Ahuja, GauravSingh Tomar, Evan Senter, Martin Chadwick, Ilya Kornakov, Nithya Attaluri, Iñaki Iturrate, Ruibo Liu, Yunxuan Li, Sarah Cogan, Jeremy Chen, Chao Jia, Chenjie Gu, Qiao Zhang,Jordan Grimstad, AleJakse Hartman, Xavier Garcia, ThanumalayanSankaranarayana Pillai, Jacob Devlin, Michael Laskin, Diego deLas Casas, Dasha Valter, Connie Tao, Lorenzo Blanco, AdriàPuigdomènech Badia, David Reitter, Mianna Chen, Jenny Brennan, Clara Rivera, Sergey Brin, Shariq Iqbal, Gabriela Surita, Jane Labanowski, Abhi Rao, Stephanie Winkler, Emilio Parisotto, Yiming Gu, Kate Olszewska, Ravi Addanki, Antoine Miech, Annie Louis, Denis Teplyashin, Geoff Brown, Elliot Catt, Jan Balaguer, Jackie Xiang, Pidong Wang, Zoe Ashwood, Anton Briukhov, Albert Webson, Sanjay Ganapathy, Smit Sanghavi, Ajay Kannan, Ming-Wei Chang, Axel Stjerngren, Josip Djolonga, Yuting Sun, Ankur Bapna, Matthew Aitchison, Pedram Pejman, Henryk Michalewski, Tianhe Yu, Cindy Wang, Juliette Love, Junwhan Ahn, Dawn Bloxwich, Kehang Han, Peter Humphreys, Thibault Sellam, James Bradbury, Varun Godbole, Sina Samangooei, Bogdan Damoc, Alex Kaskasoli, Sébastien M.R. 
Arnold, Vijay Vasudevan, Shubham Agrawal, Jason Riesa, DmitryLepikhin, Richard Tanburn, Srivatsan Srinivasan, Hyeontaek Lim, Sarah Hodkinson, Pranav Shyam, Johan Ferret, Steven Hand, Ankush Garg, TomLe Paine, Jian Li, Yujia Li, Minh Giang, Alexander Neitz, Zaheer Abbas, Sarah York, Machel Reid, Elizabeth Cole, Aakanksha Chowdhery, Dipanjan Das, Dominika Rogozińska, Vitaliy Nikolaev, Pablo Sprechmann, Zachary Nado, Lukas Zilka, Flavien Prost, Luheng He, Marianne Monteiro, Gaurav Mishra, Chris Welty, Josh Newlan, Dawei Jia, Miltiadis Allamanis, ClaraHuiyi Hu, Raoul deLiedekerke, Justin Gilmer, Carl Saroufim, Shruti Rijhwani, Shaobo Hou, Disha Shrivastava, Anirudh Baddepudi, Alex Goldin, Adnan Ozturel, Albin Cassirer, Yunhan Xu, Daniel Sohn, Devendra Sachan, ReinaldKim Amplayo, Craig Swanson, Dessie Petrova, Shashi Narayan, Arthur Guez, Siddhartha Brahma, Jessica Landon, Miteyan Patel, Ruizhe Zhao, Kevin Villela, Luyu Wang, Wenhao Jia, Matthew Rahtz, Mai Giménez, Legg Yeung, James Keeling, Petko Georgiev, Diana Mincu, Boxi Wu, Salem Haykal, Rachel Saputro, KiranVodrahalli, James Qin, Zeynep Cankara, Abhanshu Sharma, Nick Fernando, Will Hawkins, Behnam Neyshabur, Solomon Kim, Adrian Hutter, Priyanka Agrawal, Alex Castro-Ros, George vanden Driessche, Tao Wang, Fan Yang, Shuo-yiin Chang, Paul Komarek, Ross McIlroy, Mario Lučić, Guodong Zhang, Wael Farhan, Michael Sharman, Paul Natsev, Paul Michel, Yamini Bansal, Siyuan Qiao, Kris Cao, Siamak Shakeri, Christina Butterfield, Justin Chung, PaulKishan Rubenstein, Shivani Agrawal, Arthur Mensch, Kedar Soparkar, Karel Lenc, Timothy Chung, Aedan Pope, Loren Maggiore, Jackie Kay, Priya Jhakra, Shibo Wang, Joshua Maynez, Mary Phuong, Taylor Tobin, Andrea Tacchetti, Maja Trebacz, Kevin Robinson, Yash Katariya, Sebastian Riedel, Paige Bailey, Kefan Xiao, Nimesh Ghelani, Lora Aroyo, Ambrose Slone, Neil Houlsby, Xuehan Xiong, Zhen Yang, Elena Gribovskaya, Jonas Adler, Mateo Wirth, Lisa Lee, Music Li, Thais Kagohara, Jay Pavagadhi, 
Sophie Bridgers, Anna Bortsova, Sanjay Ghemawat, Zafarali Ahmed, Tianqi Liu, Richard Powell,Vijay Bolina, Mariko Iinuma, Polina Zablotskaia, James Besley, Da-Woon Chung, Timothy Dozat, Ramona Comanescu, Xiance Si, Jeremy Greer, Guolong Su, Martin Polacek, RaphaëlLopez Kaufman, Simon Tokumine, Hexiang Hu, Elena Buchatskaya, Yingjie Miao, Mohamed Elhawaty, Aditya Siddhant, Nenad Tomasev, Jinwei Xing, Christina Greer, Helen Miller, Shereen Ashraf, Aurko Roy, Zizhao Zhang, Ada Ma, Angelos Filos, Milos Besta, Rory Blevins, Ted Klimenko, Chih-Kuan Yeh, Soravit Changpinyo, Jiaqi Mu, Oscar Chang, Mantas Pajarskas, Carrie Muir, Vered Cohen, CharlineLe Lan, Krishna Haridasan, Amit Marathe, Steven Hansen, Sholto Douglas, Rajkumar Samuel, Mingqiu Wang, Sophia Austin, Chang Lan, Jiepu Jiang, Justin Chiu, JaimeAlonso Lorenzo, LarsLowe Sjösund, Sébastien Cevey, Zach Gleicher, Thi Avrahami, Anudhyan Boral, Hansa Srinivasan, Vittorio Selo, Rhys May, Konstantinos Aisopos, Léonard Hussenot, LivioBaldini Soares, Kate Baumli, MichaelB. 
Chang, Adrià Recasens, Ben Caine, Alexander Pritzel, Filip Pavetic,Fabio Pardo, Anita Gergely, Justin Frye, Vinay Ramasesh, Dan Horgan, Kartikeya Badola, Nora Kassner, Subhrajit Roy, Ethan Dyer, VíctorCampos Campos, Alex Tomala, Yunhao Tang, DaliaEl Badawy, Elspeth White, Basil Mustafa, Oran Lang, Abhishek Jindal, Sharad Vikram, Zhitao Gong, Sergi Caelles, Ross Hemsley, Gregory Thornton, Fangxiaoyu Feng, Wojciech Stokowiec, CeZheng, Phoebe Thacker, Çağlar Ünlü, Zhishuai Zhang, Mohammad Saleh, James Svensson, Max Bileschi, Piyush Patil, Ankesh Anand, Roman Ring, Katerina Tsihlas, Arpi Vezer, Marco Selvi, Toby Shevlane, Mikel Rodriguez, Tom Kwiatkowski, Samira Daruki, Keran Rong, Allan Dafoe, Nicholas FitzGerald, Keren Gu-Lemberg, Mina Khan, LisaAnne Hendricks, Marie Pellat, Vladimir Feinberg, James Cobon-Kerr, Tara Sainath, Maribeth Rauh, SayedHadi Hashemi, Richard Ives, Yana Hasson, Eric Noland, Yuan Cao, Nathan Byrd, LeHou, Qingze Wang, Thibault Sottiaux, Michela Paganini, Jean-Baptiste Lespiau, Alexandre Moufarek, Samer Hassan, Kaushik Shivakumar, Joost vanAmersfoort, Amol Mandhane, Pratik Joshi, Anirudh Goyal, Matthew Tung, Andrew Brock, Hannah Sheahan, Vedant Misra, Cheng Li, Nemanja Rakićević, Mostafa Dehghani, Fangyu Liu, Sid Mittal, Junhyuk Oh, Seb Noury, Eren Sezener, Fantine Huot, Matthew Lamm, Nicola DeCao, Charlie Chen, Sidharth Mudgal, Romina Stella, Kevin Brooks, Gautam Vasudevan, Chenxi Liu, Mainak Chain, Nivedita Melinkeri, Aaron Cohen, Venus Wang, Kristie Seymore, Sergey Zubkov, Rahul Goel, Summer Yue, Sai Krishnakumaran, Brian Albert, Nate Hurley, Motoki Sano, Anhad Mohananey, Jonah Joughin, Egor Filonov, Tomasz Kępa, Yomna Eldawy, Jiawern Lim, Rahul Rishi, Shirin Badiezadegan, Taylor Bos, Jerry Chang, Sanil Jain, Sri GayatriSundara Padmanabhan, Subha Puttagunta, Kalpesh Krishna, Leslie Baker, Norbert Kalb, Vamsi Bedapudi, Adam Kurzrok, Shuntong Lei, Anthony Yu, Oren Litvin, Xiang Zhou, Zhichun Wu, Sam Sobell, Andrea Siciliano, Alan Papir, Robby 
Neale, Jonas Bragagnolo, Tej Toor, Tina Chen, Valentin Anklin, Feiran Wang, Richie Feng, MiladGholami, Kevin Ling, Lijuan Liu, Jules Walter, Hamid Moghaddam, Arun Kishore, Jakub Adamek, Tyler Mercado, Jonathan Mallinson, Siddhinita Wandekar, Stephen Cagle, Eran Ofek, Guillermo Garrido, Clemens Lombriser, Maksim Mukha, Botu Sun, HafeezulRahman Mohammad, Josip Matak, Yadi Qian, Vikas Peswani, Pawel Janus, Quan Yuan, Leif Schelin, Oana David, Ankur Garg, Yifan He, Oleksii Duzhyi, Anton Älgmyr, Timothée Lottaz, QiLi, Vikas Yadav, Luyao Xu, Alex Chinien, Rakesh Shivanna, Aleksandr Chuklin, Josie Li, Carrie Spadine, Travis Wolfe, Kareem Mohamed, Subhabrata Das, Zihang Dai, Kyle He, Daniel von Dincklage, Shyam Upadhyay, Akanksha Maurya, Luyan Chi, Sebastian Krause, Khalid Salama, PamG. Rabinovitch, Pavan KumarReddy M, Aarush Selvan, Mikhail Dektiarev, Golnaz Ghiasi, Erdem Guven, Himanshu Gupta, Boyi Liu, Deepak Sharma, IdanHeimlich Shtacher, Shachi Paul, Oscar Akerlund, François-Xavier Aubet, Terry Huang, Chen Zhu, Eric Zhu, Elico Teixeira, Matthew Fritze, Francesco Bertolini, Liana-EleonoraMarinescu, Martin Bölle, Dominik Paulus, Khyatti Gupta, Tejasi Latkar, Max Chang, Jason Sanders, Roopa Wilson, Xuewei Wu, Yi-Xuan Tan, LamNguyen Thiet, Tulsee Doshi, Sid Lall, Swaroop Mishra, Wanming Chen, Thang Luong, Seth Benjamin, Jasmine Lee, Ewa Andrejczuk, Dominik Rabiej, Vipul Ranjan, Krzysztof Styrc, Pengcheng Yin, Jon Simon, MalcolmRose Harriott, Mudit Bansal, Alexei Robsky, Geoff Bacon, David Greene, Daniil Mirylenka, Chen Zhou, Obaid Sarvana, Abhimanyu Goyal, Samuel Andermatt, Patrick Siegler, Ben Horn, Assaf Israel, Francesco Pongetti, Chih-Wei"Louis" Chen, Marco Selvatici, Pedro Silva, Kathie Wang, Jackson Tolins, Kelvin Guu, Roey Yogev, Xiaochen Cai, Alessandro Agostini, Maulik Shah, Hung Nguyen, NoahÓ Donnaile, Sébastien Pereira, Linda Friso, Adam Stambler, Adam Kurzrok, Chenkai Kuang, Yan Romanikhin, Mark Geller, Z.J. 
Yan, Kane Jang, Cheng-Chun Lee, Wojciech Fica, Eric Malmi, Qijun Tan, Dan Banica, Daniel Balle, Ryan Pham, Yanping Huang, Diana Avram, Hongzhi Shi, Jasjot Singh, ChrisHidey, Niharika Ahuja, Pranab Saxena, Dan Dooley, SrividyaPranavi Potharaju, Eileen O’Neill, Anand Gokulchandran, Ryan Foley, Kai Zhao, Mike Dusenberry, Yuan Liu, Pulkit Mehta, Ragha Kotikalapudi, Chalence Safranek-Shrader, Andrew Goodman, Joshua Kessinger, Eran Globen, Prateek Kolhar, Chris Gorgolewski, Ali Ibrahim, Yang Song, Ali Eichenbaum, Thomas Brovelli, Sahitya Potluri, Preethi Lahoti, Cip Baetu, Ali Ghorbani, Charles Chen, Andy Crawford, Shalini Pal, Mukund Sridhar, Petru Gurita, Asier Mujika, Igor Petrovski, Pierre-Louis Cedoz, Chenmei Li, Shiyuan Chen, NiccolòDal Santo, Siddharth Goyal, Jitesh Punjabi, Karthik Kappaganthu, Chester Kwak, Pallavi LV, Sarmishta Velury, Himadri Choudhury, Jamie Hall, Premal Shah, Ricardo Figueira, Matt Thomas, Minjie Lu, Ting Zhou, Chintu Kumar, Thomas Jurdi, Sharat Chikkerur, Yenai Ma, Adams Yu, Soo Kwak, Victor Ähdel, Sujeevan Rajayogam, Travis Choma, Fei Liu, Aditya Barua, Colin Ji, JiHo Park, Vincent Hellendoorn, Alex Bailey, Taylan Bilal, Huanjie Zhou,Mehrdad Khatir, Charles Sutton, Wojciech Rzadkowski, Fiona Macintosh, Konstantin Shagin, Paul Medina, Chen Liang, Jinjing Zhou, Pararth Shah, Yingying Bi, Attila Dankovics, Shipra Banga, Sabine Lehmann, Marissa Bredesen, Zifan Lin, JohnEric Hoffmann, Jonathan Lai, Raynald Chung, Kai Yang, Nihal Balani, Arthur Bražinskas, Andrei Sozanschi, Matthew Hayes, HéctorFernández Alcalde, Peter Makarov, Will Chen, Antonio Stella, Liselotte Snijders, Michael Mandl, Ante Kärrman, Paweł Nowak, Xinyi Wu, Alex Dyck, Krishnan Vaidyanathan, Raghavender R, Jessica Mallet, Mitch Rudominer, Eric Johnston, Sushil Mittal, Akhil Udathu, Janara Christensen, Vishal Verma, Zach Irving, Andreas Santucci, Gamaleldin Elsayed, Elnaz Davoodi, Marin Georgiev, Ian Tenney, Nan Hua, Geoffrey Cideron, Edouard Leurent, Mahmoud Alnahlawi, Ionut 
Georgescu, Nan Wei, Ivy Zheng, Dylan Scandinaro, Heinrich Jiang, Jasper Snoek, Mukund Sundararajan, Xuezhi Wang, Zack Ontiveros, Itay Karo, Jeremy Cole, Vinu Rajashekhar, Lara Tumeh, Eyal Ben-David, Rishub Jain, Jonathan Uesato, Romina Datta, Oskar Bunyan, Shimu Wu, John Zhang, Piotr Stanczyk, Ye Zhang, David Steiner, Subhajit Naskar, Michael Azzam, Matthew Johnson, Adam Paszke, Chung-Cheng Chiu, Jaume Sanchez Elias, Afroz Mohiuddin, Faizan Muhammad, Jin Miao, Andrew Lee, Nino Vieillard, Jane Park, Jiageng Zhang, Jeff Stanway, Drew Garmon, Abhijit Karmarkar, Zhe Dong, Jong Lee, Aviral Kumar, Luowei Zhou, Jonathan Evens, William Isaac, Geoffrey Irving, Edward Loper, Michael Fink, Isha Arkatkar, Nanxin Chen, Izhak Shafran, Ivan Petrychenko, Zhe Chen, Johnson Jia, Anselm Levskaya, Zhenkai Zhu, Peter Grabowski, Yu Mao, Alberto Magni, Kaisheng Yao, Javier Snaider, Norman Casagrande, Evan Palmer, Paul Suganthan, Alfonso Castaño, Irene Giannoumis, Wooyeol Kim, Mikołaj Rybiński, Ashwin Sreevatsa, Jennifer Prendki, David Soergel, Adrian Goedeckemeyer, Willi Gierke, Mohsen Jafari, Meenu Gaba, Jeremy Wiesner, Diana Gage Wright, Yawen Wei, Harsha Vashisht, Yana Kulizhskaya, Jay Hoover, Maigo Le, Lu Li, Chimezie Iwuanyanwu, Lu Liu, Kevin Ramirez, Andrey Khorlin, Albert Cui, Tian LIN, Marcus Wu, Ricardo Aguilar, Keith Pallo, Abhishek Chakladar, Ginger Perng, Elena Allica Abellan, Mingyang Zhang, Ishita Dasgupta, Nate Kushman, Ivo Penchev, Alena Repina, Xihui Wu, Tom van der Weide, Priya Ponnapalli, Caroline Kaplan, Jiri Simsa, Shuangfeng Li, Olivier Dousse, Fan Yang, Jeff Piper, Nathan Ie, Rama Pasumarthi, Nathan Lintz, Anitha Vijayakumar, Daniel Andor, Pedro Valenzuela, Minnie Lui, Cosmin Paduraru, Daiyi Peng, Katherine Lee, Shuyuan Zhang, Somer Greene, Duc Dung Nguyen, Paula Kurylowicz, Cassidy Hardin, Lucas Dixon, Lili Janzer, Kiam Choo, Ziqiang Feng, Biao Zhang, Achintya Singhal, Dayou Du, Dan McKinnon, Natasha Antropova, Tolga Bolukbasi, Orgad Keller, David Reid, Daniel Finchelstein, 
MariaAbi Raad, Remi Crocker, Peter Hawkins, Robert Dadashi, Colin Gaffney, Ken Franko, Anna Bulanova, Rémi Leblond, Shirley Chung, Harry Askham, LuisC. Cobo, Kelvin Xu, Felix Fischer, Jun Xu, Christina Sorokin,Chris Alberti, Chu-Cheng Lin, Colin Evans, Alek Dimitriev, Hannah Forbes, Dylan Banarse, Zora Tung, Mark Omernick, Colton Bishop, Rachel Sterneck, Rohan Jain, Jiawei Xia, Ehsan Amid, Francesco Piccinno, Xingyu Wang, Praseem Banzal, DanielJ. Mankowitz, Alex Polozov, Victoria Krakovna, Sasha Brown, MohammadHossein Bateni, Dennis Duan, Vlad Firoiu, Meghana Thotakuri, Tom Natan, Matthieu Geist, Sertan Girgin, Hui Li, Jiayu Ye, Ofir Roval, Reiko Tojo, Michael Kwong, James Lee-Thorp, Christopher Yew, Danila Sinopalnikov, Sabela Ramos, John Mellor, Abhishek Sharma, Kathy Wu, David Miller, Nicolas Sonnerat, Denis Vnukov, Rory Greig, Jennifer Beattie, Emily Caveness, Libin Bai, Julian Eisenschlos, Alex Korchemniy, Tomy Tsai, Mimi Jasarevic, Weize Kong, Phuong Dao, Zeyu Zheng, Frederick Liu, Fan Yang, Rui Zhu, TianHuey Teh, Jason Sanmiya, Evgeny Gladchenko, Nejc Trdin, Daniel Toyama, Evan Rosen, Sasan Tavakkol, Linting Xue, Chen Elkind, Oliver Woodman, John Carpenter, George Papamakarios, Rupert Kemp, SushantKafle, Tanya Grunina, Rishika Sinha, Alice Talbert, Diane Wu, Denese Owusu-Afriyie, Cosmo Du, Chloe Thornton, Jordi Pont-Tuset, Pradyumna Narayana, Jing Li, Saaber Fatehi, John Wieting, Omar Ajmeri, Benigno Uria, Yeongil Ko, Laura Knight, Amélie Héliou, Ning Niu, Shane Gu, Chenxi Pang, Yeqing Li, Nir Levine, Ariel Stolovich, Rebeca Santamaria-Fernandez, Sonam Goenka, Wenny Yustalim, Robin Strudel, Ali Elqursh, Charlie Deck, Hyo Lee, Zonglin Li, Kyle Levin, Raphael Hoffmann, Dan Holtmann-Rice, Olivier Bachem, Sho Arora, Christy Koh, SoheilHassas Yeganeh, Siim Põder, Mukarram Tariq, Yanhua Sun, Lucian Ionita, Mojtaba Seyedhosseini, Pouya Tafti, Zhiyu Liu, Anmol Gulati, Jasmine Liu, Xinyu Ye, Bart Chrzaszcz, Lily Wang, Nikhil Sethi, Tianrun Li, Ben Brown, Shreya 
Singh, Wei Fan, Aaron Parisi, Joe Stanton, Vinod Koverkathu, ChristopherA. Choquette-Choo, Yunjie Li, T.J. Lu, Abe Ittycheriah, Prakash Shroff, Mani Varadarajan, Sanaz Bahargam, Rob Willoughby, David Gaddy, Guillaume Desjardins, Marco Cornero,Brona Robenek, Bhavishya Mittal, Ben Albrecht, Ashish Shenoy, Fedor Moiseev, Henrik Jacobsson, Alireza Ghaffarkhah, Morgane Rivière, Alanna Walton, Clément Crepy, Alicia Parrish, Zongwei Zhou, Clement Farabet, Carey Radebaugh, Praveen Srinivasan, Claudia vander Salm, Andreas Fidjeland, Salvatore Scellato, Eri Latorre-Chimoto, Hanna Klimczak-Plucińska, David Bridson, Dario deCesare, Tom Hudson, Piermaria Mendolicchio, Lexi Walker, Alex Morris, Matthew Mauger, Alexey Guseynov, Alison Reid, Seth Odoom, Lucia Loher, Victor Cotruta, Madhavi Yenugula, Dominik Grewe, Anastasia Petrushkina, Tom Duerig, Antonio Sanchez, Steve Yadlowsky, Amy Shen, Amir Globerson, Lynette Webb, Sahil Dua, Dong Li, Surya Bhupatiraju, Dan Hurt, Haroon Qureshi, Ananth Agarwal, Tomer Shani, Matan Eyal, Anuj Khare, ShreyasRammohan Belle, Lei Wang, Chetan Tekur, MihirSanjay Kale, Jinliang Wei, Ruoxin Sang, Brennan Saeta, Tyler Liechty, YiSun, Yao Zhao, Stephan Lee, Pandu Nayak, Doug Fritz, ManishReddy Vuyyuru, John Aslanides, NidhiVyas, Martin Wicke, Xiao Ma, Evgenii Eltyshev, Nina Martin, Hardie Cate, James Manyika, Keyvan Amiri, Yelin Kim, XiXiong, Kai Kang, Florian Luisier, Nilesh Tripuraneni, David Madras, Mandy Guo, Austin Waters, Oliver Wang, Joshua Ainslie, Jason Baldridge, Han Zhang, Garima Pruthi, Jakob Bauer, Feng Yang, Riham Mansour, Jason Gelman, Yang Xu, George Polovets, JiLiu, Honglong Cai, Warren Chen, XiangHai Sheng, Emily Xue, Sherjil Ozair, Christof Angermueller, Xiaowei Li, Anoop Sinha, Weiren Wang, Julia Wiesinger, Emmanouil Koukoumidis, Yuan Tian, Anand Iyer, Madhu Gurumurthy, Mark Goldenson, Parashar Shah, M.K. 
Blake, Hongkun Yu, Anthony Urbanowicz, Jennimaria Palomaki, Chrisantha Fernando, Ken Durden, Harsh Mehta, Nikola Momchev, Elahe Rahimtoroghi, Maria Georgaki, Amit Raul, Sebastian Ruder, Morgan Redshaw, Jinhyuk Lee, Denny Zhou, Komal Jalan, Dinghua Li, Blake Hechtman, Parker Schuh, Milad Nasr, Kieran Milan, Vladimir Mikulik, Juliana Franco, Tim Green, Nam Nguyen, Joe Kelley, Aroma Mahendru, Andrea Hu, Joshua Howland, Ben Vargas, Jeffrey Hui, Kshitij Bansal, Vikram Rao, Rakesh Ghiya, Emma Wang, Ke Ye, Jean Michel Sarr, Melanie Moranski Preston, Madeleine Elish, Steve Li, Aakash Kaku, Jigar Gupta, Ice Pasupat, Da-Cheng Juan, Milan Someswar, Tejvi M., Xinyun Chen, Aida Amini, Alex Fabrikant, Eric Chu, Xuanyi Dong, Amruta Muthal, Senaka Buthpitiya, Sarthak Jauhari, Nan Hua, Urvashi Khandelwal, Ayal Hitron, Jie Ren, Larissa Rinaldi, Shahar Drath, Avigail Dabush, Nan-Jiang Jiang, Harshal Godhia, Uli Sachs, Anthony Chen, Yicheng Fan, Hagai Taitelbaum, Hila Noga, Zhuyun Dai, James Wang, Chen Liang, Jenny Hamer, Chun-Sung Ferng, Chenel Elkind, Aviel Atias, Paulina Lee, Vít Listík, Mathias Carlen, Jan van de Kerkhof, Marcin Pikus, Krunoslav Zaher, Paul Müller, Sasha Zykova, Richard Stefanec, Vitaly Gatsko, Christoph Hirnschall, Ashwin Sethi, Xingyu Federico Xu, Chetan Ahuja, Beth Tsai, Anca Stefanoiu, Bo Feng, Keshav Dhandhania, Manish Katyal, Akshay Gupta, Atharva Parulekar, Divya Pitta, Jing Zhao, Vivaan Bhatia, Yashodha Bhavnani, Omar Alhadlaq, Xiaolin Li, Peter Danenberg, Dennis Tu, Alex Pine, Vera Filippova, Abhipso Ghosh, Ben Limonchik, Bhargava Urala, Chaitanya Krishna Lanka, Derik Clive, Yi Sun, Edward Li, Hao Wu, Kevin Hongtongsak, Ianna Li, Kalind Thakkar, Kuanysh Omarov, Kushal Majmundar, Michael Alverson, Michael Kucharski, Mohak Patel, Mudit Jain, Maksim Zabelin, Paolo Pelagatti, Rohan Kohli, Saurabh Kumar, Joseph Kim, Swetha Sankar, Vineet Shah, Lakshmi Ramachandruni, Xiangkai Zeng, Ben Bariach, Laura Weidinger, Amar Subramanya, Sissie Hsiao, Demis Hassabis, Koray 
Kavukcuoglu, Adam Sadovsky, Quoc Le, Trevor Strohman, Yonghui Wu, Slav Petrov, Jeffrey Dean, and Oriol Vinyals.Gemini: A Family of Highly Capable Multimodal Models, April 2024.URL http://arxiv.org/abs/2312.11805.arXiv:2312.11805 [cs].
  • Touvron etal. [2023a]Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample.LLaMA: Open and Efficient Foundation Language Models, February 2023a.URL http://arxiv.org/abs/2302.13971.arXiv:2302.13971 [cs].
  • Touvron etal. [2023b]Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, CristianCanton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, PunitSingh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, EricMichael Smith, Ranjan Subramanian, XiaoqingEllen Tan, Binh Tang, Ross Taylor, Adina Williams, JianXiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and ThomasScialom.Llama 2: Open Foundation and Fine-Tuned Chat Models, July 2023b.URL http://arxiv.org/abs/2307.09288.arXiv:2307.09288 [cs].
  • Yu etal. [2023]Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, YuZhang, JamesT. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu.MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models, October 2023.URL http://arxiv.org/abs/2309.12284.arXiv:2309.12284 [cs].

Appendix A Annotator Instructions

We provide the annotator instructions given to crowdworkers below.

Welcome to the Grade School Math Question Development project. The goal of this project is to create questions and answers similar to what is found in an 8th-grade math quiz. Our goal is to develop high-quality questions that are almost the same as what is found in the dataset but are entirely unique. You will see three example questions and their corresponding answers in each task. These examples will guide you to create completely new questions and answers. It's important to note that you cannot use chatbots or language models to help you develop these Q&A pairs. You may be removed from the project if we detect any use of chatbots. Crucially, your Q&A pairs must be original creations and cannot be paraphrased versions of the examples.

Your workflow for this project will be as follows:

Review the examples: In each task you will be shown examples from an 8th-grade question-and-answer dataset. Review the examples to inform how you can create your question and answer pair.

Problem Creation: Problems should follow the step guidance in the task. Don't reuse a problem setting: if you wrote a problem about Roger's trip to the grocery store, don't write another problem using the same premise. All questions should have a final answer of 1 or higher. We do not want any questions with a negative integer or zero as the answer.

Craft the resolution steps: Calculations should be simple enough that an 8th grader can complete them with pen and paper. Only use elementary arithmetic operations (addition, subtraction, multiplication, and division).

Provide the final Answer: Answers should be a single integer value. Any units should be specified as part of the question (e.g. "How much money, in dollars, does Robert have?"). Simple decimal numbers (e.g. 3.25) can be part of the intermediate steps in the problem, but final answers should always be integers.

Check your work: We will utilize a quality control process to ensure accuracy, but it is crucial to check your own work!
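The answer-format rules above (a single integer, no units, no zero or negative answers) are mechanical enough to check programmatically. The following Python sketch is our own illustration of such a validation pass, not the project's actual quality-control tooling:

```python
def validate_answer(answer: str) -> bool:
    """Check that a final answer is a single positive integer,
    per the annotator guidelines (no units, no decimals, no zero
    or negative answers)."""
    try:
        value = int(answer.strip())
    except ValueError:
        return False  # rejects "7 dollars", "3.25", and other non-integers
    return value >= 1  # final answers must be 1 or higher

# The rules in action:
print(validate_answer("468"))   # True: a bare positive integer
print(validate_answer("3.25"))  # False: decimals are allowed only in steps
print(validate_answer("-4"))    # False: negative answers are disallowed
print(validate_answer("0"))     # False: zero is disallowed
```

Decimals such as 3.25 may still appear in intermediate steps; only the final answer is constrained to be a positive integer.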


Appendix B N-shot Prompt (examples selected randomly from GSM8k train)

Below is an example prompt. For each question, we select five random examples from the GSM8k train set to use as n-shot examples; these vary for each question in the GSM1k/GSM8k test set. While evaluation methods vary between models, this is the most common approach to evaluating GSM8k.

Question: Jen and Tyler are gymnasts practicing flips. Jen is practicing the triple-flip while Tyler is practicing the double-flip. Jen did sixteen triple-flips during practice. Tyler flipped in the air half the number of times Jen did. How many double-flips did Tyler do?
Answer: Jen did 16 triple-flips, so she did 16 * 3 = <<16*3=48>>48 flips.
Tyler did half the number of flips, so he did 48 / 2 = <<48/2=24>>24 flips.
A double flip has two flips, so Tyler did 24 / 2 = <<24/2=12>>12 double-flips.
12

Question: Four people in a law firm are planning a party. Mary will buy a platter of pasta for $20 and a loaf of bread for $2. Elle and Andrea will split the cost for buying 4 cans of soda which cost $1.50 each, and chicken wings for $10. Joe will buy a cake that costs $5. How much more will Mary spend than the rest of the firm put together?
Answer: Mary will spend $20 + $2 = $<<20+2=22>>22.
Elle and Andrea will spend $1.5 x 4 = $<<1.5*4=6>>6 for the soda.
Elle and Andrea will spend $6 + $10 = $<<6+10=16>>16 for the soda and chicken wings.
Elle, Andrea, and Joe together will spend $16 + $5 = $<<16+5=21>>21.
So, Mary will spend $22 - $21 = $<<22-21=1>>1 more than all of them combined.
1

Question: A charcoal grill burns fifteen coals to ash every twenty minutes of grilling. The grill ran for long enough to burn three bags of coals. Each bag of coal contains 60 coals. How long did the grill run?
Answer: The grill burned 3 * 60 = <<3*60=180>>180 coals.
It takes 20 minutes to burn 15 coals, so the grill ran for 180 / 15 * 20 = <<180/15*20=240>>240 minutes.
240

Question: A bear is preparing to hibernate for the winter and needs to gain 1000 pounds. At the end of summer, the bear feasts on berries and small woodland animals. During autumn, it devours acorns and salmon. It gained a fifth of the weight it needed from berries during summer, and during autumn, it gained twice that amount from acorns. Salmon made up half of the remaining weight it had needed to gain. How many pounds did it gain eating small animals?
Answer: The bear gained 1 / 5 * 1000 = <<1/5*1000=200>>200 pounds from berries.
It gained 2 * 200 = <<2*200=400>>400 pounds from acorns.
It still needed 1000 - 200 - 400 = <<1000-200-400=400>>400 pounds.
Thus, it gained 400 / 2 = <<400/2=200>>200 pounds from salmon.
Therefore, the bear gained 400 - 200 = <<400-200=200>>200 pounds from small animals.
200

Question: Brendan can cut 8 yards of grass per day, he bought a lawnmower and it helped him to cut more yards by Fifty percent per day. How many yards will Brendan be able to cut after a week?
Answer: The additional yard Brendan can cut after buying the lawnmower is 8 x 0.50 = <<8*0.50=4>>4 yards.
So, the total yards he can cut with the lawnmower is 8 + 4 = <<8+4=12>>12.
Therefore, the total number of yards he can cut in a week is 12 x 7 = <<12*7=84>>84 yards.
84

Question: The highest temperature ever recorded on Earth was 136 degrees Fahrenheit and the coldest temperature ever measured was -126 degrees Fahrenheit. If the average temperature of Earth is 59, what would be the difference between the average temperature on Earth and the average given the two extremes?
Answer:

Appendix C Results with An Alternative Prompt

As an ablation, we evaluate all models with an alternative prompt scheme and compare the results with our primary findings. This prompt is available in the LM Evaluation Harness as a "chain-of-thought" prompt. However, manually examining the prompt (provided in full below) reveals that its primary difference from the standard n-shot prompt lies not in chain-of-thought reasoning but rather in using a set of non-GSM8k problems as guiding examples and in providing an alternative answer format. We use the standard prompt to match the typical evaluation methods widespread in the field, but report these results as well for completeness.

Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?

A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6.

Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?

A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5.

Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?

A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39.

Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?

A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8.

Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?

A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9.

Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?

A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29.

Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?

A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33.

Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?

A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8.

We report our results in Table 1. On average, we find that this prompt causes the gap between GSM8k and GSM1k to decrease by about 1% across all models. However, for some models (e.g. Mixtral-8x22B-v0), it reduces the amount of observed overfitting dramatically. While the exact cause of this difference is impossible to know, especially without access to model details such as the training set, our hypothesis is that prompting a model with GSM8k examples is more likely to activate the "memorization" portion of the model than prompting it with non-GSM8k grade school math problems.

Appendix D Results Table

We report our full results in Table 1. Models are sorted by the difference in performance between GSM8k and GSM1k. Because all models are evaluated using the standard LM Evaluation Harness prompt and evaluation format, model performance on GSM8k may not match reported benchmark numbers. In particular, answers that do not match the 5-shot example format are marked incorrect even if they are otherwise "correct." Our focus is primarily on the difference between GSM8k and GSM1k performance, holding the evaluation setting constant. Results with the alternative prompt are also included; for details, see Appendix C.
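The effect of strict format matching can be illustrated with a toy comparison of two answer-extraction strategies. This is a hypothetical sketch of the behavior described above, not the harness's actual code:

```python
import re

def strict_extract(completion):
    """Accept only completions whose final line is a bare number,
    mimicking the 5-shot example format; anything else scores zero."""
    match = re.search(r"(?:^|\n)(-?\d+)\s*$", completion)
    return int(match.group(1)) if match else None

def lenient_extract(completion):
    """Fall back to the last number appearing anywhere in the completion."""
    numbers = re.findall(r"-?\d+", completion)
    return int(numbers[-1]) if numbers else None

completion = "The grill ran for 240 minutes, so the answer is 240 minutes."
print(strict_extract(completion))   # None: trailing "minutes" breaks the format
print(lenient_extract(completion))  # 240: correct, but strict scoring marks it wrong
```

Under strict extraction this completion is marked incorrect despite containing the right answer, which is why harness accuracies can sit below reported benchmark numbers while the GSM8k-minus-GSM1k gap remains comparable.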


Appendix E 50 Examples from GSM1k

1. Gabriela has $65.00 and is shopping for groceries so that her grandmother can make her favorite kale soup. She needs heavy cream, kale, cauliflower, and meat (bacon and sausage). Gabriella spends 40% of her money on the meat. She spends $5.00 less than one-third of the remaining money on heavy cream. Cauliflower costs three-fourth of the price of the heavy cream and the kale costs $2.00 less than the cauliflower. As Gabriela leaves the store, she spends one-third of her remaining money on her grandmother’s favorite Girl Scout Cookies. How much money, in dollars, does Gabriela spend on Girl Scout cookies? (Answer: 7)
2. Bernie is a street performer who plays guitar. On average, he breaks three guitar strings a week, and each guitar string costs $3 to replace. How much does he spend on guitar strings over the course of an entire year? (Answer: 468)
3. John Henry is competing against a machine to see who can dig a tunnel more quickly. John works without rest, and excavates at a rate of 6 cubic feet of rock per hour. The machine excavates more quickly but needs to be refueled and maintained by its operator for 30 minutes out of every hour. When it’s not under maintenance, the machine excavates at a rate of 10 cubic feet of stone per hour. Provided that the competition lasts for 8 hours, how much more rock will John have excavated compared to the machine? (Answer: 8)
4. Colin is playing dice with his friend Eoin and needs some help keeping track of his score. He begins with 5 points and wins 6 points in the first round. In the second round, he won twice as many points as he won in the first round. In the third round, he had a fantastic roll and was able to triple his current total point count! How many points did Colin end the game with? (Answer: 69)
5. Bradley and his friends enjoy playing marbles. They possess a box of marbles containing 12 red balls, 15 yellow balls, and 18 green balls. How many additional red balls do they require to double the number of red balls compared to the combined number of yellow and green balls? (Answer: 54)
6. Marge got a job so she can buy her first car. Her job pays $15/hr and she works there 30 hours a week. The car Marge wants is $3600. How many weeks does Marge need to work to buy the car? (Answer: 8)
7. Andy’s soccer team needs 80 points to finish in first place. His team plays 38 games, and he gets 3 points for each win, 1 point for each tie, and 0 points for each loss. After 26 games, the team has 15 wins, 5 ties, and 6 losses. How many more points does Andy’s team need to reach 80 points? (Answer: 30)
8. Molly wants to win the contest at school for reading 25 books before the end of May. So far, she has read 5 books by the end of January. How many more books will she need to read on average each month until the end of May to win the contest? (Answer: 5)
9. Ms. Crabapple has a bag of jelly beans that she is going to divide equally among all of her 32 students who complete their homework every day over the course of a week. The bag has 384 jellybeans in it. Unfortunately, many of Ms. Crabapple’s students have a poorly developed work ethic, and only half of them complete all of the required homework. How many jelly beans will each of the eligible students receive? (Answer: 24)
10. Emily is applying to 6 different colleges. ½ of the colleges have an application fee of $60, and the other half have an application fee of $90. She must also pay $15 per transcript to send them to each college. Her parents offer to help pay for half of the total costs. How many dollars does she have to pay? (Answer: 270)
11. Bob has to read 2 books and 3 articles, while Emily has to read 4 books and 2 articles. Each book has 3 chapters and each chapter has 4 paragraphs. Each article has 4 sections and each section has 2 paragraphs. How many paragraphs in total will Bob and Emily read? (Answer: 112)
12. Leah and 2 of her friends go to an all-you-can-eat dumpling buffet. Leah’s 1st friend ate 30 dumplings, her 2nd friend ate twice as many dumplings as her 1st friend, and Leah ate 1.5 times as many dumplings as her 2nd friend. How many dumplings in total did Leah and her friends eat? (Answer: 180)
13. Francis has a bowl of candy in front of him. There are three different flavors of candies that he’s eaten over the course of 3 hours. He’s eaten ten lemon, four orange, and sixteen cherry-flavored candies. If there were twenty of each when he started, how much of an average percentage is still left? (Answer: 50)
14. Maryann is saving up for a new bike that costs $450. She already has $120 saved up. She earns $15 per hour at her part-time job. How many hours does she need to work to afford the bike? (Answer: 22)
15. Henry is renovating his kitchen and adding a new tile floor. He needs to cover an area of 200 square feet. He has a stack of tiles that measure 0.5 feet in length and width. He can get 40 tiles done per hour. Henry works for 6 hours at that rate, then has some coffee and works at a faster rate for the next 2 hours (60 tiles per hour). Henry runs out of tiles, so he goes to a store to purchase the remaining tiles needed to finish the floor. Given that the price per tile is $2.50, how much will he need to spend at the store to get exactly enough tiles to finish the floor? (Answer: 1100)
16. A painter needs to paint 3 houses. The first house requires 14 gallons of paint, the second house requires twice as much paint as the first, and the third house needs half as much paint as the second house. If one gallon of paint costs $35 and the painter gets a bulk discount of 10% for purchases over 30 gallons, how much will the paint cost in total? (Answer: 1764)
17. A coal miner is loading up coal into mine carts. During the first hour of the day, he is able to load 15 carts. His boss yells at him after that, so for each of the next three hours, he loads twice as many carts. Each cart weighs 78 pounds. What was the total weight of the coal he loaded on this day? (Answer: 8190)
18. A plane owned by Sunny Skies Airlines is flying from Indianapolis to Phoenix. The plane holds 180 passengers and is 2/3 full. Each passenger brings 2 carry-on bags and is charged a carry-on bag fee of $35 per bag. How much money does Sunny Skies Airlines collect for the carry-on bag fees for this flight? (Answer: 8400)
19. Sally went to the mall to buy clothes for the summer. She went to Forever 21 and bought 4 tops, each had different prices, $12.99, $6.99, $17.99, $21.99, and 3 pants each priced at $15.99. If her subtotal is over $75, she gets a discount of 15% on her purchase at that store. Then she goes to Shoe Palace and buys 2 shoes for a total of $123.26. How much money did Sally spend at the mall? (Answer: 215)
20. Dean wants to buy flowers to make arrangements for a party. He is going to make 12 arrangements. He wants to include 4 roses and 3 daisies in each arrangement. Roses come by the dozens and are $15 for each dozen. Daisies come in groups of 4 and are $8 for the set. How much will it cost for Dean to make all 12 arrangements? (Answer: 132)
21. Alex plans to adopt a new cat and needs help planning a budget for this event. The adoption fee is $200, and it includes all the essential veterinary care needed for a kitten, but she also needs to buy other supplies for the cat when she brings it home. The litter boxes cost $30, one package of litter costs $17, a bag of dry food costs $55, and the wet food costs $1.50 per can. Alex will buy 2 litter boxes, 3 packages of litter, one bag of dry food, and 12 cans of wet food. How much money should Alex make sure she has before beginning the process of adopting her new cat? (Answer: 384)
22. Carolina is trying to qualify for a car loan. The lender tells her she must meet a debt-to-income ratio of 1:4. Her current debts are $900 in rent, $200 in utilities, and another $300 in miscellaneous expenses per month. Her current monthly salary is $4000. How much more money, in dollars, will she need to cut out from her current debts per month to meet the DTI requirements? (Answer: 400)
23. Samantha is saving money for a new bike by doing chores. She earns $5 for every chore she completes. If she does 3 chores each day for a week, and then uses $25 to buy a helmet, how much money does she have left at the end of the week? (Answer: 80)
24. Frank sneaks out before his break at 3:20 pm and gets back at 4:05. If his break was only supposed to be half an hour, for how much longer did Frank sneak out (in minutes)? (Answer: 15)
25. Janet wants to listen to 20 music albums by the end of the week. If its Thursday and she just finished album number twelve and she has to finish them by Saturday, how many albums would she have to listen to per day? (Answer: 4)
26. Hana wants to donate her clothes to a local charity. After going through her closet she ended up with 2 boxes of pants, 3 boxes of dresses, 1 box of shoes, and boxes of shirts. The number of boxes with shirts was 3 more than the other three boxes combined. How many boxes of shirts does she have to donate? (Answer: 9)
27. Gray has $126 to spend on lunches for the week. On Monday, he spent $16 on a carne asada burrito and a soda. On Saturday, he will spend $30 eating out with friends. If he spends the same amount of money on food for the other 5 days of the week, what will be his average daily spending on food over these 5 days? (Answer: 16)
28. Gayle has a lawnmowing business. Lawn 1 takes 15 minutes to mow. Lawn 2 takes 18 more minutes than Lawn 1. Lawn 3 takes 20% more time to mow than Lawn 1. She is paid $2.50 per minute for the time she spends. However, she gives her customers a 20% discount. How much money does she make from mowing all three lawns? (Answer: 132)
29. Frank ordered a whole chicken, 6 cans of chopped chicken breast, 1 lb. of macadamia nuts, and 4 bags of frozen broccoli. Each item has the following respective prices: $12 per chicken, $2 per can, $24/lb., $3 per bag. The sales tax was 10% of the total cost and the tip was half the price of the whole chicken. How much did Frank pay for his order? (Answer: 72)
30. Milo can bench press half as much weight as Doug can squat, and Doug can squat twice as much weight as Diane can squat. If Diana squats 125 pounds, how much weight can Milo bench press? (Answer: 125)
31. Pablo is trying to make breakfast for his family. His wife eats 4 pancakes. His son eats 2 pancakes. Pablo wants to eat 4 pancakes. One box of pancake mix will make 5 pancakes. How many boxes of pancake mix will he need? (Answer: 2)
32. Jim wants to spend 15% of his monthly earnings on groceries. He makes $2500/month. How much money will he have left over? (Answer: 2125)
33. A school is ordering tablets and laptops for three classrooms. Each classroom will receive 4 tablets and 3 laptops. If each tablet costs $250 and each laptop costs $600, how much will the school spend in total for all three classrooms? (Answer: 8400)
34. Grant takes 3 minutes to put on his pajamas. He brushes his teeth for 2 minutes. Then, he washes his face and brushes his hair for another 2 minutes. Finally, he reads a book for a while and turns off the light for bed. If Grant begins his routine at 8:15 pm and turns off the lights at 8:47 pm, for how long does Grant read a book? (Answer: 25)
35. Bellemere owns a tangerine orchard with 50 trees. Each tree produces 80 tangerines. She wants to sell 600 tangerines at her local farmer’s market. If she picks the same amount of tangerines from every tree, how many tangerines will be left on each tree? (Answer: 68)
36. A charity puts out a telethon for a cause. Within 15 minutes, seventy-seven people donated $3 each, and 231 people donated four dollars each. How much does the charity receive within this time? (Answer: 1155)
37. A school is selling baskets for a fundraiser. There are three baskets containing the following items: * Blue basket: a ball, cup, and notebook. * Red basket: a cup, bell, and hat. * Green basket: a hat, pen, and notebook. The costs of the items in the baskets are as follows: * $1: ball, notebook, and pen * $2: cup, bell, and hat Jane buys 6 red baskets and 5 blue baskets. Jim buys 3 red baskets and 2 green baskets. Since they purchase so many, they receive a discount. Jane gets an $8 discount and Jim also gets a $2 discount. How many times more does Jane spend than Jim? (Answer: 2)
38. Mr. Gordon has 14 boys in his first period class which is twice the number of girls in class. Two of the girls in class have blonde hair and the rest have brown hair. How many girls with brown hair are in his class? (Answer: 5)
39. Albert gets paid $15 an hour. He gets time and a half if he works over forty hours a week. Last week, he worked 48 hours. He plans to do this two weeks in a row. How much money will he be paid in overtime for those two weeks? (Answer: 360)
40. Beth, Anna, and Kim went to a book fair. Beth had two books less than Anna while Kim had four more books than Anna. Beth had $20 with her and was now left with $8. If all books are priced at $4, how much, in dollars, did Kim spend on her books? (Answer: 36)
41. 4 friends are going on a road trip. Their names are Alex, Bethany, Carlos, and Drew. They drive at a rate of 65, 75, 60, and 50 mph, respectively. Alex drives for 2 hours, Bethany for 4, and Carlos and Drew each drive for 3 hours. They are using a car with a fuel efficiency of 20 miles per gallon of gas. If, along their route, gas costs $3 per gallon, how much money (in dollars) will they need to spend on gas? Assume they begin their journey at a gas station with an empty tank of gas. (Answer: 114)
42. The Genco Olive Oil Company has received ninety-nine orders for ninety-nine barrels of olive oil each. Out of those shipped, 33 orders were sent back due to clerical or product errors. How many total barrels of olive oil were not returned? (Answer: 6534)
43. There is a very large room that has 4 tables, 1 sofa and 2 chairs that have 4 legs each. There are also 3 tables with 3 legs each, 1 table with 1 leg, and 1 rocking chair with 2 legs. How many legs of tables are there in the room? (Answer: 26)
44. A classroom has 24 students, and the teacher has arranged a field trip. If the cost per student for the trip is $15 and the teacher already has $120 from a class fund, how many more dollars does the teacher need to cover the total cost of the trip for all students? (Answer: 240)
45. Rachel and Shauna go out to dinner. Dinner costs $68.25 in total (without taxes). Rachel’s meal costs 1/3 of the total price, while Shauna’s meal costs 2/3 of the total price. How much did Shauna’s meal cost (round to the nearest dollar)? (Answer: 46)
46. Olivia owns a local hotel and needs to drive up business. She is planning to give a special deal to anyone who signs up for a membership card. Her idea is to give them 20% off their first night and 10% off on every night they stay after that. If her first new customer pays $616 for their stay, and each night costs $140 before discounts, how many nights did they stay at the hotel? (Answer: 5)
47. Johnny has 8 green balls. He has five fewer than twice that number in red balls. How many total balls does Johnny have? (Answer: 19)
48. 30 students are in a class. 1/5 of them are 12 years old, 1/3 are 13 years old. 1/10 of them are 11 years old. How many of them are not 11, 12, or 13 years old? (Answer: 11)
49. Francis loves sandwiches. He gets his usual from his favorite deli: two “Big Boy” sandwiches, and a glass-bottled soda. A “Big Boy” costs $15.25 and the soda costs $3.75. His friend Lars calls him and asks for a double-sweet soda that’s $4.75. If Francis pays all of this with $40 and asks for his change back in only quarters, how many quarters will he get? (Answer: 4)
50. A factory needs to produce 960 pieces of toy boats. They are only able to produce 1/6th of their goal a day. 5 toy boats make up a case and 4 cases make up a box. If a toy shop comes to pick up what is available on the fourth day and finds an extra 8 boxes left for them that were forgotten from a previous pickup, how many boxes of toy boats will they be able to take? (Answer: 40)
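The listed answers can be sanity-checked with the elementary arithmetic the benchmark requires. For instance, examples 2 (Bernie's guitar strings) and 6 (Marge's car) work out as follows (our own verification; example 2 assumes a 52-week year):

```python
# Example 2: 3 strings broken per week, $3 per string, over a 52-week year.
strings_cost = 3 * 52 * 3
print(strings_cost)  # 468, matching the listed answer

# Example 6: Marge earns $15/hr for 30 hr/week and needs $3600 for the car.
weeks_needed = 3600 // (15 * 30)
print(weeks_needed)  # 8, matching the listed answer
```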

Appendix F Additional Plots From Log-Likelihood Experiments

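The per-example log-likelihood quantity plotted in these experiments is the sum of the model's per-token log-probabilities over a benchmark example. With an actual LLM each token's probability is conditioned on the preceding context; the toy sketch below substitutes a simple unigram model so the computation is self-contained (the model itself is purely illustrative):

```python
import math
from collections import Counter

def sequence_log_likelihood(tokens, probs):
    """Sum of per-token log-probabilities: log p(x) = sum_t log p(x_t).
    With an LLM, p(x_t) would be conditioned on the preceding tokens."""
    return sum(math.log(probs[t]) for t in tokens)

# Fit a toy unigram "model" on a tiny corpus and score a sequence with it.
corpus = "the cat sat on the mat the end".split()
counts = Counter(corpus)
probs = {t: c / len(corpus) for t, c in counts.items()}

ll = sequence_log_likelihood("the cat sat".split(), probs)
print(round(ll, 3))
```

A model that has memorized a benchmark example assigns it an unusually high log-likelihood, which is the signal these plots examine.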

Appendix G Bar Chart of Performance Gaps Between GSM8k and GSM1k Across All Model Accuracies
