OpenAI Researcher Access Program
Description

[UPDATE 2/3/2025]: After making updates, applications for the Researcher Access Program are open again!

We’re interested in supporting researchers who use our products to study the responsible deployment of AI, the mitigation of associated risks, and the societal impact of AI systems. If you are interested in subsidized access to enable a research project, please apply for API credits through this program.

We encourage applications from early-stage researchers in countries supported by our API, and we are especially interested in subsidizing work by researchers with limited financial and institutional resources. Researchers can apply for up to $1,000 in OpenAI API credits to support their work. Credits are valid for 12 months and can be applied toward any of our publicly available models.

Please note that applications are reviewed once every 3 months (in March, June, September, and December). Grants may take up to 4 weeks to be credited after awardees receive their application decision. API credits awarded through this program expire after 1 year and cannot be extended or renewed, so please ensure you complete your research within that period.

Before applying, please take a moment to review our sharing & publication policy.

We support responsible research related to the safety of AI systems; however, researchers remain bound by our Usage Policies and other applicable OpenAI policies, which require users to comply with the law and not use our services to harm themselves or others, among other things. If you are conducting safety-related research and have received a warning or had your access to our services suspended, and you believe this was done in error, please submit an appeal through our help center.


Areas of interest include:

Alignment: How can we understand what objective, if any, a model is best understood as pursuing? How do we increase the extent to which that objective is aligned with human preferences, such as via design or fine-tuning?

Fairness & representation: How should performance criteria be established for fairness and representation in language models? How can language models be improved in order to effectively support the goals of fairness and representation in specific, deployed contexts?

Societal impact: How do we create measurements for AI’s impact on society? What impact does AI have on different domains and groups of people?

Interdisciplinary research: How can AI development draw on insights from other disciplines such as philosophy, cognitive science, and sociolinguistics?

Interpretability/transparency: How do these models work, mechanistically? Can we identify what concepts they’re using, extract latent knowledge from the model, make inferences about the training procedure, or predict surprising future behavior?

Misuse potential: How can systems like the API be misused? What sorts of “red teaming” approaches can we develop to help AI developers think about responsibly deploying technologies like this?

Robustness: How robust are large generative models to “natural” perturbations in the prompt, such as phrasing the same idea in different ways or with typos? Can we predict the kinds of domains and tasks for which large generative models are more likely to be robust (or not), and how does this relate to the training data? Are there techniques we can use to predict and mitigate worst-case behavior? How can robustness be measured in the context of few-shot learning (e.g., across variations in prompts)? Can we train models so that they satisfy safety properties with a very high level of reliability, even under adversarial inputs? (A minimal sketch of a prompt-perturbation probe appears after this list.)

Other

We’re initially scoping the program to these areas, but we welcome suggestions for future focus areas. The questions under each area are illustrative, and we’d be delighted to receive research proposals that address different questions.
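To make the robustness questions above more concrete, here is a minimal sketch of what a prompt-perturbation probe funded by API credits could look like. It is only an illustration, not part of the application requirements: it assumes the official `openai` Python package (v1+) and an `OPENAI_API_KEY` in the environment, and the model name and prompts are placeholders.

```python
# A minimal, illustrative sketch of a prompt-perturbation robustness probe.
# Assumes the `openai` Python package (v1+) is installed and OPENAI_API_KEY
# is set in the environment; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same question phrased in different ways, plus a typo-laden variant.
prompt_variants = [
    "What is the boiling point of water at sea level, in degrees Celsius?",
    "At sea level, water boils at what temperature in degrees Celsius?",
    "wat is teh boiling point of water at sea lvel in celsius?",
]

answers = []
for prompt in prompt_variants:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any publicly available model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so differences reflect the perturbation
    )
    answers.append(response.choices[0].message.content.strip())

# Crude consistency check: do all prompt variants yield the same answer text?
for prompt, answer in zip(prompt_variants, answers):
    print(f"{prompt!r} -> {answer!r}")
print("All variants agree:", len(set(answers)) == 1)
```

A real robustness study would use a task-specific dataset and a more careful comparison than exact string equality, but the structure is the same: perturb the prompt, query the model, and compare the responses.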

Apply
