Carnegie Mellon University


How to Use Cloud AI APIs at CMU

Whether you’re a pro at generative AI or just stepping outside your comfort zone, there are plenty of ways to integrate cloud-based AI tools into learning, research, teaching, and service delivery. Whatever your project, there's an opportunity here for you at CMU!

What is an AI API?

An AI Application Programming Interface (API) acts like a bridge, allowing a software application to connect to an AI model, such as OpenAI GPT-5, Google Gemini 2.5 Flash, Microsoft Azure Whisper-1, or Anthropic Claude 3.5 Sonnet, so you can leverage those models in your own tools, websites, or research. Basically, the API is the piece that connects an application to AI and allows them to communicate with one another.
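To make this concrete, here is a minimal sketch of what a request to a chat-style AI API typically looks like. This example only builds the request without sending it; the endpoint URL, model name, and `OPENAI_API_KEY` environment variable are illustrative assumptions, so check your provider's documentation for the real values.

```python
import json
import os
import urllib.request

# Illustrative OpenAI-style chat endpoint and model name; substitute the
# values from your provider's documentation.
API_URL = "https://api.openai.com/v1/chat/completions"

# The request body names a model and carries your messages.
payload = {
    "model": "gpt-5",
    "messages": [
        {"role": "user", "content": "Summarize what an API is in one sentence."}
    ],
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # The API key (read from the environment here) authenticates your
        # application to the AI service.
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send the request; the JSON
# response body contains the model's reply.
```

In practice you would usually use a provider's official client library instead of raw HTTP, but every such library is ultimately assembling a request like this one.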

Securing API Keys

It’s important to remember that API keys are not inherently secure; they must be protected like passwords. If your API key is exposed, unauthorized users could gain access to powerful tools or sensitive data, so always keep it private and store it safely.
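One common safeguard is to keep the key out of your source code entirely and read it from the environment (or a secrets manager) at runtime. A minimal sketch, assuming a hypothetical `OPENAI_API_KEY` variable name and a placeholder key value:

```python
import os

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment rather than hard-coding it
    in source files, where it could be committed to version control."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable first.")
    return key

# Demo only: set a placeholder key for this process, then load it.
# In real use, the variable would be set outside the program.
os.environ["OPENAI_API_KEY"] = "sk-example-not-a-real-key"
print(load_api_key())
```

Failing fast when the key is missing is deliberate: it is much easier to diagnose than a cryptic authentication error from the remote service.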

Choose Where to Run Your AI Model From

The instructions for getting started with APIs vary depending on the location you choose to run the model from, so start by deciding which option best meets your needs.

AI Gateway (CMU-Managed)

The AI Gateway is a centralized, CMU-supported service built using LiteLLM that provides developers with streamlined access to multiple AI models through a consistent, OpenAI-compatible API. Select this option for most general-purpose use cases that need simple tools for model selection, team management, and budget tracking.
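Because the gateway speaks an OpenAI-compatible API, the same request-building code can reach different models just by changing the model name. A minimal sketch, assuming a hypothetical gateway URL, model aliases, and an `AI_GATEWAY_KEY` environment variable (take the real values from the AI Gateway documentation):

```python
import json
import os
import urllib.request

# Hypothetical gateway address; the real URL comes from the AI Gateway docs.
GATEWAY_URL = "https://ai-gateway.example.cmu.edu/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat request routed through the gateway."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('AI_GATEWAY_KEY', '')}",
        },
        method="POST",
    )

# One code path, many models: only the model alias changes, and the
# gateway routes each request to the appropriate provider.
for model in ("gpt-5", "gemini-2.5-flash", "claude-3-5-sonnet"):
    request = build_chat_request(model, "Say hello.")
    # urllib.request.urlopen(request) would send the request.
```

This one-endpoint design is also what makes the gateway's team management and budget tracking possible: every request passes through a single service that can attribute usage to your key.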

Public Cloud Services

If you’re interested in working with vector databases, need to customize AI models, or have an advanced need beyond API access, contact our Public Cloud Services Team to begin the intake process for using native Public Cloud resources to support your AI projects.