In this tutorial, you'll build an intelligent AI application by integrating Azure OpenAI with a Node.js application and deploying it to Azure App Service. You'll create an Express app with a view and a controller that sends chat completion requests to a model in Azure OpenAI.
In this tutorial, you learn how to:
- Create an Azure OpenAI resource and deploy a language model.
- Build an Express.js application that connects to Azure OpenAI.
- Deploy the application to Azure App Service.
- Implement passwordless secure authentication both in the development environment and in Azure.
Prerequisites
- An Azure account with an active subscription
- A GitHub account for using GitHub Codespaces
1. Create an Azure OpenAI resource
In this section, you'll use GitHub Codespaces to create an Azure OpenAI resource with the Azure CLI.
Go to GitHub Codespaces and sign in with your GitHub account.
Find the Blank template by GitHub and select Use this template to create a new blank Codespace.
In the Codespace terminal, install the Azure CLI:
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
Sign in to your Azure account:
az login
Follow the instructions in the terminal to authenticate.
Set environment variables for your resource group name, Azure OpenAI service name, and ___location:
export RESOURCE_GROUP="<group-name>"
export OPENAI_SERVICE_NAME="<azure-openai-name>"
export APPSERVICE_NAME="<app-name>"
export LOCATION="eastus2"
Important
The region is critical because it's tied to the regional availability of the chosen model. Model availability and deployment type availability vary from region to region. This tutorial uses gpt-4o-mini, which is available in eastus2 under the Standard deployment type. If you deploy to a different region, this model might not be available or might require a different tier. Before changing regions, consult the Model summary table and region availability to verify model support in your preferred region.
Create a resource group and an Azure OpenAI resource with a custom ___domain, then add a gpt-4o-mini model:
# Resource group
az group create --name $RESOURCE_GROUP --___location $LOCATION

# Azure OpenAI resource
az cognitiveservices account create \
  --name $OPENAI_SERVICE_NAME \
  --resource-group $RESOURCE_GROUP \
  --___location $LOCATION \
  --custom-___domain $OPENAI_SERVICE_NAME \
  --kind OpenAI \
  --sku s0

# gpt-4o-mini model
az cognitiveservices account deployment create \
  --name $OPENAI_SERVICE_NAME \
  --resource-group $RESOURCE_GROUP \
  --deployment-name gpt-4o-mini \
  --model-name gpt-4o-mini \
  --model-version 2024-07-18 \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 1

# Cognitive Services OpenAI User role lets the signed-in Azure user read models from Azure OpenAI
az role assignment create \
  --assignee $(az ad signed-in-user show --query id -o tsv) \
  --role "Cognitive Services OpenAI User" \
  --scope /subscriptions/$(az account show --query id -o tsv)/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.CognitiveServices/accounts/$OPENAI_SERVICE_NAME
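To confirm the model deployment succeeded before moving on, you can optionally list the deployments on the resource. This extra check isn't part of the tutorial steps, but it uses only the variables defined above:

# Optional check: list model deployments on the Azure OpenAI resource (should include gpt-4o-mini)
az cognitiveservices account deployment list \
  --name $OPENAI_SERVICE_NAME \
  --resource-group $RESOURCE_GROUP \
  --output table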
Now that you have an Azure OpenAI resource, you'll create a web application to interact with it.
2. Create and set up an Express.js web app
In your Codespace terminal, scaffold an Express.js app in the workspace and run it for the first time:
npx express-generator . --view ejs
npm audit fix --force
npm install && npm start
You should see a notification in GitHub Codespaces indicating that the app is available at a specific port. Select Open in browser to launch the app in a new browser tab.
Back in the Codespace terminal, stop the app with Ctrl+C.
Install the NPM dependencies for working with Azure OpenAI:
npm install openai @azure/openai @azure/identity
Open views/index.ejs and replace its contents with the following code for a simple chat interface.
<!DOCTYPE html>
<html>
  <head>
    <title><%= title %></title>
    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css" rel="stylesheet">
  </head>
  <body class="bg-light">
    <div class="container py-4">
      <h1 class="mb-4"><%= title %></h1>
      <div class="card mb-3">
        <div class="card-body" style="min-height: 80px;">
          <form action="/chat" method="POST" class="d-flex gap-2 mb-3">
            <input type="text" name="message" class="form-control" placeholder="Type your message..." autocomplete="off" required />
            <button type="submit" class="btn btn-primary">Send</button>
          </form>
          <% if (aiMessage) { %>
            <div class="mb-2">
              <span class="fw-bold text-success">AI:</span>
              <span class="ms-2"><%= aiMessage %></span>
            </div>
          <% } %>
        </div>
      </div>
    </div>
  </body>
</html>
Open routes/index.js and replace its contents with the following code, which makes a simple chat completion call to Azure OpenAI:
var express = require('express');
var router = express.Router();
const { AzureOpenAI } = require('openai');
const { getBearerTokenProvider, DefaultAzureCredential } = require('@azure/identity');

const endpoint = process.env.AZURE_OPENAI_ENDPOINT;
const deployment = 'gpt-4o-mini';
const apiVersion = '2024-10-21';
const credential = new DefaultAzureCredential();
const scope = 'https://cognitiveservices.azure.com/.default';
const azureADTokenProvider = getBearerTokenProvider(credential, scope);

// Initialize Azure OpenAI client using Microsoft Entra authentication
const openai = new AzureOpenAI({ endpoint, azureADTokenProvider, deployment, apiVersion });

router.get('/', function(req, res, next) {
  res.render('index', { title: 'Express Chat', aiMessage: null });
});

router.post('/chat', async function(req, res, next) {
  const userMessage = req.body.message;
  if (!userMessage) {
    return res.redirect('/');
  }

  let aiMessage = '';
  try {
    // Call Azure OpenAI chat completion
    const result = await openai.chat.completions.create({
      model: deployment,
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: userMessage }
      ],
    });
    aiMessage = result.choices[0]?.message?.content || '';
  } catch (err) {
    aiMessage = 'Error: Unable to get response from Azure OpenAI.';
  }

  res.render('index', { title: 'Express Chat', aiMessage });
});

module.exports = router;
In the terminal, retrieve your Azure OpenAI endpoint:
az cognitiveservices account show \
  --name $OPENAI_SERVICE_NAME \
  --resource-group $RESOURCE_GROUP \
  --query properties.endpoint \
  --output tsv
Run the app again, this time setting AZURE_OPENAI_ENDPOINT to the value from the CLI output:

AZURE_OPENAI_ENDPOINT=<output-from-previous-cli-command> npm start
Select Open in browser to launch the app in a new browser tab. Submit a question and see if you get a response message.
3. Deploy to Azure App Service and configure OpenAI connection
Now that your app works locally, let's deploy it to Azure App Service and set up a service connection to Azure OpenAI using managed identity.
First, deploy your app to Azure App Service with the Azure CLI command az webapp up. This command creates a new web app and deploys your code to it:

az webapp up \
  --resource-group $RESOURCE_GROUP \
  --___location $LOCATION \
  --name $APPSERVICE_NAME \
  --plan $APPSERVICE_NAME \
  --sku B1 \
  --os-type Linux \
  --track-status false
This command might take a few minutes to complete. It creates a new web app in the same resource group as your OpenAI resource.
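If you'd like to verify the deployment before continuing, the following optional command prints the app's public hostname (an extra step, not required by the tutorial):

# Optional check: print the web app's default hostname
az webapp show \
  --name $APPSERVICE_NAME \
  --resource-group $RESOURCE_GROUP \
  --query defaultHostName \
  --output tsv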
After the app is deployed, create a service connection between your web app and the Azure OpenAI resource using managed identity:
az webapp connection create cognitiveservices \
  --resource-group $RESOURCE_GROUP \
  --name $APPSERVICE_NAME \
  --target-resource-group $RESOURCE_GROUP \
  --account $OPENAI_SERVICE_NAME \
  --connection azure-openai \
  --system-identity
This command creates a connection between your web app and the Azure OpenAI resource by:
- Generating a system-assigned managed identity for the web app.
- Assigning the Cognitive Services OpenAI Contributor role to the managed identity, scoped to the Azure OpenAI resource.
- Adding the AZURE_OPENAI_ENDPOINT app setting to your web app.
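To confirm the connection worked, you can optionally check that the AZURE_OPENAI_ENDPOINT app setting now exists on the web app. This verification isn't part of the original flow, but it's a quick way to see what the connector added:

# Optional check: confirm the AZURE_OPENAI_ENDPOINT app setting was created
az webapp config appsettings list \
  --name $APPSERVICE_NAME \
  --resource-group $RESOURCE_GROUP \
  --query "[?name=='AZURE_OPENAI_ENDPOINT']" \
  --output table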
Open the deployed web app in the browser. You can find its URL in the terminal output, or open the app directly with the following command:

az webapp browse
Type a message in the textbox and select Send, then give the app a few seconds to reply with a message from Azure OpenAI.
Your app is now deployed and connected to Azure OpenAI with managed identity.
Frequently asked questions
- What if I want to connect to OpenAI instead of Azure OpenAI?
- Can I connect to Azure OpenAI with an API key instead?
- How does DefaultAzureCredential work in this tutorial?
What if I want to connect to OpenAI instead of Azure OpenAI?
To connect to OpenAI instead, use the following code:
const { OpenAI } = require('openai');
const client = new OpenAI({
  apiKey: "<openai-api-key>",
});
For more information, see OpenAI API authentication.
When working with connection secrets in App Service, you should use Key Vault references instead of storing secrets directly in your codebase. This ensures that sensitive information remains secure and is managed centrally.
Can I connect to Azure OpenAI with an API key instead?
Yes, you can connect to Azure OpenAI using an API key instead of managed identity. For more information, see the Azure OpenAI JavaScript quickstart.
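As a minimal sketch of that approach, the client initialization in routes/index.js would pass an API key instead of a token provider. The AZURE_OPENAI_API_KEY setting name below is illustrative, not something this tutorial creates:

const { AzureOpenAI } = require('openai');

// Read the key from configuration rather than hard-coding it in the source
const openai = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  deployment: 'gpt-4o-mini',
  apiVersion: '2024-10-21'
});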
When working with connection secrets in App Service, you should use Key Vault references instead of storing secrets directly in your codebase. This ensures that sensitive information remains secure and is managed centrally.
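For example, rather than storing the key value directly, an app setting can point to a Key Vault secret. The vault and secret names below are placeholders, and the app's managed identity also needs read access to the vault for the reference to resolve:

# Illustrative only: reference a Key Vault secret from an App Service app setting
az webapp config appsettings set \
  --name $APPSERVICE_NAME \
  --resource-group $RESOURCE_GROUP \
  --settings AZURE_OPENAI_API_KEY="@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)"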
How does DefaultAzureCredential work in this tutorial?
DefaultAzureCredential simplifies authentication by automatically selecting the best available authentication method:
- During local development: After you run az login, it uses your local Azure CLI credentials.
- When deployed to Azure App Service: It uses the app's managed identity for secure, passwordless authentication.
This approach lets your code run securely and seamlessly in both local and cloud environments without modification.
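If you want to see which identity DefaultAzureCredential picks up in a given environment, a small standalone script like this one (a diagnostic sketch, not part of the app) requests a token for the same Cognitive Services scope the app uses:

const { DefaultAzureCredential } = require('@azure/identity');

async function main() {
  const credential = new DefaultAzureCredential();
  // Locally this resolves to your Azure CLI sign-in; in App Service it resolves to the managed identity.
  const token = await credential.getToken('https://cognitiveservices.azure.com/.default');
  console.log('Token acquired, expires at:', new Date(token.expiresOnTimestamp).toISOString());
}

main().catch((err) => console.error('Failed to acquire token:', err.message));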