
Hello StackOverflow community,

I’ve developed a Next.js application that uses the LangChain library for chat functionality and is deployed on AWS Amplify. The application works perfectly when running locally but fails after deployment to AWS Amplify.

The application uses the LangChain library, OpenAIEmbeddings for generating embeddings, and PineconeStore for storing vectors. I’ve ensured that my environment variables are correctly set up both locally and in the AWS Amplify console.

Here is the code snippet for my chat handler:

import type { NextApiRequest, NextApiResponse } from 'next';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';
import { makeChain } from '@/utils/makechain';
import { pinecone } from '@/utils/pinecone-client';
import { PINECONE_INDEX_NAME, PINECONE_NAME_SPACE } from '@/config/pinecone';

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  if (req.method !== 'POST') {
    res.status(405).json({ error: 'Method not allowed' });
    return;
  }

  const { question, history } = req.body;

  if (!question) {
    return res.status(400).json({ message: 'No question in the request' });
  }

  // Collapse newlines into spaces (replaceAll('n', ' ') would strip every letter "n")
  const sanitizedQuestion = question.trim().replace(/\n/g, ' ');

  try {
    const index = pinecone.Index(PINECONE_INDEX_NAME);
    const vectorStore = await PineconeStore.fromExistingIndex(
      new OpenAIEmbeddings(),
      {
        pineconeIndex: index,
        textKey: 'text',
        namespace: PINECONE_NAME_SPACE,
      },
    );

    const chain = makeChain(vectorStore);
    const response = await chain.call({
      question: sanitizedQuestion,
      chat_history: history || [],
    });

    res.status(200).json(response);
  } catch (error: any) {
    console.error('chat.ts handler error:', error);
    res.status(500).json({ error: error.message || 'Something went wrong' });
  }
  }
}

The error message I’m receiving suggests that the LangChain library assumes I am running on an Azure instance and expects an Azure-specific environment variable. However, I am not using Azure; I am using AWS. The error states that ‘azureOpenAIApiInstanceName’ is missing, which, as I understand it, is only relevant when using Azure.

Has anyone encountered a similar issue, or have any insights into why this might be happening? I’ve been unable to find anything in the LangChain documentation about this Azure dependency. Any help would be greatly appreciated!

2 Answers


  1. Chosen as BEST ANSWER

    I managed to resolve the issue by ensuring that the necessary environment variables were passed into the .env.production file during the build phase on AWS Amplify. The solution was to modify the amplify.yml build specification to include these environment variables.

    Here's a brief explanation for anyone encountering a similar problem:

    By default, server-side code in a Next.js app deployed on AWS Amplify does not receive the environment variables you set in the Amplify Console at runtime; they are only available during the build. So even if the variables are configured correctly in the Amplify Console, you need to make them accessible to your Next.js application at runtime yourself.
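One way to make this failure mode obvious is to fail fast when a required variable is missing at runtime, instead of letting the library fall through to a confusing code path. This is only a sketch; the variable names below are assumptions, not something mandated by LangChain or Amplify:

```typescript
// Hypothetical helper: report which required env vars are missing at runtime.
// The variable names are assumptions; list whichever your app actually needs.
function missingEnvVars(
  env: Record<string, string | undefined>,
  required: string[],
): string[] {
  return required.filter((key) => !env[key]);
}

// In the API handler, before touching LangChain:
// const missing = missingEnvVars(process.env, ['OPENAI_API_KEY', 'PINECONE_API_KEY']);
// if (missing.length > 0) {
//   return res.status(500).json({ error: `Missing env vars: ${missing.join(', ')}` });
// }
```

A clear 500 response naming the missing variables is far easier to debug than an Azure-related error from deep inside a dependency.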

    Here's what I did:

    Modify the amplify.yml file:

    version: 1
    frontend:
      phases:
        preBuild:
          commands:
            - npm ci
        build:
          commands:
            - env | grep -e OPENAI_API_TYPE -e YOUR_OTHER_ENV_VAR >> .env.production
            - npm run build
      artifacts:
        baseDirectory: .next
        files:
          - '**/*'
      cache:
        paths:
          - node_modules/**/*
          - .next/cache/**/*
    

    Replace YOUR_OTHER_ENV_VAR with any other environment variables you might need.
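For clarity, the grep line above effectively whitelists variables and appends them to .env.production as KEY=value lines. A small TypeScript model of that step (names are illustrative, not the actual build tooling):

```typescript
// Hypothetical model of the `env | grep ... >> .env.production` step:
// keep only the whitelisted variables and render them as KEY=value lines.
function toEnvFileLines(
  env: Record<string, string | undefined>,
  keys: string[],
): string {
  return keys
    .filter((key) => env[key] !== undefined)
    .map((key) => `${key}=${env[key]}`)
    .join("\n");
}
```

Variables not present in the build environment simply produce no line, so make sure each one is actually defined in the Amplify Console before relying on this step.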

    Deploy & Test: After updating the amplify.yml file, I pushed the changes, which triggered a new build and deployment on AWS Amplify. After deployment, the application accessed the environment variables correctly at runtime, and the error was resolved.

    In conclusion, the key takeaway is ensuring that any environment variables required during runtime are added to the .env.production file in the build phase on AWS Amplify. This makes them accessible to your Next.js server components when the application is live.

    I hope this helps others encountering a similar issue!


    The issue you’re experiencing appears to come from the LangChain library’s default configuration. In the validate_environment method of the AzureChatOpenAI class, openai_api_type defaults to "azure" when it is not supplied via environment variables or constructor parameters. That is why the library assumes you are running on an Azure instance and expects the Azure-specific 'azureOpenAIApiInstanceName' variable.

    Here is the relevant code snippet from the AzureChatOpenAI class:

    values["openai_api_type"] = get_from_dict_or_env(
        values, "openai_api_type", "OPENAI_API_TYPE", default="azure"
    )
    

    To resolve this issue, set the OPENAI_API_TYPE environment variable to the appropriate value for your deployment in your Next.js application's environment. This overrides the default "azure" value and should stop the LangChain library from assuming you are running on an Azure instance.
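    The fallback quoted above can be sketched in TypeScript as follows. This is an illustration of the same logic, not the library's actual code: an explicitly set OPENAI_API_TYPE wins, and only an unset variable falls back to "azure":

```typescript
// Illustration only: mirrors the get_from_dict_or_env fallback shown above.
// If OPENAI_API_TYPE is set it is used; otherwise "azure" is assumed.
function resolveApiType(env: Record<string, string | undefined>): string {
  return env["OPENAI_API_TYPE"] ?? "azure";
}
```

    This also explains why the accepted answer works: once OPENAI_API_TYPE reaches .env.production and is visible at runtime, the Azure default is never applied.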

    As for the ‘azureOpenAIApiInstanceName’ environment variable, I wasn’t able to find specific information about its purpose within the LangChain repository. It’s possible that it’s used to specify the name of the Azure OpenAI API instance that the LangChain library should interact with, but I would need more information to confirm this.
