Integrate ChatGPT into SPFx solutions

ChatGPT (Generative Pre-trained Transformer) has been getting a lot of attention lately. And I have to admit, it can accomplish great things! It gives the impression of fully understanding your question and responding appropriately. So how do we use it in SPFx solutions? Let’s move on to the fun part!

The source code for this web part can be found in the PnP Samples.

The API key

To start, you need an API key. This key identifies you to OpenAI and is therefore what you authenticate with. You can create and retrieve a key on the OpenAI website: click on your account at the top right, after which a menu will appear, and then click on “View API keys”.

You can then create an API key. Save it somewhere, because for security reasons it will not be shown again afterwards. An API key is personal, so never share it or use it publicly, as it could be intercepted (see further in this article).

The code

Once we have the API key, we can start writing the logic to use OpenAI in our SPFx solution. It is actually quite simple, because there is a library you can use. More information about this library can be found here.

First we install the openai library, which you can do with the following command:

   npm install openai

Then we can write the code. We start by creating a configuration object that holds the key:

   import * as React from 'react';
   import { Configuration, OpenAIApi } from 'openai';

   export default class Chatgpt extends React.Component<IChatgptProps, IChatgptState> {
     private static openai: OpenAIApi;

     public componentDidMount(): void {
       const configuration = new Configuration({
         apiKey: this.props.apiKey,
       });
       Chatgpt.openai = new OpenAIApi(configuration);
     }
   }

With this configuration object in place, we can call OpenAI. We do this by calling the createCompletion function with a CreateCompletionRequest object. In this object we include the following:

   const response = await Chatgpt.openai.createCompletion({
     model: "text-davinci-003",
     prompt: this.state.question,
     max_tokens: 2048
   });

– A model: at the time of writing we use text-davinci-003; more info about the models can be found here.
– The prompt: this is where we pass along the actual question.
– Finally, the max tokens. This option is not mandatory, but without it we only get an answer of about 28 characters long.

In the object we get back, we can retrieve the final answer via the path response.data.choices[0].text.
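Putting the request and the response handling together, a minimal sketch could look like the function below. The `ask` helper name and the structural `CompletionClient` interface are illustrative (the interface just mirrors the shape of the openai v3 SDK client we use), so the function stays self-contained:

```typescript
// Minimal structural type for the part of the OpenAI client we use
// (mirrors the openai v3 SDK's createCompletion signature and response).
interface CompletionClient {
  createCompletion(req: {
    model: string;
    prompt: string;
    max_tokens: number;
  }): Promise<{ data: { choices: { text?: string }[] } }>;
}

// Hedged sketch: send a question and pull the answer text out of the response.
async function ask(client: CompletionClient, question: string): Promise<string> {
  const response = await client.createCompletion({
    model: "text-davinci-003",
    prompt: question,
    max_tokens: 2048
  });
  // The generated text lives at response.data.choices[0].text.
  return response.data.choices[0].text ?? "";
}
```

In the web part you would pass the statically stored `Chatgpt.openai` instance as the client and put the result into the component state.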

How does it work?

If you don’t want to use the openai library, you can call the API directly by executing the POST request below. The notable thing about that request is that you have to pass your personal key as a Bearer token in the Authorization header. This means your API key is visible to whoever executes the request and can therefore be intercepted.

   POST https://api.openai.com/v1/completions
   Content-Type: application/json
   Authorization: Bearer [YOUR KEY]

   {
     "model": "text-davinci-003",
     "prompt": "Your question",
     "max_tokens": 2048
   }
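As a sketch, the same direct call could be made from code like this. The `askDirectly` and `buildCompletionBody` names are illustrative, not part of any library; the endpoint and payload match the request shown above:

```typescript
// Pure helper: builds the JSON body for the completions endpoint.
function buildCompletionBody(question: string): string {
  return JSON.stringify({
    model: "text-davinci-003",
    prompt: question,
    max_tokens: 2048
  });
}

// Hypothetical helper that calls the OpenAI REST API without the SDK.
async function askDirectly(apiKey: string, question: string): Promise<string> {
  // In the browser this is window.fetch; typed loosely here to stay self-contained.
  const fetchFn: any = (globalThis as any).fetch;
  const response = await fetchFn("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The personal key travels in clear view of whoever runs the request.
      "Authorization": `Bearer ${apiKey}`
    },
    body: buildCompletionBody(question)
  });
  const data = await response.json();
  return data.choices[0].text;
}
```

This is exactly why the key should never ship in client-side code for a public audience: anyone opening the browser's network tab sees the Authorization header.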
