Use ONLYOFFICE macro to populate cells with OpenAI data
AI technology has made significant advancements and has become a valuable tool for automating everyday workflows. In our previous post, we talked about an ONLYOFFICE macro that generates essays using the OpenAI API. In this blog post, we will show you a macro that populates a spreadsheet with data obtained from OpenAI.
Using OpenAI API
Last time, we connected to the OpenAI API through the RapidAPI platform. It’s a quick and convenient way to mediate fetch requests. However, the basic plan offers only 100 requests a month. So, this time, we are going to take it up a notch and create our own Node.js Express server. It will use the openai library to pass API requests directly to the appropriate language model, and it will also let us avoid CORS errors when making a fetch request from a browser.
Note! Making requests directly to OpenAI requires registering an OpenAI account and acquiring a personal API key.
Setting up a server
First, we create a new folder for our server and set up a new Node.js project by executing the following command in the terminal:
$ npm init -y
Then we install the following dependencies:
- express package – an essential framework that provides the server functionality.
- cors package – provides middleware to enable CORS.
- openai package – provides convenient access to the OpenAI API from Node.js.
- dotenv package – loads environment variables from a .env file into process.env.
To install them, run the npm install command in the terminal:
$ npm install express
$ npm install openai
$ npm install dotenv --save
$ npm install cors
After that we create the .env file. We will store our API keys there:
OPEN_AI_API_KEY="<YourOpenAIkey>"
SECOND_API_KEY="<YourAPIkey>"
The OPEN_AI_API_KEY parameter contains the API key obtained from OpenAI, and the SECOND_API_KEY parameter is our own API key for the server. We will require it in every incoming fetch request as a basic security measure.
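For illustration, here is a minimal sketch of what dotenv does when it loads the file: it parses KEY="value" lines and copies them into process.env without overriding variables that are already set. The sample keys below are placeholders, not real credentials:

```javascript
// Minimal illustration of dotenv's behavior: parse KEY="value" lines
// and copy them into process.env. The keys below are placeholders.
const sample = 'OPEN_AI_API_KEY="sk-example"\nSECOND_API_KEY="my-server-key"';
for (const line of sample.split('\n')) {
  const match = line.match(/^(\w+)\s*=\s*"?([^"]*)"?\s*$/);
  if (match && !(match[1] in process.env)) {
    process.env[match[1]] = match[2];
  }
}
```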
Then we initialize the installed packages in the index.js file:

const express = require('express');
const cors = require('cors');
const { Configuration, OpenAIApi } = require('openai');
require('dotenv').config();
const app = express();
app.use(cors());
app.use(express.json());

A successful request to OpenAI must include the OpenAI API key. To provide it, we extract the key from the .env file, store it in a configuration object, and pass that object to the OpenAIApi constructor. Note that the configuration variable must be defined before it is used:

const configuration = new Configuration({
apiKey: process.env.OPEN_AI_API_KEY,
});
const openai = new OpenAIApi(configuration);
Then we add a post route from which we will send a request to OpenAI:
app.post('/completion', async (req, res) => {
});
We also want to use our own API key as a security measure. This second API key must be included in the fetch request, so the server needs to validate the key in each incoming request. To achieve this, we add the following code to our route:
app.post('/completion', async (req, res) => {
const requestApiKey = req.body.apiKey;
if (!requestApiKey) {
res.status(400).send({ error: "Error: API Key is missing." });
return;
}
if (requestApiKey !== process.env.SECOND_API_KEY) {
res.status(401).send({ error: "Error: API Key is incorrect." });
return;
}
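The check above can also be sketched as a small pure function, which makes the two failure modes easy to reason about in isolation. The function name and return shape below are illustrative, not part of the actual server code:

```javascript
// Illustrative sketch of the API-key check as a pure function.
// It returns the HTTP status and error message the route would send.
function validateApiKey(requestApiKey, serverApiKey) {
  if (!requestApiKey) {
    return { status: 400, error: 'Error: API Key is missing.' };
  }
  if (requestApiKey !== serverApiKey) {
    return { status: 401, error: 'Error: API Key is incorrect.' };
  }
  return { status: 200 };
}
```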
If the API key is correct, we send a request to OpenAI and return the generated text to the client:

const completion = await openai.createCompletion({
model: 'text-davinci-003',
prompt: req.body.prompt,
temperature: 0,
max_tokens: 300,
});
res.json(completion.data.choices[0].text);
});
- model parameter specifies the name of the language model used to generate text. In this case, it’s text-davinci-003, the most advanced GPT-3 model for text generation.
- prompt parameter specifies text or context for the language model to generate text from. We will obtain this value from the fetch request body.
- temperature parameter controls the randomness of the generated text. A value of 0 means the text is completely deterministic, while higher values will result in more varied and unexpected text.
- max_tokens parameter indicates the maximum length of the generated text in tokens (i.e., words or sub-word units).
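To make the res.json(...) line clearer, here is the (abridged) shape of the object that the openai v3 Node.js library returns from createCompletion. The mocked response text below is a made-up example:

```javascript
// Abridged, mocked shape of a createCompletion result in the openai v3
// Node.js library. The route sends completion.data.choices[0].text
// back to the client.
const completion = {
  data: {
    choices: [{ text: '\n\n1 - United States - 26,900' }],
  },
};
const generatedText = completion.data.choices[0].text;
```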
Lastly, we set up our server to listen on port 3000:
app.listen(3000, () => {
console.log('Server running on port 3000');
});
The entire index.js code is the following:
const express = require('express');
const cors = require('cors');
const { Configuration, OpenAIApi } = require('openai');
require('dotenv').config();

const app = express();
const configuration = new Configuration({
    apiKey: process.env.OPEN_AI_API_KEY,
});
const openai = new OpenAIApi(configuration);

app.use(cors());
app.use(express.json());

app.post('/completion', async (req, res) => {
    const requestApiKey = req.body.apiKey;
    if (!requestApiKey) {
        res.status(400).send({ error: "Error: API Key is missing." });
        return;
    }
    if (requestApiKey !== process.env.SECOND_API_KEY) {
        res.status(401).send({ error: "Error: API Key is incorrect." });
        return;
    }
    const completion = await openai.createCompletion({
        model: 'text-davinci-003',
        prompt: req.body.prompt,
        temperature: 0,
        max_tokens: 300,
    });
    res.json(completion.data.choices[0].text);
});

app.listen(3000, () => {
    console.log('Server running on port 3000');
});
Building the macro
First, we make a fetch request to our server:
fetch('http://<your_server_address>/completion', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
prompt: 'List the 10 wealthiest countries in the world and provide their GDP figures',
apiKey: '<YourAPIkey>'
})
})
We specified the prompt and apiKey values in the request body. Note that these parameters will differ depending on the data you want to receive and your server API key.
Next, we parse the response as JSON data:
.then(response => response.json())
We split the response text by line breaks into an array of strings and target the active spreadsheet:
.then(data => {
var arrAllRows = data.split(/\r?\n|\r/);
var wSheet = Api.GetActiveSheet()
Next, we set up a nested loop to go through rows and columns in the spreadsheet. We use the split() function to break each row of data into cells, and the SetValue method to insert the response data into each cell, trimming any extra spaces. Note that we start the singleRow variable at 2 to skip the first elements of the arrAllRows array, since the response typically begins with empty lines. Additionally, we use the AutoFit method to adjust the cell width for better display:
var i = 1;
var j = 1;
for (var singleRow = 2; singleRow < arrAllRows.length; singleRow++) {
var rowCells = arrAllRows[singleRow].split('-');
for (var rowCell = 0; rowCell < rowCells.length; rowCell++) {
var oRange = wSheet.GetCells(i,j);
oRange.SetValue(rowCells[rowCell].trim());
oRange.AutoFit(false, true);
j = j + 1;
}
i = i + 1;
j = 1;
}
});
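To see how this parsing works, here is a self-contained sketch run on a made-up response (the completion text and figures are illustrative). The leading blank lines are why the loop starts at index 2:

```javascript
// Illustrative parsing of a hypothetical completion. Responses often start
// with blank lines, which is why the loop skips the first two array elements.
const sampleResponse = '\n\n1 - United States - 26,900\n2 - China - 17,700';
const arrAllRows = sampleResponse.split(/\r?\n|\r/);
const table = [];
for (let singleRow = 2; singleRow < arrAllRows.length; singleRow++) {
  table.push(arrAllRows[singleRow].split('-').map(cell => cell.trim()));
}
// table[0] is ['1', 'United States', '26,900']
```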
Lastly, we call the setInterval function to recalculate the spreadsheet so that the inserted values are displayed:
let reload = setInterval(function(){
Api.asc_calculate(Asc.c_oAscCalculateType.All);
});
})();
The entire macro code is the following:
(function()
{
    fetch('http://localhost:3000/completion', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            prompt: 'List the top 10 wealthiest countries and provide their GDP figures',
            apiKey: '<YourAPIkey>'
        })
    })
    .then(response => response.json())
    .then(data => {
        var arrAllRows = data.split(/\r?\n|\r/);
        var wSheet = Api.GetActiveSheet();
        var i = 1;
        var j = 1;
        for (var singleRow = 2; singleRow < arrAllRows.length; singleRow++) {
            var rowCells = arrAllRows[singleRow].split('-');
            for (var rowCell = 0; rowCell < rowCells.length; rowCell++) {
                var oRange = wSheet.GetCells(i, j);
                oRange.SetValue(rowCells[rowCell].trim());
                oRange.AutoFit(false, true);
                j = j + 1;
            }
            i = i + 1;
            j = 1;
        }
    });
    let reload = setInterval(function(){
        Api.asc_calculate(Asc.c_oAscCalculateType.All);
    });
})();
Now, let’s open the spreadsheet editor and run our macro!
We hope you will utilize the concepts outlined in this blog post to make your workflow more efficient. We kindly encourage you to take advantage of our various API methods and incorporate them into your daily routine tasks.
If you have any questions or ideas to share, feel free to leave comments or contact us. We are open to hearing your ideas and are eager to collaborate with you. Best of luck in your exploratory endeavors!