Recursive Summarization with OpenAI's GPT-4 and Next.js
When you need to summarize large text documents with GPT-4, one hurdle you will encounter is the model's token limit. Because GPT-4 processes only a fixed number of tokens per request, a lengthy document often cannot be condensed into a concise summary in a single call without losing key details. To address this, we can employ a technique called "Recursive Summarization."
If you are interested in this and want to see it implemented in a software product, check out FinSmart.
What is Recursive Summarization?
Recursive summarization is a strategy that involves summarizing a text multiple times until it fits within a certain length limit. The idea is to first break down the text into manageable chunks, summarize each chunk individually, then combine the summaries and repeat the process until the final summary is within the desired token limit.
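Before diving into the implementation, here is the shape of the idea as a minimal sketch. The helpers fitsWithinLimit, split, and summarizeOnce are hypothetical placeholders standing in for the concrete functions we build below:

async function summarize(text: string): Promise<string> {
  // Base case: the text already fits within the model's limit.
  if (fitsWithinLimit(text)) {
    return summarizeOnce(text);
  }
  // Recursive case: split, summarize each piece, merge, and try again.
  const chunks = split(text);
  const partials = await Promise.all(chunks.map(summarizeOnce));
  return summarize(partials.join(" "));
}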
Implementation Using GPT-4 and Next.js
To illustrate, we will walk you through a Next.js server-side application that uses recursive summarization to handle large texts.
Step 1: Setting Up OpenAI API
First, we initialize the OpenAI API client using your API key, which is read from an environment variable.
import { Configuration, OpenAIApi } from "openai";
// Configure the client with the API key from the environment.
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
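In a Next.js project, the key would typically live in a .env.local file at the project root; the value shown here is, of course, a placeholder:

OPENAI_API_KEY=your-openai-api-key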
Step 2: Text Slicing
The sliceText function accepts a text string and a maximum length. It breaks the text down into an array of chunks, each no longer than the maximum length, preferring to cut at newline characters for cleaner segmentation and falling back to a hard cut when no newline is available.
function sliceText(text: string, maxLength: number): string[] {
const chunks: string[] = [];
while (text.length > maxLength) {
// Look backwards from the limit for a newline to cut at.
let end = maxLength;
while (end > 0 && text[end] !== "\n") {
end--;
}
if (end === 0) {
// No newline found: cut hard at the limit, keeping every character.
chunks.push(text.slice(0, maxLength));
text = text.slice(maxLength);
} else {
// Cut at the newline and skip over it.
chunks.push(text.slice(0, end));
text = text.slice(end + 1);
}
}
if (text.length) {
chunks.push(text);
}
return chunks;
}
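To see how the newline-aware slicing behaves, here is a small worked example (the input and limit are just for illustration):

const chunks = sliceText("line one\nline two\nline three", 12);
// Cuts fall at newlines where possible:
// chunks[0] === "line one"
// chunks[1] === "line two"
// chunks[2] === "line three"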
Step 3: Summarizing Chunks
The summarizeChunk function sends a single chunk to the OpenAI chat completion API with a prompt asking for a summary, and returns the model's reply.
async function summarizeChunk(chunk: string): Promise<string> {
const response = await openai.createChatCompletion({
// gpt-3.5-turbo-16k offers a large context window for inexpensive chunk
// summaries; you can swap in a GPT-4 model here if you prefer.
model: "gpt-3.5-turbo-16k",
messages: [{
role: "user",
content: `You are an earnings transcript summarizer. Please provide a summary of the following text: Text: ${chunk}`,
}],
temperature: 0, // deterministic output
max_tokens: 1500, // cap the length of each chunk summary
});
if (response.data.choices && response.data.choices.length > 0 && response.data.choices[0].message) {
return response.data.choices[0].message.content.trim();
}
throw new Error("No choices found in API response");
}
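With many chunks summarized in parallel, this call will occasionally fail transiently, for example on rate limits. The code above does not handle that; a minimal retry wrapper, sketched here under the assumption that a short exponential backoff is acceptable, could look like this:

async function summarizeChunkWithRetry(chunk: string, retries = 3): Promise<string> {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await summarizeChunk(chunk);
    } catch (error) {
      if (attempt === retries - 1) throw error;
      // Back off before retrying: 1s, 2s, 4s, ...
      await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
  throw new Error("unreachable");
}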
Step 4: Recursive Summarization
The summarizeLargeText function is the heart of the recursive summarization. It slices the text into chunks, summarizes each chunk, and merges the summaries. If the merged summary exceeds the token limit, the function recursively calls itself to further compress the summary.
async function summarizeLargeText(text: string): Promise<string> {
// maxLength is measured in characters, used here as a rough proxy for the
// model's token limit.
const maxLength = 13500;
const chunks = sliceText(text, maxLength);
// Summarize all chunks in parallel.
const summarizedChunks = await Promise.all(chunks.map(chunk => summarizeChunk(chunk)));
const mergedSummary = summarizedChunks.join(" ");
// If the merged summary is still too long, recurse to compress it further.
if (mergedSummary.length > maxLength) {
return summarizeLargeText(mergedSummary);
}
return mergedSummary;
}
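Note that maxLength counts characters, not tokens. A common rule of thumb is that one token corresponds to roughly four characters of English text, so 13,500 characters is around 3,400 tokens, comfortably inside the 16k context of gpt-3.5-turbo-16k even with the 1,500-token completion. If you want something closer to the real limit, a rough estimator (a heuristic, not an exact tokenizer) might look like this:

// Rough heuristic: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}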
Step 5: API Handler Function
Finally, we have the API handler function. It validates the input, uses the summarizeLargeText function to summarize the text, and sends back the summary in the response.
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse): Promise<void> {
const { text } = req.body;
if (!text || typeof text !== 'string') {
res.status(400).json({ error: 'Invalid input text' });
return;
}
try {
const summary = await summarizeLargeText(text);
res.status(200).json({ summary });
} catch (error) {
// `error` is typed as `unknown` in a catch clause, so narrow it first.
const message = error instanceof Error ? error.message : 'Summarization failed';
res.status(500).json({ error: message });
}
}
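Assuming the handler lives at pages/api/summarize.ts (the route name is our choice; adjust it to wherever you place the file), a client could call it like this:

const response = await fetch("/api/summarize", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: longDocument }),
});
const { summary } = await response.json();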
Final Words
Recursive summarization is a powerful way to handle large texts with AI summarizers that have token limits. Breaking the problem down, summarizing each piece individually, and piecing the results back together provides a neat workaround for token limitations. The same pattern applies not just to text summarization but to a broad range of AI text-processing tasks. If you are interested in this and want to see it implemented in a software product, check out FinSmart.