
This code defines two asynchronous functions, askLlamaAboutConversation and askLlamaAboutCategory, that use a large language model (LLM) to summarize and categorize chat conversations. Both functions depend on the "create llm session" module and are exported as a CommonJS module.

Run example

npm run import -- "ask llm about chat conversations"

ask llm about chat conversations


async function askLlamaAboutConversation(currentMessages) {
  const {llmPrompt} = await importer.import("create llm session")
  let q1 = 'Can you summarize in two sentences what this conversation is about:\n' + 
  currentMessages.join('\n') + '\nPlease discard any pleasantries, documentation only.'
  console.log("User: " + q1);
  const a1 = await llmPrompt(q1);
  console.log("AI: " + a1);
  return a1.trim()
}

async function askLlamaAboutCategory(currentMessages) {
  const {llmPrompt} = await importer.import("create llm session")
  let q1 = 'Categorize this conversation in two or three words:\n' + 
  currentMessages.join('\n') + '\nOnly respond with the category.'
  console.log("User: " + q1);
  const a1 = await llmPrompt(q1);
  console.log("AI: " + a1);
  return a1.trim().split(/\s*\n\s*|,\s*|\s*- |\s*\* /gi)[0]
}

module.exports = {
  askLlamaAboutConversation,
  askLlamaAboutCategory
}
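The exported functions can be exercised without a live model by injecting a stub in place of llmPrompt. The sketch below is illustrative only: it inlines the category logic with llmPrompt passed as a parameter (the real code pulls it from the "create llm session" module), and fakePrompt is a made-up stand-in for an actual Llama response.

```javascript
// Sketch: the category helper with llmPrompt injected as a parameter,
// so it can run without the "create llm session" module.
async function askLlamaAboutCategory(currentMessages, llmPrompt) {
  const q = 'Categorize this conversation in two or three words:\n' +
    currentMessages.join('\n') + '\nOnly respond with the category.';
  const a = await llmPrompt(q);
  // Keep only the first segment of the model's reply.
  return a.trim().split(/\s*\n\s*|,\s*|\s*- |\s*\* /gi)[0];
}

// Stubbed model response stands in for a live Llama session.
const fakePrompt = async () => '  Bug Report\nThe user reported a crash.';

askLlamaAboutCategory(['App crashes on start', 'Stack trace attached'], fakePrompt)
  .then(category => console.log(category)); // "Bug Report"
```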

What the code could have been:

const importer = require('./importer'); // assuming importer is defined in a separate file
const logger = require('./logger'); // assuming a logger is defined in a separate file

/**
 * Asks Llama to summarize the conversation.
 *
 * @param {string[]} currentMessages - The current messages in the conversation.
 * @returns {Promise<string>} A two-sentence summary of the conversation.
 */
async function askLlamaAboutConversation(currentMessages) {
  // Import the LLM prompt function
  const { llmPrompt } = await importer.import('create llm session');

  // Prepare the prompt with the current messages
  const prompt = `Can you summarize in two sentences what this conversation is about:
${currentMessages.join('\n')}
Please discard any pleasantries, documentation only.`;

  // Log the user message
  logger.log(`User: ${prompt}`);

  try {
    // Ask Llama for a response
    const response = await llmPrompt(prompt);
    // Log the AI response
    logger.log(`AI: ${response}`);
    // Return the response
    return response.trim();
  } catch (error) {
    // Handle any errors that occur during the request
    logger.error(`Error asking Llama about conversation: ${error.message}`);
    throw error;
  }
}

/**
 * Asks Llama about the conversation category.
 *
 * @param {string[]} currentMessages - The current messages in the conversation.
 * @returns {Promise<string>} The category of the conversation.
 */
async function askLlamaAboutCategory(currentMessages) {
  // Import the LLM prompt function
  const { llmPrompt } = await importer.import('create llm session');

  // Prepare the prompt with the current messages
  const prompt = `Categorize this conversation in two or three words:
${currentMessages.join('\n')}
Only respond with the category.`;

  // Log the user message
  logger.log(`User: ${prompt}`);

  try {
    // Ask Llama for a response
    const response = await llmPrompt(prompt);
    // Log the AI response
    logger.log(`AI: ${response}`);

    // Extract the category from the response
    const category = response.trim().split(/\s*\n\s*|,\s*|\s*- |\s*\* /gi)[0];

    // Return the category
    return category;
  } catch (error) {
    // Handle any errors that occur during the request
    logger.error(`Error asking Llama about category: ${error.message}`);
    throw error;
  }
}

module.exports = {
  askLlamaAboutConversation,
  askLlamaAboutCategory,
};
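The improved version above assumes a './logger' module with log and error methods; nothing in the source defines it. One hypothetical shape it could take is a pair of timestamped wrappers around console (the project's real logger may differ):

```javascript
// Hypothetical './logger': timestamped wrappers around console.
// Only the log/error interface used above is assumed here.
const logger = {
  log: (...args) => console.log(new Date().toISOString(), ...args),
  error: (...args) => console.error(new Date().toISOString(), ...args),
};

module.exports = logger;
```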

Code Breakdown

The provided code defines two asynchronous functions, askLlamaAboutConversation and askLlamaAboutCategory, that use a large language model (LLM) to summarize and categorize conversations. Both are designed to work with the "create llm session" module.

askLlamaAboutConversation Function

This function:

  1. Imports the create llm session module using importer.
  2. Creates a prompt (q1) that asks the LLM to summarize the conversation in two sentences, discarding pleasantries.
  3. Logs the user prompt to the console.
  4. Sends the prompt to the LLM using the llmPrompt function and logs the AI response.
  5. Returns the AI response after removing leading and trailing whitespace.

askLlamaAboutCategory Function

This function:

  1. Imports the create llm session module using importer.
  2. Creates a prompt (q1) that asks the LLM to categorize the conversation in two or three words.
  3. Logs the user prompt to the console.
  4. Sends the prompt to the LLM using the llmPrompt function and logs the AI response.
  5. Returns the first segment of the trimmed AI response, splitting on newlines, commas, and "- " or "* " bullet markers so that only the category itself is kept.
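The split in step 5 can be seen in isolation. This snippet uses a made-up model reply to show how the delimiters cut off everything after the category:

```javascript
// The split pattern treats newlines, commas, and "- " / "* " bullet
// markers as delimiters; taking element [0] keeps only the category.
const raw = '  Technical Support\n- user asked about login errors';
const category = raw.trim().split(/\s*\n\s*|,\s*|\s*- |\s*\* /gi)[0];
console.log(category); // "Technical Support"
```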

Exported Module

The code exports both functions as a module, allowing them to be used in other JavaScript files.

module.exports = {
  askLlamaAboutConversation,
  askLlamaAboutCategory
}