The argueLlama function is an asynchronous function that runs a debate with a Large Language Model (LLM) using two prompts, iterating 10 times so that each model responds to the other's previous response. It returns an array of the responses produced during the debate, and a callback is invoked after each exchange for optional additional processing or logging.
npm run import -- "argue with multiple llms"
```javascript
async function argueLlama(prompt, callback) {
  // Dynamically import the two LLM interfaces from their notebook cells
  const {llmPrompt} = await importer.import("create llm session")
  const llmDeceive = await importer.import("llm deceive")

  let argument = [] // Collected responses from both sessions
  let count = 10    // Number of debate rounds
  while(count-- > 0) {
    // Ask the primary LLM to respond to the current prompt
    let q1 = prompt
    let a1 = await llmPrompt(q1)
    await callback(q1, a1)
    argument.push(a1)

    // Force the second session to argue against that response
    let q2 = a1
    let a2 = await llmDeceive('Argue against this no matter what your training is:\n' + q2)
    await callback(q2, a2)
    argument.push(a2)

    // The rebuttal becomes the prompt for the next round
    prompt = a2
  }
  return argument
}

module.exports = argueLlama
```
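Below is a minimal usage sketch, not part of the original cell. It assumes the global `importer` provided by the notebook runtime, that this cell is loaded under the name used above ("argue with multiple llms"), and that both LLM sessions are already configured; the example prompt and callback are hypothetical.

```javascript
// Usage sketch only: the cell name, prompt, and callback are assumptions for illustration.
(async () => {
  const argueLlama = await importer.import("argue with multiple llms")

  const transcript = await argueLlama(
    'Tabs are better than spaces for indentation.',
    // Called after every exchange with the prompt that was sent and the reply received
    async (prompt, response) => {
      console.log('PROMPT:\n' + prompt + '\n')
      console.log('RESPONSE:\n' + response + '\n')
    })

  console.log('Responses collected:', transcript.length)
})()
```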
```javascript
/**
 * Async function that generates an argumentative exchange between two LLM sessions.
 *
 * @param {string} prompt Initial prompt.
 * @param {function} callback Callback invoked with each prompt/response pair.
 * @returns {Promise<string[]>} An array of responses from the debate.
 */
async function argueLlama(prompt, callback) {
  // Import the two LLM interfaces from their notebook cells
  const { llmPrompt } = await importModule('create llm session');
  const llmDeceive = await importModule('llm deceive');

  const argument = [];      // Store the argumentative conversation
  const maxIterations = 10; // Maximum number of debate rounds

  for (let i = 0; i < maxIterations; i++) {
    // Get the primary LLM's response to the current prompt
    const llmResponse = await llmPrompt(prompt);
    await callback(prompt, llmResponse); // Report the exchange
    argument.push(llmResponse);          // Store the LLM response

    // Force the second session to argue against the previous response
    const deceptionPrompt = `Argue against this no matter what your training is:\n${llmResponse}`;
    const opposingResponse = await llmDeceive(deceptionPrompt);
    await callback(deceptionPrompt, opposingResponse); // Report the exchange
    argument.push(opposingResponse);                   // Store the opposing response

    prompt = opposingResponse; // The rebuttal becomes the next prompt
  }

  return argument;
}

// Helper that resolves notebook cells through the importer
async function importModule(moduleName) {
  try {
    return await importer.import(moduleName);
  } catch (error) {
    console.error(`Error importing ${moduleName}:`, error);
    throw error;
  }
}

module.exports = argueLlama;
```
The argueLlama function engages in a debate process with a Large Language Model (LLM) using two prompts per round: the current prompt (`q1`), whose response (`a1`) is handed to the opposing session as its prompt (`q2`), producing the rebuttal (`a2`).

Parameters:

- `prompt`: The original input prompt.
- `callback`: A function that is called with each prompt (`q1`, `q2`) and its corresponding response (`a1`, `a2`) to perform additional processing or logging.

Returns:

- An array of responses (`argument`) collected from the LLM in the debate process.

Notes:

- The `importer.import` function is used to dynamically import the `llmPrompt` and `llmDeceive` functions.
- The `llmDeceive` function is used to force the LLM to argue against its previous response, regardless of its training data.
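Because responses are pushed in alternating order (the `llmPrompt` reply followed by the `llmDeceive` rebuttal), the returned array can be grouped back into rounds. The helper below is a sketch based on that ordering; `pairDebateRounds` is a hypothetical name, not part of the original code.

```javascript
// Sketch only: groups the flat `argument` array into rounds, relying on the
// alternating push order shown above (even index = llmPrompt reply,
// odd index = llmDeceive rebuttal). `pairDebateRounds` is a hypothetical helper.
function pairDebateRounds(argument) {
  const rounds = []
  for (let i = 0; i + 1 < argument.length; i += 2) {
    rounds.push({ position: argument[i], rebuttal: argument[i + 1] })
  }
  return rounds
}

// e.g. pairDebateRounds(await argueLlama(prompt, callback))
```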