The openai module allows you to communicate with the OpenAI API directly from machine code.
const OpenAI = require('openai')

// Create a client authenticated with your API key
const openai = new OpenAI({ apiKey: 'your_api_key' })
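Hardcoding the key works for quick tests, but the v4 openai package also falls back to the OPENAI_API_KEY environment variable when apiKey is omitted, so you can keep the key out of your code (assuming your machine environment exposes environment variables):

// If OPENAI_API_KEY is set in the environment, no options are needed
const openai = new OpenAI()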
Chat completion:

const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello, what can you do?' }
  ],
  temperature: 0.7,
  max_tokens: 100
})

const answer = response.choices[0].message.content
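API calls can fail (rate limits, invalid requests, network errors), so it is worth wrapping them. A minimal sketch, assuming the v4 openai package, whose client methods throw errors that carry the HTTP status code:

try {
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
  console.log(response.choices[0].message.content)
} catch (err) {
  // e.g. err.status === 429 for rate limits
  console.error('OpenAI request failed:', err.status, err.message)
}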
Text completion (legacy completions endpoint; the 'text-davinci-003' model has been retired, and 'gpt-3.5-turbo-instruct' is its current replacement):

const response = await openai.completions.create({
  model: 'gpt-3.5-turbo-instruct',
  prompt: 'Write a short story about a cat.',
  temperature: 0.7,
  max_tokens: 100
})

const text = response.choices[0].text
Image generation:

const response = await openai.images.generate({
  prompt: 'A cat riding a bike in Prague',
  n: 1,
  size: '512x512'
})

const imageUrl = response.data[0].url
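If you want the image bytes directly instead of fetching a URL, the Images API also accepts response_format: 'b64_json', in which case the result arrives as a base64 string:

const response = await openai.images.generate({
  prompt: 'A cat riding a bike in Prague',
  n: 1,
  size: '512x512',
  response_format: 'b64_json' // return base64-encoded image data instead of a URL
})

const base64Png = response.data[0].b64_json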
If you need to send an image (e.g. for vision models), you can convert a BlobReference to a base64 string and include it in your API request:
// Assume you have a BlobReference 'imageBlob' and a valid machineContext
const base64Image = await blobToBase64(machineContext, imageBlob)

const response = await openai.chat.completions.create({
  model: 'gpt-4-vision-preview',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image.' }, // note: the key is 'text', not 'content'
        { type: 'image_url', image_url: { url: `data:image/png;base64,${base64Image}` } }
      ]
    }
  ]
})

const answer = response.choices[0].message.content
Replace image/png with the correct MIME type if needed. This example shows how to send an image as base64 to OpenAI's vision model.
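If the blob's format varies, one way to choose the MIME type is to map from the file extension. A minimal sketch with a hypothetical getMimeType helper (the extension-to-type table is an assumption for illustration, not part of the module):

// Hypothetical helper: map a file extension to a MIME type for the data URL
function getMimeType (filename) {
  const types = { png: 'image/png', jpg: 'image/jpeg', jpeg: 'image/jpeg', gif: 'image/gif', webp: 'image/webp' }
  const ext = filename.split('.').pop().toLowerCase()
  return types[ext] || 'application/octet-stream'
}

const mimeType = getMimeType('photo.jpg') // 'image/jpeg'
const dataUrl = `data:${mimeType};base64,${base64Image}`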
Parameters:

model: Model name (e.g. 'gpt-3.5-turbo', 'gpt-3.5-turbo-instruct')
messages: Array of messages for chat (role: 'system', 'user', 'assistant')
prompt: Text prompt for text generation
temperature: Response creativity (0-2)
max_tokens: Maximum response length