Guide: Working with AI
Because you access AI just like accessing any API, Qreli can work with any AI.
OpenAI
Because it works across any API, Qreli includes full support for OpenAI (ChatGPT), including chat completions, streaming, files, vector stores, etc.
To authenticate, grab your API key from OpenAI and enter it in the Authorization section below. Be sure the value starts with "Bearer ...".
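As a minimal sketch (in Python, purely for illustration), this is the shape of the Authorization header OpenAI expects; the key value is a placeholder, not a real key:

```python
# Sketch: the Authorization header OpenAI expects.
# "sk-your-key-here" is a placeholder - substitute your real API key.
api_key = "sk-your-key-here"

headers = {
    "Authorization": f"Bearer {api_key}",  # note the "Bearer " prefix
    "Content-Type": "application/json",
}

print(headers["Authorization"])  # → Bearer sk-your-key-here
```

Forgetting the "Bearer " prefix is the most common cause of a 401 response.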
Chat (Chat completion function)
This is the most common way of conversing with AI.
Of course, all you would need to do in your app is replace any field with data from your app, using the ${ ... } construct, e.g.:
When you send this API request, and assuming you are authenticated correctly, you would get a response back like this:
In your app, you would refer directly to the data you need. For example, assuming the API call above is V.chatgpt_ask, you would access its content as ${V.chatgpt_ask}{choices[0].message.content}.
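To see why that path works, here is a sketch (Python, for illustration) of a chat completions request body and a trimmed sample response in the shape the API returns; the model name, message, and response text are illustrative, not a real capture:

```python
import json

# Sketch of a Chat Completions request body; model and messages
# are example values, not prescriptive.
request_body = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "Say hello"},
    ],
}

# A trimmed sample response in the shape the API returns.
sample_response = json.loads("""
{
  "choices": [
    {"message": {"role": "assistant", "content": "Hello!"}}
  ]
}
""")

# This walks the same path as the expression
# ${V.chatgpt_ask}{choices[0].message.content}:
content = sample_response["choices"][0]["message"]["content"]
print(content)  # → Hello!
```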
Or, if you mapped the responses, such as:
you would simply call ${V.chatgpt_ask}{content}.
By default, this function prepares the full response before sending the result. Using the Showcase app in your account as an example, the result looks like this:
If, however, we turn on streaming, we will start seeing the results right away - this is especially useful for long AI responses:
and change the output request from simply displaying the result of the API call:
to streaming it:
we get a streaming experience like this:
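Under the hood, with streaming turned on, OpenAI sends Server-Sent Events where each "data:" line carries a JSON chunk containing a partial "delta". The sketch below (Python, with sample lines that are illustrative rather than a real capture) shows how those deltas accumulate into the full reply:

```python
import json

# Illustrative SSE lines as OpenAI streams them; a real stream would
# arrive over HTTP, chunk by chunk.
sse_lines = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    "data: [DONE]",
]

text = ""
for line in sse_lines:
    payload = line[len("data: "):]
    if payload == "[DONE]":           # sentinel marking end of stream
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"]
    text += delta.get("content", "")  # deltas concatenate into the reply

print(text)  # → Hello!
```

This is why a streaming output starts rendering immediately: each delta can be displayed as soon as it arrives, without waiting for the whole response.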
Generate an Image (Create image function)
To create an image with OpenAI, just use the Create image function:
The result will be a link to a temporary URL holding your image:
If you wish to keep the picture, save it somewhere on your end or directly in your Qreli account; see the second operation, ${V.save_img}, below:
Once saved like this, it will always be available:
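For reference, a sketch (Python, for illustration) of a Create image request body and the shape of the response; the prompt and size are example values, and the URL is a placeholder, since the real one is temporary and tied to your request:

```python
import json

# Sketch of a Create image request body; prompt, n, and size
# are example values.
request_body = {
    "model": "dall-e-3",
    "prompt": "A watercolor fox",
    "n": 1,
    "size": "1024x1024",
}

# Trimmed sample response: the API returns a short-lived URL
# (placeholder shown here).
sample_response = json.loads("""
{"data": [{"url": "https://example.com/tmp/image.png"}]}
""")

# The temporary link you would download and save on your end:
image_url = sample_response["data"][0]["url"]
print(image_url)
```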
Claude
After importing Claude, we set up our authentication; get your x-api-key from your Anthropic account:
Messages
Then, we call the messages API:
and we get the response back:
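As a sketch (Python, for illustration) of what that exchange looks like: the Messages API also expects an anthropic-version header alongside x-api-key, and the response body carries a list of content blocks rather than OpenAI's choices array. The model name, key, and reply below are illustrative placeholders:

```python
import json

# Headers the Anthropic Messages API expects; the key is a placeholder.
headers = {
    "x-api-key": "your-anthropic-key-here",
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}

# Sketch of a Messages request body; model and max_tokens are examples.
request_body = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Say hello"},
    ],
}

# Trimmed sample response: Claude returns a list of content blocks.
sample_response = json.loads("""
{"content": [{"type": "text", "text": "Hello!"}]}
""")

reply = sample_response["content"][0]["text"]
print(reply)  # → Hello!
```

Note the different access path: content[0].text here, versus choices[0].message.content for ChatGPT.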
You would use Claude similarly to how you use ChatGPT (described above), but please note that streaming is not currently supported for Claude.
Gemini
Just like with ChatGPT and Claude, we start by authenticating to use Gemini:
Generate Content
To have Gemini generate content, we use this aptly named function:
We need to provide the model in the URL; for example, let's use models/gemini-1.5-flash (you can see the available models by calling the List Models function):
We get a response back that looks like this:
Streaming is not yet available for Gemini; otherwise, you use it the same way you would use ChatGPT above.
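For completeness, a sketch (Python, for illustration) of a generateContent request body and the shape of the response Gemini returns; the prompt text and reply are illustrative placeholders:

```python
import json

# Sketch of a generateContent request body; the prompt text is
# an example. The model is named in the URL, not in the body.
request_body = {
    "contents": [
        {"parts": [{"text": "Say hello"}]},
    ],
}

# Trimmed sample response in the shape Gemini returns.
sample_response = json.loads("""
{"candidates": [{"content": {"parts": [{"text": "Hello!"}]}}]}
""")

# Gemini's access path differs again from OpenAI's and Claude's:
reply = sample_response["candidates"][0]["content"]["parts"][0]["text"]
print(reply)  # → Hello!
```

So the path to the generated text is candidates[0].content.parts[0].text, versus choices[0].message.content for ChatGPT and content[0].text for Claude.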
Others
Following the provider's instructions, add the AI you wish to use (or ask us to do so), then authenticate your API calls and make use of whatever resources the provider offers.