3. AI Text and Images

This app will introduce you to API orchestration, Qreli's superpower.


We will create a small app that interfaces with OpenAI (i.e. ChatGPT) to generate text and images based on our requirements.


Flow

This is the structure of our app. We'll offer to chat or generate images, then complete the request. Simple.


Before you can use this app, be sure to add your OpenAI API key in Sources.

You can get one by visiting https://platform.openai.com
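
Under the hood, Qreli sends this key to OpenAI as a standard Bearer token. Here is a minimal sketch of what that looks like outside Qreli; the key value is a placeholder, and the models endpoint is used only as a quick connectivity check:

```python
import requests

OPENAI_API_KEY = "sk-..."  # placeholder; in Qreli you store the real key in Sources

headers = {
    "Authorization": f"Bearer {OPENAI_API_KEY}",
    "Content-Type": "application/json",
}

# Quick sanity check that the key works: list the available models.
resp = requests.get("https://api.openai.com/v1/models", headers=headers)
print(resp.status_code)  # 200 means the key is valid
```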


The menu will show a list of options, and we'll use buttons pointing to a step in our app to navigate.

Since we will be going back to the main menu repeatedly, we will clear any chats and history each time, so that every new chat starts from a blank slate. We do this in the BEFORE section, since AFTER will never be reached: clicking a button jumps directly to another step.

OpenAI can return results via API after it completes the processing on its end (non-streaming), or it can send chunks of the result as it works to complete the full request (streaming).
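
To make the difference concrete, here is a rough sketch of the same Chat Completions request body in both modes; the model name and prompt are just examples:

```python
# Same request body in both modes; only the stream flag differs (sketch).
non_streaming_payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
}  # OpenAI replies once, with the complete answer

streaming_payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,  # OpenAI sends the answer in chunks as it is generated
}
```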

Chat (Non-Streaming)



There are many ways to build this - Qreli lets you implement your vision however you wish.


In this example, we will construct a loop where we continuously prompt the user for questions, which OpenAI will answer.


We will do this as follows:

  1. In the STEP interface, we show whatever chat history we've collected, separated by a line return — see Decorators for array-to-list formats (we'll store the history in an array, as you will see shortly). Initially the history is empty, so nothing will be shown.

    Then, we ask the user for a prompt, which we'll store into the chat_ask data field:

  2. Once our user enters a prompt, in the AFTER section we send the request to OpenAI and ask for a response. In this case, we are using their Chat Completions API function to pass the prompt and get a response (see the sketch after these steps for what this call looks like outside Qreli).

A few things to note here:

a. There are many models to pick from; we chose something cheap and simple to illustrate the concept, but you can try different models. You would do something similar with Claude, Gemini, Grok, etc.

b. You can reference and use any response from APIs. Sometimes, however, you may wish to map something to make it easier to reference later. For example, instead of referencing something like ${V.chat_ns}{choices[0].message.content} we map what we need to ${V.chat_ns}{content}:

The advantage of mapping frequently used responses is that they show up as autocomplete options:

After we get the response from the API, we add it to a history array where we accumulate the user prompts and corresponding AI responses, separated by an empty line:

  3. Lastly, we simply loop back to the same step, so the cycle of asking the user something followed by AI's response repeats indefinitely.

Users can click on the main menu button to go back to the main menu.
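
For reference, here is a minimal sketch of what this loop amounts to when calling OpenAI's Chat Completions API directly. The model name, variable names, and history format are illustrative; in Qreli the call, the mapping of choices[0].message.content to {content}, and the history array are all configured in the AFTER section:

```python
import requests

OPENAI_API_KEY = "sk-..."  # placeholder; stored in Sources in Qreli
history = []               # accumulated "You: ... / AI: ..." exchanges

def ask_openai(chat_ask: str) -> str:
    """Send one prompt to the Chat Completions API and return the answer text."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={
            "model": "gpt-4o-mini",  # cheap and simple; any chat model works
            "messages": [{"role": "user", "content": chat_ask}],
        },
    )
    # This is the value the walkthrough maps to {content}.
    return resp.json()["choices"][0]["message"]["content"]

# One turn of the loop: ask, answer, append both to the history.
chat_ask = "What is API orchestration?"
content = ask_openai(chat_ask)
history.append(f"You: {chat_ask}\n\nAI: {content}")
```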




Chat (Streaming)

Now that we understand how Qreli works with APIs, this one is easier.

In the STEP interface, we show the history object just like before, followed by the chat stream formatted as markdown, since this is AI's response:

AFTER the user asks the question, we prepare the history. Note that we are using the completed response from AI here, ${V.chat}{htmlContent}, instead of the stream displayed above, ${V.chat}{htmlStream}.

This is where the magic happens: we give AI the previous context and ask the question. We also tell it to add the AI tag so it flows nicely (note that in the non-streaming version we added it ourselves).

A few more things to note here:

  • note that the payload property stream is set to true; this tells OpenAI to stream the data
  • note also that we use a double underscore as the cursor; you may use your own
  • depending on where/how your Qreli app is deployed, you may wish to set the autoscroll to bottom
  • you may add a STOP button and place it somewhere; you may further run other actions when you stop the stream
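
Putting these notes together, here is a rough sketch of the streaming version as a direct API call: the previous history travels with the new question, stream is set to true, and the chunks are appended to the display with a double-underscore cursor. The model name, prompt wording, and SSE parsing details are illustrative; Qreli handles the chunk handling and the htmlStream/htmlContent values for you:

```python
import json
import requests

OPENAI_API_KEY = "sk-..."        # placeholder; stored in Sources in Qreli
history = "You: ...\n\nAI: ..."  # exchanges accumulated so far
question = "Can you expand on that?"

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
    json={
        "model": "gpt-4o-mini",
        # Previous context plus the new question; we also ask for the "AI:" tag
        # so the transcript flows nicely.
        "messages": [
            {"role": "system", "content": "Prefix every answer with 'AI:'."},
            {"role": "user", "content": f"{history}\n\nYou: {question}"},
        ],
        "stream": True,  # tells OpenAI to stream the data
    },
    stream=True,
)

answer = ""
for line in resp.iter_lines():
    if not line.startswith(b"data: "):
        continue
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":
        break
    chunk = json.loads(payload)
    answer += chunk["choices"][0]["delta"].get("content", "")
    print(answer + "__", end="\r")  # "__" plays the role of the cursor

print(answer)  # the completed response (htmlContent), cursor removed
```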

Lastly, if AI returns images in the stream (e.g. you ask it to generate a graph from some data), you would use the image resolver to call OpenAI's function Retrieve file content to grab that image so it may be included in the htmlStream:
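
A rough sketch of that retrieval outside Qreli, assuming the stream referenced a hypothetical file id; Retrieve file content is a simple GET that returns the raw file bytes:

```python
import requests

OPENAI_API_KEY = "sk-..."  # placeholder; stored in Sources in Qreli
file_id = "file-abc123"    # hypothetical id referenced in the stream

# Retrieve file content: download the raw bytes of the generated file.
resp = requests.get(
    f"https://api.openai.com/v1/files/{file_id}/content",
    headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
)

with open("graph.png", "wb") as f:
    f.write(resp.content)  # the image can now be included in the htmlStream
```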

Finally, just like before, we transition back to the start of this step to continue interacting with AI:



Creating Images


OK, we've learned how to interact with APIs to generate text. You would use the same concept for other LLMs, but some may not yet support streaming via API.

Now, let's generate an image.

Let's start by asking the user to describe what they would like generated, and to select the image size. We do this in the STEP interface, as that is where the interaction with users occurs:

After we receive the response, we transition automatically to the next STEP named Image Result. Here, BEFORE we show the image, we ask AI to generate it by using OpenAI's Create Image function.

Note that we pass in the prompt and image size from the user input.

To make it easier to work with, we map the revised prompt (AI adds additional detail to the user input) and the generated url.
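
As a point of reference, here is a minimal sketch of the same Create Image call made directly against OpenAI's images endpoint. The model, prompt, and size values are illustrative; in Qreli the prompt and size come from the user's input and the mapping is configured on the function:

```python
import requests

OPENAI_API_KEY = "sk-..."  # placeholder; stored in Sources in Qreli

# User input collected in the STEP interface.
prompt = "A watercolor fox reading a book"
size = "1024x1024"         # one of the sizes the user can pick

resp = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
    json={"model": "dall-e-3", "prompt": prompt, "size": size},
)
image = resp.json()["data"][0]

revised_prompt = image["revised_prompt"]  # AI's expanded version of the user prompt
url = image["url"]                        # temporary link to the generated image
```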

Note that the generated images are only available temporarily.

Save them somewhere (e.g. your app storage) if you wish to make them available permanently.

Once the image has been generated, we show the revised prompt and the image.

Note that we use a Qreli Media object to show the image by passing in the generated url:


This is it!

We suggest you clone this app to have as reference and modify your copy as you explore various settings to see how your app behaves.

The way you access this API is the same way you will access any other API.

In brief, this is how to work with any API:

  1. Add the API to your Sources (from the Library, imported from a Postman collection, or added manually)
  2. Consult the API documentation for how to connect, and add the details to the AUTH section.
  3. Verify that you have connected correctly before continuing by testing an API function.
  4. Add whatever API functions you need to your app.
  5. Design, Test, Deploy: Done ✅