[PAID] 🌟 ChatGPT extension to create fantastic conversations with GPT models



Hello everyone! I am here today to introduce my new extension, ChatGPT.

The ChatGPT extension is designed to engage in conversations with ChatGPT and deliver the resulting response in an API-style structure. It allows MIT App Inventor users to integrate OpenAI’s powerful language models into their Android apps seamlessly.


Continuous Chat Conversation:

  • The extension supports continuous chat conversations by sending a list of prompts, enabling dynamic and interactive conversations with the OpenAI bot, so the model does not forget the previous messages you have sent.

Streaming Support:

  • It enables streaming of responses in chunks, making it suitable for handling large responses.

Error Handling:

  • The extension includes error handling through the Error Block and related error events, which allow you to handle errors gracefully and provide feedback to users.

Audio Transcription:

  • The extension includes blocks for audio transcription using OpenAI’s Audio Transcriptions, with events for handling and displaying transcribed text.

Audio Translation:

  • It also provides functions for audio translation using OpenAI’s Audio Translations, with events for processing and displaying translated text.

Image Generation:

  • It also provides functions for generating images using DALL·E.


The SendMessage block is responsible for sending a conversation to ChatGPT and processing the response. Here is a breakdown (a request sketch follows the parameter list):

  1. Block Description: This Block allows users to interact with the OpenAI ChatGPT and receive structured API-style responses.
  2. Function Parameters:
  • prompts: A list of conversation prompts provided by the user, as in the block above.
  • model: The name of the OpenAI model to be used.
  • apiKey: The API key for authorization.
  • maxTokens: The maximum number of tokens in the response.
  • temperature: A value controlling the randomness of the response.
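
Under the hood this maps onto OpenAI’s Chat Completions endpoint. Here is a rough, minimal sketch of the kind of request the block issues, in plain Java with placeholder values (not the extension’s actual source). Each item in the prompts list becomes one object in the messages array, which is what makes the continuous conversation work:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ChatCompletionSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY"); // placeholder for the apiKey parameter

        // The prompts list maps onto the "messages" array: one object per turn.
        String body = """
            {
              "model": "gpt-3.5-turbo",
              "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Hello!"}
              ],
              "max_tokens": 256,
              "temperature": 0.7
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```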

The RespondedToChat block is an event that is triggered when OpenAI provides a response to a user’s inquiry within the Chat block. It carries the following parameters (a parsing sketch follows the list):

  • responseId: A string representing the unique identifier for the response generated by OpenAI.
  • responseType: A string indicating the type of the response object. It typically denotes the data structure used to encapsulate the response.
  • createdTimestamp: A long value representing the timestamp when the response was created. It is usually in Unix timestamp format.
  • responseModel: A string indicating the specific OpenAI model that was used to generate the response.
  • choiceIndex: An integer representing the index of the choice within the response. OpenAI often provides multiple choices, and this parameter indicates which choice is selected.
  • role: A string indicating the role of the message within the conversation. It can be used to distinguish between different roles, such as “system,” “user,” or “assistant.”
  • content: A string representing the content of the response. This is the actual text generated by the OpenAI model.
  • finishReason: A string indicating the reason for the completion of the response. It provides information about why the conversation ended.
  • promptTokens: An integer representing the number of tokens used in the conversation prompt. Tokens are units of text used by OpenAI models.
  • completionTokens: An integer representing the number of tokens used in the generated response completion.
  • totalTokens: An integer representing the total number of tokens used in the entire response, including both the prompt and the completion.
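
These event parameters correspond one-to-one to fields of the Chat Completions response JSON. A minimal parsing sketch, assuming the org.json library and a trimmed-down sample response (not the extension’s actual code):

```java
import org.json.JSONObject;

public class ChatResponseFieldsSketch {
    public static void main(String[] args) {
        // A trimmed-down example of a Chat Completions response body.
        String body = """
            {"id":"chatcmpl-123","object":"chat.completion","created":1700000000,
             "model":"gpt-3.5-turbo",
             "choices":[{"index":0,
                         "message":{"role":"assistant","content":"Hello there!"},
                         "finish_reason":"stop"}],
             "usage":{"prompt_tokens":12,"completion_tokens":4,"total_tokens":16}}""";

        JSONObject json = new JSONObject(body);
        JSONObject choice = json.getJSONArray("choices").getJSONObject(0);
        JSONObject usage = json.getJSONObject("usage");

        String responseId     = json.getString("id");           // responseId
        String responseType   = json.getString("object");       // responseType
        long createdTimestamp = json.getLong("created");        // createdTimestamp
        String responseModel  = json.getString("model");        // responseModel
        int choiceIndex       = choice.getInt("index");         // choiceIndex
        String role           = choice.getJSONObject("message").getString("role");    // role
        String content        = choice.getJSONObject("message").getString("content"); // content
        String finishReason   = choice.getString("finish_reason");                    // finishReason
        int promptTokens      = usage.getInt("prompt_tokens");      // promptTokens
        int completionTokens  = usage.getInt("completion_tokens");  // completionTokens
        int totalTokens       = usage.getInt("total_tokens");       // totalTokens

        System.out.println(role + ": " + content + " (" + totalTokens + " tokens)");
    }
}
```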


The StopStream block and the associated StoppedStream event are used to manage streaming operations.


The StoppedStream event is an essential part of managing streaming operations; it is triggered when the streaming process is manually stopped by calling the StopStream block.

The SendStreamedMessage function is designed to retrieve a response in chunks from the ChatGPT model. It allows for ongoing communication with the model and is specifically used for streaming responses.


  • The function takes several parameters:
    • id (integer): An identifier for the stream.
    • prompts (YailList): A list of prompts (messages) that constitute the conversation with the model.
    • model (String): The model code used for the conversation.
    • apiKey (String): The API key required for authentication.
    • maxTokens (integer): The maximum number of tokens for the response.
    • temperature (double): A value that controls the randomness of the response.


The GotStream Block is used to notify when OpenAI has provided a response to a stream request during an ongoing streaming conversation.


The FinishedStream event is used to notify when all chunks of a stream have been returned through the GotStream event, indicating the completion of the streaming conversation.
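
Under the hood, streaming works by adding "stream": true to the same Chat Completions payload and reading the response as server-sent events: each data: line carries one JSON chunk (surfaced through GotStream), and a final data: [DONE] marks the end of the stream (FinishedStream). A minimal sketch with placeholder values, not the extension’s actual code:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ChatStreamSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY");

        // Same payload shape as SendMessage, plus "stream": true.
        String body = """
            {"model":"gpt-3.5-turbo",
             "messages":[{"role":"user","content":"Tell me a short story."}],
             "max_tokens":256,"temperature":0.7,"stream":true}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<java.io.InputStream> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofInputStream());

        // Each SSE line looks like: data: {"choices":[{"delta":{"content":"..."}}]}
        // and the stream ends with:  data: [DONE]
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(response.body()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith("data: ") && !line.contains("[DONE]")) {
                    System.out.println(line.substring(6)); // one chunk -> one GotStream event
                }
            }
        }
    }
}
```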



Description: This function asynchronously requests content moderation using the OpenAI Moderation API. It takes an API key and input text as parameters, sends a POST request to the API endpoint, and processes the response (a request sketch follows the parameters).


  • apiKey (String): The API key for accessing the OpenAI Moderation API.
  • input (String): The input text or content to be moderated.
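
As a rough illustration, the underlying request is a small JSON POST to the /v1/moderations endpoint; this is a minimal sketch with placeholder text, not the extension’s actual code:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ModerationRequestSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY");

        // The moderation payload is just the text to be checked.
        String body = "{\"input\": \"Some user-generated text to check.\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/moderations"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```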



Description: This event is triggered when the moderation result is received from the OpenAI Moderation API. It provides information about whether the content is flagged, categories, and category scores as parameters.


  • flagged (boolean): Indicates whether the content is flagged.
  • categories (String): JSON representation of the detected categories.
  • categoryScores (String): JSON representation of the scores for each category.

Usage: Handle this event to perform actions based on the moderation result, such as updating the user interface or taking appropriate actions based on the moderation outcome.
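
The three event parameters map directly onto the first entry of the response’s results array. A minimal parsing sketch, assuming the org.json library and a trimmed-down sample response:

```java
import org.json.JSONObject;

public class ModerationResultSketch {
    public static void main(String[] args) {
        // A trimmed-down Moderation API response body.
        String body = """
            {"results":[{"flagged":false,
                         "categories":{"hate":false,"violence":false},
                         "category_scores":{"hate":0.0001,"violence":0.0002}}]}""";

        JSONObject result = new JSONObject(body).getJSONArray("results").getJSONObject(0);
        boolean flagged       = result.getBoolean("flagged");                       // flagged
        String categories     = result.getJSONObject("categories").toString();      // categories
        String categoryScores = result.getJSONObject("category_scores").toString(); // categoryScores

        if (flagged) {
            System.out.println("Blocked: " + categories);
        } else {
            System.out.println("Content OK, scores: " + categoryScores);
        }
    }
}
```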


RequestAudioSpeech Function

Description: This function asynchronously requests audio speech synthesis from OpenAI’s Audio Speech API. It takes parameters such as the API key, input text, model, voice, folder path, and file name; the resulting MP3 content is written to a file (a request sketch follows the parameters).


  • apiKey (String): The API key for accessing OpenAI’s Audio Speech API.

  • text (String): The input text to be synthesized into speech.

  • model (String): The model to be used for speech synthesis; one of the available TTS models, tts-1 or tts-1-hd.

  • voice (String): The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer.

  • folderPath (String): The path to the folder where the MP3 file will be saved.

  • fileName (String): The name of the MP3 file to be saved.
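
As a rough sketch (placeholder paths and text, not the extension’s actual code), the block’s parameters map onto a JSON POST to the /v1/audio/speech endpoint, whose raw MP3 response is written to folderPath/fileName:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class SpeechRequestSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY");

        // model, voice, and input correspond to the block's parameters.
        String body = """
            {"model":"tts-1","voice":"alloy",
             "input":"Hello from the ChatGPT extension!"}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/audio/speech"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The response body is raw MP3 bytes; write them straight to folderPath/fileName.
        Path outFile = Path.of("/tmp", "speech.mp3"); // placeholder folderPath + fileName
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofFile(outFile));
        System.out.println("Saved: " + outFile); // corresponds to the SpeechFileSaved event
    }
}
```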

Examples: audio samples are available for the Alloy, Echo, Fable, and Onyx voices. You can also try the other voices.


SpeechFileSaved Event

Description: This event is fired when the MP3 file has been successfully saved. It provides the file path as a parameter.


  • filePath (String): The path where the MP3 file has been saved.

Usage: Handle this event to perform actions after the MP3 file has been successfully saved.


SpeechSynthesisError Event

Description: This event is fired when an error occurs during the audio speech synthesis process. It provides an error message as a parameter.


  • errorMessage (String): The error message describing the issue encountered.

Usage: Handle this event to capture and handle errors during the speech synthesis process.


The RequestAudioTranscription block makes a request to OpenAI’s Audio Transcriptions API to transcribe audio from a provided audio file (it transcribes audio into the input language).

The block takes four parameters (a request sketch follows the list):

  • apiKey (API key for authentication),

  • audioFilePath (path to the audio file to be transcribed),

  • model (the model configuration); you can set it to whisper-1,

  • responseFormat (the format of the transcript output: one of json, text, srt, verbose_json, or vtt).
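
Unlike the JSON endpoints above, the audio endpoints take a multipart/form-data upload. A minimal sketch of such a request, built by hand in plain Java (placeholder file path; not the extension’s actual code):

```java
import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class TranscriptionRequestSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY");
        Path audioFile = Path.of("/tmp/recording.mp3"); // placeholder audioFilePath
        String boundary = "----ChatGPTExtBoundary";

        // Build the multipart/form-data body by hand: model + response_format + file.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeField(out, boundary, "model", "whisper-1");
        writeField(out, boundary, "response_format", "json");
        out.write(("--" + boundary + "\r\nContent-Disposition: form-data; "
                + "name=\"file\"; filename=\"" + audioFile.getFileName() + "\"\r\n"
                + "Content-Type: audio/mpeg\r\n\r\n").getBytes(StandardCharsets.UTF_8));
        out.write(Files.readAllBytes(audioFile));
        out.write(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/audio/transcriptions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "multipart/form-data; boundary=" + boundary)
                .POST(HttpRequest.BodyPublishers.ofByteArray(out.toByteArray()))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"text": "..."} for the json format
    }

    static void writeField(ByteArrayOutputStream out, String boundary,
                           String name, String value) throws java.io.IOException {
        out.write(("--" + boundary + "\r\nContent-Disposition: form-data; name=\""
                + name + "\"\r\n\r\n" + value + "\r\n").getBytes(StandardCharsets.UTF_8));
    }
}
```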


The AudioTranscriptionReceived event notifies the application when audio transcription data is received.


The RequestAudioTranslation block requests audio translation from OpenAI’s Audio Translations API (which translates the audio into English) and returns the “text” value from the response. This is the same kind of multipart upload as the transcription sketch above, just sent to the /v1/audio/translations endpoint.

The function takes three parameters:

  • apiKey (API key for authorization),

  • audioFilePath (path to the audio file to be translated),

  • model (the model used for translation).


The ReturnAudioTranslation event is triggered when the audio translation response is received.




This function initiates a request to the OpenAI DALL-E Images API to generate images based on a given prompt (a request sketch follows the event descriptions below).


  • apiKey (String): The API key for authentication.
  • model (String): (Optional) The model to use for image generation, defaults to “dall-e-2”.
  • prompt (String): A text description of the desired image(s) (Required). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3.
  • n (int): (Optional) The number of images to generate, defaults to 1. Must be between 1 and 10. For dall-e-3, only n=1 is supported.
  • size (String): (Optional) The size of the generated images, defaults to “1024x1024”. Must be one of “256x256”, “512x512”, or “1024x1024” for dall-e-2. Must be one of “1024x1024”, “1792x1024”, or “1024x1792” for dall-e-3 models.



  • DALL_EImagesGenerated (List imageUrls): Fired when the DALL-E Images API successfully generates images. Returns a list of image URLs.


  • DALL_EImagesError (String errorMessage): Fired when an error occurs during the DALL-E Images API request. Returns an error message.
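
As a rough sketch of the whole round trip (placeholder prompt; not the extension’s actual code), the request is a JSON POST to /v1/images/generations, and the URLs in the response’s data array are what DALL_EImagesGenerated returns as a list:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONArray;
import org.json.JSONObject;

public class ImageGenerationSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY");

        // model, prompt, n, and size correspond to the function's parameters.
        String body = """
            {"model":"dall-e-2",
             "prompt":"A watercolor painting of a lighthouse at dawn",
             "n":1,"size":"1024x1024"}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/images/generations"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The "data" array holds one {"url": ...} object per generated image;
        // these URLs are what the DALL_EImagesGenerated event delivers.
        JSONArray data = new JSONObject(response.body()).getJSONArray("data");
        for (int i = 0; i < data.length(); i++) {
            System.out.println(data.getJSONObject(i).getString("url"));
        }
    }
}
```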

Preview:

I also use this extension in this project:

AIX file:

You can buy the AIX and the AIA file here via PayPal. The two files cost $5; after you pay, you will be automatically redirected to the download URL of the extension.


The extension has been updated with new features.

You can now add OpenAI TTS to your apps.

New update: you can now generate images with DALL·E.


How do I get the new version?

This is the latest version.