[PAID] 🌟 ChatGPT extension to create fantastic conversations with GPT models

#Mr_Koder

Introduction

Hello everyone, I am here today to introduce my new extension, ChatGPT.

The ChatGPT extension is designed to engage in conversations with ChatGPT and deliver the resulting response in an API-style structure. It allows MIT App Inventor users to integrate OpenAI’s powerful language models into their Android apps seamlessly.

Features

Continuous Chat Conversation:

  • The extension supports continuous chat conversations by sending a list of prompts, enabling dynamic and interactive exchanges with the OpenAI model, so the model does not forget the messages you sent earlier.

Streaming Support:

  • It enables streaming of responses in chunks, making it suitable for handling long responses.

Error Handling:

  • The extension includes error handling through the Error Block and related error events, which allow you to handle errors gracefully and provide feedback to users.

Audio Transcription:

  • The extension includes blocks for audio transcription using OpenAI’s Audio Transcriptions API, with events for handling and displaying transcribed text.

Audio Translation:

  • It also provides functions for audio translation using OpenAI’s Audio Translations API, with events for processing and displaying translated text.

Image generation:

  • It also provides functions for generating images using DALL-E.

Blocks

The SendMessage block is responsible for sending a conversation to ChatGPT and processing the response. Here’s a breakdown, followed by a rough sketch of the request it sends:

  1. Block Description: This block allows users to interact with OpenAI’s ChatGPT and receive structured API-style responses.
  2. Function Parameters:
  • prompts: A list of conversation prompts provided by the user, as shown in the block above.
  • model: The name of the OpenAI model to be used.
  • apiKey: The API key for authorization.
  • maxTokens: The maximum number of tokens in the response.
  • temperature: A value controlling the randomness of the response.
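
For readers curious about what happens under the hood, here is a minimal Java sketch of the kind of request this block sends to OpenAI’s chat-completions endpoint. The helper name and the hand-built JSON string are illustrative, not the extension’s actual source:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ChatSketch {
    // Illustrative helper: POSTs a chat completion request and returns the raw JSON body.
    // promptsJson is the prompts list serialized as JSON, e.g. [{"role":"user","content":"Hi"}].
    static String sendMessage(String apiKey, String model, String promptsJson,
                              int maxTokens, double temperature) throws Exception {
        String body = "{"
                + "\"model\":\"" + model + "\","
                + "\"messages\":" + promptsJson + ","
                + "\"max_tokens\":" + maxTokens + ","
                + "\"temperature\":" + temperature
                + "}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // raw JSON, parsed into the RespondedToChat parameters
    }
}
```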


The RespondedToChat Block is an event that is triggered when OpenAI provides a response to a user’s inquiry within the Chat block. This block carries various parameters, each explained below; a parsing sketch follows the list.

  • responseId: A string representing the unique identifier for the response generated by OpenAI.
  • responseType: A string indicating the type of the response object. It typically denotes the data structure used to encapsulate the response.
  • createdTimestamp: A long value representing the timestamp when the response was created. It is usually in Unix timestamp format.
  • responseModel: A string indicating the specific OpenAI model that was used to generate the response.
  • choiceIndex: An integer representing the index of the choice within the response. OpenAI often provides multiple choices, and this parameter indicates which choice is selected.
  • role: A string indicating the role of the message within the conversation. It can be used to distinguish between different roles, such as “system,” “user,” or “assistant.”
  • content: A string representing the content of the response. This is the actual text generated by the OpenAI model.
  • finishReason: A string indicating the reason for the completion of the response. It provides information about why the conversation ended.
  • promptTokens: An integer representing the number of tokens used in the conversation prompt. Tokens are units of text used by OpenAI models.
  • completionTokens: An integer representing the number of tokens used in the generated response completion.
  • totalTokens: An integer representing the total number of tokens used in the entire response, including both the prompt and the completion.
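
As a rough guide to where each parameter comes from, here is an illustrative sketch (assuming the org.json library) that maps OpenAI’s documented chat-completion response fields onto the event parameters above; it is not the extension’s actual code:

```java
import org.json.JSONObject;

public class ResponseSketch {
    // Maps a chat-completion response body onto the RespondedToChat parameters.
    static void parseResponse(String body) {
        JSONObject json = new JSONObject(body);
        String responseId = json.getString("id");
        String responseType = json.getString("object");   // "chat.completion"
        long createdTimestamp = json.getLong("created");  // Unix timestamp (seconds)
        String responseModel = json.getString("model");
        JSONObject choice = json.getJSONArray("choices").getJSONObject(0);
        int choiceIndex = choice.getInt("index");
        JSONObject message = choice.getJSONObject("message");
        String role = message.getString("role");          // "assistant"
        String content = message.getString("content");    // the generated text
        String finishReason = choice.getString("finish_reason");
        JSONObject usage = json.getJSONObject("usage");
        int promptTokens = usage.getInt("prompt_tokens");
        int completionTokens = usage.getInt("completion_tokens");
        int totalTokens = usage.getInt("total_tokens");
        // ...the extension would hand these values to the RespondedToChat event
    }
}
```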



blocks

The StopStream Block and the associated StoppedStream event are used in the context of managing streaming operations in the code.



blocks

The StoppedStream Block is an essential component in managing streaming operations; it is triggered when the streaming process is manually stopped by calling the StopStream Block.



The SendStreamedMessage function is designed to retrieve a response in chunks from the ChatGPT model. It allows for ongoing communication with the model and is specifically used for streaming responses; a sketch of the underlying streaming request follows the parameter list.

Parameters:

  • The function takes several parameters:
    • id (integer): An identifier for the stream.
    • prompts (YailList): A list of prompts (messages) that constitute the conversation with the model.
    • model (String): The model code used for the conversation.
    • apiKey (String): The API key required for authentication.
    • maxTokens (integer): The maximum number of tokens for the response.
    • temperature (double): A value that controls the randomness of the response.
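
Here is a minimal sketch of how such streaming typically works against the chat-completions endpoint: the request body additionally carries "stream": true, and the reply arrives as server-sent events. The helper is illustrative, not the extension’s source:

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StreamSketch {
    // Illustrative streaming helper; `body` must include "stream": true.
    static void streamChat(String apiKey, String body) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<InputStream> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofInputStream());
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(response.body()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (!line.startsWith("data: ")) continue;
                String payload = line.substring(6);
                if (payload.equals("[DONE]")) break;  // end of stream -> FinishedStream
                // each payload is a JSON chunk whose choices[0].delta.content
                // would be delivered through the GotStream event
            }
        }
    }
}
```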


blocks

The GotStream Block is used to notify when OpenAI has provided a response to a stream request during an ongoing streaming conversation.




blocks

The FinishedStream event is used to notify when all chunks of a stream have been returned through the GotStream event, indicating the completion of the streaming conversation.


component_method

RequestModeration

Description: This function asynchronously requests content moderation using the OpenAI Moderation API. It takes an API key and input text as parameters, sends a POST request to the API endpoint, and processes the response.

Parameters:

  • apiKey (String): The API key for accessing the OpenAI Moderation API.
  • input (String): The input text or content to be moderated.


component_method

ModerationResult

Description: This event is triggered when the moderation result is received from the OpenAI Moderation API. It provides information about whether the content is flagged, categories, and category scores as parameters.

Parameters:

  • flagged (boolean): Indicates whether the content is flagged.
  • categories (String): JSON representation of the detected categories.
  • categoryScores (String): JSON representation of the scores for each category.

Usage: Handle this event to perform actions based on the moderation result, such as updating the user interface or taking appropriate actions based on the moderation outcome.
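
As a rough picture of the round trip behind these two blocks, here is an illustrative sketch (assuming org.json); the endpoint and field names follow OpenAI’s documented /v1/moderations API, but the helper itself is not the extension’s code:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONObject;

public class ModerationSketch {
    // Illustrative moderation round trip: request, then the fields that feed ModerationResult.
    static void requestModeration(String apiKey, String input) throws Exception {
        String body = new JSONObject().put("input", input).toString();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/moderations"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        String responseBody = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString()).body();
        JSONObject result = new JSONObject(responseBody)
                .getJSONArray("results").getJSONObject(0);
        boolean flagged = result.getBoolean("flagged");
        String categories = result.getJSONObject("categories").toString();
        String categoryScores = result.getJSONObject("category_scores").toString();
        // -> these three values correspond to ModerationResult's parameters
    }
}
```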


component_method

RequestAudioSpeech Function

Description: This function is responsible for asynchronously requesting audio speech synthesis from OpenAI’s Audio Speech API. It takes various parameters such as API key, input text, model, voice, folder path, and file name. The resulting MP3 content is then written to a file.

Parameters:

  • apiKey (String): The API key for accessing OpenAI’s Audio Speech API.

  • text (String): The input text to be synthesized into speech.

  • model (String): The model to be used for speech synthesis: one of the available TTS models, tts-1 or tts-1-hd.

  • voice (String): The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer.

  • folderPath (String): The path to the folder where the MP3 file will be saved.

  • fileName (String): The name of the MP3 file to be saved.
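
A minimal sketch of what this function likely does, based on OpenAI’s documented /v1/audio/speech endpoint; the helper is illustrative, and real code would need to JSON-escape the input text:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SpeechSketch {
    // Illustrative TTS helper: the endpoint returns raw MP3 bytes, written to disk.
    static void requestAudioSpeech(String apiKey, String text, String model,
                                   String voice, String folderPath, String fileName)
            throws Exception {
        String body = "{\"model\":\"" + model + "\",\"input\":\"" + text
                + "\",\"voice\":\"" + voice + "\"}"; // note: text must be JSON-escaped
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/audio/speech"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<byte[]> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofByteArray());
        Path out = Paths.get(folderPath, fileName);
        Files.write(out, response.body()); // success -> SpeechFileSaved(out.toString())
    }
}
```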

Examples:

  • Alloy: (audio sample)
  • Echo: (audio sample)
  • Fable: (audio sample)
  • Onyx: (audio sample)

You can try the other voices as well.



component_method

SpeechFileSaved Event

Description: This event is fired when the MP3 file has been successfully saved. It provides the file path as a parameter.

Parameters:

  • filePath (String): The path where the MP3 file has been saved.

Usage: Handle this event to perform actions after the MP3 file has been successfully saved.


component_method

SpeechSynthesisError Event

Description: This event is fired when an error occurs during the audio speech synthesis process. It provides an error message as a parameter.

Parameters:

  • errorMessage (String): The error message describing the issue encountered.

Usage: Handle this event to capture and handle errors during the speech synthesis process.




blocks

The RequestAudioTranscription Block is responsible for making a request to OpenAI’s Audio Transcriptions API to transcribe audio from a provided audio file. (Transcribes audio into the input language.)

The block takes four parameters:

  • apiKey (API key for authentication),

  • audioFilePath (path to the audio file to be transcribed),

  • model (the model to use); you can set it to whisper-1,

  • responseFormat (the format of the transcript output: one of json, text, srt, verbose_json, or vtt).
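
Since /v1/audio/transcriptions expects a multipart/form-data upload (and java.net.http has no multipart builder), a request along these lines is probably what the block performs; the helper and boundary string are illustrative. The RequestAudioTranslation block described further below would use the same shape against /v1/audio/translations:

```java
import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class TranscriptionSketch {
    // Illustrative transcription request: multipart body assembled by hand.
    static String transcribe(String apiKey, String audioFilePath,
                             String model, String responseFormat) throws Exception {
        String boundary = "----chatgpt-ext-boundary";
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String fields = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"model\"\r\n\r\n" + model + "\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"response_format\"\r\n\r\n"
                + responseFormat + "\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\"audio.mp3\"\r\n"
                + "Content-Type: audio/mpeg\r\n\r\n";
        out.write(fields.getBytes(StandardCharsets.UTF_8));
        out.write(Files.readAllBytes(Path.of(audioFilePath)));
        out.write(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/audio/transcriptions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "multipart/form-data; boundary=" + boundary)
                .POST(HttpRequest.BodyPublishers.ofByteArray(out.toByteArray()))
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```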




blocks

The AudioTranscriptionReceived block notifies the application when audio transcription data is received.




blocks

The RequestAudioTranslation block is designed to request audio translation from OpenAI’s Audio Translations API and return the “text” value from the response.

Parameters:
The function takes three parameters:

  • apiKey (API key for authorization),

  • audioFilePath (path to the audio file to be translated),

  • model (the model used for translation).




blocks

The ReturnAudioTranslation event is triggered when the audio translation response is received.



blocks

RequestDALL_EImages

Description

This function initiates a request to the OpenAI DALL-E Images API to generate images based on a given prompt.

Parameters

  • apiKey (String): The API key for authentication.
  • model (String): (Optional) The model to use for image generation, defaults to “dall-e-2”.
  • prompt (String): A text description of the desired image(s) (Required). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3.
  • n (int): (Optional) The number of images to generate, defaults to 1. Must be between 1 and 10. For dall-e-3, only n=1 is supported.
  • size (String): (Optional) The size of the generated images, defaults to “1024x1024”. Must be one of “256x256”, “512x512”, or “1024x1024” for dall-e-2. Must be one of “1024x1024”, “1792x1024”, or “1024x1792” for dall-e-3 models.
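
As a sketch of the request this likely issues (field names follow OpenAI’s documented /v1/images/generations API; the helper itself is illustrative, assuming org.json):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONObject;

public class ImagesSketch {
    // Illustrative image-generation request; the response's data[i].url entries
    // would be collected into the list passed to DALL_EImagesGenerated.
    static String requestImages(String apiKey, String model, String prompt,
                                int n, String size) throws Exception {
        String body = new JSONObject()
                .put("model", model)
                .put("prompt", prompt)
                .put("n", n)
                .put("size", size)
                .toString();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/images/generations"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```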

Events

blocks

  • DALL_EImagesGenerated (List imageUrls): Fired when the DALL-E Images API successfully generates images. Returns a list of image URLs.

blocks

  • DALL_EImagesError (String errorMessage): Fired when an error occurs during the DALL-E Images API request. Returns an error message.


Function: RequestChatGPTVision(String apiKey, String imageUrl, String prompt)

Purpose: This function sends a request to OpenAI’s ChatGPT vision API to analyze an image and provide insights based on the given prompt.

Parameters:

  • apiKey: Your OpenAI API key.

  • imageUrl: The URL of the image to analyze.

  • prompt: A text prompt to guide the analysis (e.g., “What’s in this image?”).
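
For context, vision requests are ordinary chat-completion calls in which the image URL travels inside the message content as an "image_url" part. This sketch is illustrative (assuming org.json; the model name is whichever vision-capable model the extension targets), and RequestChatGPTVisionMultipleImages would simply append one image_url part per image:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONArray;
import org.json.JSONObject;

public class VisionSketch {
    // Illustrative vision request: a text part plus an image_url part in one user message.
    static String visionRequest(String apiKey, String modelName,
                                String imageUrl, String prompt) throws Exception {
        JSONArray content = new JSONArray()
                .put(new JSONObject().put("type", "text").put("text", prompt))
                .put(new JSONObject().put("type", "image_url")
                        .put("image_url", new JSONObject().put("url", imageUrl)));
        JSONObject message = new JSONObject().put("role", "user").put("content", content);
        String body = new JSONObject()
                .put("model", modelName) // e.g. a vision-capable GPT-4 model
                .put("messages", new JSONArray().put(message))
                .toString();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```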

Function: RequestChatGPTVisionMultipleImages(String apiKey, YailList imageUrls, String prompt)

Purpose: This function sends a request to OpenAI’s ChatGPT vision API to analyze multiple images and provide insights based on the given prompt.

Parameters:

  • apiKey: Your OpenAI API key.

  • imageUrls: A YailList containing the URLs of the images to analyze.

  • prompt: A text prompt to guide the analysis (e.g., “Compare these images”).

Events:

blocks

  • ChatGPTVisionResponseReceived: This event is fired when the API response is successfully received and parsed. It provides the following parameters:

    • id: The unique ID of the response.

    • object: The type of object returned (“chat.completion”).

    • model: The model used to generate the response.

    • role: The role of the response (“assistant”).

    • content: The main content of the response, containing the analysis of the image.

blocks

  • ChatGPTVisionError(String errorMessage): This event is fired if an error occurs during the API request. It provides the error message.

  • The response content will vary depending on the image and the prompt provided.



ChatGPT Extension: Embeddings Functionality


blocks

1. GetEmbeddings(String apiKey, String text, String model)

  • Description: This function sends a request to OpenAI’s Embeddings API to get the numerical representation (embedding) of a given text.

  • Parameters:

    • apiKey: Your OpenAI API key (required for authentication).

    • text: The text string you want to embed.

    • model: The specific embedding model you want to use (e.g., text-embedding-ada-002, text-embedding-3-small, or text-embedding-3-large).

  • Functionality:

    • It constructs an API request with your text and the chosen model.

    • It sends this request to OpenAI’s server.

    • It then calls the processEmbeddingsAPIResponse function to handle the server’s response.

  • Events Triggered:

    • EmbeddingsReceived: Fired upon a successful response, containing the embeddings.

    • EmbeddingsError: Fired if an error occurs during the request.
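
A minimal sketch of the call this function likely makes (assuming org.json; the endpoint and field names follow OpenAI’s documented /v1/embeddings API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONObject;

public class EmbeddingsSketch {
    // Illustrative embeddings request; returns data[0].embedding (a JSON array of
    // floats) as a string, matching what EmbeddingsReceived delivers.
    static String getEmbeddings(String apiKey, String text, String model) throws Exception {
        String body = new JSONObject().put("model", model).put("input", text).toString();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/embeddings"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        String responseBody = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString()).body();
        return new JSONObject(responseBody)
                .getJSONArray("data").getJSONObject(0)
                .getJSONArray("embedding").toString();
    }
}
```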

component_event

2. EmbeddingsReceived(String embeddings)

  • Description: This event is fired when the GetEmbeddings function successfully receives a response from the OpenAI API.

  • Parameter:

    • embeddings: The text’s embedding, returned as a string representation of a list of numbers.

component_event

3. EmbeddingsError(String errorMessage)

  • Description: This event is fired when an error occurs at any point during the embedding request process.

  • Parameter:

    • errorMessage: A descriptive error message to help you understand the issue.

Preview:

I also used this extension in this project:

AIX file:

You can buy the AIX and AIA files here via PayPal; the two files cost $5. After you pay, you will be automatically redirected to the download URL of the extension.

Note: the AIA file is for MIT App Inventor.


The extension has been updated with new features.

You can now add OpenAI’s TTS to your apps.

New update: you can now generate images with DALL-E.


How do I get the new version?

This is the latest updated version.

Hi Ahmed,

After importing the .aia file, Kodular Creator takes forever to load the project. Could you give me some advice? I am using a Premium account.

Does it also support Functions?

I advise you to try a faster, more stable internet connection, because the AIA file contains many extensions that bring its size up to 8.2 MB.

I am sorry for my late reply; I was very busy with my graduation project at university.

No, but I will add it soon. And again, I am sorry for my late reply; I was very busy with my graduation project at the university.

New update: gpt-4-vision-preview has been added to the extension.


Big Update! ChatGPT Extension Now Supports GPT-4 and More! 🚀

Exciting news! The ChatGPT extension has been significantly upgraded and now boasts full compatibility with the latest and greatest OpenAI models, including:

  • GPT-4

  • GPT-4-Turbo

  • GPT-4o

  • GPT-4o-mini

  • And all your favorite GPT-3.5 models!

This means you can now harness the incredible power of these advanced language models directly in your App Inventor projects. Build even more sophisticated chatbots, unlock next-level creative potential, and explore the cutting edge of AI-powered apps.

What’s New?

  • Seamless Integration: Easily switch between different GPT models within the extension.

  • Enhanced Streaming: Enjoy a smoother and more responsive streaming experience for real-time chat interactions.

  • Future-Proof: The extension is designed to support new OpenAI models as they become available, ensuring you stay ahead of the curve.

Ready to experience the future of AI?

Get the updated ChatGPT extension today! → Pay Here, and you will be automatically redirected to the download page of the extension after completing payment.


I made the purchase, but at the end of the payment the page did not redirect to the download.

I sent the project zip file (AIX and AIA files) to your PM.

I am sorry for such a bad experience, and thanks for buying my extension. I hope it will help you with your projects.

A problem occurred loading the project. Unable to load project with TextBox version 14 (maximum version is 13).

The project is for MIT App Inventor, not Kodular; try opening it with MIT App Inventor. However, you can use the extension on all platforms.

Do you have any problems with the extension or the AIA file?

ChatGPT Extension: Embeddings Functionality

All blocks

How Will Text Embeddings Help You in Your App?

The GetEmbeddings function within the ChatGPT extension unlocks powerful capabilities for your app by converting text into meaningful numerical representations. Here’s how you can benefit:

1. Build a Smarter Search Engine:

  • Problem: Traditional keyword search often fails to understand the context of words. A search for “Apple phone” might not return results containing “iPhone,” leading to a poor user experience.

  • Solution:

    • Use GetEmbeddings to pre-calculate embeddings for all searchable content in your app (articles, product descriptions, etc.).

    • When a user searches, get the embedding of their query.

    • Compare the query embedding to your content’s embeddings. Content whose embedding is closest to the query’s is likely the most semantically similar, even if it doesn’t share exact keywords (see the similarity sketch after this list).

  • Result: Your search becomes more intelligent, returning more relevant results based on meaning rather than just literal word matches.
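
The “closeness” comparison is usually cosine similarity. Here is a small, self-contained Java sketch of that math (not part of the extension; you would parse the embedding strings into arrays first):

```java
public class SimilaritySketch {
    // Cosine similarity between two embedding vectors: values near 1.0 mean the
    // texts point in the same semantic direction, values near 0.0 mean unrelated.
    // Rank your content by similarity to the query embedding.
    static double cosineSimilarity(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

The same ranking idea powers the recommendation use case below: items whose embeddings score highest against a user’s past favorites are the ones to recommend.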

2. Create a Personalized Recommendation System:

  • Problem: Recommending content solely based on basic categories can be inaccurate. A user who likes “romantic comedies” might not enjoy all films within that genre.

  • Solution:

    • Calculate embeddings for items (movies, products, articles) in your app.

    • Track user interactions (likes, purchases, views).

    • When recommending, find items with embeddings similar to those the user has positively interacted with in the past.

  • Result: Your recommendations become more tailored to individual user preferences, leading to increased engagement and satisfaction.

3. Organize Content Effectively:

  • Problem: Dealing with large volumes of uncategorized text data (customer reviews, social media posts) can be overwhelming.

  • Solution:

    • Use GetEmbeddings to obtain embeddings for each piece of text.

    • Apply clustering algorithms (available in various libraries) to group text snippets with similar embeddings.

  • Result: Automatically categorize your data based on topics, sentiment, or other patterns revealed by the embeddings, making it easier to analyze and present information to users.

4. Enhance Language Understanding:

  • Problem: Building features that require language comprehension (chatbots, sentiment analysis) is complex.

  • Solution: Embeddings can be used as input features for training machine learning models:

    • Sentiment Analysis: Train a model to classify text as positive, negative, or neutral based on its embedding.

    • Chatbot Improvement: Create a chatbot that understands the intent and meaning behind user messages, resulting in more natural conversations.

Is it possible to talk to a PDF or your own data in this extension?

I would like an application that uses ChatGPT’s intelligence to interpret a specific database that I provide; that is, I want it to deliver specific information about this material using text that I supply.

Is this possible?

No, it only deals with images, but you can also convert text to speech and speech to text, or produce transcription files such as .srt files.