[Freemium] GroqText: 30+ LLMs including DeepSeek, Llama, Gemma, ALLaM, Mixtral and Qwen (Search / Code Execution / Vision Models / Streaming and more)

đź§© GroqText

An extension for MIT App Inventor 2.
An extension to integrate AI text and vision models into applications using the Groq API, with streaming support. My other extensions
Built by Sarthak Gupta

:memo: Specifications


:package: Package: com.sarthakdev.groqtext
:floppy_disk: Size: 20.68 KB
:gear: Version: 1.2
:mobile_phone: Minimum API Level: 7
:date: Updated On: 2025-05-25T18:30:00Z
:laptop: Built & documented using: FAST-CLI v2.8.1
Extension License: here

Introduction

  • Integrate a wide range of AI models into your app through the Groq API
  • Includes a generous free plan with daily rate limits (no credit card required)
  • Includes 2B, 8B and 70B parameter models

Features of Groq Inference

  • Lightning-fast AI inference
  • Supports 30+ AI models from 5+ providers
  • Generous free plan with 500k tokens* daily, usable in production

Events:

GroqText has a total of 5 events.

:yellow_heart: GroqStreamResponseReceived

Event triggered when a streaming response part is received

Parameter Type
partialContent text

:yellow_heart: CustomStructuredOutputReceived

Event triggered when custom structured output is received from the Groq API

Parameter Type
jsonOutput text

:yellow_heart: GroqResponseReceived

Event triggered when AI response is received

Parameter         Type
statusCode        number
response          text
messageContent    text
finishReason      text
model             text
queueTime         text
promptTokens      text
completionTokens  text

:yellow_heart: GroqRequestError

Event triggered when an error occurs in Groq API request

Parameter Type
errorMessage text

:yellow_heart: GroqMessageContentReceived

Event triggered when AI message content is extracted

Parameter Type
messageContent text

Methods:

GroqText has a total of 5 methods.

:purple_heart: ExtractJSONValue

Extract a specific value from the JSON response

Parameter Type
jsonString text
fieldPath text

:purple_heart: AskQuestion

Ask a question to the AI

Parameter Type
userMessage text

:purple_heart: ProcessImage

Process an image with a text prompt using a local image path

Parameter Type
prompt text
imagePath text

:purple_heart: RequestCustomStructuredOutput

Request custom structured output from Groq API

Parameter Type
apiKey text
modelName text
systemInstruction text
userRequest text
customSchema text
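
For readers curious what such a request looks like on the wire, here is a minimal Python sketch of the JSON-mode payload that Groq's OpenAI-compatible chat completions endpoint (POST https://api.groq.com/openai/v1/chat/completions) accepts. The extension's internals are not published, so mapping the block parameters onto this payload is an assumption for illustration only.

```python
import json

# Sketch of the request RequestCustomStructuredOutput plausibly builds
# (assumption; the extension's source is not published). JSON mode only
# guarantees syntactically valid JSON, so the desired schema must also be
# stated in the system prompt to pin down the shape.
def build_structured_request(model_name, system_instruction, user_request, custom_schema):
    """Build a JSON-mode chat completion payload."""
    return {
        "model": model_name,
        "messages": [
            {"role": "system",
             "content": f"{system_instruction}\nReply ONLY with JSON matching this schema: {custom_schema}"},
            {"role": "user", "content": user_request},
        ],
        "response_format": {"type": "json_object"},
    }

payload = build_structured_request(
    "llama3-8b-8192",
    "You are a translator.",
    "Translate 'hello' to French.",
    '{"translation": "string"}',
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed with an `Authorization: Bearer <apiKey>` header; the extension raises CustomStructuredOutputReceived with the returned JSON string.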

:purple_heart: ResetConversation

Reset the conversation by clearing the chat history

Setters:

GroqText has a total of 11 setter properties.

:green_heart: ApiKey

Set the Groq API Key

  • Input type: text

:green_heart: ModelName

Set the AI Model Name for text tasks

  • Input type: text
  • Helper class: ModelName
  • Helper enums: Llama_3_3_70b_Versatile, Llama_3_1_8b_Instant, Llama3_70b_8192, Llama3_8b_8192, Gemma2_9b_It, Meta_Llama_Llama_Guard_4_12B, Allam_2_7b, DeepSeek_R1_Distill_Llama_70b, Meta_Llama_4_Maverick, Meta_Llama_4_Scout, Mistral_Saba_24b, Qwen_Qwq_32b, Compound_Beta, Compound_Beta_Mini

:green_heart: VisionModelName

Set the AI Model Name for vision tasks

  • Input type: text
  • Helper class: VisionModelName
  • Helper enums: Llama_4_Scout, Llama_4_Maverick

:green_heart: ApiUrl

Set the API Endpoint URL

  • Input type: text

:green_heart: SystemMessage

Set the system message for the AI

  • Input type: text

:green_heart: ChatHistoryEnabled

Enable or disable chat history

  • Input type: boolean

:green_heart: Temperature

Set the temperature

  • Input type: number

:green_heart: MaxTokens

Set the max tokens

  • Input type: number

:green_heart: TopP

Set the top P value

  • Input type: number

:green_heart: StreamResponses

Set whether to stream responses for text and vision tasks

  • Input type: boolean
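
When streaming is enabled, OpenAI-compatible APIs such as Groq's deliver the reply as Server-Sent Events: `data: {...}` lines, each carrying a small text delta, terminated by `data: [DONE]`. The sketch below illustrates this wire format and how the partialContent passed to GroqStreamResponseReceived could be extracted; it is an illustration of the protocol, not the extension's actual code.

```python
import json

def extract_delta(sse_line):
    """Return the text delta carried by one SSE line, or None."""
    if not sse_line.startswith("data: "):
        return None          # comments / keep-alive lines carry no content
    data = sse_line[len("data: "):]
    if data == "[DONE]":
        return None          # end-of-stream sentinel
    chunk = json.loads(data)
    # Each streamed chunk holds an incremental "delta", not the full message.
    return chunk["choices"][0]["delta"].get("content")

stream = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    'data: [DONE]',
]
reply = "".join(d for line in stream if (d := extract_delta(line)) is not None)
print(reply)  # Hello!
```

Concatenating the deltas in arrival order reconstructs the full reply, which is why a streaming UI can simply append each partialContent to a label.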

:green_heart: Stop

Set the stop value

  • Input type: text

Getters:

GroqText has a total of 11 getter properties.

:green_circle: ApiKey

Get the Groq API Key

  • Return type: text

:green_circle: ModelName

Get the AI Model Name for text tasks

  • Return type: text

:green_circle: VisionModelName

Get the AI Model Name for vision tasks

  • Return type: text

:green_circle: ApiUrl

Get the API Endpoint URL

  • Return type: text

:green_circle: SystemMessage

Get the system message for the AI

  • Return type: text

:green_circle: ChatHistoryEnabled

Get whether chat history is enabled

  • Return type: boolean

:green_circle: Temperature

Get the temperature

  • Return type: number

:green_circle: MaxTokens

Get the max tokens

  • Return type: number

:green_circle: TopP

Get the top P value

  • Return type: number

:green_circle: StreamResponses

Get whether responses are streamed for text and vision tasks

  • Return type: boolean

:green_circle: Stop

Get the stop value

  • Return type: text

Try the extension for free with GroqTextMini (Free)

Here is the difference between the free and paid versions.

GroqTextMini        GroqText
Free                Paid ($5.99)
llama-8b model      30+ AI models (Llama, Gemma, Mixtral, DeepSeek, Qwen, distilled models)
8b model            1B, 2B, 3B, 8B, 32B, 70B, 80B models
500 tokens          Unlimited tokens depending on model capacity
No image support    Image support
No code execution   Code execution support
No search support   Search support

GroqTextMini: :robot: com.sarthakdev.groqtextmini.aix (8.3 KB)

Purchase the full GroqText extension from here for only $5.99

Purchase Extension

You can purchase the extension instantly from the link below for just $5.99


Extract JSON value Function

ExtractJSONValueBlock

Sample Json
{
  "id": "chatcmpl-7ceff50a-a5f1",
  "object": "chat.completion",
  "created": 1736604854,
  "model": "llama3-8b-8192",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "queue_time": 0.017792521,
    "prompt_tokens": 11,
    "prompt_time": 0.001729597,
    "completion_tokens": 25,
    "completion_time": 0.020833333,
    "total_tokens": 36,
    "total_time": 0.02256293
  },
  "system_fingerprint": "fp_a9",
  "x_groq": { "id": "req_01jhavea" }
}
  • choices[0].message.content : Get the content of the message sent by the assistant
  • choices[0].finish_reason : Get the reason the generation stopped
  • model : Get the model name
  • usage.queue_time : Get the queue time
  • usage.prompt_tokens : Get the prompt tokens
  • usage.completion_tokens : Get the completion tokens
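
To make the field-path notation above concrete, here is a small Python sketch of how a path like choices[0].message.content can be resolved against that JSON. The ExtractJSONValue block presumably does something similar internally; that is an assumption, since its source is not published.

```python
import json
import re

def extract_json_value(json_string, field_path):
    """Resolve a dotted/indexed field path against a JSON string."""
    node = json.loads(json_string)
    # "choices[0].message.content" -> ["choices", "0", "message", "content"]
    for part in re.findall(r"[^.\[\]]+", field_path):
        node = node[int(part)] if part.isdigit() else node[part]
    return node

sample = ('{"choices":[{"message":{"role":"assistant","content":"Hi!"},'
          '"finish_reason":"stop"}],"model":"llama3-8b-8192"}')
print(extract_json_value(sample, "choices[0].message.content"))  # Hi!
print(extract_json_value(sample, "model"))                       # llama3-8b-8192
```

Numeric path segments index into JSON arrays; everything else is treated as an object key.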

Please contact me via dm for any questions or reply here. I will try to answer as soon as possible.

New Models Support

The extension now supports DeepSeek R1
Here’s the ID: deepseek-r1-distill-llama-70b

Introducing GroqTextMini (Free)

Features

  1. Use the llama3-8b-8192 model in your app
  2. Limit of maximum 500 tokens per response
  3. Check the difference between the paid and free aix below
GroqTextMini     GroqText
Free             Paid ($5.99)
llama-8b model   15+ AI models (Llama, Gemma, Mixtral, DeepSeek, Qwen, distilled models)
8b model         1B, 2B, 3B, 8B, 32B, 70B, 80B models
500 tokens       Unlimited tokens depending on model capacity

Download aix

:robot: com.sarthakdev.groqtextmini.aix (8.3 KB)

If you want the GroqText extension, you can get it from here for only $5.99.

Hi !

I purchased your extension after testing the mini version.

It’s truly extraordinary.

Thank you for this incredible extension.


:loudspeaker: Qwen coder is now supported

The extension now supports Alibaba Qwen
Here’s the ID: qwen-2.5-coder-32b

:loudspeaker: Qwen is now supported

The extension now supports Alibaba Qwen QWQ
Here’s the ID: qwen-qwq-32b

:loudspeaker: Mistral Saba is now supported

The extension now supports Mistral Saba
Here’s the ID: mistral-saba-24b

:loudspeaker: Distilled models are now supported

The extension now supports distilled llama, qwen and deepseek models
Here’s the ID: deepseek-r1-distill-qwen-32b, deepseek-r1-distill-llama-70b-specdec, deepseek-r1-distill-llama-70b

:rocket: Updated Free Daily tokens and Rate Limits

Groq now supports 17 leading AI models from different providers, all under a single API key, with daily free token usage

Model ID                        RPM   RPD      TPM      TPD       ASH   ASD
deepseek-r1-distill-llama-70b   30    1,000    6,000    -         -     -
deepseek-r1-distill-qwen-32b    30    1,000    6,000    -         -     -
gemma2-9b-it                    30    14,400   15,000   500,000   -     -
llama-3.1-8b-instant            30    14,400   20,000   500,000   -     -
llama-3.1-70b-versatile         30    14,400   6,000    200,000   -     -
llama-3.2-1b-preview            30    7,000    7,000    500,000   -     -
llama-3.2-3b-preview            30    7,000    7,000    500,000   -     -
llama-3.3-70b-specdec           30    1,000    6,000    100,000   -     -
llama-3.3-70b-versatile         30    1,000    6,000    100,000   -     -
llama-guard-3-8b                30    14,400   15,000   500,000   -     -
llama3-8b-8192                  30    14,400   30,000   500,000   -     -
llama3-70b-8192                 30    14,400   6,000    500,000   -     -
mistral-saba-24b                30    1,000    6,000    -         -     -
mixtral-8x7b-32768              30    14,400   5,000    500,000   -     -
qwen-2.5-32b                    30    1,000    6,000    -         -     -
qwen-2.5-coder-32b              30    1,000    6,000    -         -     -
qwen-qwq-32b                    30    1,000    6,000    -         -     -

(RPM: requests per minute, RPD: requests per day, TPM: tokens per minute, TPD: tokens per day, ASH: audio seconds per hour, ASD: audio seconds per day)

The extension is priced at $6 for this week only; after that, the price will increase. So be quick :grinning_face:

Version 1.1

This is a major update with new blocks and awesome feature additions, such as chat history, system tone and JSON structured output.

New Blocks :smile:

:speech_balloon: Chat History: The LLM now has access to all of your previously sent messages.

Disabled by default; enable it by setting the block to true.

:wastebasket: Reset Conversation history


:speaking_head: Set Tone (System Instruction of the AI Model)

E.g. "You are a developer."
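
The chat history, tone and reset features above fit the usual chat-completions pattern: a message list led by the system instruction. Here is a minimal Python sketch under that assumption (the extension's actual storage is not published); ResetConversation then simply clears the accumulated turns.

```python
class Conversation:
    """Minimal model of chat history with a system tone."""

    def __init__(self, system_message):
        self.system_message = system_message   # the tone set via SystemMessage
        self.history = []                      # prior user/assistant turns

    def build_messages(self, user_message):
        """Messages sent on each request when ChatHistoryEnabled is true."""
        return ([{"role": "system", "content": self.system_message}]
                + self.history
                + [{"role": "user", "content": user_message}])

    def record(self, user_message, assistant_reply):
        """Store one completed exchange so the LLM can see it next time."""
        self.history += [{"role": "user", "content": user_message},
                         {"role": "assistant", "content": assistant_reply}]

    def reset(self):
        """Equivalent of the ResetConversation block."""
        self.history.clear()

conv = Conversation("You are a developer.")
conv.record("Hi", "Hello! How can I help?")
print(len(conv.build_messages("What is an API?")))  # 4: system + 2 history + new user
conv.reset()
print(len(conv.build_messages("What is an API?")))  # 2: system + new user
```

Note that the system message survives a reset, so the tone persists while the conversational memory is wiped.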

:office_building: Structured JSON output

RequestCustomStructuredOutput

userRequest = the question to ask the AI
customSchema = the JSON schema to generate

For eg.

Output

Existing users, please dm me to get the latest version.

Create Your Own AI “Mini-APIs” in App Inventor — The Simple Way with GroqText!

No more battling messy AI text! Say hello to clean, structured data with your own AI-powered features inside App Inventor.

Ever ask an AI something, only to get back a long blob of text that’s impossible to use without hours of parsing?
Wish you could just get exactly the info you need, in a format your app understands — right away?

It’s way simpler than you might think.

By combining the GroqText Extension with a smart prompt strategy — asking the AI to reply in JSON format — you can build reliable, custom AI tools that work seamlessly in your App Inventor apps.

Think of it like creating a lightweight, personal API — powered by Groq — without any server setup or advanced coding skills!


What Makes This So Powerful?

:white_check_mark: On-Demand AI Features
Instant translations, summaries, keyword detection, emotion analysis — you name it. You can create mini-AI tools for almost any use case.

:white_check_mark: Consistent, Structured Output
By using JSON responses, you get clean data like { "summary": "...", "keywords": [...] }, ready to use in your blocks with no guesswork.

:white_check_mark: Tidy, Simple Logic
Forget the spaghetti of text parsing. Your block logic becomes clean and easy to follow.


Why It’s a Perfect Fit for App Inventor

:brain: You Write the “Function” as a Prompt
Just describe what you want and how the response should look.

:wrench: Built-in Tools for Decoding

Use the built-in JSON parsing block to parse the response using a field path.

:package: Easy Data Handling
Use a simple “lookup in pairs” block to get the exact value you need from the dictionary — like the summary, translation, or anything else.


A Sample Workflow:

  1. Send a prompt with GroqText asking for JSON
  2. Get the response in the GroqResponseReceived event
  3. Decode it with ExtractJSONValue
  4. Done!
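
The workflow above can be sketched end to end in Python, with a canned reply standing in for the network call (the names here are illustrative, not the extension's API):

```python
import json

def make_prompt(task, schema):
    """Step 1: the 'function' is just a prompt that pins down the output shape."""
    return f"{task}\nReply ONLY with JSON of the form: {schema}"

prompt = make_prompt(
    "Summarize: 'Groq serves open LLMs with very low latency.'",
    '{"summary": "string", "keywords": ["string"]}',
)

# Step 2: pretend this came back in the response event.
reply = '{"summary": "Groq serves open LLMs fast.", "keywords": ["Groq", "LLM", "latency"]}'

# Step 3: decode. The result is a plain dictionary, ready for
# lookup-in-pairs style access in blocks.
data = json.loads(reply)
print(data["summary"])
print(data["keywords"][0])
```

Because the prompt fixes the JSON shape up front, the decoding step is the same every time, no matter what text the user sends.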

No external servers. No complicated parsing. No fuss.
Just visual blocks and the power of GroqText doing exactly what you ask — in a format that fits right into your app.

Ready to level up your projects? Install the GroqText extension, craft a clear JSON-based prompt, and start building smart, AI-enhanced features in minutes — right from App Inventor.

Developers literally after seeing the features of GroqText extension at such a low price

New Update

GroqVision is now free with GroqText

:loudspeaker: Llama 4 is now supported! :smiling_face_with_three_hearts:

The model that has overthrown Gemma, Mistral and Gemini 2.0

Llama 4 Scout, a 17 billion active parameter model with 16 experts, is the best multimodal model in the world in its class and is more powerful than all previous generation Llama models, while fitting in a single NVIDIA H100 GPU. Additionally, Llama 4 Scout offers an industry-leading context window of 10M tokens and delivers better results than Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a broad range of widely reported benchmarks.

Here’s the ID: meta-llama/llama-4-scout-17b-16e-instruct

:fire: Llama 4 Maverick is here!

The model that has overthrown GPT 4o, Gemini 2.0 flash and DeepSeek v3

Llama 4 Maverick, a 17 billion active parameter model with 128 experts, is the best multimodal model in its class, beating GPT-4o and Gemini 2.0 Flash across a broad range of widely reported benchmarks, while achieving comparable results to the new DeepSeek v3 on reasoning and coding—at less than half the active parameters. Llama 4 Maverick offers a best-in-class performance to cost ratio with an experimental chat version scoring ELO of 1417 on LMArena.

Here’s the ID: meta-llama/llama-4-maverick-17b-128e-instruct

Introducing Compound-Beta: Enhanced AI Capabilities with External Tools

While Large Language Models (LLMs) excel at generating text, Compound-Beta takes the next step. It’s an advanced AI system designed to solve problems by taking action and intelligently using external tools, alongside powerful Llama models.

Key Features

  • Access to Real-Time Information: Compound-Beta interacts with external environments, providing more accurate and up-to-date responses than an LLM alone.
  • Multi-Tool Support: Two agentic tool systems available: compound-beta and compound-beta-mini.

Agentic Tool Systems

Compound-Beta

  • Supports multiple tool calls per request.
  • Ideal for use cases requiring:
    • Multiple web searches per request.
    • Multiple code executions per request.

Compound-Beta-Mini

  • Supports a single tool call per request.
  • Ideal for use cases requiring:
    • A single web search per request.
    • A single code execution per request.
  • Performance: Average of 3x lower latency compared to Compound-Beta.

Available Tools

Both Compound-Beta systems support the following tools:

  • Web Search: Via Tavily.
  • Code Execution: Via E2B (Python support only).

Note: Custom user-provided tools are not supported at this time.

Power of Compound LLM

User: make a query to the spacex api and print its response
AI: To make a query to the SpaceX API and print its response, I will use the requests library in Python to send a GET request to the API endpoint. Based on the search results, I found that the SpaceX API has several endpoints for different types of data. The most relevant endpoint for this task appears to be https://api.spacexdata.com/v4/launches/latest. This endpoint returns information about the latest launch.

Here is the Python code to make a query to the SpaceX API and print its response:

import requests
import json

def get_latest_launch():
    url = 'https://api.spacexdata.com/v4/launches/latest'
    try:
        response = requests.get(url)
        response.raise_for_status()  # Raise an exception for HTTP errors
        return response.json()
    except requests.RequestException as e:
        print(f"An error occurred: {e}")
        return None

latest_launch = get_latest_launch()
print(json.dumps(latest_launch, indent=4))

This code sends a GET request to the https://api.spacexdata.com/v4/launches/latest endpoint and prints the response in a JSON format.

The response from the API includes various details such as the launch date, rocket used, crew members, and more. Here is an example of the response:

{
    "fairings": null,
    "links": {
        "patch": {
            "small": "https://images2.imgbox.com/eb/d8/D1Yywp0w_o.png",
            "large": "https://images2.imgbox.com/33/2e/k6VE4iYl_o.png"
        },
        "reddit": {
            "campaign": null,
            "launch": "https://www.reddit.com/r/spacex/comments/xvm76j/rspacex_crew5_launchcoast_docking_discussion_and/",
            "media": null,
            "recovery": null
        },
        "flickr": {
            "small": [],
            "original": []
        },
        "presskit": null,
        "webcast": "https://youtu.be/5EwW8ZkArL4",
        "youtube_id": "5EwW8ZkArL4",
        "article": null,
        "wikipedia": "https://en.wikipedia.org/wiki/SpaceX_Crew-5"
    },
    "static_fire_date_utc": null,
    "static_fire_date_unix": null,
    "net": false,
    "window": null,
    "rocket": "5e9d0d95eda69973a809d1ec",
    "success": true,
    "failures": [],
    "details": null,
    "crew": [
        "62dd7196202306255024d13c",
        "62dd71c9202306255024d13d",
        "62dd7210202306255024d13e",
        "62dd7253202306255024d13f"
    ],
    "ships": [],
    "capsules": [
        "617c05591bad2c661a6e2909"
    ],
    "payloads": [
        "62dd73ed202306255024d145"
    ],
    "launchpad": "5e9e4502f509094188566f88",
    "flight_number": 187,
    "name": "Crew-5",
    "date_utc": "2022-10-05T16:00:00.000Z",
    "date_unix": 1664985600,
    "date_local": "2022-10-05T12:00:00-04:00",
    "date_precision": "hour",
    "upcoming": false,
    "cores": [
        {
            "core": "633d9da635a71d1d9c66797b",
            "flight": 1,
            "gridfins": true,
            "legs": true,
            "reused": false,
            "landing_attempt": true,
            "landing_success": true,
            "landing_type": "ASDS",
            "landpad": "5e9e3033383ecbb9e534e7cc"
        }
    ],
    "auto_update": true,
    "tbd": false,
    "launch_library_id": "f33d5ece-e825-4cd8-809f-1d4c72a2e0d3",
    "id": "62dd70d5202306255024d139"
}

This response provides information about the latest launch, including the launch date, rocket used, crew members, and more.

This is the power of Groq Compound LLMs: search and code execution!

Does the Groq extension have the ability to maintain a continuous chat context or handle streamed chunks of a text response? These are very important features.


Try the free version, you’ll love it.

The extension has the ability to enable chat history, which means the LLM remembers what was said to it earlier.