[FEATURE] Add AI-based /ask command #53

Closed
opened 2025-05-03 17:39:33 +00:00 by ihatenodejs · 5 comments
ihatenodejs commented 2025-05-03 17:39:33 +00:00 (Migrated from github.com)

This feature will utilize Ollama and gemma3:1b, a lightweight LLM, to add an opt-in system for handling commands that were typed incorrectly. This could easily be extended to other commands and usages.

Goals

  • Unintrusive (user should not have to wait a long time, or be bombarded with AI bullshit)
  • Unlimited but prevents abuse (rate limits, one request at a time, etc.)
  • Self-hosted and private (Ollama)

Example User Flow

User: /derice pixel 7 pro
TelegramBot: Hold on while I handle your command...

In this example, /derice is misspelled, and should be /device, but it could also handle something like /devicesearch which is more ambiguous and harder to handle.

[ Calls Ollama to run gemma3:1b to re-structure the command ]

The AI will be fed context pulled from our /help command, so it is able to figure out what the user wants. It will simply output its best guess as to what the user meant, without changing the command too much.

AI: The command should be: /device pixel 7 pro

[ TelegramBot then executes that command like normal, editing the message as if the user entered the command correctly ]
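A minimal sketch of the correction step could look like this (assuming Ollama's non-streaming `/api/generate` HTTP endpoint; the prompt wording and helper names here are illustrative, not the actual implementation):

```typescript
// Hypothetical sketch: ask gemma3:1b via Ollama to restructure a
// misspelled command, using help text as context.

interface OllamaResponse {
  response: string;
}

// Build the prompt from our /help context and the user's raw input.
function buildPrompt(helpText: string, userInput: string): string {
  return [
    "You fix misspelled Telegram bot commands.",
    "Known commands:",
    helpText,
    `User typed: ${userInput}`,
    "Reply with exactly: The command should be: /<command> <args>",
  ].join("\n");
}

// Pull the corrected command out of the model's reply, or null if
// the reply doesn't match the expected format.
function extractCommand(reply: string): string | null {
  const match = reply.match(/The command should be:\s*(\/\S[^\n]*)/);
  return match ? match[1].trim() : null;
}

// One request at a time, per the abuse-prevention goal.
async function correctCommand(
  helpText: string,
  userInput: string,
): Promise<string | null> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma3:1b",
      prompt: buildPrompt(helpText, userInput),
      stream: false,
    }),
  });
  const data = (await res.json()) as OllamaResponse;
  return extractCommand(data.response);
}
```

Forcing a fixed reply format keeps the parsing trivial and makes it easy to bail out (and just show the normal "unknown command" message) whenever the model doesn't cooperate.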

lucmsilva651 commented 2025-05-04 01:17:29 +00:00 (Migrated from github.com)

It seems like AI bullshit if you speak in this way imo lmao

Anyway, we could do that like a proximity or a keyword thing, no real need for AI in this

We could really use AI for chatting (like /ask Why my thing is so small? 😆)

:shipit:

ihatenodejs commented 2025-05-04 02:15:37 +00:00 (Migrated from github.com)

Lmao, I really do speak like an AI

That's true, and there are fuzzy searching tools, though I'm not sure how easy they are to maintain.
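For reference, the proximity approach needs no dependencies at all; a Levenshtein-distance sketch (the command list below is just an example) could look like:

```typescript
// Classic single-row Levenshtein edit distance between two strings.
function levenshtein(a: string, b: string): number {
  const dp: number[] = Array.from({ length: b.length + 1 }, (_, i) => i);
  for (let i = 1; i <= a.length; i++) {
    let prev = dp[0]; // dp[i-1][j-1]
    dp[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const tmp = dp[j]; // dp[i-1][j]
      dp[j] = Math.min(
        dp[j] + 1, // deletion
        dp[j - 1] + 1, // insertion
        prev + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
      prev = tmp;
    }
  }
  return dp[b.length];
}

// Suggest the closest known command within a small edit distance,
// or null if nothing is close enough.
function closestCommand(
  input: string,
  commands: string[],
  maxDist = 2,
): string | null {
  let best: string | null = null;
  let bestDist = maxDist + 1;
  for (const cmd of commands) {
    const d = levenshtein(input, cmd);
    if (d < bestDist) {
      bestDist = d;
      best = cmd;
    }
  }
  return best;
}
```

With this, `/derice` maps to `/device` in one substitution, with no model load and no latency; the genuinely ambiguous cases like `/devicesearch` are where it falls short of an LLM.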

I'll work on the /ask command since I already have most of the code to do it, and we can always come back to this feature.


Qwen3 wanted to share its thoughts on your question btw:

It's hard to determine why your "thing" is small without more context. Here are a few common
reasons for a product or item being small:

  1. Design and Materials: If it's a product, small size might be due to lightweight materials,
    compact design, or limited space.
  2. Manufacturing Constraints: If it's a prototype or a component, production limitations (e.g.,
    limited parts) could lead to a small size.
  3. User Feedback: If the product is being used, users might have preferences for size.
  4. Weight or Weight Distribution: If it's a portable device, weight distribution or design
    could contribute to size.

If you can describe the "thing" (like a robot, gadget, or project), I can tailor a more specific
answer!

ihatenodejs commented 2025-05-05 18:32:45 +00:00 (Migrated from github.com)

![Image](https://github.com/user-attachments/assets/1daa7201-73b9-4e48-a52d-51a6c2a53057)

I've done three tests so far on light models and I agree, AI is not suitable for this.
ihatenodejs commented 2025-05-07 19:18:33 +00:00 (Migrated from github.com)

As mentioned in #55, I believe we should roll back and test. I haven't fully finished checking over everything to make sure it's totally bug-free. It's not something that should be run in production until we're certain that the code works at least most of the time 😁

As you mentioned, it could cause abuse and put intense pressure on the servers I use for LibreCloud. We should definitely try and avoid that lmao

ihatenodejs commented 2025-05-21 15:35:16 +00:00 (Migrated from github.com)

Hi all,

This has been unreviewed for quite a while 💀

Google IO brought gemma3n, which I believe will be a better alternative to any thinking model. This will be released in about a week. For now, I will rewrite this to use the Gemini API as LibreCloud is running out of RAM and wouldn't be able to load the model into RAM without crashing.

My idea is we give users 100 messages each by default, which comes out of this fucking insane 14,400 generations a day you can get with any Google account. If they go over the limit, they can set their API key in settings, and it will be saved to a database. I think using Drizzle will be great for this.

With a database already implemented for this command, we can implement other authenticated/unique commands for users. I promise Drizzle is easy to use @lucmsilva651, and @GiovaniFZ, I believe you said that you already worked with it. We can also use this database to store secrets outside of the .env file.

This is also important for the LastFM commands cuz user data 😛
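As a rough sketch of the quota idea (the field names and fallback rule below are my assumptions, not a final schema; in the real bot this row would live in a Drizzle-managed table):

```typescript
// Hypothetical per-user row, as it might be stored via Drizzle.
interface UserRow {
  telegramId: number;
  messagesUsed: number; // messages consumed from the shared pool
  apiKey: string | null; // user-supplied Gemini key, if any
}

const FREE_MESSAGE_LIMIT = 100; // default allowance per user

type KeySource = { key: string; source: "shared" | "user" };

// Decide which Gemini API key a request should use:
// shared pool first, then the user's own key, else reject.
function pickApiKey(user: UserRow, sharedKey: string): KeySource | null {
  if (user.messagesUsed < FREE_MESSAGE_LIMIT) {
    return { key: sharedKey, source: "shared" };
  }
  if (user.apiKey) {
    return { key: user.apiKey, source: "user" };
  }
  return null; // over quota and no personal key set
}
```

Keeping this decision in one pure function means the quota rule can change (or get per-user overrides) without touching the command handlers.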

Reference: ABOCN/TelegramBot#53