[FEATURE] Add AI-based /ask command #53
Reference: ABOCN/TelegramBot#53
This feature will use Ollama and gemma3:1b, a lightweight LLM, to add an opt-in system for handling commands that were typed incorrectly. This could easily be extended to other commands and usages.
Goals
Example User Flow
User:
/derice pixel 7 pro
TelegramBot:
Hold on while I handle your command...
[ Calls Ollama to run gemma3:1b to re-structure the command ]
AI:
The command should be: /device pixel 7 pro
[ TelegramBot then executes that command as normal, editing the message as if the user had entered the command correctly ]
It seems like AI bullshit if you phrase it this way imo lmao
Anyway, we could do that with a proximity or keyword match; there's no real need for AI for this
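A proximity match like that could be a small edit-distance check against the bot's command list, no model required. A minimal sketch in TypeScript, where `KNOWN_COMMANDS`, the distance threshold, and the assumption that the leading `/` is stripped before matching are all illustrative choices, not the bot's actual command table:

```typescript
// Illustrative command list; the real bot would use its registered commands.
const KNOWN_COMMANDS = ["device", "ask", "help", "settings"];

// Classic Levenshtein edit distance, space-optimized to one row.
function levenshtein(a: string, b: string): number {
  const dp: number[] = Array.from({ length: b.length + 1 }, (_, i) => i);
  for (let i = 1; i <= a.length; i++) {
    let prev = dp[0];
    dp[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const tmp = dp[j];
      dp[j] =
        a[i - 1] === b[j - 1] ? prev : 1 + Math.min(prev, dp[j], dp[j - 1]);
      prev = tmp;
    }
  }
  return dp[b.length];
}

// Suggest the closest known command (typed without the leading "/"),
// or null if nothing is within maxDistance edits.
function suggestCommand(typed: string, maxDistance = 2): string | null {
  let best: string | null = null;
  let bestDist = maxDistance + 1;
  for (const cmd of KNOWN_COMMANDS) {
    const d = levenshtein(typed.toLowerCase(), cmd);
    if (d < bestDist) {
      bestDist = d;
      best = cmd;
    }
  }
  return best;
}
```

With this, `/derice` maps to `device` (one substitution), while garbage input past the threshold returns null and the bot can fall back to its normal "unknown command" reply.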
We could really use AI for chatting (like /ask Why my thing is so small? 😆) :shipit:
Lmao, I really do speak like an AI
That's true, and there are fuzzy searching tools, though I'm not sure how easy they are to maintain.
I'll work on the /ask command since I already have most of the code to do it, and we can always come back to this feature.

Qwen3 wanted to share its thoughts on your question btw:
I've done three tests so far on light models and I agree, AI is not suitable for this.
As mentioned in #55, I believe we should roll back and test. I haven't fully finished checking over everything to make sure it's totally bug-free. It's not something that should be run in production until we're certain that the code works at least most of the time 😁
As you mentioned, it could cause abuse and put intense pressure on the servers I use for LibreCloud. We should definitely try to avoid that lmao
Hi all,
This has been unreviewed for quite a while 💀
Google IO brought gemma3n, which I believe will be a better alternative to any thinking model. It will be released in about a week. For now, I will rewrite this to use the Gemini API, as LibreCloud is running out of RAM and wouldn't be able to load the model into RAM without crashing.

My idea is we give users 100 messages each by default, which comes out of this fucking insane 14,400 generations a day you can get with any Google account. If they go over the limit, they can set their API key in settings, and it will be saved to a database. I think using Drizzle will be great for this.
With a database already implemented for this command, we can implement other authenticated/unique commands for users. I promise Drizzle is easy to use @lucmsilva651, and @GiovaniFZ, I believe you said that you have already worked with it. We can also use this database to store secrets outside of the .env file.

This is also important for the LastFM commands because of user data 😛