ChatGPT Telegram Bot
A Telegram bot that integrates with OpenAI's official ChatGPT, DALL·E and Whisper APIs to provide answers. Ready to use with minimal configuration required.
Demo
Features
- Markdown support in answers
- Reset the conversation with the /reset command
- Typing indicator while generating a response
- Access can be restricted by specifying a list of allowed users
- Docker and proxy support
- Image generation with DALL·E via the /image command
- Transcribe audio and video messages using Whisper (may require ffmpeg)
- Automatic conversation summarization to avoid excessive token usage
- Track token usage per user - by @AlexHTW
- Get personal token usage statistics via the /stats command - by @AlexHTW
- User budgets and guest budgets - by @AlexHTW
- Streaming support
- GPT-4 support
- Localized bot language
- Improved inline query support for group and private chats - by @bugfloyd
  - To use this feature, enable inline queries for your bot in BotFather via the /setinline command
- Support for the new models announced on June 13, 2023
- Support for functions (aka plugins) to extend the bot's functionality with third-party services
  - Weather, Spotify, web search, text-to-speech and more. See here for a list of available plugins
- Support for unofficial OpenAI-compatible APIs - by @kristaller486
- (NEW!) Support for GPT-4 Turbo and DALL·E 3, announced on November 6, 2023 - by @AlexHTW
- (NEW!) Text-to-speech support, announced on November 6, 2023 - by @gilcu3
- (NEW!) Vision support, announced on November 6, 2023 - by @gilcu3
- (NEW!) GPT-4o model support, announced on May 12, 2024 - by @err09r
- (NEW!) Preliminary support for the o1 and o1-mini models
Additional features - help needed!
If you'd like to help, check out the issues section and contribute!
If you want to help with translations, check out the Translations Guide
Prerequisites
- Python 3.9+
- A Telegram bot and its token (see tutorial)
- An OpenAI account (see the configuration section)
Getting started
Configuration
Customize the configuration by copying .env.example and renaming it to .env, then editing the required parameters as desired:
| Parameter | Description |
|---|---|
| `OPENAI_API_KEY` | Your OpenAI API key, you can get it here |
| `TELEGRAM_BOT_TOKEN` | Your Telegram bot's token, obtained using BotFather (see tutorial) |
| `ADMIN_USER_IDS` | Telegram user IDs of admins. These users have access to special admin commands and information, and are not restricted by budgets. Admin IDs don't need to be added to `ALLOWED_TELEGRAM_USER_IDS`. Note: by default, no admins (`-`) |
| `ALLOWED_TELEGRAM_USER_IDS` | A comma-separated list of Telegram user IDs that are allowed to interact with the bot (use getidsbot to find your user ID). Note: by default, everyone is allowed (`*`) |
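For example, a minimal .env containing only the required parameters might look like this (the values are placeholders; variable names follow the project's .env.example, and `-`/`*` are the defaults described above):

```
OPENAI_API_KEY=XXXXXXX
TELEGRAM_BOT_TOKEN=XXXXXXX
ADMIN_USER_IDS=-
ALLOWED_TELEGRAM_USER_IDS=*
```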
Optional configuration
The following parameters are optional and can be set in the .env file:
Budgets
| Parameter | Description | Default |
|---|---|---|
| `BUDGET_PERIOD` | Determines the time frame all budgets apply to. Available periods: `daily` (resets the budget every day), `monthly` (resets budgets on the first of each month), `all-time` (never resets the budget). See the Budget Manual for more information | `monthly` |
| `USER_BUDGETS` | A comma-separated list of $-amounts per user from the `ALLOWED_TELEGRAM_USER_IDS` list, to set a custom usage limit of OpenAI API costs for each user. For `*` user lists the first `USER_BUDGETS` value is given to every user. Note: by default, no limits for any user (`*`). See the Budget Manual for more information | `*` |
| `GUEST_BUDGET` | $-amount as a usage limit for all guest users. Guest users are users in group chats that are not in the `ALLOWED_TELEGRAM_USER_IDS` list. The value is ignored if no usage limits are set in user budgets (`*`). See the Budget Manual for more information | `100.0` |
| `TOKEN_PRICE` | $-price per 1000 tokens, used to compute cost information in usage statistics (https://openai.com/pricing) | `0.002` |
| `IMAGE_PRICES` | A comma-separated list with 3 elements of prices for the different image sizes (https://openai.com/pricing) | `0.016,0.018,0.02` |
| `TRANSCRIPTION_PRICE` | USD price for one minute of audio transcription (https://openai.com/pricing) | `0.006` |
| `VISION_TOKEN_PRICE` | USD price per 1000 tokens of image interpretation (https://openai.com/pricing) | `0.01` |
| `TTS_PRICES` | A comma-separated list of prices for the tts models (https://openai.com/pricing) | `0.015,0.030` |
Check out the Budget Manual for possible budget configurations.
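The price settings above feed simple arithmetic: a $-price per 1000 tokens (or per minute of audio) multiplied by usage. A rough sketch of that calculation, with illustrative function names (not the bot's actual internals) and the default prices from the table:

```python
# Sketch of how $-costs follow from the pricing parameters above.
# TOKEN_PRICE is USD per 1000 tokens; TRANSCRIPTION_PRICE is USD per minute.

def chat_cost(tokens: int, token_price: float = 0.002) -> float:
    """Cost in USD for a number of chat tokens."""
    return round(tokens / 1000 * token_price, 6)

def transcription_cost(seconds: float, minute_price: float = 0.006) -> float:
    """Cost in USD for a transcribed audio clip of the given length."""
    return round(seconds / 60 * minute_price, 6)

# 15,000 chat tokens at the default $0.002 per 1K tokens:
print(chat_cost(15_000))        # 0.03
# A 90-second voice message at $0.006 per minute:
print(transcription_cost(90))   # 0.009
```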
Additional optional configuration options
| Parameter | Description | Default |
|---|---|---|
| `ENABLE_QUOTING` | Whether to enable message quoting in private chats | `true` |
| `ENABLE_IMAGE_GENERATION` | Whether to enable image generation via the `/image` command | `true` |
| `ENABLE_TRANSCRIPTION` | Whether to enable transcription of audio and video messages | `true` |
| `ENABLE_TTS_GENERATION` | Whether to enable text-to-speech generation via the `/tts` command | `true` |
| `ENABLE_VISION` | Whether to enable vision capabilities in supported models | `true` |
| `PROXY` | Proxy to be used for both OpenAI and the Telegram bot | - |
| `OPENAI_PROXY` | Proxy to be used for OpenAI only | - |
| `TELEGRAM_PROXY` | Proxy to be used for the Telegram bot only | - |
| `OPENAI_MODEL` | The OpenAI model to use for generating responses. You can find all available models here | `gpt-4o` |
| `OPENAI_BASE_URL` | Endpoint URL for unofficial OpenAI-compatible APIs (e.g. LocalAI or text-generation-webui) | Default OpenAI API URL |
| `ASSISTANT_PROMPT` | A system message that sets the tone and controls the behaviour of the assistant | |
| `SHOW_USAGE` | Whether to show OpenAI token usage information after each response | `false` |
| `STREAM` | Whether to stream responses. Note: if enabled, incompatible with `N_CHOICES` higher than 1 | `true` |
| `MAX_TOKENS` | Upper bound on how many tokens the ChatGPT API will return | 1200 for GPT-3, 2400 for GPT-4 models |
| `MAX_TOKENS_VISION` | Upper bound on how many tokens vision models will return | |
| `VISION_MODEL` | The vision model to use. Allowed values: `gpt-4o` | `gpt-4o` |
| `ENABLE_VISION_FOLLOW_UP_QUESTIONS` | If `true`, once an image has been sent to the bot, the configured `VISION_MODEL` is used until the conversation ends; otherwise `OPENAI_MODEL` is used to follow the conversation. Allowed values: `true` or `false` | `true` |
| `MAX_HISTORY_SIZE` | Max number of messages to keep in memory, after which the conversation will be summarised to avoid excessive token usage | |
| `MAX_CONVERSATION_AGE_MINUTES` | Maximum number of minutes a conversation should live since the last message, after which the conversation will be reset | |
| `VOICE_REPLY_WITH_TRANSCRIPT_ONLY` | Whether to answer voice messages with the transcript only, or with a ChatGPT response to the transcript | |
| `VOICE_REPLY_PROMPTS` | A semicolon-separated list of phrases. If the transcript starts with any of them, it will be treated as a prompt even if `VOICE_REPLY_WITH_TRANSCRIPT_ONLY` is set to `true` | - |
| `VISION_PROMPT` | A phrase used as a prompt for the vision models to interpret a given image. If the image sent to the bot has a caption, the caption overrides this parameter | |
| `N_CHOICES` | Number of answers to generate for each input message. Note: setting this to a number higher than 1 will not work properly if `STREAM` is enabled | |
| `TEMPERATURE` | Number between 0 and 2. Higher values make the output more random | |
| `PRESENCE_PENALTY` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far | |
| `FREQUENCY_PENALTY` | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far | |
| `IMAGE_RECEIVE_MODE` | Image receive mode in Telegram. Allowed values: `photo` or `document` | `photo` |
| `IMAGE_MODEL` | The DALL·E model to use. Allowed values include `dall-e-3`; you can find the current available models here | `dall-e-2` |
| `IMAGE_QUALITY` | Quality of DALL·E images, only available for `dall-e-3` | |
| `IMAGE_STYLE` | Style for DALL·E image generation, only available for `dall-e-3`. You can find the available styles here | |
| `IMAGE_SIZE` | The DALL·E generated image size (e.g. `1024x1024` for `dall-e-3` models) | |
| `VISION_DETAIL` | The detail parameter for vision models, explained in the Vision Guide. Allowed values include `high` | `auto` |
| `GROUP_TRIGGER_KEYWORD` | If set, the bot in group chats will only respond to messages that start with this keyword | - |
| `IGNORE_GROUP_TRANSCRIPTIONS` | If set to `true`, the bot will not process transcriptions in group chats | |
| `IGNORE_GROUP_VISION` | If set to `true`, the bot will not process vision queries in group chats | |
| `BOT_LANGUAGE` | Language of general bot messages. Contribute with additional translations | `en` |
| `WHISPER_PROMPT` | To improve the accuracy of Whisper's transcription service, especially for specific names or terms, you can set up a custom prompt. See Speech to text - Prompting | - |
| `TTS_VOICE` | The voice to use for text-to-speech. Allowed values include `shimmer` | `alloy` |
| `TTS_MODEL` | The text-to-speech model to use. More info in the official API reference | |
Functions
| Parameter | Description | Default |
|---|---|---|
| `ENABLE_FUNCTIONS` | Whether to use functions (aka plugins). You can read more about functions here | `true` (if available for the model) |
| `FUNCTIONS_MAX_CONSECUTIVE_CALLS` | Maximum number of back-to-back function calls the model can make in a single response, before a user-facing message is shown | |
| `PLUGINS` | List of plugins to enable (see below for a full list), e.g. `PLUGINS=wolfram,weather` | - |
| `SHOW_PLUGINS_USED` | Whether to show which plugins were used for a response | |
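Under the hood, each plugin implements a small interface (see bot/plugins/plugin.py further down: `get_source_name`, `get_spec` and `execute`). As an illustrative sketch of that shape, here is a hypothetical echo plugin; `EchoPlugin` and its `echo` function are invented for this example, and the class stands alone here instead of subclassing the repo's `Plugin` base class:

```python
# Illustrative sketch of the plugin interface used by this bot.
# Not part of the repo: EchoPlugin simply returns the text it was given.
import asyncio
from typing import Dict


class EchoPlugin:
    """A toy plugin that echoes the given text back to the user."""

    def get_source_name(self) -> str:
        # Name shown as the source of the plugin's answer
        return "Echo"

    def get_spec(self) -> [Dict]:
        # JSON-schema function spec, in the format expected by the OpenAI functions API
        return [{
            "name": "echo",
            "description": "Echo the given text back to the user",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to echo"},
                },
                "required": ["text"],
            },
        }]

    async def execute(self, function_name, helper, **kwargs) -> Dict:
        # In the real bot, `helper` exposes the bot's OpenAI helper object
        return {"result": kwargs["text"]}


plugin = EchoPlugin()
print(plugin.get_spec()[0]["name"])                           # echo
print(asyncio.run(plugin.execute("echo", None, text="hi")))   # {'result': 'hi'}
```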
Available plugins
| Name | Description | Required environment variable(s) |
|---|---|---|
| `weather` | Daily weather and 7-day forecast for any location (powered by Open-Meteo) | - |
| `ddg_image_search` | Search for an image or GIF (powered by DuckDuckGo) | - |
| `crypto` | Live cryptocurrency rates (powered by CoinCap) - by @stumpyfr | - |
| `spotify` | Spotify top tracks/artists, currently playing song and content search (powered by Spotify). Requires one-time authorization. | `SPOTIFY_CLIENT_ID`, `SPOTIFY_CLIENT_SECRET`, `SPOTIFY_REDIRECT_URI` |
| `worldtimeapi` | Get the latest world time (powered by WorldTimeAPI) - by @noriellecruz | `WORLDTIME_DEFAULT_TIMEZONE` |
| `dice` | Send a dice in the chat! | - |
| `youtube_audio_extractor` | Extract audio from YouTube videos | - |
| `deepl_translate` | Translate text to any language (powered by DeepL) - by @LedyBacer | `DEEPL_API_KEY` |
| `gtts_text_to_speech` | Text to speech (powered by Google Translate APIs) | - |
| `whois` | Query the whois domain database - by @jnaskali | - |
| `webshot` | Screenshot a website from a given URL or domain name - by @noriellecruz | - |
| `auto_tts` | Text to speech using OpenAI APIs | - |
Environment variables
| Variable | Description | Default |
|---|---|---|
| `WOLFRAM_APP_ID` | Wolfram Alpha app ID (required only for the `wolfram` plugin) | - |
| `SPOTIFY_CLIENT_ID` | Spotify app Client ID (required only for the `spotify` plugin) | - |
| `SPOTIFY_CLIENT_SECRET` | Spotify app Client Secret (required only for the `spotify` plugin) | - |
| `SPOTIFY_REDIRECT_URI` | Spotify app Redirect URI (required only for the `spotify` plugin) | - |
| `WORLDTIME_DEFAULT_TIMEZONE` | Default timezone to use, i.e. `Europe/Rome` (required only for the `worldtimeapi` plugin; timezone identifiers can be obtained here) | - |
| `DUCKDUCKGO_SAFESEARCH` | DuckDuckGo safe search setting (optional, applies to the DuckDuckGo plugins) | `moderate` |
| `DEEPL_API_KEY` | DeepL API key (required for the `deepl_translate` plugin) | - |
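As a concrete example, enabling the Spotify and DeepL plugins from the tables above might look like this in .env (all values are placeholders, and the plugin identifiers are the names listed above):

```
PLUGINS=spotify,deepl_translate
SPOTIFY_CLIENT_ID=your_client_id
SPOTIFY_CLIENT_SECRET=your_client_secret
SPOTIFY_REDIRECT_URI=http://localhost:8080
DEEPL_API_KEY=your_deepl_api_key
```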
Installation
Clone the repository and navigate to the project directory:

```shell
git clone https://github.com/n3d1117/chatgpt-telegram-bot.git
cd chatgpt-telegram-bot
```
From source
```shell
python -m venv venv
source venv/bin/activate   # on Linux or macOS
# venv\Scripts\activate    # on Windows
pip install -r requirements.txt
python bot/main.py
```
Using Docker Compose
Run the following command to build and run the Docker image:

```shell
docker compose up
```
Ready-to-use Docker images
You can also use the Docker image from Docker Hub:

```shell
docker pull n3d1117/chatgpt-telegram-bot:latest
docker run -it --env-file .env n3d1117/chatgpt-telegram-bot
```
or using the GitHub Container Registry:

```shell
docker pull ghcr.io/n3d1117/chatgpt-telegram-bot:latest
docker run -it --env-file .env ghcr.io/n3d1117/chatgpt-telegram-bot
```
Manual Docker build

```shell
docker build -t chatgpt-telegram-bot .
docker run -it --env-file .env chatgpt-telegram-bot
```
Heroku
Here is an example Procfile for deploying with Heroku (thanks err09r!):

```
worker: python -m venv venv && source venv/bin/activate && pip install -r requirements.txt && python bot/main.py
```
Credits

Disclaimer
This is a personal project and is not affiliated with OpenAI in any way.

The code itself:
✉ .github/workflows
↪️ publish.yaml
```yaml
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images
name: Publish Docker image

on:
  release:
    types: [published]

jobs:
  push_to_registries:
    name: Push Docker image to multiple registries
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Log in to Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: |
            n3d1117/chatgpt-telegram-bot
            ghcr.io/${{ github.repository }}
      - name: Build and push Docker images
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```

✉ bot
✉ plugins
↪️ auto_tts.py
```python
import logging
import tempfile
from typing import Dict

from .plugin import Plugin


class AutoTextToSpeech(Plugin):
    """
    A plugin to convert text to speech using the OpenAI Speech API
    """

    def get_source_name(self) -> str:
        return "TTS"

    def get_spec(self) -> [Dict]:
        return [{
            "name": "translate_text_to_speech",
            "description": "Translate text to speech using OpenAI API",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to translate to speech"},
                },
                "required": ["text"],
            },
        }]

    async def execute(self, function_name, helper, **kwargs) -> Dict:
        try:
            speech, text_length = await helper.generate_speech(text=kwargs['text'])
            with tempfile.NamedTemporaryFile(delete=False, suffix='.opus') as temp_file:
                temp_file.write(speech.getvalue())
                temp_file_path = temp_file.name
        except Exception as e:
            logging.exception(e)
            return {"Result": "Exception: " + str(e)}
        return {
            'direct_result': {
                'kind': 'file',
                'format': 'path',
                'value': temp_file_path
            }
        }
```

↪️ crypto.py
```python
from typing import Dict

import requests

from .plugin import Plugin


# Author: https://github.com/stumpyfr
class CryptoPlugin(Plugin):
    """
    A plugin to fetch the current rate of various cryptocurrencies
    """

    def get_source_name(self) -> str:
        return "CoinCap"

    def get_spec(self) -> [Dict]:
        return [{
            "name": "get_crypto_rate",
            "description": "Get the current rate of various crypto currencies",
            "parameters": {
                "type": "object",
                "properties": {
                    "asset": {"type": "string", "description": "Asset of the crypto"}
                },
                "required": ["asset"],
            },
        }]

    async def execute(self, function_name, helper, **kwargs) -> Dict:
        return requests.get(f"https://api.coincap.io/v2/rates/{kwargs['asset']}").json()
```

↪️ ddg_image_search.py
```python
import os
import random
from itertools import islice
from typing import Dict

from duckduckgo_search import DDGS

from .plugin import Plugin


class DDGImageSearchPlugin(Plugin):
    """
    A plugin to search images and GIFs for a given query, using DuckDuckGo
    """

    def __init__(self):
        self.safesearch = os.getenv('DUCKDUCKGO_SAFESEARCH', 'moderate')

    def get_source_name(self) -> str:
        return "DuckDuckGo Images"

    def get_spec(self) -> [Dict]:
        return [{
            "name": "search_images",
            "description": "Search image or GIFs for a given query",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "The query to search for"},
                    "type": {
                        "type": "string",
                        "enum": ["photo", "gif"],
                        "description": "The type of image to search for. Default to `photo` if not specified",
                    },
                    "region": {
                        "type": "string",
                        "enum": ['xa-ar', 'xa-en', 'ar-es', 'au-en', 'at-de', 'be-fr', 'be-nl', 'br-pt', 'bg-bg',
                                 'ca-en', 'ca-fr', 'ct-ca', 'cl-es', 'cn-zh', 'co-es', 'hr-hr', 'cz-cs', 'dk-da',
                                 'ee-et', 'fi-fi', 'fr-fr', 'de-de', 'gr-el', 'hk-tzh', 'hu-hu', 'in-en', 'id-id',
                                 'id-en', 'ie-en', 'il-he', 'it-it', 'jp-jp', 'kr-kr', 'lv-lv', 'lt-lt', 'xl-es',
                                 'my-ms', 'my-en', 'mx-es', 'nl-nl', 'nz-en', 'no-no', 'pe-es', 'ph-en', 'ph-tl',
                                 'pl-pl', 'pt-pt', 'ro-ro', 'ru-ru', 'sg-en', 'sk-sk', 'sl-sl', 'za-en', 'es-es',
                                 'se-sv', 'ch-de', 'ch-fr', 'ch-it', 'tw-tzh', 'th-th', 'tr-tr', 'ua-uk', 'uk-en',
                                 'us-en', 'ue-es', 've-es', 'vn-vi', 'wt-wt'],
                        "description": "The region to use for the search. Infer this from the language used for the "
                                       "query. Default to `wt-wt` if not specified",
                    }
                },
                "required": ["query", "type", "region"],
            },
        }]

    async def execute(self, function_name, helper, **kwargs) -> Dict:
        with DDGS() as ddgs:
            image_type = kwargs.get('type', 'photo')
            ddgs_images_gen = ddgs.images(
                kwargs['query'],
                region=kwargs.get('region', 'wt-wt'),
                safesearch=self.safesearch,
                type_image=image_type,
            )
            results = list(islice(ddgs_images_gen, 10))
            if not results:
                return {"result": "No results found"}
            # Shuffle the results to avoid always returning the same image
            random.shuffle(results)
            return {
                'direct_result': {
                    'kind': image_type,
                    'format': 'url',
                    'value': results[0]['image']
                }
            }
```

↪️ ddg_web_search.py
```python
import os
from itertools import islice
from typing import Dict

from duckduckgo_search import DDGS

from .plugin import Plugin


class DDGWebSearchPlugin(Plugin):
    """
    A plugin to search the web for a given query, using DuckDuckGo
    """

    def __init__(self):
        self.safesearch = os.getenv('DUCKDUCKGO_SAFESEARCH', 'moderate')

    def get_source_name(self) -> str:
        return "DuckDuckGo"

    def get_spec(self) -> [Dict]:
        return [{
            "name": "web_search",
            "description": "Execute a web search for the given query and return a list of results",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "the user query"
                    },
                    "region": {
                        "type": "string",
                        "enum": ['xa-ar', 'xa-en', 'ar-es', 'au-en', 'at-de', 'be-fr', 'be-nl', 'br-pt', 'bg-bg',
                                 'ca-en', 'ca-fr', 'ct-ca', 'cl-es', 'cn-zh', 'co-es', 'hr-hr', 'cz-cs', 'dk-da',
                                 'ee-et', 'fi-fi', 'fr-fr', 'de-de', 'gr-el', 'hk-tzh', 'hu-hu', 'in-en', 'id-id',
                                 'id-en', 'ie-en', 'il-he', 'it-it', 'jp-jp', 'kr-kr', 'lv-lv', 'lt-lt', 'xl-es',
                                 'my-ms', 'my-en', 'mx-es', 'nl-nl', 'nz-en', 'no-no', 'pe-es', 'ph-en', 'ph-tl',
                                 'pl-pl', 'pt-pt', 'ro-ro', 'ru-ru', 'sg-en', 'sk-sk', 'sl-sl', 'za-en', 'es-es',
                                 'se-sv', 'ch-de', 'ch-fr', 'ch-it', 'tw-tzh', 'th-th', 'tr-tr', 'ua-uk', 'uk-en',
                                 'us-en', 'ue-es', 've-es', 'vn-vi', 'wt-wt'],
                        "description": "The region to use for the search. Infer this from the language used for the "
                                       "query. Default to `wt-wt` if not specified",
                    }
                },
                "required": ["query", "region"],
            },
        }]

    async def execute(self, function_name, helper, **kwargs) -> Dict:
        with DDGS() as ddgs:
            ddgs_gen = ddgs.text(
                kwargs['query'],
                region=kwargs.get('region', 'wt-wt'),
                safesearch=self.safesearch
            )
            results = list(islice(ddgs_gen, 3))

        if not results:
            return {"Result": "No good DuckDuckGo Search Result was found"}

        def to_metadata(result: Dict) -> Dict[str, str]:
            return {
                "snippet": result["body"],
                "title": result["title"],
                "link": result["href"],
            }

        return {"result": [to_metadata(result) for result in results]}
```

↪️ deepl.py
```python
import os
from typing import Dict

import requests

from .plugin import Plugin


class DeeplTranslatePlugin(Plugin):
    """
    A plugin to translate a given text from a language to another, using DeepL
    """

    def __init__(self):
        deepl_api_key = os.getenv('DEEPL_API_KEY')
        if not deepl_api_key:
            raise ValueError('DEEPL_API_KEY environment variable must be set to use DeepL Plugin')
        self.api_key = deepl_api_key

    def get_source_name(self) -> str:
        return "DeepL Translate"

    def get_spec(self) -> [Dict]:
        return [{
            "name": "translate",
            "description": "Translate a given text from a language to another",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to translate"},
                    "to_language": {"type": "string", "description": "The language to translate to (e.g. 'it')"}
                },
                "required": ["text", "to_language"],
            },
        }]

    async def execute(self, function_name, helper, **kwargs) -> Dict:
        if self.api_key.endswith(':fx'):
            url = "https://api-free.deepl.com/v2/translate"
        else:
            url = "https://api.deepl.com/v2/translate"
        headers = {
            "Authorization": f"DeepL-Auth-Key {self.api_key}",
            "User-Agent": "chatgpt-telegram-bot",
            "Content-Type": "application/x-www-form-urlencoded",
            "Accept-Encoding": "utf-8"
        }
        data = {
            "text": kwargs['text'],
            "target_lang": kwargs['to_language']
        }
        translated_text = requests.post(url, headers=headers, data=data).json()["translations"][0]["text"]
        return translated_text.encode('unicode-escape').decode('unicode-escape')
```

↪️ dice.py
```python
from typing import Dict

from .plugin import Plugin


class DicePlugin(Plugin):
    """
    A plugin to send a die in the chat
    """

    def get_source_name(self) -> str:
        return "Dice"

    def get_spec(self) -> [Dict]:
        return [{
            "name": "send_dice",
            "description": "Send a dice in the chat, with a random number between 1 and 6",
            "parameters": {
                "type": "object",
                "properties": {
                    "emoji": {
                        "type": "string",
                        "enum": ["🎲", "🎯", "🏀", "⚽", "🎳", "🎰"],
                        "description": "Emoji on which the dice throw animation is based. "
                                       "Dice can have values 1-6 for “🎲”, “🎯” and “🎳”, values 1-5 for “🏀” "
                                       "and “⚽”, and values 1-64 for “🎰”. Defaults to “🎲”.",
                    }
                },
            },
        }]

    async def execute(self, function_name, helper, **kwargs) -> Dict:
        return {
            'direct_result': {
                'kind': 'dice',
                'format': 'dice',
                'value': kwargs.get('emoji', '🎲')
            }
        }
```

↪️ gtts_text_to_speech.py
```python
import datetime
from typing import Dict

from gtts import gTTS

from .plugin import Plugin


class GTTSTextToSpeech(Plugin):
    """
    A plugin to convert text to speech using Google Translate's Text to Speech API
    """

    def get_source_name(self) -> str:
        return "gTTS"

    def get_spec(self) -> [Dict]:
        return [{
            "name": "google_translate_text_to_speech",
            "description": "Translate text to speech using Google Translate's Text to Speech API",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to translate to speech"},
                    "lang": {
                        "type": "string",
                        "description": "The language of the text to translate to speech. "
                                       "Infer this from the language of the text.",
                    },
                },
                "required": ["text", "lang"],
            },
        }]

    async def execute(self, function_name, helper, **kwargs) -> Dict:
        tts = gTTS(kwargs['text'], lang=kwargs.get('lang', 'en'))
        output = f'gtts_{datetime.datetime.now().timestamp()}.mp3'
        tts.save(output)
        return {
            'direct_result': {
                'kind': 'file',
                'format': 'path',
                'value': output
            }
        }
```

↪️ iplocation.py
```python
import requests
from typing import Dict

from .plugin import Plugin


class IpLocationPlugin(Plugin):
    """
    A plugin to get geolocation and other information for a given IP address
    """

    def get_source_name(self) -> str:
        return "IP.FM"

    def get_spec(self) -> [Dict]:
        return [{
            "name": "iplocation",
            "description": "Get information for an IP address using the IP.FM API.",
            "parameters": {
                "type": "object",
                "properties": {
                    "ip": {"type": "string", "description": "IP Address"}
                },
                "required": ["ip"],
            },
        }]

    async def execute(self, function_name, helper, **kwargs) -> Dict:
        ip = kwargs.get('ip')
        BASE_URL = "https://api.ip.fm/?ip={}"
        url = BASE_URL.format(ip)
        try:
            response = requests.get(url)
            response_data = response.json()
            country = response_data.get('data', {}).get('country', "None")
            subdivisions = response_data.get('data', {}).get('subdivisions', "None")
            city = response_data.get('data', {}).get('city', "None")
            location = ', '.join(filter(None, [country, subdivisions, city])) or "None"
            asn = response_data.get('data', {}).get('asn', "None")
            as_name = response_data.get('data', {}).get('as_name', "None")
            as_domain = response_data.get('data', {}).get('as_domain', "None")
            return {"Location": location, "ASN": asn, "AS Name": as_name, "AS Domain": as_domain}
        except Exception as e:
            return {"Error": str(e)}
```

↪️ plugin.py
```python
from abc import abstractmethod, ABC
from typing import Dict


class Plugin(ABC):
    """
    A plugin interface which can be used to create plugins for the ChatGPT API.
    """

    @abstractmethod
    def get_source_name(self) -> str:
        """
        Return the name of the source of the plugin.
        """
        pass

    @abstractmethod
    def get_spec(self) -> [Dict]:
        """
        Function specs in the form of JSON schema as specified in the OpenAI documentation:
        https://platform.openai.com/docs/api-reference/chat/create#chat/create-functions
        """
        pass

    @abstractmethod
    async def execute(self, function_name, helper, **kwargs) -> Dict:
        """
        Execute the plugin and return a JSON serializable response
        """
        pass
```

↪️ spotify.py
import os from typing import Dict
import spotipy from spotipy import SpotifyOAuth
from .plugin import Plugin
class SpotifyPlugin(Plugin):
"""
A plugin to fetch information from Spotify
"""
def __init__(self):
spotify_client_id = os.getenv('SPOTIFY_CLIENT_ID')
spotify_client_secret = os.getenv('SPOTIFY_CLIENT_SECRET')
spotify_redirect_uri = os.getenv('SPOTIFY_REDIRECT_URI')
if not spotify_client_id or not spotify_client_secret or not spotify_redirect_uri:
raise ValueError('SPOTIFY_CLIENT_ID, SPOTIFY_CLIENT_SECRET and SPOTIFY_REDIRECT_URI environment variables'
' are required to use SpotifyPlugin')
self.spotify = spotipy.Spotify(
auth_manager=SpotifyOAuth(
client_id=spotify_client_id,
client_secret=spotify_client_secret,
redirect_uri=spotify_redirect_uri,
scope="user-top-read,user-read-currently-playing",
open_browser=False
)
) def get_source_name(self) -> str:
return "Spotify" def get_spec(self) -> [Dict]:
time_range_param = {
"type": "string",
"enum": ["short_term", "medium_term", "long_term"],
"description": "The time range of the data to be returned. Short term is the last 4 weeks, "
"medium term is last 6 months, long term is last several years. Default to "
"short_term if not specified."
}
limit_param = {
"type": "integer",
"description": "The number of results to return. Max is 50. Default to 5 if not specified.",
}
type_param = {
"type": "string",
"enum": ["album", "artist", "track"],
"description": "Type of content to search",
}
return [
{
"name": "spotify_get_currently_playing_song",
"description": "Get the user's currently playing song",
"parameters": {
"type": "object",
"properties": {}
}
},
{
"name": "spotify_get_users_top_artists",
"description": "Get the user's top listened artists",
"parameters": {
"type": "object",
"properties": {
"time_range": time_range_param,
"limit": limit_param
}
}
},
{
"name": "spotify_get_users_top_tracks",
"description": "Get the user's top listened tracks",
"parameters": {
"type": "object",
"properties": {
"time_range": time_range_param,
"limit": limit_param
}
}
},
{
"name": "spotify_search_by_query",
"description": "Search spotify content by query",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query",
},
"type": type_param
},
"required": ["query", "type"]
}
},
{
"name": "spotify_lookup_by_id",
"description": "Lookup spotify content by id",
"parameters": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The exact id to lookup. Can be a track id, an artist id or an album id",
},
"type": type_param
},
"required": ["id", "type"]
}
}
] async def execute(self, function_name, helper, **kwargs) -> Dict:
time_range = kwargs.get('time_range', 'short_term')
limit = kwargs.get('limit', 5) if function_name == 'spotify_get_currently_playing_song':
return self.fetch_currently_playing()
elif function_name == 'spotify_get_users_top_artists':
return self.fetch_top_artists(time_range, limit)
elif function_name == 'spotify_get_users_top_tracks':
return self.fetch_top_tracks(time_range, limit)
elif function_name == 'spotify_search_by_query':
query = kwargs.get('query', '')
search_type = kwargs.get('type', 'track')
return self.search_by_query(query, search_type, limit)
elif function_name == 'spotify_lookup_by_id':
content_id = kwargs.get('id')
search_type = kwargs.get('type', 'track')
return self.search_by_id(content_id, search_type) def fetch_currently_playing(self) -> Dict:
"""
Fetch user's currently playing song from Spotify
"""
currently_playing = self.spotify.current_user_playing_track()
if not currently_playing:
return {"result": "No song is currently playing"}
result = {
'name': currently_playing['item']['name'],
'artist': currently_playing['item']['artists'][0]['name'],
'album': currently_playing['item']['album']['name'],
'url': currently_playing['item']['external_urls']['spotify'],
'__album_id': currently_playing['item']['album']['id'],
'__artist_id': currently_playing['item']['artists'][0]['id'],
'__track_id': currently_playing['item']['id'],
}
return {"result": result} def fetch_top_tracks(self, time_range='short_term', limit=5) -> Dict:
"""
Fetch user's top tracks from Spotify
"""
results = []
top_tracks = self.spotify.current_user_top_tracks(limit=limit, time_range=time_range)
if not top_tracks or 'items' not in top_tracks or len(top_tracks['items']) == 0:
return {"results": "No top tracks found"}
for item in top_tracks['items']:
results.append({
'name': item['name'],
'artist': item['artists'][0]['name'],
'album': item['album']['name'],
'album_release_date': item['album']['release_date'],
'url': item['external_urls']['spotify'],
'album_url': item['album']['external_urls']['spotify'],
'artist_url': item['artists'][0]['external_urls']['spotify'],
'__track_id': item['id'],
'__album_id': item['album']['id'],
'__artist_id': item['artists'][0]['id'],
})
return {'results': results} def fetch_top_artists(self, time_range='short_term', limit=5) -> Dict:
"""
Fetch user's top artists from Spotify
"""
results = []
top_artists = self.spotify.current_user_top_artists(limit=limit, time_range=time_range)
if not top_artists or 'items' not in top_artists or len(top_artists['items']) == 0:
return {"results": "No top artists found"}
for item in top_artists['items']:
results.append({
'name': item['name'],
'url': item['external_urls']['spotify'],
'__artist_id': item['id']
})
return {'results': results} def search_by_query(self, query, search_type, limit=5) -> Dict:
"""
Search content by query on Spotify
"""
results = {}
search_response = self.spotify.search(q=query, limit=limit, type=search_type)
if not search_response:
return {"results": "No content found"} if 'tracks' in search_response:
results['tracks'] = []
for item in search_response['tracks']['items']:
results['tracks'].append({
'name': item['name'],
'artist': item['artists'][0]['name'],
'album': item['album']['name'],
'album_release_date': item['album']['release_date'],
'url': item['external_urls']['spotify'],
'album_url': item['album']['external_urls']['spotify'],
'artist_url': item['artists'][0]['external_urls']['spotify'],
'__artist_id': item['artists'][0]['id'],
'__album_id': item['album']['id'],
'__track_id': item['id'],
})
if 'artists' in search_response:
results['artists'] = []
for item in search_response['artists']['items']:
results['artists'].append({
'name': item['name'],
'url': item['external_urls']['spotify'],
'__artist_id': item['id'],
})
if 'albums' in search_response:
results['albums'] = []
for item in search_response['albums']['items']:
results['albums'].append({
'name': item['name'],
'artist': item['artists'][0]['name'],
'url': item['external_urls']['spotify'],
'artist_url': item['artists'][0]['external_urls']['spotify'],
'release_date': item['release_date'],
'__artist_id': item['artists'][0]['id'],
'__album_id': item['id'],
})
return {'results': results} def search_by_id(self, content_id, search_type) -> Dict:
"""
Search content by exact id on Spotify
"""
if search_type == 'track':
search_response = self.spotify.track(content_id)
if not search_response:
return {"result": "No track found"}
return {'result': self._get_track(search_response)} elif search_type == 'artist':
search_response = self.spotify.artist(content_id)
if not search_response:
return {"result": "No artist found"}
albums_response = self.spotify.artist_albums(artist_id=content_id, limit=3)
if not albums_response:
albums_response = {"items": []}
return {'result': self._get_artist(search_response, albums_response)}
elif search_type == 'album':
search_response = self.spotify.album(content_id)
if not search_response:
return {"result": "No album found"}
return {'result': self._get_album(search_response)}
else:
return {'error': 'Invalid search type. Must be track, artist or album'}

@staticmethod
def _get_artist(response, albums):
return {
'name': response['name'],
'url': response['external_urls']['spotify'],
'__artist_id': response['id'],
'followers': response['followers']['total'],
'genres': response['genres'],
'albums': [
{
'name': album['name'],
'__album_id': album['id'],
'url': album['external_urls']['spotify'],
'release_date': album['release_date'],
'total_tracks': album['total_tracks'],
}
for album in albums['items']
],
}

@staticmethod
def _get_track(response):
return {
'name': response['name'],
'artist': response['artists'][0]['name'],
'__artist_id': response['artists'][0]['id'],
'album': response['album']['name'],
'__album_id': response['album']['id'],
'url': response['external_urls']['spotify'],
'__track_id': response['id'],
'duration_ms': response['duration_ms'],
'track_number': response['track_number'],
'explicit': response['explicit'],
}

@staticmethod
def _get_album(response):
return {
'name': response['name'],
'artist': response['artists'][0]['name'],
'__artist_id': response['artists'][0]['id'],
'url': response['external_urls']['spotify'],
'release_date': response['release_date'],
'total_tracks': response['total_tracks'],
'__album_id': response['id'],
'label': response['label'],
'tracks': [
{
'name': track['name'],
'url': track['external_urls']['spotify'],
'__track_id': track['id'],
'duration_ms': track['duration_ms'],
'track_number': track['track_number'],
'explicit': track['explicit'],
}
for track in response['tracks']['items']
]
}

Plugin ↪️weather.py
from datetime import datetime
from typing import Dict
import requests
from .plugin import Plugin
class WeatherPlugin(Plugin):
"""
A plugin to get the current weather and 7-day daily forecast for a location
"""

def get_source_name(self) -> str:
return "OpenMeteo"

def get_spec(self) -> [Dict]:
latitude_param = {"type": "string", "description": "Latitude of the location"}
longitude_param = {"type": "string", "description": "Longitude of the location"}
unit_param = {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the provided location.",
}
return [
{
"name": "get_current_weather",
"description": "Get the current weather for a location using Open Meteo APIs.",
"parameters": {
"type": "object",
"properties": {
"latitude": latitude_param,
"longitude": longitude_param,
"unit": unit_param,
},
"required": ["latitude", "longitude", "unit"],
},
},
{
"name": "get_forecast_weather",
"description": "Get daily weather forecast for a location using Open Meteo APIs."
f"Today is {datetime.today().strftime('%A, %B %d, %Y')}",
"parameters": {
"type": "object",
"properties": {
"latitude": latitude_param,
"longitude": longitude_param,
"unit": unit_param,
"forecast_days": {
"type": "integer",
"description": "The number of days to forecast, including today. Default is 7. Max 14. "
"Use 1 for today, 2 for today and tomorrow, and so on.",
},
},
"required": ["latitude", "longitude", "unit", "forecast_days"],
},
}
]

async def execute(self, function_name, helper, **kwargs) -> Dict:
url = 'https://api.open-meteo.com/v1/forecast' \
f'?latitude={kwargs["latitude"]}' \
f'&longitude={kwargs["longitude"]}' \
f'&temperature_unit={kwargs["unit"]}'
if function_name == 'get_current_weather':
url += '&current_weather=true'
return requests.get(url).json()
elif function_name == 'get_forecast_weather':
url += '&daily=weathercode,temperature_2m_max,temperature_2m_min,precipitation_probability_mean,'
url += f'&forecast_days={kwargs["forecast_days"]}'
url += '&timezone=auto'
response = requests.get(url).json()
results = {}
for i, time in enumerate(response["daily"]["time"]):
results[datetime.strptime(time, "%Y-%m-%d").strftime("%A, %B %d, %Y")] = {
"weathercode": response["daily"]["weathercode"][i],
"temperature_2m_max": response["daily"]["temperature_2m_max"][i],
"temperature_2m_min": response["daily"]["temperature_2m_min"][i],
"precipitation_probability_mean": response["daily"]["precipitation_probability_mean"][i]
}
return {"today": datetime.today().strftime("%A, %B %d, %Y"), "forecast": results}

Plugin ↪️webshot.py
import os, requests, random, string
from typing import Dict
from .plugin import Plugin
class WebshotPlugin(Plugin):
"""
A plugin to screenshot a website
"""
def get_source_name(self) -> str:
return "WebShot"

def get_spec(self) -> [Dict]:
return [{
"name": "screenshot_website",
"description": "Show screenshot/image of a website from a given url or domain name.",
"parameters": {
"type": "object",
"properties": {
"url": {"type": "string", "description": "Website url or domain name. Correctly formatted url is required. Example: https://www.google.com"}
},
"required": ["url"],
},
}]
def generate_random_string(self, length):
characters = string.ascii_letters + string.digits
return ''.join(random.choice(characters) for _ in range(length))

async def execute(self, function_name, helper, **kwargs) -> Dict:
try:
image_url = f'https://image.thum.io/get/maxAge/12/width/720/{kwargs["url"]}'
# preload url first
requests.get(image_url)
# download the actual image
response = requests.get(image_url, timeout=30)
if response.status_code == 200:
if not os.path.exists("uploads/webshot"):
os.makedirs("uploads/webshot")
image_file_path = os.path.join("uploads/webshot", f"{self.generate_random_string(15)}.png")
with open(image_file_path, "wb") as f:
f.write(response.content)
return {
'direct_result': {
'kind': 'photo',
'format': 'path',
'value': image_file_path
}
}
else:
return {'result': 'Unable to screenshot website'}
except:
if 'image_file_path' in locals():
os.remove(image_file_path)
return {'result': 'Unable to screenshot website'}

Plugin ↪️whois_.py
from typing import Dict
from .plugin import Plugin
import whois
class WhoisPlugin(Plugin):
"""
A plugin to query whois database
"""
def get_source_name(self) -> str:
return "Whois"

def get_spec(self) -> [Dict]:
return [{
"name": "get_whois",
"description": "Get whois registration and expiry information for a domain",
"parameters": {
"type": "object",
"properties": {
"domain": {"type": "string", "description": "Domain name"}
},
"required": ["domain"],
},
}]

async def execute(self, function_name, helper, **kwargs) -> Dict:
try:
whois_result = whois.query(kwargs['domain'])
if whois_result is None:
return {'result': 'No such domain found'}
return whois_result.__dict__
except Exception as e:
return {'error': 'An unexpected error occurred: ' + str(e)}

Plugin ↪️wolfram_alpha.py
import os
from typing import Dict
import wolframalpha
from .plugin import Plugin
class WolframAlphaPlugin(Plugin):
"""
A plugin to answer questions using WolframAlpha.
"""
def __init__(self):
wolfram_app_id = os.getenv('WOLFRAM_APP_ID')
if not wolfram_app_id:
raise ValueError('WOLFRAM_APP_ID environment variable must be set to use WolframAlphaPlugin')
self.app_id = wolfram_app_id

def get_source_name(self) -> str:
return "WolframAlpha"

def get_spec(self) -> [Dict]:
return [{
"name": "answer_with_wolfram_alpha",
"description": "Get an answer to a question using Wolfram Alpha. Input should be the query in English.",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "The search query, in english (translate if necessary)"}
},
"required": ["query"]
}
}]

async def execute(self, function_name, helper, **kwargs) -> Dict:
client = wolframalpha.Client(self.app_id)
res = client.query(kwargs['query'])
try:
assumption = next(res.pods).text
answer = next(res.results).text
except StopIteration:
return {'answer': 'Wolfram Alpha wasn\'t able to answer it'}
if answer is None or answer == "":
return {'answer': 'No good Wolfram Alpha Result was found'}
else:
return {'assumption': assumption, 'answer': answer}

Plugin ↪️worldtimeapi.py
import os, requests
from typing import Dict
from datetime import datetime
from .plugin import Plugin
class WorldTimeApiPlugin(Plugin):
"""
A plugin to get the current time from a given timezone, using WorldTimeAPI
"""
def __init__(self):
default_timezone = os.getenv('WORLDTIME_DEFAULT_TIMEZONE')
if not default_timezone:
raise ValueError('WORLDTIME_DEFAULT_TIMEZONE environment variable must be set to use WorldTimeApiPlugin')
self.default_timezone = default_timezone

def get_source_name(self) -> str:
return "WorldTimeAPI"

def get_spec(self) -> [Dict]:
return [{
"name": "worldtimeapi",
"description": "Get the current time from a given timezone",
"parameters": {
"type": "object",
"properties": {
"timezone": {
"type": "string",
"description": "The timezone identifier (e.g: `Europe/Rome`). Infer this from the location."
f"Use {self.default_timezone} if not specified."
}
},
"required": ["timezone"],
},
}]

async def execute(self, function_name, helper, **kwargs) -> Dict:
timezone = kwargs.get('timezone', self.default_timezone)
url = f'https://worldtimeapi.org/api/timezone/{timezone}'
try:
wtr = requests.get(url).json().get('datetime')
wtr_obj = datetime.strptime(wtr, "%Y-%m-%dT%H:%M:%S.%f%z")
time_24hr = wtr_obj.strftime("%H:%M:%S")
time_12hr = wtr_obj.strftime("%I:%M:%S %p")
return {"24hr": time_24hr, "12hr": time_12hr}
except:
return {"result": "No result was found"}

Plugin ↪️youtube_audio_extractor.py
import logging
import re
from typing import Dict
from pytube import YouTube
from .plugin import Plugin
class YouTubeAudioExtractorPlugin(Plugin):
"""
A plugin to extract audio from a YouTube video
"""

def get_source_name(self) -> str:
return "YouTube Audio Extractor"

def get_spec(self) -> [Dict]:
return [{
"name": "extract_youtube_audio",
"description": "Extract audio from a YouTube video",
"parameters": {
"type": "object",
"properties": {
"youtube_link": {"type": "string", "description": "YouTube video link to extract audio from"}
},
"required": ["youtube_link"],
},
}]

async def execute(self, function_name, helper, **kwargs) -> Dict:
link = kwargs['youtube_link']
try:
video = YouTube(link)
audio = video.streams.filter(only_audio=True, file_extension='mp4').first()
output = re.sub(r'[^\w\-_\. ]', '_', video.title) + '.mp3'
audio.download(filename=output)
return {
'direct_result': {
'kind': 'file',
'format': 'path',
'value': output
}
}
except Exception as e:
logging.warning(f'Failed to extract audio from YouTube video: {str(e)}')
return {'result': 'Failed to extract audio'}

↪️main.py
import logging
import os
from dotenv import load_dotenv
from plugin_manager import PluginManager
from openai_helper import OpenAIHelper, default_max_tokens, are_functions_available
from telegram_bot import ChatGPTTelegramBot
def main():
# Read .env file
load_dotenv()

# Setup logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.INFO
)
logging.getLogger("httpx").setLevel(logging.WARNING)

# Check if the required environment variables are set
required_values = ['TELEGRAM_BOT_TOKEN', 'OPENAI_API_KEY']
missing_values = [value for value in required_values if os.environ.get(value) is None]
if len(missing_values) > 0:
logging.error(f'The following environment values are missing in your .env: {", ".join(missing_values)}')
exit(1)

# Setup configurations
model = os.environ.get('OPENAI_MODEL', 'gpt-4o')
functions_available = are_functions_available(model=model)
max_tokens_default = default_max_tokens(model=model)
openai_config = {
'api_key': os.environ['OPENAI_API_KEY'],
'show_usage': os.environ.get('SHOW_USAGE', 'false').lower() == 'true',
'stream': os.environ.get('STREAM', 'true').lower() == 'true',
'proxy': os.environ.get('PROXY', None) or os.environ.get('OPENAI_PROXY', None),
'max_history_size': int(os.environ.get('MAX_HISTORY_SIZE', 15)),
'max_conversation_age_minutes': int(os.environ.get('MAX_CONVERSATION_AGE_MINUTES', 180)),
'assistant_prompt': os.environ.get('ASSISTANT_PROMPT', 'You are a helpful assistant.'),
'max_tokens': int(os.environ.get('MAX_TOKENS', max_tokens_default)),
'n_choices': int(os.environ.get('N_CHOICES', 1)),
'temperature': float(os.environ.get('TEMPERATURE', 1.0)),
'image_model': os.environ.get('IMAGE_MODEL', 'dall-e-2'),
'image_quality': os.environ.get('IMAGE_QUALITY', 'standard'),
'image_style': os.environ.get('IMAGE_STYLE', 'vivid'),
'image_size': os.environ.get('IMAGE_SIZE', '512x512'),
'model': model,
'enable_functions': os.environ.get('ENABLE_FUNCTIONS', str(functions_available)).lower() == 'true',
'functions_max_consecutive_calls': int(os.environ.get('FUNCTIONS_MAX_CONSECUTIVE_CALLS', 10)),
'presence_penalty': float(os.environ.get('PRESENCE_PENALTY', 0.0)),
'frequency_penalty': float(os.environ.get('FREQUENCY_PENALTY', 0.0)),
'bot_language': os.environ.get('BOT_LANGUAGE', 'en'),
'show_plugins_used': os.environ.get('SHOW_PLUGINS_USED', 'false').lower() == 'true',
'whisper_prompt': os.environ.get('WHISPER_PROMPT', ''),
'vision_model': os.environ.get('VISION_MODEL', 'gpt-4o'),
'enable_vision_follow_up_questions': os.environ.get('ENABLE_VISION_FOLLOW_UP_QUESTIONS', 'true').lower() == 'true',
'vision_prompt': os.environ.get('VISION_PROMPT', 'What is in this image'),
'vision_detail': os.environ.get('VISION_DETAIL', 'auto'),
'vision_max_tokens': int(os.environ.get('VISION_MAX_TOKENS', '300')),
'tts_model': os.environ.get('TTS_MODEL', 'tts-1'),
'tts_voice': os.environ.get('TTS_VOICE', 'alloy'),
}

if openai_config['enable_functions'] and not functions_available:
logging.error(f'ENABLE_FUNCTIONS is set to true, but the model {model} does not support it. '
'Please set ENABLE_FUNCTIONS to false or use a model that supports it.')
exit(1)
if os.environ.get('MONTHLY_USER_BUDGETS') is not None:
logging.warning('The environment variable MONTHLY_USER_BUDGETS is deprecated. '
'Please use USER_BUDGETS with BUDGET_PERIOD instead.')
if os.environ.get('MONTHLY_GUEST_BUDGET') is not None:
logging.warning('The environment variable MONTHLY_GUEST_BUDGET is deprecated. '
'Please use GUEST_BUDGET with BUDGET_PERIOD instead.')

telegram_config = {
'token': os.environ['TELEGRAM_BOT_TOKEN'],
'admin_user_ids': os.environ.get('ADMIN_USER_IDS', '-'),
'allowed_user_ids': os.environ.get('ALLOWED_TELEGRAM_USER_IDS', '*'),
'enable_quoting': os.environ.get('ENABLE_QUOTING', 'true').lower() == 'true',
'enable_image_generation': os.environ.get('ENABLE_IMAGE_GENERATION', 'true').lower() == 'true',
'enable_transcription': os.environ.get('ENABLE_TRANSCRIPTION', 'true').lower() == 'true',
'enable_vision': os.environ.get('ENABLE_VISION', 'true').lower() == 'true',
'enable_tts_generation': os.environ.get('ENABLE_TTS_GENERATION', 'true').lower() == 'true',
'budget_period': os.environ.get('BUDGET_PERIOD', 'monthly').lower(),
'user_budgets': os.environ.get('USER_BUDGETS', os.environ.get('MONTHLY_USER_BUDGETS', '*')),
'guest_budget': float(os.environ.get('GUEST_BUDGET', os.environ.get('MONTHLY_GUEST_BUDGET', '100.0'))),
'stream': os.environ.get('STREAM', 'true').lower() == 'true',
'proxy': os.environ.get('PROXY', None) or os.environ.get('TELEGRAM_PROXY', None),
'voice_reply_transcript': os.environ.get('VOICE_REPLY_WITH_TRANSCRIPT_ONLY', 'false').lower() == 'true',
'voice_reply_prompts': os.environ.get('VOICE_REPLY_PROMPTS', '').split(';'),
'ignore_group_transcriptions': os.environ.get('IGNORE_GROUP_TRANSCRIPTIONS', 'true').lower() == 'true',
'ignore_group_vision': os.environ.get('IGNORE_GROUP_VISION', 'true').lower() == 'true',
'group_trigger_keyword': os.environ.get('GROUP_TRIGGER_KEYWORD', ''),
'token_price': float(os.environ.get('TOKEN_PRICE', 0.002)),
'image_prices': [float(i) for i in os.environ.get('IMAGE_PRICES', "0.016,0.018,0.02").split(",")],
'vision_token_price': float(os.environ.get('VISION_TOKEN_PRICE', '0.01')),
'image_receive_mode': os.environ.get('IMAGE_FORMAT', "photo"),
'tts_model': os.environ.get('TTS_MODEL', 'tts-1'),
'tts_prices': [float(i) for i in os.environ.get('TTS_PRICES', "0.015,0.030").split(",")],
'transcription_price': float(os.environ.get('TRANSCRIPTION_PRICE', 0.006)),
'bot_language': os.environ.get('BOT_LANGUAGE', 'en'),
}

plugin_config = {
'plugins': os.environ.get('PLUGINS', '').split(',')
}

# Setup and run ChatGPT and Telegram bot
plugin_manager = PluginManager(config=plugin_config)
openai_helper = OpenAIHelper(config=openai_config, plugin_manager=plugin_manager)
telegram_bot = ChatGPTTelegramBot(config=telegram_config, openai=openai_helper)
telegram_bot.run()
if __name__ == '__main__':
main()

↪️openai_helper.py
from __future__ import annotations
import datetime
import logging
import os
import tiktoken
import openai
import json
import httpx
import io
from PIL import Image
from tenacity import retry, stop_after_attempt, wait_fixed, retry_if_exception_type
from utils import is_direct_result, encode_image, decode_image
from plugin_manager import PluginManager
# Models can be found here: https://platform.openai.com/docs/models/overview
# Models gpt-3.5-turbo-0613 and gpt-3.5-turbo-16k-0613 will be deprecated on June 13, 2024
GPT_3_MODELS = ("gpt-3.5-turbo", "gpt-3.5-turbo-0301", "gpt-3.5-turbo-0613")
GPT_3_16K_MODELS = ("gpt-3.5-turbo-16k", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-1106", "gpt-3.5-turbo-0125")
GPT_4_MODELS = ("gpt-4", "gpt-4-0314", "gpt-4-0613", "gpt-4-turbo-preview")
GPT_4_32K_MODELS = ("gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-0613")
GPT_4_VISION_MODELS = ("gpt-4o",)
GPT_4_128K_MODELS = ("gpt-4-1106-preview", "gpt-4-0125-preview", "gpt-4-turbo-preview", "gpt-4-turbo", "gpt-4-turbo-2024-04-09")
GPT_4O_MODELS = ("gpt-4o", "gpt-4o-mini", "chatgpt-4o-latest")
O_MODELS = ("o1", "o1-mini", "o1-preview")
GPT_ALL_MODELS = GPT_3_MODELS + GPT_3_16K_MODELS + GPT_4_MODELS + GPT_4_32K_MODELS + GPT_4_VISION_MODELS + GPT_4_128K_MODELS + GPT_4O_MODELS + O_MODELS

def default_max_tokens(model: str) -> int:
"""
Gets the default number of max tokens for the given model.
:param model: The model name
:return: The default number of max tokens
"""
base = 1200
if model in GPT_3_MODELS:
return base
elif model in GPT_4_MODELS:
return base * 2
elif model in GPT_3_16K_MODELS:
if model == "gpt-3.5-turbo-1106":
return 4096
return base * 4
elif model in GPT_4_32K_MODELS:
return base * 8
elif model in GPT_4_VISION_MODELS:
return 4096
elif model in GPT_4_128K_MODELS:
return 4096
elif model in GPT_4O_MODELS:
return 4096
elif model in O_MODELS:
return 4096
def are_functions_available(model: str) -> bool:
"""
Whether the given model supports functions
"""
if model in ("gpt-3.5-turbo-0301", "gpt-4-0314", "gpt-4-32k-0314", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613"):
return False
if model in O_MODELS:
return False
return True
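As an aside, the capability gating in `are_functions_available` boils down to tuple-membership checks against model-name groups. A minimal standalone sketch of the same pattern (the tuples here are abbreviated and do not match the full lists above):

```python
# Sketch of the model-capability gating pattern: group model names into
# tuples, then decide a feature by membership. Abbreviated name lists.
O_MODELS = ("o1", "o1-mini", "o1-preview")
LEGACY_MODELS = ("gpt-3.5-turbo-0301", "gpt-4-0314")

def functions_available(model: str) -> bool:
    # o1-family and early snapshot models do not support function calling
    return model not in O_MODELS and model not in LEGACY_MODELS

print(functions_available("gpt-4o"))   # True
print(functions_available("o1-mini"))  # False
```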
# Load translations
parent_dir_path = os.path.join(os.path.dirname(__file__), os.pardir)
translations_file_path = os.path.join(parent_dir_path, 'translations.json')
with open(translations_file_path, 'r', encoding='utf-8') as f:
translations = json.load(f)
def localized_text(key, bot_language):
"""
Return translated text for a key in specified bot_language.
Keys and translations can be found in the translations.json.
"""
try:
return translations[bot_language][key]
except KeyError:
logging.warning(f"No translation available for bot_language code '{bot_language}' and key '{key}'")
# Fallback to English if the translation is not available
if key in translations['en']:
return translations['en'][key]
else:
logging.warning(f"No english definition found for key '{key}' in translations.json")
# return key as text
return key
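The fallback chain in `localized_text` (requested language → English → the raw key itself) can be illustrated with a standalone sketch that uses a toy in-memory dict instead of `translations.json`; the keys below are made up for illustration, and the warning logging is omitted for brevity:

```python
# Toy translations table standing in for translations.json (keys invented).
translations = {
    "en": {"stats_tokens": "tokens"},
    "ru": {},  # missing entries fall back to English
}

def localized_text(key, bot_language):
    try:
        # First try the requested language
        return translations[bot_language][key]
    except KeyError:
        # Fall back to English, then to the raw key as last resort
        return translations["en"].get(key, key)

print(localized_text("stats_tokens", "en"))  # tokens
print(localized_text("stats_tokens", "ru"))  # tokens (English fallback)
print(localized_text("missing", "ru"))       # missing (raw key)
```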
class OpenAIHelper:
"""
ChatGPT helper class.
"""

def __init__(self, config: dict, plugin_manager: PluginManager):
"""
Initializes the OpenAI helper class with the given configuration.
:param config: A dictionary containing the GPT configuration
:param plugin_manager: The plugin manager
"""
http_client = httpx.AsyncClient(proxy=config['proxy']) if 'proxy' in config else None
self.client = openai.AsyncOpenAI(api_key=config['api_key'], http_client=http_client)
self.config = config
self.plugin_manager = plugin_manager
self.conversations: dict[int: list] = {} # {chat_id: history}
self.conversations_vision: dict[int: bool] = {} # {chat_id: is_vision}
self.last_updated: dict[int: datetime] = {}  # {chat_id: last_update_timestamp}

def get_conversation_stats(self, chat_id: int) -> tuple[int, int]:
"""
Gets the number of messages and tokens used in the conversation.
:param chat_id: The chat ID
:return: A tuple containing the number of messages and tokens used
"""
if chat_id not in self.conversations:
self.reset_chat_history(chat_id)
return len(self.conversations[chat_id]), self.__count_tokens(self.conversations[chat_id])

async def get_chat_response(self, chat_id: int, query: str) -> tuple[str, str]:
"""
Gets a full response from the GPT model.
:param chat_id: The chat ID
:param query: The query to send to the model
:return: The answer from the model and the number of tokens used
"""
plugins_used = ()
response = await self.__common_get_chat_response(chat_id, query)
if self.config['enable_functions'] and not self.conversations_vision[chat_id]:
response, plugins_used = await self.__handle_function_call(chat_id, response)
if is_direct_result(response):
return response, '0'
answer = ''
if len(response.choices) > 1 and self.config['n_choices'] > 1:
for index, choice in enumerate(response.choices):
content = choice.message.content.strip()
if index == 0:
self.__add_to_history(chat_id, role="assistant", content=content)
answer += f'{index + 1}\u20e3\n'
answer += content
answer += '\n\n'
else:
answer = response.choices[0].message.content.strip()
self.__add_to_history(chat_id, role="assistant", content=answer)
bot_language = self.config['bot_language']
show_plugins_used = len(plugins_used) > 0 and self.config['show_plugins_used']
plugin_names = tuple(self.plugin_manager.get_plugin_source_name(plugin) for plugin in plugins_used)
if self.config['show_usage']:
answer += "\n\n---\n" \
f"💰 {str(response.usage.total_tokens)} {localized_text('stats_tokens', bot_language)}" \
f" ({str(response.usage.prompt_tokens)} {localized_text('prompt', bot_language)}," \
f" {str(response.usage.completion_tokens)} {localized_text('completion', bot_language)})"
if show_plugins_used:
answer += f"\n🔌 {', '.join(plugin_names)}"
elif show_plugins_used:
answer += f"\n\n---\n🔌 {', '.join(plugin_names)}"
return answer, response.usage.total_tokens
async def get_chat_response_stream(self, chat_id: int, query: str):
"""
Stream response from the GPT model.
:param chat_id: The chat ID
:param query: The query to send to the model
:return: The answer from the model and the number of tokens used, or 'not_finished'
"""
plugins_used = ()
response = await self.__common_get_chat_response(chat_id, query, stream=True)
if self.config['enable_functions'] and not self.conversations_vision[chat_id]:
response, plugins_used = await self.__handle_function_call(chat_id, response, stream=True)
if is_direct_result(response):
yield response, '0'
return
answer = ''
async for chunk in response:
if len(chunk.choices) == 0:
continue
delta = chunk.choices[0].delta
if delta.content:
answer += delta.content
yield answer, 'not_finished'
answer = answer.strip()
self.__add_to_history(chat_id, role="assistant", content=answer)
tokens_used = str(self.__count_tokens(self.conversations[chat_id]))
show_plugins_used = len(plugins_used) > 0 and self.config['show_plugins_used']
plugin_names = tuple(self.plugin_manager.get_plugin_source_name(plugin) for plugin in plugins_used)
if self.config['show_usage']:
answer += f"\n\n---\n💰 {tokens_used} {localized_text('stats_tokens', self.config['bot_language'])}"
if show_plugins_used:
answer += f"\n🔌 {', '.join(plugin_names)}"
elif show_plugins_used:
answer += f"\n\n---\n🔌 {', '.join(plugin_names)}"
yield answer, tokens_used
@retry(
reraise=True,
retry=retry_if_exception_type(openai.RateLimitError),
wait=wait_fixed(20),
stop=stop_after_attempt(3)
)
async def __common_get_chat_response(self, chat_id: int, query: str, stream=False):
"""
Request a response from the GPT model.
:param chat_id: The chat ID
:param query: The query to send to the model
:return: The answer from the model and the number of tokens used
"""
bot_language = self.config['bot_language']
try:
if chat_id not in self.conversations or self.__max_age_reached(chat_id):
self.reset_chat_history(chat_id)
self.last_updated[chat_id] = datetime.datetime.now()
self.__add_to_history(chat_id, role="user", content=query)
# Summarize the chat history if it's too long to avoid excessive token usage
token_count = self.__count_tokens(self.conversations[chat_id])
exceeded_max_tokens = token_count + self.config['max_tokens'] > self.__max_model_tokens()
exceeded_max_history_size = len(self.conversations[chat_id]) > self.config['max_history_size']
if exceeded_max_tokens or exceeded_max_history_size:
logging.info(f'Chat history for chat ID {chat_id} is too long. Summarising...')
try:
summary = await self.__summarise(self.conversations[chat_id][:-1])
logging.debug(f'Summary: {summary}')
self.reset_chat_history(chat_id, self.conversations[chat_id][0]['content'])
self.__add_to_history(chat_id, role="assistant", content=summary)
self.__add_to_history(chat_id, role="user", content=query)
except Exception as e:
logging.warning(f'Error while summarising chat history: {str(e)}. Popping elements instead...')
self.conversations[chat_id] = self.conversations[chat_id][-self.config['max_history_size']:]

max_tokens_str = 'max_completion_tokens' if self.config['model'] in O_MODELS else 'max_tokens'
common_args = {
'model': self.config['model'] if not self.conversations_vision[chat_id] else self.config['vision_model'],
'messages': self.conversations[chat_id],
'temperature': self.config['temperature'],
'n': self.config['n_choices'],
max_tokens_str: self.config['max_tokens'],
'presence_penalty': self.config['presence_penalty'],
'frequency_penalty': self.config['frequency_penalty'],
'stream': stream
}

if self.config['enable_functions'] and not self.conversations_vision[chat_id]:
functions = self.plugin_manager.get_functions_specs()
if len(functions) > 0:
common_args['functions'] = self.plugin_manager.get_functions_specs()
common_args['function_call'] = 'auto'
return await self.client.chat.completions.create(**common_args)
except openai.RateLimitError as e:
raise e
except openai.BadRequestError as e:
raise Exception(f"⚠️ _{localized_text('openai_invalid', bot_language)}._ ⚠️\n{str(e)}") from e
except Exception as e:
raise Exception(f"⚠️ _{localized_text('error', bot_language)}._ ⚠️\n{str(e)}") from e

async def __handle_function_call(self, chat_id, response, stream=False, times=0, plugins_used=()):
function_name = ''
arguments = ''
if stream:
async for item in response:
if len(item.choices) > 0:
first_choice = item.choices[0]
if first_choice.delta and first_choice.delta.function_call:
if first_choice.delta.function_call.name:
function_name += first_choice.delta.function_call.name
if first_choice.delta.function_call.arguments:
arguments += first_choice.delta.function_call.arguments
elif first_choice.finish_reason and first_choice.finish_reason == 'function_call':
break
else:
return response, plugins_used
else:
return response, plugins_used
else:
if len(response.choices) > 0:
first_choice = response.choices[0]
if first_choice.message.function_call:
if first_choice.message.function_call.name:
function_name += first_choice.message.function_call.name
if first_choice.message.function_call.arguments:
arguments += first_choice.message.function_call.arguments
else:
return response, plugins_used
else:
return response, plugins_used

logging.info(f'Calling function {function_name} with arguments {arguments}')
function_response = await self.plugin_manager.call_function(function_name, self, arguments)

if function_name not in plugins_used:
plugins_used += (function_name,)

if is_direct_result(function_response):
self.__add_function_call_to_history(chat_id=chat_id, function_name=function_name,
content=json.dumps({'result': 'Done, the content has been sent '
'to the user.'}))
return function_response, plugins_used

self.__add_function_call_to_history(chat_id=chat_id, function_name=function_name, content=function_response)
response = await self.client.chat.completions.create(
model=self.config['model'],
messages=self.conversations[chat_id],
functions=self.plugin_manager.get_functions_specs(),
function_call='auto' if times < self.config['functions_max_consecutive_calls'] else 'none',
stream=stream
)
return await self.__handle_function_call(chat_id, response, stream, times + 1, plugins_used)

async def generate_image(self, prompt: str) -> tuple[str, str]:
"""
Generates an image from the given prompt using DALL·E model.
:param prompt: The prompt to send to the model
:return: The image URL and the image size
"""
bot_language = self.config['bot_language']
try:
response = await self.client.images.generate(
prompt=prompt,
n=1,
model=self.config['image_model'],
quality=self.config['image_quality'],
style=self.config['image_style'],
size=self.config['image_size']
)

if len(response.data) == 0:
logging.error(f'No response from GPT: {str(response)}')
raise Exception(
f"⚠️ _{localized_text('error', bot_language)}._ "
f"⚠️\n{localized_text('try_again', bot_language)}."
)

return response.data[0].url, self.config['image_size']
except Exception as e:
raise Exception(f"⚠️ _{localized_text('error', bot_language)}._ ⚠️\n{str(e)}") from e

async def generate_speech(self, text: str) -> tuple[any, int]:
"""
Generates an audio from the given text using TTS model.
:param text: The text to send to the model
:return: The audio in bytes and the text size
"""
bot_language = self.config['bot_language']
try:
response = await self.client.audio.speech.create(
model=self.config['tts_model'],
voice=self.config['tts_voice'],
input=text,
response_format='opus'
)

temp_file = io.BytesIO()
temp_file.write(response.read())
temp_file.seek(0)
return temp_file, len(text)
except Exception as e:
raise Exception(f"⚠️ _{localized_text('error', bot_language)}._ ⚠️\n{str(e)}") from e

async def transcribe(self, filename):
"""
Transcribes the audio file using the Whisper model.
"""
try:
with open(filename, "rb") as audio:
prompt_text = self.config['whisper_prompt']
result = await self.client.audio.transcriptions.create(model="whisper-1", file=audio, prompt=prompt_text)
return result.text
except Exception as e:
logging.exception(e)
raise Exception(f"⚠️ _{localized_text('error', self.config['bot_language'])}._ ⚠️\n{str(e)}") from e

@retry(
reraise=True,
retry=retry_if_exception_type(openai.RateLimitError),
wait=wait_fixed(20),
stop=stop_after_attempt(3)
)
async def __common_get_chat_response_vision(self, chat_id: int, content: list, stream=False):
"""
Request a response from the GPT model.
:param chat_id: The chat ID
:param content: The content to send to the model
:return: The answer from the model and the number of tokens used
"""
bot_language = self.config['bot_language']
try:
if chat_id not in self.conversations or self.__max_age_reached(chat_id):
self.reset_chat_history(chat_id)
self.last_updated[chat_id] = datetime.datetime.now()
if self.config['enable_vision_follow_up_questions']:
self.conversations_vision[chat_id] = True
self.__add_to_history(chat_id, role="user", content=content)
else:
for message in content:
if message['type'] == 'text':
query = message['text']
break
self.__add_to_history(chat_id, role="user", content=query)

# Summarize the chat history if it's too long to avoid excessive token usage
token_count = self.__count_tokens(self.conversations[chat_id])
exceeded_max_tokens = token_count + self.config['max_tokens'] > self.__max_model_tokens()
exceeded_max_history_size = len(self.conversations[chat_id]) > self.config['max_history_size']
if exceeded_max_tokens or exceeded_max_history_size:
logging.info(f'Chat history for chat ID {chat_id} is too long. Summarising...')
try:
last = self.conversations[chat_id][-1]
summary = await self.__summarise(self.conversations[chat_id][:-1])
logging.debug(f'Summary: {summary}')
self.reset_chat_history(chat_id, self.conversations[chat_id][0]['content'])
self.__add_to_history(chat_id, role="assistant", content=summary)
self.conversations[chat_id] += [last]
except Exception as e:
logging.warning(f'Error while summarising chat history: {str(e)}. Popping elements instead...')
self.conversations[chat_id] = self.conversations[chat_id][-self.config['max_history_size']:]

message = {'role': 'user', 'content': content}
common_args = {
'model': self.config['vision_model'],
'messages': self.conversations[chat_id][:-1] + [message],
'temperature': self.config['temperature'],
'n': 1, # multiple choices are not implemented yet
'max_tokens': self.config['vision_max_tokens'],
'presence_penalty': self.config['presence_penalty'],
'frequency_penalty': self.config['frequency_penalty'],
'stream': stream
}
# vision model does not yet support functions
# if self.config['enable_functions']:
# functions = self.plugin_manager.get_functions_specs()
# if len(functions) > 0:
# common_args['functions'] = self.plugin_manager.get_functions_specs()
# common_args['function_call'] = 'auto'
return await self.client.chat.completions.create(**common_args)
except openai.RateLimitError as e:
raise e
except openai.BadRequestError as e:
raise Exception(f"⚠️ _{localized_text('openai_invalid', bot_language)}._ ⚠️\n{str(e)}") from e
except Exception as e:
raise Exception(f"⚠️ _{localized_text('error', bot_language)}._ ⚠️\n{str(e)}") from e
async def interpret_image(self, chat_id, fileobj, prompt=None):
"""
Interprets a given PNG image file using the Vision model.
"""
image = encode_image(fileobj)
prompt = self.config['vision_prompt'] if prompt is None else prompt
content = [{'type': 'text', 'text': prompt},
{'type': 'image_url', 'image_url': {'url': image, 'detail': self.config['vision_detail']}}]
response = await self.__common_get_chat_response_vision(chat_id, content)
# functions are not available for this model
# if self.config['enable_functions']:
# response, plugins_used = await self.__handle_function_call(chat_id, response)
# if is_direct_result(response):
# return response, '0'

answer = ''
if len(response.choices) > 1 and self.config['n_choices'] > 1:
for index, choice in enumerate(response.choices):
content = choice.message.content.strip()
if index == 0:
self.__add_to_history(chat_id, role="assistant", content=content)
answer += f'{index + 1}\u20e3\n'
answer += content
answer += '\n\n'
else:
answer = response.choices[0].message.content.strip()
self.__add_to_history(chat_id, role="assistant", content=answer)

bot_language = self.config['bot_language']
# Plugins are not enabled either
# show_plugins_used = len(plugins_used) > 0 and self.config['show_plugins_used']
# plugin_names = tuple(self.plugin_manager.get_plugin_source_name(plugin) for plugin in plugins_used)
if self.config['show_usage']:
answer += "\n\n---\n" \
f"💰 {str(response.usage.total_tokens)} {localized_text('stats_tokens', bot_language)}" \
f" ({str(response.usage.prompt_tokens)} {localized_text('prompt', bot_language)}," \
f" {str(response.usage.completion_tokens)} {localized_text('completion', bot_language)})"
# if show_plugins_used:
# answer += f"\n🔌 {', '.join(plugin_names)}"
# elif show_plugins_used:
# answer += f"\n\n---\n🔌 {', '.join(plugin_names)}"

return answer, response.usage.total_tokens
async def interpret_image_stream(self, chat_id, fileobj, prompt=None):
"""
Interprets a given PNG image file using the Vision model.
"""
image = encode_image(fileobj)
prompt = self.config['vision_prompt'] if prompt is None else prompt
content = [{'type': 'text', 'text': prompt},
{'type': 'image_url', 'image_url': {'url': image, 'detail': self.config['vision_detail']}}]
response = await self.__common_get_chat_response_vision(chat_id, content, stream=True)
# if self.config['enable_functions']:
# response, plugins_used = await self.__handle_function_call(chat_id, response, stream=True)
# if is_direct_result(response):
# yield response, '0'
# return

answer = ''
async for chunk in response:
if len(chunk.choices) == 0:
continue
delta = chunk.choices[0].delta
if delta.content:
answer += delta.content
yield answer, 'not_finished'
answer = answer.strip()
self.__add_to_history(chat_id, role="assistant", content=answer)
tokens_used = str(self.__count_tokens(self.conversations[chat_id]))

# show_plugins_used = len(plugins_used) > 0 and self.config['show_plugins_used']
#plugin_names = tuple(self.plugin_manager.get_plugin_source_name(plugin) for plugin in plugins_used)
if self.config['show_usage']:
answer += f"\n\n---\n💰 {tokens_used} {localized_text('stats_tokens', self.config['bot_language'])}"
# if show_plugins_used:
# answer += f"\n🔌 {', '.join(plugin_names)}"
# elif show_plugins_used:
# answer += f"\n\n---\n🔌 {', '.join(plugin_names)}"

yield answer, tokens_used
def reset_chat_history(self, chat_id, content=''):
"""
Resets the conversation history.
"""
if content == '':
content = self.config['assistant_prompt']
self.conversations[chat_id] = [{"role": "assistant" if self.config['model'] in O_MODELS else "system", "content": content}]
self.conversations_vision[chat_id] = False

def __max_age_reached(self, chat_id) -> bool:
"""
Checks if the maximum conversation age has been reached.
:param chat_id: The chat ID
:return: A boolean indicating whether the maximum conversation age has been reached
"""
if chat_id not in self.last_updated:
return False
last_updated = self.last_updated[chat_id]
now = datetime.datetime.now()
max_age_minutes = self.config['max_conversation_age_minutes']
return last_updated < now - datetime.timedelta(minutes=max_age_minutes)

def __add_function_call_to_history(self, chat_id, function_name, content):
"""
Adds a function call to the conversation history
"""
self.conversations[chat_id].append({"role": "function", "name": function_name, "content": content})

def __add_to_history(self, chat_id, role, content):
"""
Adds a message to the conversation history.
:param chat_id: The chat ID
:param role: The role of the message sender
:param content: The message content
"""
self.conversations[chat_id].append({"role": role, "content": content})

async def __summarise(self, conversation) -> str:
"""
Summarises the conversation history.
:param conversation: The conversation history
:return: The summary
"""
messages = [
{"role": "assistant", "content": "Summarize this conversation in 700 characters or less"},
{"role": "user", "content": str(conversation)}
]
response = await self.client.chat.completions.create(
model=self.config['model'],
messages=messages,
temperature=1 if self.config['model'] in O_MODELS else 0.4
)
return response.choices[0].message.content

def __max_model_tokens(self):
base = 4096
if self.config['model'] in GPT_3_MODELS:
return base
if self.config['model'] in GPT_3_16K_MODELS:
return base * 4
if self.config['model'] in GPT_4_MODELS:
return base * 2
if self.config['model'] in GPT_4_32K_MODELS:
return base * 8
if self.config['model'] in GPT_4_VISION_MODELS:
return base * 31
if self.config['model'] in GPT_4_128K_MODELS:
return base * 31
if self.config['model'] in GPT_4O_MODELS:
return base * 31
elif self.config['model'] in O_MODELS:
# https://platform.openai.com/docs/models#o1
if self.config['model'] == "o1":
return 100_000
elif self.config['model'] == "o1-preview":
return 32_768
else:
return 65_536
raise NotImplementedError(
f"Max tokens for model {self.config['model']} is not implemented yet."
)

# https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
def __count_tokens(self, messages) -> int:
"""
Counts the number of tokens required to send the given messages.
:param messages: the messages to send
:return: the number of tokens required
"""
model = self.config['model']
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
encoding = tiktoken.get_encoding("o200k_base")
if model in GPT_ALL_MODELS:
tokens_per_message = 3
tokens_per_name = 1
else:
raise NotImplementedError(f"""num_tokens_from_messages() is not implemented for model {model}.""")
num_tokens = 0
for message in messages:
num_tokens += tokens_per_message
for key, value in message.items():
if key == 'content':
if isinstance(value, str):
num_tokens += len(encoding.encode(value))
else:
for message1 in value:
if message1['type'] == 'image_url':
image = decode_image(message1['image_url']['url'])
num_tokens += self.__count_tokens_vision(image)
else:
num_tokens += len(encoding.encode(message1['text']))
else:
num_tokens += len(encoding.encode(value))
if key == "name":
num_tokens += tokens_per_name
num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>
return num_tokens

# no longer needed
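# To illustrate the accounting in __count_tokens above with a hypothetical
# message (token counts are approximate and depend on the tiktoken encoding):
#
#   messages = [{"role": "user", "content": "Hello there"}]
#   message overhead ........ 3 tokens (tokens_per_message)
#   encode("user") .......... ~1 token (the role value is encoded too)
#   encode("Hello there") ... ~2 tokens
#   reply priming ........... 3 tokens (<|start|>assistant<|message|>)
#   total ................... ~9 tokens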
def __count_tokens_vision(self, image_bytes: bytes) -> int:
"""
Counts the number of tokens for interpreting an image.
:param image_bytes: image to interpret
:return: the number of tokens required
"""
image_file = io.BytesIO(image_bytes)
image = Image.open(image_file)
model = self.config['vision_model']
if model not in GPT_4_VISION_MODELS:
raise NotImplementedError(f"""count_tokens_vision() is not implemented for model {model}.""")
w, h = image.size
if w > h: w, h = h, w
# this computation follows https://platform.openai.com/docs/guides/vision and https://openai.com/pricing#gpt-4-turbo
base_tokens = 85
detail = self.config['vision_detail']
if detail == 'low':
return base_tokens
elif detail == 'high' or detail == 'auto': # assuming worst cost for auto
f = max(w / 768, h / 2048)
if f > 1:
w, h = int(w / f), int(h / f)
tw, th = (w + 511) // 512, (h + 511) // 512
tiles = tw * th
num_tokens = base_tokens + tiles * 170
return num_tokens
else:
raise NotImplementedError(f"""unknown parameter detail={detail} for model {model}.""")

# No longer works as of July 21st 2023, as OpenAI has removed the billing API
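# A worked sketch of the tile arithmetic in __count_tokens_vision above, for a
# hypothetical 1024x2048 image with detail='high' (per the pricing formula
# linked above):
#
#   f = max(1024 / 768, 2048 / 2048) = 1.33  ->  image scaled to 768x1536
#   tiles: (768 + 511) // 512 = 2 wide, (1536 + 511) // 512 = 3 high  ->  6 tiles
#   tokens: 85 + 6 * 170 = 1105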
# def get_billing_current_month(self):
# """Gets billed usage for current month from OpenAI API.
#
# :return: dollar amount of usage this month
# """
# headers = {
# "Authorization": f"Bearer {openai.api_key}"
# }
# # calculate first and last day of current month
# today = date.today()
# first_day = date(today.year, today.month, 1)
# _, last_day_of_month = monthrange(today.year, today.month)
# last_day = date(today.year, today.month, last_day_of_month)
# params = {
# "start_date": first_day,
# "end_date": last_day
# }
# response = requests.get("https://api.openai.com/dashboard/billing/usage", headers=headers, params=params)
# billing_data = json.loads(response.text)
# usage_month = billing_data["total_usage"] / 100 # convert cent amount to dollars
# return usage_month

↪️plugin_manager.py
import json
from plugins.gtts_text_to_speech import GTTSTextToSpeech
from plugins.auto_tts import AutoTextToSpeech
from plugins.dice import DicePlugin
from plugins.youtube_audio_extractor import YouTubeAudioExtractorPlugin
from plugins.ddg_image_search import DDGImageSearchPlugin
from plugins.spotify import SpotifyPlugin
from plugins.crypto import CryptoPlugin
from plugins.weather import WeatherPlugin
from plugins.ddg_web_search import DDGWebSearchPlugin
from plugins.wolfram_alpha import WolframAlphaPlugin
from plugins.deepl import DeeplTranslatePlugin
from plugins.worldtimeapi import WorldTimeApiPlugin
from plugins.whois_ import WhoisPlugin
from plugins.webshot import WebshotPlugin
from plugins.iplocation import IpLocationPlugin
class PluginManager:
"""
A class to manage the plugins and call the correct functions
"""

def __init__(self, config):
enabled_plugins = config.get('plugins', [])
plugin_mapping = {
'wolfram': WolframAlphaPlugin,
'weather': WeatherPlugin,
'crypto': CryptoPlugin,
'ddg_web_search': DDGWebSearchPlugin,
'ddg_image_search': DDGImageSearchPlugin,
'spotify': SpotifyPlugin,
'worldtimeapi': WorldTimeApiPlugin,
'youtube_audio_extractor': YouTubeAudioExtractorPlugin,
'dice': DicePlugin,
'deepl_translate': DeeplTranslatePlugin,
'gtts_text_to_speech': GTTSTextToSpeech,
'auto_tts': AutoTextToSpeech,
'whois': WhoisPlugin,
'webshot': WebshotPlugin,
'iplocation': IpLocationPlugin,
}
self.plugins = [plugin_mapping[plugin]() for plugin in enabled_plugins if plugin in plugin_mapping]

def get_functions_specs(self):
"""
Return the list of function specs that can be called by the model
"""
return [spec for specs in map(lambda plugin: plugin.get_spec(), self.plugins) for spec in specs]

async def call_function(self, function_name, helper, arguments):
"""
Call a function based on the name and parameters provided
"""
plugin = self.__get_plugin_by_function_name(function_name)
if not plugin:
return json.dumps({'error': f'Function {function_name} not found'})
return json.dumps(await plugin.execute(function_name, helper, **json.loads(arguments)), default=str)

def get_plugin_source_name(self, function_name) -> str:
"""
Return the source name of the plugin
"""
plugin = self.__get_plugin_by_function_name(function_name)
if not plugin:
return ''
return plugin.get_source_name()

def __get_plugin_by_function_name(self, function_name):
return next((plugin for plugin in self.plugins
if function_name in map(lambda spec: spec.get('name'), plugin.get_spec())), None)

↪️telegram_bot.py
from __future__ import annotations
import asyncio
import logging
import os
import io
from uuid import uuid4
from telegram import BotCommandScopeAllGroupChats, Update, constants
from telegram import InlineKeyboardMarkup, InlineKeyboardButton, InlineQueryResultArticle
from telegram import InputTextMessageContent, BotCommand
from telegram.error import RetryAfter, TimedOut, BadRequest
from telegram.ext import ApplicationBuilder, CommandHandler, MessageHandler, \
filters, InlineQueryHandler, CallbackQueryHandler, Application, ContextTypes, CallbackContext

from pydub import AudioSegment
from PIL import Image
from utils import is_group_chat, get_thread_id, message_text, wrap_with_indicator, split_into_chunks, \
edit_message_with_retry, get_stream_cutoff_values, is_allowed, get_remaining_budget, is_admin, is_within_budget, \
get_reply_to_message_id, add_chat_request_to_usage_tracker, error_handler, is_direct_result, handle_direct_result, \
cleanup_intermediate_files
from openai_helper import OpenAIHelper, localized_text
from usage_tracker import UsageTracker
class ChatGPTTelegramBot:
"""
Class representing a ChatGPT Telegram Bot.
"""

def __init__(self, config: dict, openai: OpenAIHelper):
"""
Initializes the bot with the given configuration and GPT bot object.
:param config: A dictionary containing the bot configuration
:param openai: OpenAIHelper object
"""
self.config = config
self.openai = openai
bot_language = self.config['bot_language']
self.commands = [
BotCommand(command='help', description=localized_text('help_description', bot_language)),
BotCommand(command='reset', description=localized_text('reset_description', bot_language)),
BotCommand(command='stats', description=localized_text('stats_description', bot_language)),
BotCommand(command='resend', description=localized_text('resend_description', bot_language))
]
# If imaging is enabled, add the "image" command to the list
if self.config.get('enable_image_generation', False):
self.commands.append(BotCommand(command='image', description=localized_text('image_description', bot_language)))
if self.config.get('enable_tts_generation', False):
self.commands.append(BotCommand(command='tts', description=localized_text('tts_description', bot_language)))

self.group_commands = [BotCommand(
command='chat', description=localized_text('chat_description', bot_language)
)] + self.commands
self.disallowed_message = localized_text('disallowed', bot_language)
self.budget_limit_message = localized_text('budget_limit', bot_language)
self.usage = {}
self.last_message = {}
self.inline_queries_cache = {}

async def help(self, update: Update, _: ContextTypes.DEFAULT_TYPE) -> None:
"""
Shows the help menu.
"""
commands = self.group_commands if is_group_chat(update) else self.commands
commands_description = [f'/{command.command} - {command.description}' for command in commands]
bot_language = self.config['bot_language']
help_text = (
localized_text('help_text', bot_language)[0] +
'\n\n' +
'\n'.join(commands_description) +
'\n\n' +
localized_text('help_text', bot_language)[1] +
'\n\n' +
localized_text('help_text', bot_language)[2]
)
await update.message.reply_text(help_text, disable_web_page_preview=True)

async def stats(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
"""
Returns token usage statistics for current day and month.
"""
if not await is_allowed(self.config, update, context):
logging.warning(f'User {update.message.from_user.name} (id: {update.message.from_user.id}) '
'is not allowed to request their usage statistics')
await self.send_disallowed_message(update, context)
return

logging.info(f'User {update.message.from_user.name} (id: {update.message.from_user.id}) '
'requested their usage statistics')

user_id = update.message.from_user.id
if user_id not in self.usage:
self.usage[user_id] = UsageTracker(user_id, update.message.from_user.name)

tokens_today, tokens_month = self.usage[user_id].get_current_token_usage()
images_today, images_month = self.usage[user_id].get_current_image_count()
(transcribe_minutes_today, transcribe_seconds_today, transcribe_minutes_month,
transcribe_seconds_month) = self.usage[user_id].get_current_transcription_duration()
vision_today, vision_month = self.usage[user_id].get_current_vision_tokens()
characters_today, characters_month = self.usage[user_id].get_current_tts_usage()
current_cost = self.usage[user_id].get_current_cost()

chat_id = update.effective_chat.id
chat_messages, chat_token_length = self.openai.get_conversation_stats(chat_id)
remaining_budget = get_remaining_budget(self.config, self.usage, update)
bot_language = self.config['bot_language']
text_current_conversation = (
f"*{localized_text('stats_conversation', bot_language)[0]}*:\n"
f"{chat_messages} {localized_text('stats_conversation', bot_language)[1]}\n"
f"{chat_token_length} {localized_text('stats_conversation', bot_language)[2]}\n"
"----------------------------\n"
)
# Check if image generation is enabled and, if so, generate the image statistics for today
text_today_images = ""
if self.config.get('enable_image_generation', False):
text_today_images = f"{images_today} {localized_text('stats_images', bot_language)}\n"

text_today_vision = ""
if self.config.get('enable_vision', False):
text_today_vision = f"{vision_today} {localized_text('stats_vision', bot_language)}\n"

text_today_tts = ""
if self.config.get('enable_tts_generation', False):
text_today_tts = f"{characters_today} {localized_text('stats_tts', bot_language)}\n"
text_today = (
f"*{localized_text('usage_today', bot_language)}:*\n"
f"{tokens_today} {localized_text('stats_tokens', bot_language)}\n"
f"{text_today_images}" # Include the image statistics for today if applicable
f"{text_today_vision}"
f"{text_today_tts}"
f"{transcribe_minutes_today} {localized_text('stats_transcribe', bot_language)[0]} "
f"{transcribe_seconds_today} {localized_text('stats_transcribe', bot_language)[1]}\n"
f"{localized_text('stats_total', bot_language)}{current_cost['cost_today']:.2f}\n"
"----------------------------\n"
)
text_month_images = ""
if self.config.get('enable_image_generation', False):
text_month_images = f"{images_month} {localized_text('stats_images', bot_language)}\n"

text_month_vision = ""
if self.config.get('enable_vision', False):
text_month_vision = f"{vision_month} {localized_text('stats_vision', bot_language)}\n"

text_month_tts = ""
if self.config.get('enable_tts_generation', False):
text_month_tts = f"{characters_month} {localized_text('stats_tts', bot_language)}\n"
# Check if image generation is enabled and, if so, generate the image statistics for the month
text_month = (
f"*{localized_text('usage_month', bot_language)}:*\n"
f"{tokens_month} {localized_text('stats_tokens', bot_language)}\n"
f"{text_month_images}" # Include the image statistics for the month if applicable
f"{text_month_vision}"
f"{text_month_tts}"
f"{transcribe_minutes_month} {localized_text('stats_transcribe', bot_language)[0]} "
f"{transcribe_seconds_month} {localized_text('stats_transcribe', bot_language)[1]}\n"
f"{localized_text('stats_total', bot_language)}{current_cost['cost_month']:.2f}"
)

# text_budget filled with conditional content
text_budget = "\n\n"
budget_period = self.config['budget_period']
if remaining_budget < float('inf'):
text_budget += (
f"{localized_text('stats_budget', bot_language)}"
f"{localized_text(budget_period, bot_language)}: "
f"${remaining_budget:.2f}.\n"
)
# No longer works as of July 21st 2023, as OpenAI has removed the billing API
# add OpenAI account information for admin request
# if is_admin(self.config, user_id):
# text_budget += (
# f"{localized_text('stats_openai', bot_language)}"
# f"{self.openai.get_billing_current_month():.2f}"
# )

usage_text = text_current_conversation + text_today + text_month + text_budget
await update.message.reply_text(usage_text, parse_mode=constants.ParseMode.MARKDOWN)

async def resend(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
"""
Resend the last request
"""
if not await is_allowed(self.config, update, context):
logging.warning(f'User {update.message.from_user.name} (id: {update.message.from_user.id})'
' is not allowed to resend the message')
await self.send_disallowed_message(update, context)
return

chat_id = update.effective_chat.id
if chat_id not in self.last_message:
logging.warning(f'User {update.message.from_user.name} (id: {update.message.from_user.id})'
' does not have anything to resend')
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
text=localized_text('resend_failed', self.config['bot_language'])
)
return

# Update message text, clear self.last_message and send the request to prompt
logging.info(f'Resending the last prompt from user: {update.message.from_user.name} '
f'(id: {update.message.from_user.id})')
with update.message._unfrozen() as message:
message.text = self.last_message.pop(chat_id)

await self.prompt(update=update, context=context)
async def reset(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
"""
Resets the conversation.
"""
if not await is_allowed(self.config, update, context):
logging.warning(f'User {update.message.from_user.name} (id: {update.message.from_user.id}) '
'is not allowed to reset the conversation')
await self.send_disallowed_message(update, context)
return

logging.info(f'Resetting the conversation for user {update.message.from_user.name} '
f'(id: {update.message.from_user.id})...')

chat_id = update.effective_chat.id
reset_content = message_text(update.message)
self.openai.reset_chat_history(chat_id=chat_id, content=reset_content)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
text=localized_text('reset_done', self.config['bot_language'])
)

async def image(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
"""
Generates an image for the given prompt using DALL·E APIs
"""
if not self.config['enable_image_generation'] \
or not await self.check_allowed_and_within_budget(update, context):
return

image_query = message_text(update.message)
if image_query == '':
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
text=localized_text('image_no_prompt', self.config['bot_language'])
)
return

logging.info(f'New image generation request received from user {update.message.from_user.name} '
f'(id: {update.message.from_user.id})')

async def _generate():
try:
image_url, image_size = await self.openai.generate_image(prompt=image_query)
if self.config['image_receive_mode'] == 'photo':
await update.effective_message.reply_photo(
reply_to_message_id=get_reply_to_message_id(self.config, update),
photo=image_url
)
elif self.config['image_receive_mode'] == 'document':
await update.effective_message.reply_document(
reply_to_message_id=get_reply_to_message_id(self.config, update),
document=image_url
)
else:
raise Exception(f"env variable IMAGE_RECEIVE_MODE has invalid value {self.config['image_receive_mode']}")
# add image request to users usage tracker
user_id = update.message.from_user.id
self.usage[user_id].add_image_request(image_size, self.config['image_prices'])
# add guest chat request to guest usage tracker
if str(user_id) not in self.config['allowed_user_ids'].split(',') and 'guests' in self.usage:
self.usage["guests"].add_image_request(image_size, self.config['image_prices'])

except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=f"{localized_text('image_fail', self.config['bot_language'])}: {str(e)}",
parse_mode=constants.ParseMode.MARKDOWN
)

await wrap_with_indicator(update, context, _generate, constants.ChatAction.UPLOAD_PHOTO)
async def tts(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
"""
Generates speech for the given input using TTS APIs
"""
if not self.config['enable_tts_generation'] \
or not await self.check_allowed_and_within_budget(update, context):
return

tts_query = message_text(update.message)
if tts_query == '':
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
text=localized_text('tts_no_prompt', self.config['bot_language'])
)
return

logging.info(f'New speech generation request received from user {update.message.from_user.name} '
f'(id: {update.message.from_user.id})')

async def _generate():
try:
speech_file, text_length = await self.openai.generate_speech(text=tts_query)

await update.effective_message.reply_voice(
reply_to_message_id=get_reply_to_message_id(self.config, update),
voice=speech_file
)
speech_file.close()
# add tts request to users usage tracker
user_id = update.message.from_user.id
self.usage[user_id].add_tts_request(text_length, self.config['tts_model'], self.config['tts_prices'])
# add guest chat request to guest usage tracker
if str(user_id) not in self.config['allowed_user_ids'].split(',') and 'guests' in self.usage:
self.usage["guests"].add_tts_request(text_length, self.config['tts_model'], self.config['tts_prices'])

except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=f"{localized_text('tts_fail', self.config['bot_language'])}: {str(e)}",
parse_mode=constants.ParseMode.MARKDOWN
)

await wrap_with_indicator(update, context, _generate, constants.ChatAction.UPLOAD_VOICE)
async def transcribe(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
"""
Transcribe audio messages.
"""
if not self.config['enable_transcription'] or not await self.check_allowed_and_within_budget(update, context):
return

if is_group_chat(update) and self.config['ignore_group_transcriptions']:
logging.info('Transcription coming from group chat, ignoring...')
return

chat_id = update.effective_chat.id
filename = update.message.effective_attachment.file_unique_id

async def _execute():
filename_mp3 = f'{filename}.mp3'
bot_language = self.config['bot_language']
try:
media_file = await context.bot.get_file(update.message.effective_attachment.file_id)
await media_file.download_to_drive(filename)
except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=(
f"{localized_text('media_download_fail', bot_language)[0]}: "
f"{str(e)}. {localized_text('media_download_fail', bot_language)[1]}"
),
parse_mode=constants.ParseMode.MARKDOWN
)
return

try:
audio_track = AudioSegment.from_file(filename)
audio_track.export(filename_mp3, format="mp3")
logging.info(f'New transcribe request received from user {update.message.from_user.name} '
f'(id: {update.message.from_user.id})')

except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=localized_text('media_type_fail', bot_language)
)
if os.path.exists(filename):
os.remove(filename)
return

user_id = update.message.from_user.id
if user_id not in self.usage:
self.usage[user_id] = UsageTracker(user_id, update.message.from_user.name)

try:
transcript = await self.openai.transcribe(filename_mp3)

transcription_price = self.config['transcription_price']
self.usage[user_id].add_transcription_seconds(audio_track.duration_seconds, transcription_price)

allowed_user_ids = self.config['allowed_user_ids'].split(',')
if str(user_id) not in allowed_user_ids and 'guests' in self.usage:
self.usage["guests"].add_transcription_seconds(audio_track.duration_seconds, transcription_price)

# check if transcript starts with any of the prefixes
response_to_transcription = any(transcript.lower().startswith(prefix.lower()) if prefix else False
for prefix in self.config['voice_reply_prompts'])

if self.config['voice_reply_transcript'] and not response_to_transcription:
# Split into chunks of 4096 characters (Telegram's message limit)
transcript_output = f"_{localized_text('transcript', bot_language)}:_\n\"{transcript}\""
chunks = split_into_chunks(transcript_output)

for index, transcript_chunk in enumerate(chunks):
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update) if index == 0 else None,
text=transcript_chunk,
parse_mode=constants.ParseMode.MARKDOWN
)
else:
# Get the response of the transcript
response, total_tokens = await self.openai.get_chat_response(chat_id=chat_id, query=transcript)

self.usage[user_id].add_chat_tokens(total_tokens, self.config['token_price'])
if str(user_id) not in allowed_user_ids and 'guests' in self.usage:
self.usage["guests"].add_chat_tokens(total_tokens, self.config['token_price'])

# Split into chunks of 4096 characters (Telegram's message limit)
transcript_output = (
f"_{localized_text('transcript', bot_language)}:_\n\"{transcript}\"\n\n"
f"_{localized_text('answer', bot_language)}:_\n{response}"
)
chunks = split_into_chunks(transcript_output)

for index, transcript_chunk in enumerate(chunks):
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update) if index == 0 else None,
text=transcript_chunk,
parse_mode=constants.ParseMode.MARKDOWN
)

except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=f"{localized_text('transcribe_fail', bot_language)}: {str(e)}",
parse_mode=constants.ParseMode.MARKDOWN
)
finally:
if os.path.exists(filename_mp3):
os.remove(filename_mp3)
if os.path.exists(filename):
os.remove(filename)

await wrap_with_indicator(update, context, _execute, constants.ChatAction.TYPING)
async def vision(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
"""
Interpret image using vision model.
"""
if not self.config['enable_vision'] or not await self.check_allowed_and_within_budget(update, context):
return

chat_id = update.effective_chat.id
prompt = update.message.caption

if is_group_chat(update):
if self.config['ignore_group_vision']:
logging.info('Vision coming from group chat, ignoring...')
return
else:
trigger_keyword = self.config['group_trigger_keyword']
if (prompt is None and trigger_keyword != '') or \
(prompt is not None and not prompt.lower().startswith(trigger_keyword.lower())):
logging.info('Vision coming from group chat with wrong keyword, ignoring...')
return
image = update.message.effective_attachment[-1]
async def _execute():
bot_language = self.config['bot_language']
try:
media_file = await context.bot.get_file(image.file_id)
temp_file = io.BytesIO(await media_file.download_as_bytearray())
except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=(
f"{localized_text('media_download_fail', bot_language)[0]}: "
f"{str(e)}. {localized_text('media_download_fail', bot_language)[1]}"
),
parse_mode=constants.ParseMode.MARKDOWN
)
return
            # convert jpg from telegram to png as understood by openai
            temp_file_png = io.BytesIO()
try:
original_image = Image.open(temp_file)
original_image.save(temp_file_png, format='PNG')
logging.info(f'New vision request received from user {update.message.from_user.name} '
                             f'(id: {update.message.from_user.id})')
            except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=localized_text('media_type_fail', bot_language)
)
user_id = update.message.from_user.id
if user_id not in self.usage:
                self.usage[user_id] = UsageTracker(user_id, update.message.from_user.name)

            if self.config['stream']:
stream_response = self.openai.interpret_image_stream(chat_id=chat_id, fileobj=temp_file_png, prompt=prompt)
i = 0
prev = ''
sent_message = None
backoff = 0
                stream_chunk = 0

                async for content, tokens in stream_response:
if is_direct_result(content):
                        return await handle_direct_result(self.config, update, content)

                    if len(content.strip()) == 0:
                        continue

                    stream_chunks = split_into_chunks(content)
if len(stream_chunks) > 1:
content = stream_chunks[-1]
if stream_chunk != len(stream_chunks) - 1:
stream_chunk += 1
try:
await edit_message_with_retry(context, chat_id, str(sent_message.message_id),
stream_chunks[-2])
except:
pass
try:
sent_message = await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
text=content if len(content) > 0 else "..."
)
except:
pass
                        continue

                    cutoff = get_stream_cutoff_values(update, content)
                    cutoff += backoff

                    if i == 0:
try:
if sent_message is not None:
await context.bot.delete_message(chat_id=sent_message.chat_id,
message_id=sent_message.message_id)
sent_message = await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=content,
)
except:
                            continue

                    elif abs(len(content) - len(prev)) > cutoff or tokens != 'not_finished':
                        prev = content

                        try:
use_markdown = tokens != 'not_finished'
await edit_message_with_retry(context, chat_id, str(sent_message.message_id),
                                                          text=content, markdown=use_markdown)

                        except RetryAfter as e:
backoff += 5
await asyncio.sleep(e.retry_after)
                            continue

                        except TimedOut:
backoff += 5
await asyncio.sleep(0.5)
                            continue

                        except Exception:
                            backoff += 5
                            continue

                    await asyncio.sleep(0.01)
i += 1
if tokens != 'not_finished':
total_tokens = int(tokens)
            else:
                try:
interpretation, total_tokens = await self.openai.interpret_image(chat_id, temp_file_png, prompt=prompt)
try:
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=interpretation,
parse_mode=constants.ParseMode.MARKDOWN
)
except BadRequest:
try:
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=interpretation
)
except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=f"{localized_text('vision_fail', bot_language)}: {str(e)}",
parse_mode=constants.ParseMode.MARKDOWN
)
except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=f"{localized_text('vision_fail', bot_language)}: {str(e)}",
parse_mode=constants.ParseMode.MARKDOWN
)
vision_token_price = self.config['vision_token_price']
            self.usage[user_id].add_vision_tokens(total_tokens, vision_token_price)

            allowed_user_ids = self.config['allowed_user_ids'].split(',')
if str(user_id) not in allowed_user_ids and 'guests' in self.usage:
                self.usage["guests"].add_vision_tokens(total_tokens, vision_token_price)

        await wrap_with_indicator(update, context, _execute, constants.ChatAction.TYPING)
async def prompt(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
"""
React to incoming messages and respond accordingly.
"""
if update.edited_message or not update.message or update.message.via_bot:
            return

        if not await self.check_allowed_and_within_budget(update, context):
            return

        logging.info(
f'New message received from user {update.message.from_user.name} (id: {update.message.from_user.id})')
chat_id = update.effective_chat.id
user_id = update.message.from_user.id
prompt = message_text(update.message)
        self.last_message[chat_id] = prompt

        if is_group_chat(update):
            trigger_keyword = self.config['group_trigger_keyword']

            if prompt.lower().startswith(trigger_keyword.lower()) or update.message.text.lower().startswith('/chat'):
if prompt.lower().startswith(trigger_keyword.lower()):
                    prompt = prompt[len(trigger_keyword):].strip()

                if update.message.reply_to_message and \
update.message.reply_to_message.text and \
update.message.reply_to_message.from_user.id != context.bot.id:
prompt = f'"{update.message.reply_to_message.text}" {prompt}'
else:
if update.message.reply_to_message and update.message.reply_to_message.from_user.id == context.bot.id:
logging.info('Message is a reply to the bot, allowing...')
else:
logging.warning('Message does not start with trigger keyword, ignoring...')
                    return

        try:
            total_tokens = 0

            if self.config['stream']:
await update.effective_message.reply_chat_action(
action=constants.ChatAction.TYPING,
message_thread_id=get_thread_id(update)
                )

                stream_response = self.openai.get_chat_response_stream(chat_id=chat_id, query=prompt)
i = 0
prev = ''
sent_message = None
backoff = 0
                stream_chunk = 0

                async for content, tokens in stream_response:
if is_direct_result(content):
                        return await handle_direct_result(self.config, update, content)

                    if len(content.strip()) == 0:
                        continue

                    stream_chunks = split_into_chunks(content)
if len(stream_chunks) > 1:
content = stream_chunks[-1]
if stream_chunk != len(stream_chunks) - 1:
stream_chunk += 1
try:
await edit_message_with_retry(context, chat_id, str(sent_message.message_id),
stream_chunks[-2])
except:
pass
try:
sent_message = await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
text=content if len(content) > 0 else "..."
)
except:
pass
                        continue

                    cutoff = get_stream_cutoff_values(update, content)
                    cutoff += backoff

                    if i == 0:
try:
if sent_message is not None:
await context.bot.delete_message(chat_id=sent_message.chat_id,
message_id=sent_message.message_id)
sent_message = await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=content,
)
except:
                            continue

                    elif abs(len(content) - len(prev)) > cutoff or tokens != 'not_finished':
                        prev = content

                        try:
use_markdown = tokens != 'not_finished'
await edit_message_with_retry(context, chat_id, str(sent_message.message_id),
                                                          text=content, markdown=use_markdown)

                        except RetryAfter as e:
backoff += 5
await asyncio.sleep(e.retry_after)
                            continue

                        except TimedOut:
backoff += 5
await asyncio.sleep(0.5)
                            continue

                        except Exception:
                            backoff += 5
                            continue

                    await asyncio.sleep(0.01)
i += 1
if tokens != 'not_finished':
                        total_tokens = int(tokens)

            else:
async def _reply():
nonlocal total_tokens
                    response, total_tokens = await self.openai.get_chat_response(chat_id=chat_id, query=prompt)

                    if is_direct_result(response):
                        return await handle_direct_result(self.config, update, response)

                    # Split into chunks of 4096 characters (Telegram's message limit)
                    chunks = split_into_chunks(response)

                    for index, chunk in enumerate(chunks):
try:
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config,
update) if index == 0 else None,
text=chunk,
parse_mode=constants.ParseMode.MARKDOWN
)
except Exception:
try:
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config,
update) if index == 0 else None,
text=chunk
)
except Exception as exception:
                                raise exception

                await wrap_with_indicator(update, context, _reply, constants.ChatAction.TYPING)
add_chat_request_to_usage_tracker(self.usage, self.config, user_id, total_tokens)
except Exception as e:
logging.exception(e)
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
reply_to_message_id=get_reply_to_message_id(self.config, update),
text=f"{localized_text('chat_fail', self.config['bot_language'])} {str(e)}",
parse_mode=constants.ParseMode.MARKDOWN
            )

    async def inline_query(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
"""
Handle the inline query. This is run when you type: @botusername <query>
"""
query = update.inline_query.query
if len(query) < 3:
return
if not await self.check_allowed_and_within_budget(update, context, is_inline=True):
            return

        callback_data_suffix = "gpt:"
result_id = str(uuid4())
self.inline_queries_cache[result_id] = query
        callback_data = f'{callback_data_suffix}{result_id}'

        await self.send_inline_query_result(update, result_id, message_content=query, callback_data=callback_data)
async def send_inline_query_result(self, update: Update, result_id, message_content, callback_data=""):
"""
Send inline query result
"""
try:
reply_markup = None
bot_language = self.config['bot_language']
if callback_data:
reply_markup = InlineKeyboardMarkup([[
InlineKeyboardButton(text=f'🤖 {localized_text("answer_with_chatgpt", bot_language)}',
callback_data=callback_data)
                ]])

            inline_query_result = InlineQueryResultArticle(
id=result_id,
title=localized_text("ask_chatgpt", bot_language),
input_message_content=InputTextMessageContent(message_content),
description=message_content,
thumbnail_url='https://user-images.githubusercontent.com/11541888/223106202-7576ff11-2c8e-408d-94ea-b02a7a32149a.png',
reply_markup=reply_markup
            )

            await update.inline_query.answer([inline_query_result], cache_time=0)
except Exception as e:
            logging.error(f'An error occurred while generating the result card for inline query {e}')

    async def handle_callback_inline_query(self, update: Update, context: CallbackContext):
"""
Handle the callback query from the inline query result
"""
callback_data = update.callback_query.data
user_id = update.callback_query.from_user.id
inline_message_id = update.callback_query.inline_message_id
name = update.callback_query.from_user.name
callback_data_suffix = "gpt:"
query = ""
bot_language = self.config['bot_language']
answer_tr = localized_text("answer", bot_language)
        loading_tr = localized_text("loading", bot_language)

        try:
if callback_data.startswith(callback_data_suffix):
unique_id = callback_data.split(':')[1]
                total_tokens = 0
                # Retrieve the prompt from the cache
query = self.inline_queries_cache.get(unique_id)
if query:
self.inline_queries_cache.pop(unique_id)
else:
error_message = (
f'{localized_text("error", bot_language)}. '
f'{localized_text("try_again", bot_language)}'
)
await edit_message_with_retry(context, chat_id=None, message_id=inline_message_id,
text=f'{query}\n\n_{answer_tr}:_\n{error_message}',
is_inline=True)
                    return

                unavailable_message = localized_text("function_unavailable_in_inline_mode", bot_language)
if self.config['stream']:
stream_response = self.openai.get_chat_response_stream(chat_id=user_id, query=query)
i = 0
prev = ''
backoff = 0
async for content, tokens in stream_response:
if is_direct_result(content):
cleanup_intermediate_files(content)
await edit_message_with_retry(context, chat_id=None,
message_id=inline_message_id,
text=f'{query}\n\n_{answer_tr}:_\n{unavailable_message}',
is_inline=True)
                            return

                        if len(content.strip()) == 0:
                            continue

                        cutoff = get_stream_cutoff_values(update, content)
                        cutoff += backoff

                        if i == 0:
try:
await edit_message_with_retry(context, chat_id=None,
message_id=inline_message_id,
text=f'{query}\n\n{answer_tr}:\n{content}',
is_inline=True)
except:
                                continue

                        elif abs(len(content) - len(prev)) > cutoff or tokens != 'not_finished':
prev = content
try:
use_markdown = tokens != 'not_finished'
divider = '_' if use_markdown else ''
                                text = f'{query}\n\n{divider}{answer_tr}:{divider}\n{content}'

                                # We only want to send the first 4096 characters. No chunking allowed in inline mode.
                                text = text[:4096]

                                await edit_message_with_retry(context, chat_id=None, message_id=inline_message_id,
                                                              text=text, markdown=use_markdown, is_inline=True)

                            except RetryAfter as e:
backoff += 5
await asyncio.sleep(e.retry_after)
continue
except TimedOut:
backoff += 5
await asyncio.sleep(0.5)
continue
except Exception:
backoff += 5
                                continue

                        await asyncio.sleep(0.01)
i += 1
if tokens != 'not_finished':
                            total_tokens = int(tokens)

                else:
async def _send_inline_query_response():
nonlocal total_tokens
# Edit the current message to indicate that the answer is being processed
await context.bot.edit_message_text(inline_message_id=inline_message_id,
text=f'{query}\n\n_{answer_tr}:_\n{loading_tr}',
                                                            parse_mode=constants.ParseMode.MARKDOWN)

                        logging.info(f'Generating response for inline query by {name}')
                        response, total_tokens = await self.openai.get_chat_response(chat_id=user_id, query=query)

                        if is_direct_result(response):
cleanup_intermediate_files(response)
await edit_message_with_retry(context, chat_id=None,
message_id=inline_message_id,
text=f'{query}\n\n_{answer_tr}:_\n{unavailable_message}',
is_inline=True)
                            return

                        text_content = f'{query}\n\n_{answer_tr}:_\n{response}'

                        # We only want to send the first 4096 characters. No chunking allowed in inline mode.
                        text_content = text_content[:4096]

                        # Edit the original message with the generated content
await edit_message_with_retry(context, chat_id=None, message_id=inline_message_id,
                                                      text=text_content, is_inline=True)

                    await wrap_with_indicator(update, context, _send_inline_query_response,
                                              constants.ChatAction.TYPING, is_inline=True)

                add_chat_request_to_usage_tracker(self.usage, self.config, user_id, total_tokens)
except Exception as e:
logging.error(f'Failed to respond to an inline query via button callback: {e}')
logging.exception(e)
localized_answer = localized_text('chat_fail', self.config['bot_language'])
await edit_message_with_retry(context, chat_id=None, message_id=inline_message_id,
text=f"{query}\n\n_{answer_tr}:_\n{localized_answer} {str(e)}",
                                          is_inline=True)

    async def check_allowed_and_within_budget(self, update: Update, context: ContextTypes.DEFAULT_TYPE,
is_inline=False) -> bool:
"""
Checks if the user is allowed to use the bot and if they are within their budget
:param update: Telegram update object
:param context: Telegram context object
:param is_inline: Boolean flag for inline queries
:return: Boolean indicating if the user is allowed to use the bot
"""
name = update.inline_query.from_user.name if is_inline else update.message.from_user.name
        user_id = update.inline_query.from_user.id if is_inline else update.message.from_user.id

        if not await is_allowed(self.config, update, context, is_inline=is_inline):
logging.warning(f'User {name} (id: {user_id}) is not allowed to use the bot')
await self.send_disallowed_message(update, context, is_inline)
return False
if not is_within_budget(self.config, self.usage, update, is_inline=is_inline):
logging.warning(f'User {name} (id: {user_id}) reached their usage limit')
await self.send_budget_reached_message(update, context, is_inline)
            return False

        return True
async def send_disallowed_message(self, update: Update, _: ContextTypes.DEFAULT_TYPE, is_inline=False):
"""
Sends the disallowed message to the user.
"""
if not is_inline:
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
text=self.disallowed_message,
disable_web_page_preview=True
)
else:
result_id = str(uuid4())
            await self.send_inline_query_result(update, result_id, message_content=self.disallowed_message)

    async def send_budget_reached_message(self, update: Update, _: ContextTypes.DEFAULT_TYPE, is_inline=False):
"""
Sends the budget reached message to the user.
"""
if not is_inline:
await update.effective_message.reply_text(
message_thread_id=get_thread_id(update),
text=self.budget_limit_message
)
else:
result_id = str(uuid4())
            await self.send_inline_query_result(update, result_id, message_content=self.budget_limit_message)

    async def post_init(self, application: Application) -> None:
"""
Post initialization hook for the bot.
"""
await application.bot.set_my_commands(self.group_commands, scope=BotCommandScopeAllGroupChats())
        await application.bot.set_my_commands(self.commands)

    def run(self):
"""
Runs the bot indefinitely until the user presses Ctrl+C
"""
application = ApplicationBuilder() \
.token(self.config['token']) \
.proxy_url(self.config['proxy']) \
.get_updates_proxy_url(self.config['proxy']) \
.post_init(self.post_init) \
.concurrent_updates(True) \
            .build()

        application.add_handler(CommandHandler('reset', self.reset))
application.add_handler(CommandHandler('help', self.help))
application.add_handler(CommandHandler('image', self.image))
application.add_handler(CommandHandler('tts', self.tts))
application.add_handler(CommandHandler('start', self.help))
application.add_handler(CommandHandler('stats', self.stats))
application.add_handler(CommandHandler('resend', self.resend))
application.add_handler(CommandHandler(
'chat', self.prompt, filters=filters.ChatType.GROUP | filters.ChatType.SUPERGROUP)
)
application.add_handler(MessageHandler(
filters.PHOTO | filters.Document.IMAGE,
self.vision))
application.add_handler(MessageHandler(
filters.AUDIO | filters.VOICE | filters.Document.AUDIO |
filters.VIDEO | filters.VIDEO_NOTE | filters.Document.VIDEO,
self.transcribe))
application.add_handler(MessageHandler(filters.TEXT & (~filters.COMMAND), self.prompt))
application.add_handler(InlineQueryHandler(self.inline_query, chat_types=[
constants.ChatType.GROUP, constants.ChatType.SUPERGROUP, constants.ChatType.PRIVATE
]))
        application.add_handler(CallbackQueryHandler(self.handle_callback_inline_query))

        application.add_error_handler(error_handler)
application.run_polling()
↪️usage_tracker.py
import os.path
import pathlib
import json
from datetime import date
def year_month(date_str):
# extract string of year-month from date, eg: '2023-03'
return str(date_str)[:7]
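A quick sketch of how `year_month` behaves, standalone and self-contained; it works on both `date` objects and their ISO-string form, since `str(date(...))` yields `'YYYY-MM-DD'`:

```python
from datetime import date

def year_month(date_str):
    # extract string of year-month from date, e.g. '2023-03'
    return str(date_str)[:7]

print(year_month(date(2023, 3, 14)))  # 2023-03
print(year_month("2023-03-14"))       # 2023-03
```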
class UsageTracker:
"""
UsageTracker class
Enables tracking of daily/monthly usage per user.
User files are stored as JSON in /usage_logs directory.
JSON example:
{
"user_name": "@user_name",
"current_cost": {
"day": 0.45,
"month": 3.23,
"all_time": 3.23,
"last_update": "2023-03-14"},
"usage_history": {
"chat_tokens": {
"2023-03-13": 520,
"2023-03-14": 1532
},
"transcription_seconds": {
"2023-03-13": 125,
"2023-03-14": 64
},
"number_images": {
"2023-03-12": [0, 2, 3],
"2023-03-13": [1, 2, 3],
"2023-03-14": [0, 1, 2]
}
}
}
    """

    def __init__(self, user_id, user_name, logs_dir="usage_logs"):
"""
Initializes UsageTracker for a user with current date.
Loads usage data from usage log file.
:param user_id: Telegram ID of the user
:param user_name: Telegram user name
:param logs_dir: path to directory of usage logs, defaults to "usage_logs"
"""
self.user_id = user_id
self.logs_dir = logs_dir
# path to usage file of given user
        self.user_file = f"{logs_dir}/{user_id}.json"

        if os.path.isfile(self.user_file):
with open(self.user_file, "r") as file:
self.usage = json.load(file)
if 'vision_tokens' not in self.usage['usage_history']:
self.usage['usage_history']['vision_tokens'] = {}
if 'tts_characters' not in self.usage['usage_history']:
self.usage['usage_history']['tts_characters'] = {}
else:
# ensure directory exists
pathlib.Path(logs_dir).mkdir(exist_ok=True)
# create new dictionary for this user
self.usage = {
"user_name": user_name,
"current_cost": {"day": 0.0, "month": 0.0, "all_time": 0.0, "last_update": str(date.today())},
"usage_history": {"chat_tokens": {}, "transcription_seconds": {}, "number_images": {}, "tts_characters": {}, "vision_tokens":{}}
            }

    # token usage functions:
def add_chat_tokens(self, tokens, tokens_price=0.002):
        """Adds used tokens from a request to a user's usage history and updates current cost
:param tokens: total tokens used in last request
:param tokens_price: price per 1000 tokens, defaults to 0.002
"""
today = date.today()
token_cost = round(float(tokens) * tokens_price / 1000, 6)
        self.add_current_costs(token_cost)

        # update usage_history
if str(today) in self.usage["usage_history"]["chat_tokens"]:
# add token usage to existing date
self.usage["usage_history"]["chat_tokens"][str(today)] += tokens
else:
# create new entry for current date
            self.usage["usage_history"]["chat_tokens"][str(today)] = tokens

        # write updated token usage to user file
with open(self.user_file, "w") as outfile:
            json.dump(self.usage, outfile)

    def get_current_token_usage(self):
        """Get token amounts used for today and this month

        :return: total number of tokens used per day and per month
"""
today = date.today()
if str(today) in self.usage["usage_history"]["chat_tokens"]:
usage_day = self.usage["usage_history"]["chat_tokens"][str(today)]
else:
usage_day = 0
month = str(today)[:7] # year-month as string
usage_month = 0
for today, tokens in self.usage["usage_history"]["chat_tokens"].items():
if today.startswith(month):
usage_month += tokens
        return usage_day, usage_month

    # image usage functions:
    def add_image_request(self, image_size, image_prices=[0.016, 0.018, 0.02]):
        """Add image request to a user's usage history and update current costs.

        :param image_size: requested image size
:param image_prices: prices for images of sizes ["256x256", "512x512", "1024x1024"],
defaults to [0.016, 0.018, 0.02]
"""
sizes = ["256x256", "512x512", "1024x1024"]
requested_size = sizes.index(image_size)
image_cost = image_prices[requested_size]
today = date.today()
        self.add_current_costs(image_cost)

        # update usage_history
if str(today) in self.usage["usage_history"]["number_images"]:
# add token usage to existing date
self.usage["usage_history"]["number_images"][str(today)][requested_size] += 1
else:
# create new entry for current date
self.usage["usage_history"]["number_images"][str(today)] = [0, 0, 0]
            self.usage["usage_history"]["number_images"][str(today)][requested_size] += 1

        # write updated image number to user file
with open(self.user_file, "w") as outfile:
            json.dump(self.usage, outfile)

    def get_current_image_count(self):
        """Get number of images requested for today and this month.

        :return: total number of images requested per day and per month
"""
today = date.today()
if str(today) in self.usage["usage_history"]["number_images"]:
usage_day = sum(self.usage["usage_history"]["number_images"][str(today)])
else:
usage_day = 0
month = str(today)[:7] # year-month as string
usage_month = 0
for today, images in self.usage["usage_history"]["number_images"].items():
if today.startswith(month):
usage_month += sum(images)
return usage_day, usage_month
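The monthly totals above work because the usage-history keys are ISO `'YYYY-MM-DD'` dates, so filtering on the `'YYYY-MM'` prefix selects the current month. A minimal sketch with hypothetical sample data:

```python
# usage-history keys are ISO date strings, so the month filter is a prefix match
# (the history values here are hypothetical per-size image counts)
history = {"2023-02-28": [1, 0, 0], "2023-03-13": [1, 2, 3], "2023-03-14": [0, 1, 2]}
month = "2023-03"
usage_month = sum(sum(images) for day, images in history.items() if day.startswith(month))
print(usage_month)  # 9
```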
# vision usage functions
def add_vision_tokens(self, tokens, vision_token_price=0.01):
"""
        Adds requested vision tokens to a user's usage history and updates current cost.
        :param tokens: total tokens used in last request
        :param vision_token_price: price per 1K vision tokens, defaults to 0.01
"""
today = date.today()
token_price = round(tokens * vision_token_price / 1000, 2)
        self.add_current_costs(token_price)

        # update usage_history
if str(today) in self.usage["usage_history"]["vision_tokens"]:
# add requested seconds to existing date
self.usage["usage_history"]["vision_tokens"][str(today)] += tokens
else:
# create new entry for current date
            self.usage["usage_history"]["vision_tokens"][str(today)] = tokens

        # write updated token usage to user file
with open(self.user_file, "w") as outfile:
            json.dump(self.usage, outfile)

    def get_current_vision_tokens(self):
        """Get vision tokens for today and this month.

        :return: total amount of vision tokens per day and per month
"""
today = date.today()
if str(today) in self.usage["usage_history"]["vision_tokens"]:
tokens_day = self.usage["usage_history"]["vision_tokens"][str(today)]
else:
tokens_day = 0
month = str(today)[:7] # year-month as string
tokens_month = 0
for today, tokens in self.usage["usage_history"]["vision_tokens"].items():
if today.startswith(month):
tokens_month += tokens
        return tokens_day, tokens_month

    # tts usage functions:
def add_tts_request(self, text_length, tts_model, tts_prices):
tts_models = ['tts-1', 'tts-1-hd']
price = tts_prices[tts_models.index(tts_model)]
today = date.today()
tts_price = round(text_length * price / 1000, 2)
        self.add_current_costs(tts_price)

        if 'tts_characters' not in self.usage['usage_history']:
self.usage['usage_history']['tts_characters'] = {}
if tts_model not in self.usage['usage_history']['tts_characters']:
            self.usage['usage_history']['tts_characters'][tts_model] = {}

        # update usage_history
if str(today) in self.usage["usage_history"]["tts_characters"][tts_model]:
# add requested text length to existing date
self.usage["usage_history"]["tts_characters"][tts_model][str(today)] += text_length
else:
# create new entry for current date
            self.usage["usage_history"]["tts_characters"][tts_model][str(today)] = text_length

        # write updated token usage to user file
with open(self.user_file, "w") as outfile:
            json.dump(self.usage, outfile)

    def get_current_tts_usage(self):
        """Get length of speech generated for today and this month.

        :return: total amount of characters converted to speech per day and per month
        """
        tts_models = ['tts-1', 'tts-1-hd']
today = date.today()
characters_day = 0
for tts_model in tts_models:
if tts_model in self.usage["usage_history"]["tts_characters"] and \
str(today) in self.usage["usage_history"]["tts_characters"][tts_model]:
                characters_day += self.usage["usage_history"]["tts_characters"][tts_model][str(today)]

        month = str(today)[:7]  # year-month as string
characters_month = 0
for tts_model in tts_models:
if tts_model in self.usage["usage_history"]["tts_characters"]:
for today, characters in self.usage["usage_history"]["tts_characters"][tts_model].items():
if today.startswith(month):
characters_month += characters
return int(characters_day), int(characters_month)
    # transcription usage functions:
    def add_transcription_seconds(self, seconds, minute_price=0.006):
        """Adds requested transcription seconds to a user's usage history and updates current cost.
:param seconds: total seconds used in last request
:param minute_price: price per minute transcription, defaults to 0.006
"""
today = date.today()
transcription_price = round(seconds * minute_price / 60, 2)
        self.add_current_costs(transcription_price)

        # update usage_history
if str(today) in self.usage["usage_history"]["transcription_seconds"]:
# add requested seconds to existing date
self.usage["usage_history"]["transcription_seconds"][str(today)] += seconds
else:
# create new entry for current date
            self.usage["usage_history"]["transcription_seconds"][str(today)] = seconds

        # write updated token usage to user file
with open(self.user_file, "w") as outfile:
            json.dump(self.usage, outfile)

    def add_current_costs(self, request_cost):
"""
Add current cost to all_time, day and month cost and update last_update date.
"""
today = date.today()
        last_update = date.fromisoformat(self.usage["current_cost"]["last_update"])

        # add to all_time cost, initialize with calculation of total_cost if key doesn't exist
self.usage["current_cost"]["all_time"] = \
self.usage["current_cost"].get("all_time", self.initialize_all_time_cost()) + request_cost
# add current cost, update new day
if today == last_update:
self.usage["current_cost"]["day"] += request_cost
self.usage["current_cost"]["month"] += request_cost
else:
if today.month == last_update.month:
self.usage["current_cost"]["month"] += request_cost
else:
self.usage["current_cost"]["month"] = request_cost
self.usage["current_cost"]["day"] = request_cost
            self.usage["current_cost"]["last_update"] = str(today)

    def get_current_transcription_duration(self):
        """Get minutes and seconds of audio transcribed for today and this month.

        :return: total amount of time transcribed per day and per month (4 values)
"""
today = date.today()
if str(today) in self.usage["usage_history"]["transcription_seconds"]:
seconds_day = self.usage["usage_history"]["transcription_seconds"][str(today)]
else:
seconds_day = 0
month = str(today)[:7] # year-month as string
seconds_month = 0
for today, seconds in self.usage["usage_history"]["transcription_seconds"].items():
if today.startswith(month):
seconds_month += seconds
minutes_day, seconds_day = divmod(seconds_day, 60)
minutes_month, seconds_month = divmod(seconds_month, 60)
        return int(minutes_day), round(seconds_day, 2), int(minutes_month), round(seconds_month, 2)

    # general functions
def get_current_cost(self):
        """Get total USD amount of all requests of the current day and month

        :return: cost of current day and month
"""
today = date.today()
last_update = date.fromisoformat(self.usage["current_cost"]["last_update"])
if today == last_update:
cost_day = self.usage["current_cost"]["day"]
cost_month = self.usage["current_cost"]["month"]
else:
cost_day = 0.0
if today.month == last_update.month:
cost_month = self.usage["current_cost"]["month"]
else:
cost_month = 0.0
# add to all_time cost, initialize with calculation of total_cost if key doesn't exist
cost_all_time = self.usage["current_cost"].get("all_time", self.initialize_all_time_cost())
        return {"cost_today": cost_day, "cost_month": cost_month, "cost_all_time": cost_all_time}

    def initialize_all_time_cost(self, tokens_price=0.002, image_prices="0.016,0.018,0.02",
                                 minute_price=0.006, vision_token_price=0.01, tts_prices='0.015,0.030'):
"""Get total USD amount of all requests in history
:param tokens_price: price per 1000 tokens, defaults to 0.002
:param image_prices: prices for images of sizes ["256x256", "512x512", "1024x1024"],
defaults to [0.016, 0.018, 0.02]
:param minute_price: price per minute transcription, defaults to 0.006
:param vision_token_price: price per 1K vision token interpretation, defaults to 0.01
:param tts_prices: price per 1K characters tts per model ['tts-1', 'tts-1-hd'], defaults to [0.015, 0.030]
:return: total cost of all requests
"""
total_tokens = sum(self.usage['usage_history']['chat_tokens'].values())
        token_cost = round(total_tokens * tokens_price / 1000, 6)

        total_images = [sum(values) for values in zip(*self.usage['usage_history']['number_images'].values())]
image_prices_list = [float(x) for x in image_prices.split(',')]
        image_cost = sum([count * price for count, price in zip(total_images, image_prices_list)])

        total_transcription_seconds = sum(self.usage['usage_history']['transcription_seconds'].values())
        transcription_cost = round(total_transcription_seconds * minute_price / 60, 2)

        total_vision_tokens = sum(self.usage['usage_history']['vision_tokens'].values())
        vision_cost = round(total_vision_tokens * vision_token_price / 1000, 2)

        total_characters = [sum(tts_model.values()) for tts_model in self.usage['usage_history']['tts_characters'].values()]
tts_prices_list = [float(x) for x in tts_prices.split(',')]
        tts_cost = round(sum([count * price / 1000 for count, price in zip(total_characters, tts_prices_list)]), 2)

        all_time_cost = token_cost + transcription_cost + image_cost + vision_cost + tts_cost
        return all_time_cost

↪️utils.py
from __future__ import annotations
import asyncio
import itertools
import json
import logging
import os
import base64

import telegram
from telegram import Message, MessageEntity, Update, ChatMember, constants
from telegram.ext import CallbackContext, ContextTypes
from usage_tracker import UsageTracker
def message_text(message: Message) -> str:
"""
Returns the text of a message, excluding any bot commands.
"""
message_txt = message.text
if message_txt is None:
        return ''

    for _, text in sorted(message.parse_entities([MessageEntity.BOT_COMMAND]).items(),
key=(lambda item: item[0].offset)):
        message_txt = message_txt.replace(text, '').strip()

    return message_txt if len(message_txt) > 0 else ''
async def is_user_in_group(update: Update, context: CallbackContext, user_id: int) -> bool:
"""
Checks if user_id is a member of the group
"""
try:
chat_member = await context.bot.get_chat_member(update.message.chat_id, user_id)
return chat_member.status in [ChatMember.OWNER, ChatMember.ADMINISTRATOR, ChatMember.MEMBER]
except telegram.error.BadRequest as e:
if str(e) == "User not found":
return False
else:
raise e
except Exception as e:
raise e
def get_thread_id(update: Update) -> int | None:
"""
Gets the message thread id for the update, if any
"""
if update.effective_message and update.effective_message.is_topic_message:
return update.effective_message.message_thread_id
return None
def get_stream_cutoff_values(update: Update, content: str) -> int:
"""
Gets the stream cutoff values for the message length
"""
if is_group_chat(update):
# group chats have stricter flood limits
return 180 if len(content) > 1000 else 120 if len(content) > 200 \
else 90 if len(content) > 50 else 50
return 90 if len(content) > 1000 else 45 if len(content) > 200 \
else 25 if len(content) > 50 else 15
def is_group_chat(update: Update) -> bool:
"""
Checks if the message was sent from a group chat
"""
if not update.effective_chat:
return False
return update.effective_chat.type in [
constants.ChatType.GROUP,
constants.ChatType.SUPERGROUP
]
def split_into_chunks(text: str, chunk_size: int = 4096) -> list[str]:
"""
Splits a string into chunks of a given size.
"""
return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
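For reference, a minimal standalone sketch of how `split_into_chunks` slices a long reply against Telegram's 4096-character message limit (the helper is reproduced here so the snippet runs on its own):

```python
def split_into_chunks(text: str, chunk_size: int = 4096) -> list[str]:
    # slice the text into consecutive windows of at most chunk_size characters
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# a 10000-character reply becomes three messages: 4096 + 4096 + 1808
chunks = split_into_chunks("a" * 10000)
print([len(c) for c in chunks])  # → [4096, 4096, 1808]
```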
async def wrap_with_indicator(update: Update, context: CallbackContext, coroutine,
chat_action: constants.ChatAction = "", is_inline=False):
"""
Wraps a coroutine while repeatedly sending a chat action to the user.
"""
task = context.application.create_task(coroutine(), update=update)
while not task.done():
if not is_inline:
context.application.create_task(
update.effective_chat.send_action(chat_action, message_thread_id=get_thread_id(update))
)
try:
await asyncio.wait_for(asyncio.shield(task), 4.5)
except asyncio.TimeoutError:
pass
async def edit_message_with_retry(context: ContextTypes.DEFAULT_TYPE, chat_id: int | None,
message_id: str, text: str, markdown: bool = True, is_inline: bool = False):
"""
Edit a message with retry logic in case of failure (e.g. broken markdown)
:param context: The context to use
:param chat_id: The chat id to edit the message in
:param message_id: The message id to edit
:param text: The text to edit the message with
:param markdown: Whether to use markdown parse mode
:param is_inline: Whether the message to edit is an inline message
:return: None
"""
try:
await context.bot.edit_message_text(
chat_id=chat_id,
message_id=int(message_id) if not is_inline else None,
inline_message_id=message_id if is_inline else None,
text=text,
parse_mode=constants.ParseMode.MARKDOWN if markdown else None,
)
except telegram.error.BadRequest as e:
if str(e).startswith("Message is not modified"):
return
try:
await context.bot.edit_message_text(
chat_id=chat_id,
message_id=int(message_id) if not is_inline else None,
inline_message_id=message_id if is_inline else None,
text=text,
)
except Exception as e:
logging.warning(f'Failed to edit message: {str(e)}')
raise e
except Exception as e:
logging.warning(str(e))
raise e
async def error_handler(_: object, context: ContextTypes.DEFAULT_TYPE) -> None:
"""
Handles errors in the telegram-python-bot library.
"""
logging.error(f'Exception while handling an update: {context.error}')
async def is_allowed(config, update: Update, context: CallbackContext, is_inline=False) -> bool:
"""
Checks if the user is allowed to use the bot.
"""
if config['allowed_user_ids'] == '*':
return True
user_id = update.inline_query.from_user.id if is_inline else update.message.from_user.id
if is_admin(config, user_id):
return True
name = update.inline_query.from_user.name if is_inline else update.message.from_user.name
allowed_user_ids = config['allowed_user_ids'].split(',')
# Check if user is allowed
if str(user_id) in allowed_user_ids:
return True
# Check if it's a group chat with at least one authorized member
if not is_inline and is_group_chat(update):
admin_user_ids = config['admin_user_ids'].split(',')
for user in itertools.chain(allowed_user_ids, admin_user_ids):
if not user.strip():
continue
if await is_user_in_group(update, context, user):
logging.info(f'{user} is a member. Allowing group chat message...')
return True
logging.info(f'Group chat messages from user {name} '
f'(id: {user_id}) are not allowed')
return False
def is_admin(config, user_id: int, log_no_admin=False) -> bool:
"""
Checks if the user is the admin of the bot.
The first user in the user list is the admin.
"""
if config['admin_user_ids'] == '-':
if log_no_admin:
logging.info('No admin user defined.')
return False
admin_user_ids = config['admin_user_ids'].split(',')
# Check if user is in the admin user list
if str(user_id) in admin_user_ids:
return True
return False
def get_user_budget(config, user_id) -> float | None:
"""
Get the user's budget based on their user ID and the bot configuration.
:param config: The bot configuration object
:param user_id: User id
:return: The user's budget as a float, or None if the user is not found in the allowed user list
"""
# no budget restrictions for admins and '*'-budget lists
if is_admin(config, user_id) or config['user_budgets'] == '*':
return float('inf')
user_budgets = config['user_budgets'].split(',')
if config['allowed_user_ids'] == '*':
# same budget for all users, use value in first position of budget list
if len(user_budgets) > 1:
logging.warning('multiple values for budgets set with unrestricted user list, '
'only the first value is used as budget for everyone.')
return float(user_budgets[0])
allowed_user_ids = config['allowed_user_ids'].split(',')
if str(user_id) in allowed_user_ids:
user_index = allowed_user_ids.index(str(user_id))
if len(user_budgets) <= user_index:
logging.warning(f'No budget set for user id: {user_id}. Budget list shorter than user list.')
return 0.0
return float(user_budgets[user_index])
return None
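The budget lookup above can be exercised in isolation; this sketch reproduces `is_admin` and `get_user_budget` so it runs standalone, with a hypothetical config dict (the user ids and budget values are made up for illustration):

```python
def is_admin(config, user_id: int) -> bool:
    # the admin list is a comma-separated string; '-' means no admin is defined
    if config['admin_user_ids'] == '-':
        return False
    return str(user_id) in config['admin_user_ids'].split(',')

def get_user_budget(config, user_id):
    # admins and '*'-budget lists are unrestricted
    if is_admin(config, user_id) or config['user_budgets'] == '*':
        return float('inf')
    user_budgets = config['user_budgets'].split(',')
    if config['allowed_user_ids'] == '*':
        # same budget for everyone: the first value of the budget list applies
        return float(user_budgets[0])
    allowed_user_ids = config['allowed_user_ids'].split(',')
    if str(user_id) in allowed_user_ids:
        user_index = allowed_user_ids.index(str(user_id))
        if len(user_budgets) <= user_index:
            # budget list shorter than user list: this user gets nothing
            return 0.0
        return float(user_budgets[user_index])
    return None  # unknown user; treated as a guest elsewhere

# hypothetical configuration: one admin, two allowed users with individual budgets
config = {'admin_user_ids': '111', 'allowed_user_ids': '222,333', 'user_budgets': '10.0,25.0'}
print(get_user_budget(config, 333))  # → 25.0
print(get_user_budget(config, 111))  # → inf
print(get_user_budget(config, 999))  # → None
```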
def get_remaining_budget(config, usage, update: Update, is_inline=False) -> float:
"""
Calculate the remaining budget for a user based on their current usage.
:param config: The bot configuration object
:param usage: The usage tracker object
:param update: Telegram update object
:param is_inline: Boolean flag for inline queries
:return: The remaining budget for the user as a float
"""
# Mapping of budget period to cost period
budget_cost_map = {
"monthly": "cost_month",
"daily": "cost_today",
"all-time": "cost_all_time"
}
user_id = update.inline_query.from_user.id if is_inline else update.message.from_user.id
name = update.inline_query.from_user.name if is_inline else update.message.from_user.name
if user_id not in usage:
usage[user_id] = UsageTracker(user_id, name)
# Get budget for users
user_budget = get_user_budget(config, user_id)
budget_period = config['budget_period']
if user_budget is not None:
cost = usage[user_id].get_current_cost()[budget_cost_map[budget_period]]
return user_budget - cost
# Get budget for guests
if 'guests' not in usage:
usage['guests'] = UsageTracker('guests', 'all guest users in group chats')
cost = usage['guests'].get_current_cost()[budget_cost_map[budget_period]]
return config['guest_budget'] - cost
def is_within_budget(config, usage, update: Update, is_inline=False) -> bool:
"""
Checks if the user reached their usage limit.
Initializes UsageTracker for user and guest when needed.
:param config: The bot configuration object
:param usage: The usage tracker object
:param update: Telegram update object
:param is_inline: Boolean flag for inline queries
:return: Boolean indicating if the user has a positive budget
"""
user_id = update.inline_query.from_user.id if is_inline else update.message.from_user.id
name = update.inline_query.from_user.name if is_inline else update.message.from_user.name
if user_id not in usage:
usage[user_id] = UsageTracker(user_id, name)
remaining_budget = get_remaining_budget(config, usage, update, is_inline=is_inline)
return remaining_budget > 0
def add_chat_request_to_usage_tracker(usage, config, user_id, used_tokens):
"""
Add chat request to usage tracker
:param usage: The usage tracker object
:param config: The bot configuration object
:param user_id: The user id
:param used_tokens: The number of tokens used
"""
try:
if int(used_tokens) == 0:
logging.warning('No tokens used. Not adding chat request to usage tracker.')
return
# add chat request to users usage tracker
usage[user_id].add_chat_tokens(used_tokens, config['token_price'])
# add guest chat request to guest usage tracker
allowed_user_ids = config['allowed_user_ids'].split(',')
if str(user_id) not in allowed_user_ids and 'guests' in usage:
usage["guests"].add_chat_tokens(used_tokens, config['token_price'])
except Exception as e:
logging.warning(f'Failed to add tokens to usage_logs: {str(e)}')
pass
def get_reply_to_message_id(config, update: Update):
"""
Returns the message id of the message to reply to
:param config: Bot configuration object
:param update: Telegram update object
:return: Message id of the message to reply to, or None if quoting is disabled
"""
if config['enable_quoting'] or is_group_chat(update):
return update.message.message_id
return None
def is_direct_result(response: any) -> bool:
"""
Checks if the dict contains a direct result that can be sent directly to the user
:param response: The response value
:return: Boolean indicating if the result is a direct result
"""
if type(response) is not dict:
try:
json_response = json.loads(response)
return json_response.get('direct_result', False)
except:
return False
else:
return response.get('direct_result', False)
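A self-contained sketch of the payload shape `is_direct_result` checks for. The JSON string below is a hypothetical plugin response with the `kind`/`format`/`value` fields that `handle_direct_result` expects; the original's bare `except` is narrowed here to the exceptions `json.loads(...).get(...)` can actually raise, and the result is coerced to bool for clarity:

```python
import json

def is_direct_result(response) -> bool:
    # non-dict responses may be JSON strings produced by a plugin
    if not isinstance(response, dict):
        try:
            return bool(json.loads(response).get('direct_result', False))
        except (TypeError, ValueError, AttributeError):
            return False
    return bool(response.get('direct_result', False))

# hypothetical plugin payload pointing at an image URL
payload = json.dumps({'direct_result': {'kind': 'photo', 'format': 'url',
                                        'value': 'https://example.com/cat.jpg'}})
print(is_direct_result(payload))       # → True
print(is_direct_result('plain text'))  # → False
```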
async def handle_direct_result(config, update: Update, response: any):
"""
Handles a direct result from a plugin
"""
if type(response) is not dict:
response = json.loads(response)
result = response['direct_result']
kind = result['kind']
format = result['format']
value = result['value']
common_args = {
'message_thread_id': get_thread_id(update),
'reply_to_message_id': get_reply_to_message_id(config, update),
}
if kind == 'photo':
if format == 'url':
await update.effective_message.reply_photo(**common_args, photo=value)
elif format == 'path':
await update.effective_message.reply_photo(**common_args, photo=open(value, 'rb'))
elif kind == 'gif' or kind == 'file':
if format == 'url':
await update.effective_message.reply_document(**common_args, document=value)
if format == 'path':
await update.effective_message.reply_document(**common_args, document=open(value, 'rb'))
elif kind == 'dice':
await update.effective_message.reply_dice(**common_args, emoji=value)
if format == 'path':
cleanup_intermediate_files(response)
def cleanup_intermediate_files(response: any):
"""
Deletes intermediate files created by plugins
"""
if type(response) is not dict:
response = json.loads(response)
result = response['direct_result']
format = result['format']
value = result['value']
if format == 'path':
if os.path.exists(value):
os.remove(value)
# Function to encode the image
def encode_image(fileobj):
image = base64.b64encode(fileobj.getvalue()).decode('utf-8')
return f'data:image/jpeg;base64,{image}'
def decode_image(imgbase64):
image = imgbase64[len('data:image/jpeg;base64,'):]
return base64.b64decode(image)
↪️.dockerignore
.env
.dockerignore
.github
.gitignore
docker-compose.yml
Dockerfile
↪️.env.example
# Your OpenAI API key
OPENAI_API_KEY=XXX

# Your Telegram bot token obtained using @BotFather
TELEGRAM_BOT_TOKEN=XXX

# Telegram user ID of admins, or - to assign no admin
ADMIN_USER_IDS=ADMIN_1_USER_ID,ADMIN_2_USER_ID

# Comma separated list of telegram user IDs, or * to allow all
ALLOWED_TELEGRAM_USER_IDS=USER_ID_1,USER_ID_2

# Optional configuration, refer to the README for more details
# BUDGET_PERIOD=monthly
# USER_BUDGETS=*
# GUEST_BUDGET=100.0
# TOKEN_PRICE=0.002
# IMAGE_PRICES=0.016,0.018,0.02
# TRANSCRIPTION_PRICE=0.006
# VISION_TOKEN_PRICE=0.01
# ENABLE_QUOTING=true
# ENABLE_IMAGE_GENERATION=true
# ENABLE_TTS_GENERATION=true
# ENABLE_TRANSCRIPTION=true
# ENABLE_VISION=true
# PROXY=http://localhost:8080
# OPENAI_MODEL=gpt-4o
# OPENAI_BASE_URL=https://example.com/v1/
# ASSISTANT_PROMPT="You are a helpful assistant."
# SHOW_USAGE=false
# STREAM=true
# MAX_TOKENS=1200
# VISION_MAX_TOKENS=300
# MAX_HISTORY_SIZE=15
# MAX_CONVERSATION_AGE_MINUTES=180
# VOICE_REPLY_WITH_TRANSCRIPT_ONLY=true
# VOICE_REPLY_PROMPTS="Hi bot;Hey bot;Hi chat;Hey chat"
# VISION_PROMPT="What is in this image"
# N_CHOICES=1
# TEMPERATURE=1.0
# PRESENCE_PENALTY=0.0
# FREQUENCY_PENALTY=0.0
# IMAGE_MODEL=dall-e-3
# IMAGE_QUALITY=hd
# IMAGE_STYLE=natural
# IMAGE_SIZE=1024x1024
# IMAGE_FORMAT=document
# VISION_DETAIL="low"
# GROUP_TRIGGER_KEYWORD=""
# IGNORE_GROUP_TRANSCRIPTIONS=true
# IGNORE_GROUP_VISION=true
# TTS_MODEL="tts-1"
# TTS_VOICE="alloy"
# TTS_PRICES=0.015,0.030
# BOT_LANGUAGE=en
# ENABLE_VISION_FOLLOW_UP_QUESTIONS="true"
# VISION_MODEL="gpt-4o"
↪️.gitignore
__pycache__
/.idea
.env
.DS_Store
/usage_logs
venv
/.cache
↪️Dockerfile
FROM python:3.9-alpine
ENV PYTHONFAULTHANDLER=1 \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_DISABLE_PIP_VERSION_CHECK=on

RUN apk --no-cache add ffmpeg

WORKDIR /app
COPY . .
RUN pip install -r requirements.txt --no-cache-dir
CMD ["python", "bot/main.py"]
↪️requirements.txt
python-dotenv~=1.0.0
pydub~=0.25.1
tiktoken==0.7.0
openai==1.58.1
python-telegram-bot==21.9
requests~=2.32.3
tenacity==8.3.0
wolframalpha~=5.1.3
duckduckgo_search==7.1.1
spotipy~=2.24.0
pytube~=15.0.0
gtts~=2.5.4
whois~=0.9.27
Pillow~=11.0.0
↪️translations.json
{
"en": {
"help_description":"Show help message",
"reset_description":"Reset the conversation. Optionally pass high-level instructions (e.g. /reset You are a helpful assistant)",
"image_description":"Generate image from prompt (e.g. /image cat)",
"tts_description":"Generate speech from text (e.g. /tts my house)",
"stats_description":"Get your current usage statistics",
"resend_description":"Resend the latest message",
"chat_description":"Chat with the bot!",
"disallowed":"Sorry, you are not allowed to use this bot. You can check out the source code at https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Sorry, you have reached your usage limit.",
"help_text":["I'm a ChatGPT bot, talk to me!", "Send me a voice message or file and I'll transcribe it for you", "Open source at https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Current conversation", "chat messages in history", "chat tokens in history"],
"usage_today":"Usage today",
"usage_month":"Usage this month",
"stats_tokens":"tokens",
"stats_images":"images generated",
"stats_vision":"image tokens interpreted",
"stats_tts":"characters converted to speech",
"stats_transcribe":["minutes and", "seconds transcribed"],
"stats_total":"💰 For a total amount of $",
"stats_budget":"Your remaining budget",
"monthly":" for this month",
"daily":" for today",
"all-time":"",
"stats_openai":"This month your OpenAI account was billed $",
"resend_failed":"You have nothing to resend",
"reset_done":"Done!",
"image_no_prompt":"Please provide a prompt! (e.g. /image cat)",
"image_fail":"Failed to generate image",
"vision_fail":"Failed to interpret image",
"tts_no_prompt":"Please provide text! (e.g. /tts my house)",
"tts_fail":"Failed to generate speech",
"media_download_fail":["Failed to download audio file", "Make sure the file is not too large. (max 20MB)"],
"media_type_fail":"Unsupported file type",
"transcript":"Transcript",
"answer":"Answer",
"transcribe_fail":"Failed to transcribe text",
"chat_fail":"Failed to get response",
"prompt":"prompt",
"completion":"completion",
"openai_rate_limit":"OpenAI Rate Limit exceeded",
"openai_invalid":"OpenAI Invalid request",
"error":"An error has occurred",
"try_again":"Please try again in a while",
"answer_with_chatgpt":"Answer with ChatGPT",
"ask_chatgpt":"Ask ChatGPT",
"loading":"Loading...",
"function_unavailable_in_inline_mode": "This function is unavailable in inline mode"
},
"ar": {
"help_description":"عرض رسالة المساعدة",
"reset_description":"إعادة تعيين المحادثة. يمكنك منح بعض السياق (مثلًا /reset أنت مساعد مفيد)",
"image_description":"إنشاء صورة من المطالبة (مثلًا /image مسجد)",
"tts_description":"تحويل النص إلى كلام (مثلًا /tts السلام عليكم)",
"stats_description":"الحصول على إحصائيات الاستخدام الحالية",
"resend_description":"إعادة إرسال آخر رسالة",
"chat_description":"الدردشة مع البوت!",
"disallowed":"عذرًا، غير مسموح لك باستخدام هذا البوت. يمكنك التحقق من شفرة المصدر عبر https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"عفوًا، لقد وصلت إلى حد الاستخدام.",
"help_text":["أنا بوت ChatGPT، يمكنك التحدث إلي!", "إرسال رسالة صوتية أو ملف لتحويلهما إلى نص", "مفتوح المصدر على https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["المحادثة الحالية", "رسائل الدردشة بالسجل", "رموز الدردشة بالسجل"],
"usage_today":"استخدام هذا اليوم",
"usage_month":"استخدام هذا الشهر",
"stats_tokens":"الرموز",
"stats_images":"الصور المنشئة",
"stats_vision":"تم تفسير رموز الصورة",
"stats_tts":"الأحرف المحولة إلى كلام",
"stats_transcribe":["من الدقائق و", "من الثواني تم تحويلهم إلى نص"],
"stats_total":"💰 الإجمالي $",
"stats_budget":"ميزانيتك المتبقية",
"monthly":" لهذا الشهر",
"daily":" لليوم",
"all-time":"",
"stats_openai":"هذا الشهر، تم إصدار فاتورة لحساب OpenAI الخاص بك بمبلغ $",
"resend_failed":"لا شيء لديك لإعادة إرساله",
"reset_done":"تم!",
"image_no_prompt":"الرجاء تقديم مطالبة! (مثلًا /image مسجد)",
"image_fail":"فشل إنشاء الصورة",
"vision_fail":"فشل في تفسير الصورة",
"tts_no_prompt":"الرجاء تقديم نص! (مثلًا /tts السلام عليكم)",
"tts_fail":"فشل تحويل النص إلى كلام",
"media_download_fail":["فشل تنزيل الملف الصوتي", "التحقق من أن حجم ملفك ليس كبيرًا. (بحد أقصى 20 م.ب.)"],
"media_type_fail":"صيغة الملف غير مدعومة",
"transcript":"النص",
"answer":"الإجابة",
"transcribe_fail":"فشل تحويل الكلام إلى نص",
"chat_fail":"فشل الحصول على إجابة",
"prompt":"المطالبة",
"completion":"الاكتمال",
"openai_rate_limit":"تم تجاوز حد OpenAI",
"openai_invalid":"طلب OpenAI غير صالح",
"error":"حدث خطأ ما",
"try_again":"الرجاء المحاولة مجددًا لاحقًا",
"answer_with_chatgpt":"الإجابة بواسطة ChatGPT",
"ask_chatgpt":"سؤال ChatGPT",
"loading":"قيد التحميل...",
"function_unavailable_in_inline_mode": "هذه الوظيفة غير متوفرة في الوضع المضمن"
},
"de": {
"help_description":"Zeige die Hilfenachricht",
"reset_description":"Setze die Konversation zurück. Optionale Eingabe einer grundlegenden Anweisung (z.B. /reset Du bist ein hilfreicher Assistent)",
"image_description":"Erzeuge ein Bild aus einer Aufforderung (z.B. /image Katze)",
"tts_description":"Erzeuge Sprache aus Text (z.B. /tts mein Haus)",
"stats_description":"Zeige aktuelle Benutzungstatistiken",
"resend_description":"Wiederhole das Senden der letzten Nachricht",
"chat_description":"Schreibe mit dem Bot!",
"disallowed":"Sorry, du darfst diesen Bot nicht verwenden. Den Quellcode findest du hier https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Sorry, du hast dein Benutzungslimit erreicht",
"help_text":["Ich bin ein ChatGPT Bot, rede mit mir!", "Sende eine Sprachnachricht oder Datei und ich fertige dir eine Abschrift an", "Quelloffener Code unter https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Aktuelle Konversation", "Nachrichten im Verlauf", "Chat Tokens im Verlauf"],
"usage_today":"Nutzung heute",
"usage_month":"Nutzung diesen Monat",
"stats_tokens":"Chat Tokens verwendet",
"stats_images":"Bilder generiert",
"stats_vision":"Bilder-Token interpretiert",
"stats_tts":"Zeichen in Sprache umgewandelt",
"stats_transcribe":["Minuten und", "Sekunden abgeschrieben"],
"stats_total":"💰 Für einem Gesamtbetrag von $",
"stats_budget":"Dein verbliebenes Budget",
"monthly":" für diesen Monat",
"daily":" für heute",
"all-time":"",
"stats_openai":"Deine OpenAI Rechnung für den aktuellen Monat beträgt $",
"resend_failed":"Es gibt keine Nachricht zum wiederholten Senden",
"reset_done":"Fertig!",
"image_no_prompt":"Bitte füge eine Aufforderung hinzu (z.B. /image Katze)",
"image_fail":"Fehler beim Generieren eines Bildes",
"vision_fail":"Bildinterpretation fehlgeschlagen",
"tts_no_prompt":"Bitte füge Text hinzu! (z.B. /tts mein Haus)",
"tts_fail":"Fehler beim Generieren von Sprache",
"media_download_fail":["Fehler beim Herunterladen der Audiodatei", "Die Datei könnte zu groß sein. (max 20MB)"],
"media_type_fail":"Dateityp nicht unterstützt",
"transcript":"Abschrift",
"answer":"Antwort",
"transcribe_fail":"Fehler bei der Abschrift",
"chat_fail":"Fehler beim Erhalt der Antwort",
"prompt":"Anfrage",
"completion":"Antwort",
"openai_rate_limit":"OpenAI Nutzungslimit überschritten",
"openai_invalid":"OpenAI ungültige Anfrage",
"error":"Ein Fehler ist aufgetreten",
"try_again":"Bitte versuche es später erneut",
"answer_with_chatgpt":"Antworte mit ChatGPT",
"ask_chatgpt":"Frage ChatGPT",
"loading":"Lade...",
"function_unavailable_in_inline_mode": "Diese Funktion ist im Inline-Modus nicht verfügbar"
},
"es": {
"help_description":"Muestra el mensaje de ayuda",
"reset_description":"Reinicia la conversación. Opcionalmente, pasa instrucciones de alto nivel (por ejemplo, /reset Eres un asistente útil)",
"image_description":"Genera una imagen a partir de una sugerencia (por ejemplo, /image gato)",
"tts_description":"Genera voz a partir de texto (por ejemplo, /tts mi casa)",
"stats_description":"Obtén tus estadísticas de uso actuales",
"resend_description":"Reenvía el último mensaje",
"chat_description":"¡Chatea con el bot!",
"disallowed":"Lo siento, no tienes permiso para usar este bot. Puedes revisar el código fuente en https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Lo siento, has alcanzado tu límite de uso.",
"help_text":["Soy un bot de ChatGPT, ¡háblame!", "Envíame un mensaje de voz o un archivo y lo transcribiré para ti", "Código abierto en https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Conversación actual", "mensajes de chat en el historial", "tokens de chat en el historial"],
"usage_today":"Uso hoy",
"usage_month":"Uso este mes",
"stats_tokens":"tokens de chat usados",
"stats_images":"imágenes generadas",
"stats_vision":"Tokens de imagen interpretados",
"stats_tts":"caracteres convertidos a voz",
"stats_transcribe":["minutos y", "segundos transcritos"],
"stats_total":"💰 Por un monto total de $",
"stats_budget":"Tu presupuesto restante",
"monthly":" para este mes",
"daily":" para hoy",
"all-time":"",
"stats_openai":"Este mes se facturó $ a tu cuenta de OpenAI",
"resend_failed":"No tienes nada que reenviar",
"reset_done":"¡Listo!",
"image_no_prompt":"¡Por favor proporciona una sugerencia! (por ejemplo, /image gato)",
"image_fail":"No se pudo generar la imagen",
"vision_fail":"Error al interpretar la imagen",
"tts_no_prompt":"¡Por favor proporciona texto! (por ejemplo, /tts mi casa)",
"tts_fail":"No se pudo generar la voz",
"media_download_fail":["No se pudo descargar el archivo de audio", "Asegúrate de que el archivo no sea demasiado grande. (máx. 20MB)"],
"media_type_fail":"Tipo de archivo no compatible",
"transcript":"Transcripción",
"answer":"Respuesta",
"transcribe_fail":"No se pudo transcribir el texto",
"chat_fail":"No se pudo obtener la respuesta",
"prompt":"sugerencia",
"completion":"completado",
"openai_rate_limit":"Límite de tasa de OpenAI excedido",
"openai_invalid":"Solicitud inválida de OpenAI",
"error":"Ha ocurrido un error",
"try_again":"Por favor, inténtalo de nuevo más tarde",
"answer_with_chatgpt":"Responder con ChatGPT",
"ask_chatgpt":"Preguntar a ChatGPT",
"loading":"Cargando...",
"function_unavailable_in_inline_mode": "Esta función no está disponible en el modo inline"
},
"fa": {
"help_description":"نمایش پیغام راهنما",
"reset_description":"مکالمه را تنظیم مجدد کنید. به صورت اختیاری میتوانید دستورالعمل های سطح بالا را ارسال کنید (به عنوان مثال /reset تو یک دستیار مفید هستی)",
"image_description":"ایجاد تصویر بر اساس فرمان (به عنوان مثال /image گربه)",
"tts_description":"تبدیل متن به صدا (به عنوان مثال /tts خانه من)",
"stats_description":"آمار استفاده فعلی خود را دریافت کنید",
"resend_description":"آخرین پیام را دوباره ارسال کنید",
"chat_description":"چت با ربات!",
"disallowed":"با عرض پوزش، شما مجاز به استفاده از این ربات نیستید. میتوانید کد منبع را در https://github.com/n3d1117/chatgpt-telegram-bot بررسی کنید",
"budget_limit":"با عرض پوزش، شما به حد مجاز استفاده خود رسیدهاید.",
"help_text":["من یک ربات ChatGPT هستم، با من صحبت کنید!", "برای من پیام صوتی یا فایل بفرستید تا آن را برای شما رونویسی کنم", "منبع باز در https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["مکالمه فعلی", "پیام چت در تاریخچه", "توکن چت در تاریخچه"],
"usage_today":"استفاده امروز",
"usage_month":"استفاده این ماه",
"stats_tokens":"توکن چت استفاده شده است",
"stats_images":"تصویر تولید شده است",
"stats_vision":"توکنهای تصویر تفسیر شدند",
"stats_tts":"کاراکترهای تبدیل شده به صدا",
"stats_transcribe":["دقیقه و", "ثانیه رونویسی شده است"],
"stats_total":"💰 مقدار کل مصرف: $",
"stats_budget":"بودجه باقیمانده شما",
"monthly":" برای این ماه",
"daily":" برای امروز",
"all-time":"",
"stats_openai":"صورت‌حساب این ماه حساب OpenAI شما: $",
"resend_failed":"شما چیزی برای ارسال مجدد ندارید",
"reset_done":"انجام شد!",
"image_no_prompt":"لطفا یک فرمان ارائه دهید! (به عنوان مثال /image گربه)",
"image_fail":"در تولید تصویر خطایی رخ داد",
"vision_fail":"تفسیر تصویر ناموفق بود",
"tts_no_prompt":"لطفا متنی را وارد کنید! (به عنوان مثال /tts خانه من)",
"tts_fail":"در تولید صدا خطایی رخ داد",
"media_download_fail":["فایل صوتی دانلود نشد", "دقت کنید که فایل خیلی بزرگ نباشد. (حداکثر 20 مگابایت)"],
"media_type_fail":"نوع فایل پشتیبانی نمیشود",
"transcript":"رونوشت",
"answer":"پاسخ",
"transcribe_fail":"در رونویسی متن خطایی رخ داد",
"chat_fail":"در دریافت پاسخ خطایی رخ داد",
"prompt":"فرمان",
"completion":"تکمیل",
"openai_rate_limit":"بیشتر از حد مجاز درخواست به OpenAI استفاده شده است",
"openai_invalid":"درخواست نامعتبر OpenAI",
"error":"خطایی رخ داده است",
"try_again":"لطفا بعد از مدتی دوباره امتحان کنید",
"answer_with_chatgpt":"با ChatGPT پاسخ دهید",
"ask_chatgpt":"از ChatGPT بپرسید",
"loading":"در حال بارگذاری...",
"function_unavailable_in_inline_mode": "این عملکرد در حالت آنلاین در دسترس نیست"
},
"fi": {
"help_description":"Näytä ohjeet",
"reset_description":"Nollaa keskustelu. Voit myös antaa korkean tason ohjeita (esim. /reset Olet avulias avustaja)",
"image_description":"Luo kuva tekstistä (esim. /image kissa)",
"tts_description":"Muuta teksti puheeksi (esim. /tts taloni)",
"stats_description":"Hae tämän hetken käyttötilastot",
"resend_description":"Lähetä viimeisin viesti uudestaan",
"chat_description":"Keskustele botin kanssa!",
"disallowed":"Pahoittelut, mutta sinulla ei ole oikeuksia käyttää tätä bottia. Voit lukea sen lähdekoodin osoitteessa https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Pahoittelut, mutta olet ylittänyt käyttörajasi.",
"help_text":["Olen ChatGPT-botti, keskustele kanssani!", "Lähetä minulle ääniviesti tai -tiedosto, ja litteroin sen sinulle", "Avoin lähdekoodi osoittessa https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Nykyinen keskustelu", "viestiä muistissa", "viestipolettia muistissa"],
"usage_today":"Käyttö tänään",
"usage_month":"Käyttö tässä kuussa",
"stats_tokens":"viestipolettia käytetty",
"stats_images":"kuvaa luotu",
"stats_vision":"Kuvatulkittujen tokenien määrä",
"stats_tts":"merkkiä muutettu puheeksi",
"stats_transcribe":["minuuttia ja", "sekuntia litteroitu"],
"stats_total":"💰 Yhteensä $",
"stats_budget":"Jäljellä oleva budjettisi",
"monthly":" tälle kuulle",
"daily":" tälle päivälle",
"all-time":"",
"stats_openai":"Tässä kuussa OpenAI-tiliäsi on laskutettu $",
"resend_failed":"Ei uudelleenlähetettävää",
"reset_done":"Valmis!",
"image_no_prompt":"Ole hyvä ja anna ohjeet! (esim. /image kissa)",
"image_fail":"Kuvan luonti epäonnistui",
"vision_fail":"Kuvan tulkinta epäonnistui",
"tts_no_prompt":"Ole hyvä ja anna teksti! (esim. /tts taloni)",
"tts_fail":"Puheen luonti epäonnistui",
"media_download_fail":["Äänitiedoston lataus epäonnistui", "Varmista että se ei ole liian iso. (enintään 20MB)"],
"media_type_fail":"Tiedostomuoto ei tuettu",
"transcript":"Litteroitu teksti",
"answer":"Vastaa",
"transcribe_fail":"Litterointi epäonnistui",
"chat_fail":"Vastaus epäonnistui",
"prompt":"ohjeistus",
"completion":"viimeistely",
"openai_rate_limit":"OpenAI-nopeusraja ylitetty",
"openai_invalid":"OpenAI-pyyntö virheellinen",
"error":"Virhe",
"try_again":"Yritä myöhemmin uudelleen",
"answer_with_chatgpt":"Vastaa ChatGPT:n avulla",
"ask_chatgpt":"Kysy ChatGPT:ltä",
"loading":"Lataa...",
"function_unavailable_in_inline_mode": "Tämä toiminto ei ole käytettävissä sisäisessä tilassa"
},
"he": {
"help_description": "הצג הודעת עזרה",
"reset_description": "אתחל את השיחה. ניתן להעביר הוראות ברמה גבוהה (למשל, /reset אתה עוזר מועיל)",
"image_description": "צור תמונה מהפרומפט (למשל, /image חתול)",
"tts_description": "צור דיבור מטקסט (למשל, /tts הבית שלי)",
"stats_description": "קבל את סטטיסטיקות השימוש הנוכחיות שלך",
"resend_description": "שלח מחדש את ההודעה האחרונה",
"chat_description": "שוחח עם הבוט!",
"disallowed": "מצטערים, אינך מורשה להשתמש בבוט זה. תוכל לבדוק את הקוד המקור ב https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit": "מצטערים, הגעת למגבלת השימוש שלך.",
"help_text": ["אני בוט של ChatGPT, דבר איתי!", "שלח לי הודעה קולית או קובץ ואני אתרגם אותו בשבילך", "קוד פתוח ב https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation": ["שיחה נוכחית", "הודעות צ'אט בהיסטוריה", "אסימוני צ'אט בהיסטוריה"],
"usage_today": "שימוש היום",
"usage_month": "שימוש החודש",
"stats_tokens": "אסימונים",
"stats_images": "תמונות שנוצרו",
"stats_vision": "אסימוני תמונה שפורשו",
"stats_tts": "תווים שהומרו לדיבור",
"stats_transcribe": ["דקות ו", "שניות שהוקלטו"],
"stats_total": "💰 לסך כל של $",
"stats_budget": "התקציב הנותר שלך",
"monthly": " לחודש זה",
"daily": " להיום",
"all-time": "",
"stats_openai": "החשבון שלך ב-OpenAI חויב החודש ב-$",
"resend_failed": "אין לך מה לשלוח מחדש",
"reset_done": "בוצע!",
"image_no_prompt": "נא לספק פרומפט! (למשל, /image חתול)",
"image_fail": "נכשל ביצירת התמונה",
"vision_fail": "נכשל בפרשנות התמונה",
"tts_no_prompt": "נא לספק טקסט! (למשל, /tts הבית שלי)",
"tts_fail": "נכשל ביצירת הדיבור",
"media_download_fail": ["נכשל בהורדת קובץ השמע", "ודא שהקובץ אינו גדול מדי. (מקסימום 20MB)"],
"media_type_fail": "סוג קובץ לא נתמך",
"transcript": "תמליל",
"answer": "תשובה",
"transcribe_fail": "נכשל בתרגום הטקסט",
"chat_fail": "נכשל בקבלת תגובה",
"prompt": "פרומפט",
"completion": "השלמה",
"openai_rate_limit": "חריגה מהגבלת השימוש של OpenAI",
"openai_invalid": "בקשה לא חוקית של OpenAI",
"error": "אירעה שגיאה",
"try_again": "נא לנסות שוב מאוחר יותר",
"answer_with_chatgpt": "ענה באמצעות ChatGPT",
"ask_chatgpt": "שאל את ChatGPT",
"loading": "טוען...",
"function_unavailable_in_inline_mode": "הפונקציה לא זמינה במצב inline"
},
"id": {
"help_description": "Menampilkan pesan bantuan",
"reset_description": "Merestart percakapan. Opsional memasukkan instruksi tingkat tinggi (misalnya /reset Anda adalah asisten yang membantu)",
"image_description": "Menghasilkan gambar dari input prompt (misalnya /image kucing)",
"tts_description": "Menghasilkan suara dari teks (misalnya /tts rumah saya)",
"stats_description": "Mendapatkan statistik penggunaan saat ini",
"resend_description": "Mengirim kembali pesan terakhir",
"chat_description": "Berkonversasi dengan bot!",
"disallowed": "Maaf, Anda tidak diizinkan menggunakan bot ini. Anda dapat melihat kode sumber di https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit": "Maaf, Anda telah mencapai batas penggunaan Anda.",
"help_text": ["Saya adalah bot ChatGPT, obrolan dengan saya!", "Kirimkan pesan suara atau file dan saya akan mentranskripsinya untuk Anda", "Sumber terbuka di https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation": ["Percakapan saat ini", "pesan obrolan dalam riwayat", "token obrolan dalam riwayat"],
"usage_today": "Penggunaan hari ini",
"usage_month": "Penggunaan bulan ini",
"stats_tokens": "token obrolan yang digunakan",
"stats_images": "gambar yang dihasilkan",
"stats_vision": "Token gambar diinterpretasi",
"stats_tts": "karakter dikonversi ke suara",
"stats_transcribe": ["menit dan", "detik ditranskripsi"],
"stats_total": "💰 Untuk total sebesar $",
"stats_budget": "Sisa anggaran Anda",
"monthly": " untuk bulan ini",
"daily": " untuk hari ini",
"all-time": "",
"stats_openai": "Bulan ini akun OpenAI Anda dikenakan biaya sebesar $",
"resend_failed": "Anda tidak memiliki pesan untuk dikirim ulang",
"reset_done": "Selesai!",
"image_no_prompt": "Harap berikan prompt! (misalnya /image kucing)",
"image_fail": "Gagal menghasilkan gambar",
"vision_fail": "Gagal menginterpretasi gambar",
"tts_no_prompt": "Harap berikan teks! (misalnya /tts rumah saya)",
"tts_fail": "Gagal menghasilkan suara",
"media_download_fail": ["Gagal mengunduh file audio", "Pastikan file tidak terlalu besar. (maksimal 20MB)"],
"media_type_fail": "Tipe file tidak didukung",
"transcript": "Transkrip",
"answer": "Jawaban",
"transcribe_fail": "Gagal mentranskripsi teks",
"chat_fail": "Gagal mendapatkan respons",
"prompt": "input prompt",
"completion": "input selesai",
"openai_rate_limit": "Batas Rate OpenAI terlampaui",
"openai_invalid": "Permintaan OpenAI tidak valid",
"error": "Terjadi kesalahan",
"try_again": "Silakan coba lagi nanti",
"answer_with_chatgpt": "Jawaban dengan ChatGPT",
"ask_chatgpt": "Tanya ChatGPT",
"loading": "Sedang memuat...",
"function_unavailable_in_inline_mode": "Fungsi ini tidak tersedia dalam mode inline"
},
"it": {
"help_description":"Mostra il messaggio di aiuto",
"reset_description":"Resetta la conversazione. Puoi anche specificare ulteriori istruzioni (ad esempio, /reset Sei un assistente utile).",
"image_description":"Genera immagine da un testo (ad es. /image gatto)",
"tts_description":"Genera audio da un testo (ad es. /tts la mia casa)",
"stats_description":"Mostra le statistiche di utilizzo",
"resend_description":"Reinvia l'ultimo messaggio",
"chat_description":"Chatta con il bot!",
"disallowed":"Spiacente, non sei autorizzato ad usare questo bot. Se vuoi vedere il codice sorgente, vai su https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Spiacente, hai raggiunto il limite di utilizzo",
"help_text":["Sono un bot di ChatGPT, chiedimi qualcosa!", "Invia un messaggio vocale o un file e lo trascriverò per te.", "Open source su https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Conversazione attuale", "messaggi totali", "token totali"],
"usage_today":"Utilizzo oggi",
"usage_month":"Utilizzo questo mese",
"stats_tokens":"token utilizzati",
"stats_images":"immagini generate",
"stats_vision":"Token immagine interpretati",
"stats_tts":"caratteri convertiti in audio",
"stats_transcribe":["minuti", "secondi trascritti"],
"stats_total":"💰 Per un totale di $",
"stats_budget":"Budget rimanente",
"monthly":" per questo mese",
"daily":" per oggi",
"all-time":"",
"stats_openai":"Spesa OpenAI per questo mese: $",
"resend_failed":"Non c'è nulla da reinviare",
"reset_done":"Fatto!",
"image_no_prompt":"Inserisci un testo (ad es. /image gatto)",
"image_fail":"Impossibile generare l'immagine",
"vision_fail":"Interpretazione dell'immagine fallita",
"tts_no_prompt":"Inserisci un testo (ad es. /tts la mia casa)",
"tts_fail":"Impossibile generare l'audio",
"media_download_fail":["Impossibile processare il file audio", "Assicurati che il file non sia troppo pesante (massimo 20MB)"],
"media_type_fail":"Tipo di file non supportato",
"transcript":"Trascrizione",
"answer":"Risposta",
"transcribe_fail":"Impossibile trascrivere l'audio",
"chat_fail":"Impossibile ottenere una risposta",
"prompt":"testo",
"completion":"completamento",
"openai_rate_limit":"Limite massimo di richieste OpenAI raggiunto",
"openai_invalid":"Richiesta OpenAI non valida",
"error":"Si è verificato un errore",
"try_again":"Riprova più tardi",
"answer_with_chatgpt":"Rispondi con ChatGPT",
"ask_chatgpt":"Chiedi a ChatGPT",
"loading":"Carico...",
"function_unavailable_in_inline_mode": "Questa funzione non è disponibile in modalità inline"
},
"ms": {
"help_description":"Lihat Mesej Bantuan",
"reset_description":"Tetapkan semula perbualan. Secara pilihan, hantar arahan peringkat tinggi (cth. /reset Anda adalah pembantu yang membantu)",
"image_description":"Jana imej daripada gesaan (cth. /image cat)",
"tts_description":"Tukar teks kepada suara (cth. /tts rumah saya)",
"stats_description":"Dapatkan statistik penggunaan semasa anda",
"resend_description":"Hantar semula mesej terkini",
"chat_description":"Sembang dengan bot!",
"disallowed":"Maaf, anda tidak dibenarkan menggunakan bot ini. Anda boleh menyemak kod sumber di https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Maaf, anda telah mencapai had penggunaan anda.",
"help_text":["Saya bot ChatGPT, bercakap dengan saya!", "Hantar mesej suara atau fail kepada saya dan saya akan menyalinnya untuk anda", "Sumber terbuka di https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Perbualan semasa", "mesej chat dalam sejarah", "token chat dalam sejarah"],
"usage_today":"Penggunaan Hari ini",
"usage_month":"Penggunaan bulan ini",
"stats_tokens":"token chat digunakan",
"stats_images":"imej dijana",
"stats_vision":"Token imej diinterpretasikan",
"stats_tts":"Aksara yang ditukar kepada suara",
"stats_transcribe":["Minit dan", "saat ditranskripsi"],
"stats_total":"Jumlah semua 💰 dalam $",
"stats_budget":"Baki yang tersisa",
"monthly":" Untuk bulan ini",
"daily":" Untuk hari ini",
"all-time":"",
"stats_openai":"Bulan ini akaun OpenAI anda telah dibilkan $",
"resend_failed":"Anda tiada apa-apa untuk dihantar semula",
"reset_done":"Selesai!",
"image_no_prompt":"Sila berikan gesaan! (cth. /image cat)",
"image_fail":"Gagal menjana imej",
"vision_fail":"Gagal menginterpretasikan imej",
"tts_no_prompt":"Sila berikan teks! (cth. /tts rumah saya)",
"tts_fail":"Gagal menjana suara",
"media_download_fail":["Gagal memuat turun fail audio", "Pastikan fail tidak terlalu besar. (maks 20MB)"],
"media_type_fail":"Jenis fail tidak disokong",
"transcript":"Transkrip",
"answer":"Jawapan",
"transcribe_fail":"Gagal menyalin teks",
"chat_fail":"Gagal mendapat respons",
"prompt":"gesaan",
"completion":"selesai",
"openai_rate_limit":"Had Kadar OpenAI melebihi",
"openai_invalid":"Permintaan tidak sah OpenAI",
"error":"Ralat telah berlaku",
"try_again":"Sila cuba lagi sebentar lagi",
"answer_with_chatgpt":"Jawab dengan ChatGPT",
"ask_chatgpt":"Tanya ChatGPT",
"loading":"Memuatkan...",
"function_unavailable_in_inline_mode": "Fungsi ini tidak tersedia dalam mod sebaris"
},
"nl": {
"help_description":"Toon uitleg",
"reset_description":"Herstart het gesprek. Voeg eventueel instructies toe (bijv. /reset Je bent een behulpzame assistent)",
"image_description":"Genereer een afbeelding van een prompt (bijv. /image kat)",
"tts_description":"Genereer spraak van tekst (bijv. /tts mijn huis)",
"stats_description":"Bekijk je huidige gebruiksstatistieken",
"resend_description":"Verstuur het laatste bericht opnieuw",
"chat_description":"Chat met de bot!",
"disallowed":"Sorry, je hebt geen bevoegdheid om deze bot te gebruiken. Je kunt de sourcecode bekijken op https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Sorry, je hebt je gebruikslimiet bereikt.",
"help_text":["Ik ben een ChatGPT bot, praat met me.", "Stuur me een spraakbericht en ik zet het om naar tekst", "Open de source code op https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Huidig gesprek", "chatberichten in geschiedenis", "chat tokens in geschiedenis"],
"usage_today":"Gebruik vandaag",
"usage_month":"Gebruik deze maand",
"stats_tokens":"chat tokens gebruikt",
"stats_images":"afbeeldingen gegenereerd",
"stats_vision":"Afbeeldingstokens geïnterpreteerd",
"stats_tts":"karakters omgezet naar spraak",
"stats_transcribe":["minuten en", "seconden audio naar tekst omgezet"],
"stats_total":"💰 Voor een totaal van $",
"stats_budget":"Je resterende budget",
"monthly":" voor deze maand",
"daily":" voor vandaag",
"all-time":"",
"stats_openai":"Deze maand is je OpenAI account gefactureerd voor $",
"resend_failed":"Je hebt niks om opnieuw te sturen",
"reset_done":"Klaar!",
"image_no_prompt":"Geef a.u.b. een prompt! (bijv. /image kat)",
"image_fail":"Afbeelding genereren mislukt",
"vision_fail":"Interpretatie van afbeelding mislukt",
"tts_no_prompt":"Geef a.u.b. tekst! (bijv. /tts mijn huis)",
"tts_fail":"Spraak genereren mislukt",
"media_download_fail":["Audio bestand downloaden mislukt", "Check of het niet te groot is. (max 20MB)"],
"media_type_fail":"Niet ondersteund bestandsformaat",
"transcript":"Uitgeschreven tekst",
"answer":"Antwoord",
"transcribe_fail":"Audio naar tekst omzetten mislukt",
"chat_fail":"Reactie verkrijgen mislukt",
"prompt":"prompt",
"completion":"completion",
"openai_rate_limit":"OpenAI Rate Limit overschreden",
"openai_invalid":"OpenAI ongeldig verzoek",
"error":"Er is een fout opgetreden",
"try_again":"Probeer het a.u.b. later opnieuw",
"answer_with_chatgpt":"Antwoord met ChatGPT",
"ask_chatgpt":"Vraag ChatGPT",
"loading":"Laden...",
"function_unavailable_in_inline_mode": "Deze functie is niet beschikbaar in de inline modus"
},
"pl": {
"help_description": "Pokaż wiadomości pomocnicze",
"reset_description": "Zresetuj konwersację. Opcjonalnie dołącz nową frazę (np. /reset Jesteś pomocnym asystentem)",
"image_description": "Generuj obraz na podstawie promptu (np. /image kot)",
"tts_description": "Generuj mowę na podstawie tekstu (np. /tts mój dom)",
"stats_description": "Pokaż obecne statystyki użycia",
"resend_description": "Wyślij ponownie ostatnią wiadomość",
"chat_description": "Rozmawiaj z botem!",
"disallowed": "Przepraszam, nie masz uprawnień do korzystania z tego bota. Sprawdź kod źródłowy pod adresem https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit": "Niestety osiągnąłeś swój limit użytkowania.",
"help_text": ["Jestem botem ChatGPT, porozmawiaj ze mną!", "Wyślij mi wiadomość głosową lub plik, a ja ją dla Ciebie przepiszę", "Otwarty kod źródłowy dostępny na https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation": ["Bieżąca rozmowa", "wiadomości w historii", "tokeny w historii rozmowy"],
"usage_today": "Użycie dzisiaj",
"usage_month": "Użycie w tym miesiącu",
"stats_tokens": "tokeny",
"stats_images": "wygenerowane obrazy",
"stats_vision": "Tokeny obrazu zinterpretowane",
"stats_tts": "znaki przekształcone na mowę",
"stats_transcribe": ["minut i", "sekund transkrybowano"],
"stats_total": "💰 Łącznie za kwotę $",
"stats_budget": "Twój pozostały budżet",
"monthly": " w tym miesiącu",
"daily": " dzisiaj",
"all-time": "",
"stats_openai": "W tym miesiącu Twoje konto OpenAI zostało obciążone kwotą $",
"resend_failed": "Nie masz nic do ponownego przesłania",
"reset_done": "Gotowe!",
"image_no_prompt": "Proszę podać jakiś prompt! (np. /image kot)",
"image_fail": "Nie udało się wygenerować obrazu",
"vision_fail": "Nie udało się zinterpretować obrazu",
"tts_no_prompt": "Proszę podać tekst! (np. /tts mój dom)",
"tts_fail": "Nie udało się wygenerować mowy",
"media_download_fail": ["Nie udało się pobrać pliku dźwiękowego", "Sprawdź rozmiar pliku (maks. 20 MB)"],
"media_type_fail": "Nieobsługiwany typ pliku",
"transcript": "Transkrypt",
"answer": "Odpowiedź",
"transcribe_fail": "Transkrypcja nieudana",
"chat_fail": "Nie udało się uzyskać odpowiedzi",
"prompt": "prompt",
"completion": "ukończenie",
"openai_rate_limit": "Przekroczono limit OpenAI",
"openai_invalid": "Błędne żądanie od OpenAI",
"error": "Wystąpił błąd",
"try_again": "Spróbuj ponownie za chwilę",
"answer_with_chatgpt": "Odpowiedz z ChatGPT",
"ask_chatgpt": "Zapytaj ChatGPT",
"loading": "Ładowanie...",
"function_unavailable_in_inline_mode": "Ta funkcja jest niedostępna w trybie inline"
},
"pt-br": {
"help_description": "Mostra a mensagem de ajuda",
"reset_description": "Redefine a conversa. Opcionalmente, passe instruções de alto nível (por exemplo, /reset Você é um assistente útil)",
"image_description": "Gera uma imagem a partir do prompt (por exemplo, /image gato)",
"tts_description": "Gera fala a partir do texto (por exemplo, /tts minha casa)",
"stats_description": "Obtenha suas estatísticas de uso atuais",
"resend_description": "Reenvia a última mensagem",
"chat_description": "Converse com o bot!",
"disallowed": "Desculpe, você não tem permissão para usar este bot. Você pode verificar o código-fonte em https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit": "Desculpe, você atingiu seu limite de uso.",
"help_text": ["Eu sou um bot ChatGPT, fale comigo!", "Envie-me uma mensagem de voz ou arquivo e eu o transcreverei para você", "Código-fonte aberto em https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation": ["Conversa atual", "mensagens do chat na história", "tokens do chat na história"],
"usage_today": "Uso hoje",
"usage_month": "Uso este mês",
"stats_tokens": "tokens de chat usados",
"stats_images": "imagens geradas",
"stats_vision": "Tokens de imagem interpretados",
"stats_tts": "caracteres convertidos em fala",
"stats_transcribe": ["minutos e", "segundos transcritos"],
"stats_total": "💰 Para um valor total de $",
"stats_budget": "Seu orçamento restante",
"monthly": " para este mês",
"daily": " para hoje",
"all-time": "",
"stats_openai": "Este mês sua conta OpenAI foi cobrada em $",
"resend_failed": "Você não tem nada para reenviar",
"reset_done": "Feito!",
"image_no_prompt": "Por favor, forneça um prompt! (por exemplo, /image gato)",
"image_fail": "Falha ao gerar imagem",
"vision_fail": "Falha ao interpretar a imagem",
"tts_no_prompt": "Por favor, forneça texto! (por exemplo, /tts minha casa)",
"tts_fail": "Falha ao gerar fala",
"media_download_fail": ["Falha ao baixar arquivo de áudio", "Certifique-se de que o arquivo não seja muito grande. (máx. 20 MB)"],
"media_type_fail": "Tipo de arquivo não suportado",
"transcript": "Transcrição",
"answer": "Resposta",
"transcribe_fail": "Falha ao transcrever texto",
"chat_fail": "Falha ao obter resposta",
"prompt": "prompt",
"completion": "conclusão",
"openai_rate_limit": "Limite de taxa OpenAI excedido",
"openai_invalid": "Solicitação inválida OpenAI",
"error": "Ocorreu um erro",
"try_again": "Por favor, tente novamente mais tarde",
"answer_with_chatgpt": "Responder com ChatGPT",
"ask_chatgpt": "Perguntar ao ChatGPT",
"loading": "Carregando...",
"function_unavailable_in_inline_mode": "Esta função não está disponível no modo inline"
},
"ru": {
"help_description":"Показать справочное сообщение",
"reset_description":"Перезагрузить разговор. По желанию передай общие инструкции (например, /reset ты полезный помощник)",
"image_description":"Создать изображение по запросу (например, /image кошка)",
"tts_description":"Создать речь из текста (например, /tts мой дом)",
"stats_description":"Получить статистику использования",
"resend_description":"Повторная отправка последнего сообщения",
"chat_description":"Общайся с ботом!",
"disallowed":"Извини, тебе запрещено использовать этого бота. Исходный код можно найти здесь https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Извини, ты достиг предела использования",
"help_text":["Я бот ChatGPT, поговори со мной!", "Пришли мне голосовое сообщение или файл, и я сделаю тебе расшифровку", "Открытый исходный код на https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Текущий разговор", "сообщения в истории", "токены чата в истории"],
"usage_today":"Использование сегодня",
"usage_month":"Использование в этом месяце",
"stats_tokens":"токенов чата использовано",
"stats_images":"изображений создано",
"stats_vision":"Токенов на интерпретацию изображений",
"stats_tts":"символов преобразовано в речь",
"stats_transcribe":["минут(ы) и", "секунд(ы) расшифровки"],
"stats_total":"💰 На общую сумму $",
"stats_budget":"Остаточный бюджет",
"monthly":" на этот месяц",
"daily":" на сегодня",
"all-time":"",
"stats_openai":"В этом месяце на ваш аккаунт OpenAI был выставлен счет на $",
"resend_failed":"Вам нечего пересылать",
"reset_done":"Готово!",
"image_no_prompt":"Пожалуйста, подайте запрос! (например, /image кошка)",
"image_fail":"Не удалось создать изображение",
"vision_fail":"Сбой при интерпретации изображения",
"tts_no_prompt":"Пожалуйста, подайте текст! (например, /tts мой дом)",
"tts_fail":"Не удалось создать речь",
"media_download_fail":["Не удалось загрузить аудиофайл", "Проверьте, чтобы файл не был слишком большим. (не более 20 МБ)"],
"media_type_fail":"Неподдерживаемый тип файла",
"transcript":"Расшифровка",
"answer":"Ответ",
"transcribe_fail":"Не удалось расшифровать текст",
"chat_fail":"Не удалось получить ответ",
"prompt":"запрос",
"completion":"ответ",
"openai_rate_limit":"Превышен предел использования OpenAI",
"openai_invalid":"Ошибочный запрос OpenAI",
"error":"Произошла ошибка",
"try_again":"Пожалуйста, повторите попытку позже",
"answer_with_chatgpt":"Ответить с помощью ChatGPT",
"ask_chatgpt":"Спросить ChatGPT",
"loading":"Загрузка...",
"function_unavailable_in_inline_mode": "Эта функция недоступна в режиме inline"
},
"tr": {
"help_description":"Yardım mesajını göster",
"reset_description":"Konuşmayı sıfırla. İsteğe bağlı olarak botun nasıl davranacağını belirleyin (Örneğin: /reset Sen yardımcı bir asistansın)",
"image_description":"Verilen komuta göre görüntü üret (Örneğin /image kedi)",
"tts_description":"Verilen metni seslendir (Örneğin /tts evim)",
"stats_description":"Mevcut kullanım istatistiklerinizi alın",
"resend_description":"En son mesajı yeniden gönder",
"chat_description":"Bot ile sohbet edin!",
"disallowed":"Üzgünüz, bu botu kullanmanıza izin verilmiyor. Botun kaynak koduna göz atmak isterseniz: https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Üzgünüz, kullanım limitinize ulaştınız",
"help_text":["Ben bir ChatGPT botuyum, konuş benimle!", "Bana bir sesli mesaj veya dosya gönderin, sizin için yazıya çevireyim", "Açık kaynak kodu: https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Güncel sohbet", "sohbet mesajı", "sohbet token'i"],
"usage_today":"Bugünkü kullanım",
"usage_month":"Bu ayki kullanım",
"stats_tokens":"sohbet token'i kullanıldı",
"stats_images":"görüntüler oluşturuldu",
"stats_vision":"Resim belirteçleri yorumlandı",
"stats_tts":"seslendirilen karakterler",
"stats_transcribe":["dakika", "saniye sesten yazıya çeviri yapıldı"],
"stats_total":"💰 Bu kullanımların toplam maliyeti $",
"stats_budget":"Kalan bütçeniz",
"monthly":" bu ay için",
"daily":" bugün için",
"all-time":"",
"stats_openai":"Bu ay OpenAI hesabınıza kesilen fatura tutarı: $",
"resend_failed":"Yeniden gönderilecek bir şey yok",
"reset_done":"Tamamlandı!",
"image_no_prompt":"Lütfen komut giriniz (Örneğin /image kedi)",
"image_fail":"Görüntü oluşturulamadı",
"vision_fail":"Resmi yorumlama başarısız oldu",
"tts_no_prompt":"Lütfen metin giriniz (Örneğin /tts evim)",
"tts_fail":"Seslendirme oluşturulamadı",
"media_download_fail":["Ses dosyası indirilemedi", "Dosyanın çok büyük olmadığından emin olun. (maksimum 20MB)"],
"media_type_fail":"Desteklenmeyen dosya türü",
"transcript":"Yazıya çevirilmiş hali",
"answer":"Cevap",
"transcribe_fail":"Sesin yazıya çevirilme işlemi yapılamadı",
"chat_fail":"Yanıt alınamadı",
"prompt":"Komut",
"completion":"Tamamlama",
"openai_rate_limit":"OpenAI maksimum istek limiti aşıldı",
"openai_invalid":"OpenAI Geçersiz istek",
"error":"Bir hata oluştu",
"try_again":"Lütfen birazdan tekrar deneyiniz",
"answer_with_chatgpt":"ChatGPT ile cevapla",
"ask_chatgpt":"ChatGPT'ye sor",
"loading":"Yükleniyor...",
"function_unavailable_in_inline_mode": "Bu işlev inline modda kullanılamaz"
},
"uk": {
"help_description":"Показати повідомлення допомоги",
"reset_description":"Скинути розмову. Опціонально передайте високорівневі інструкції (наприклад, /reset Ви корисний помічник)",
"image_description":"Створити зображення за вашим запитом (наприклад, /image кіт)",
"tts_description":"Створити голос з тексту (наприклад, /tts мій будинок)",
"stats_description":"Отримати вашу поточну статистику використання",
"resend_description":"Повторно відправити останнє повідомлення",
"chat_description":"Розмовляйте з ботом!",
"disallowed":"Вибачте, вам не дозволено використовувати цього бота. Ви можете переглянути його код за адресою https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Вибачте, ви вичерпали ліміт використання.",
"help_text":["Я бот ChatGPT, розмовляйте зі мною!", "Надішліть мені голосове повідомлення або файл, і я транскрибую його для вас", "Відкритий код за адресою https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Поточна розмова", "повідомлень у історії чату", "токенів у історії чату"],
"usage_today":"Використання сьогодні",
"usage_month":"Використання цього місяця",
"stats_tokens":"використано токенів чату",
"stats_images":"згенеровано зображень",
"stats_vision":"використано токенів на інтерпретацію зображень",
"stats_tts":"символів перетворено на голос",
"stats_transcribe":["хвилин і", "секунд транскрибовано"],
"stats_total":"💰 Загальна сума $",
"stats_budget":"Ваш залишок бюджету",
"monthly":" за цей місяць",
"daily":" за сьогодні",
"all-time":"",
"stats_openai":"Цього місяця з вашого облікового запису OpenAI було списано $",
"resend_failed":"У вас немає повідомлень для повторної відправки",
"reset_done":"Готово!",
"image_no_prompt":"Будь ласка, надайте свій запит! (наприклад, /image кіт)",
"image_fail":"Не вдалося створити зображення",
"vision_fail":"Не вдалося інтерпретувати зображення",
"tts_no_prompt":"Будь ласка, надайте текст! (наприклад, /tts мій будинок)",
"tts_fail":"Не вдалося створити голос",
"media_download_fail":["Не вдалося завантажити аудіофайл", "Переконайтеся, що файл не занадто великий. (максимум 20 МБ)"],
"media_type_fail":"Непідтримуваний тип файлу",
"transcript":"Транскрипт",
"answer":"Відповідь",
"transcribe_fail":"Не вдалося транскрибувати текст",
"chat_fail":"Не вдалося отримати відповідь",
"prompt":"запит",
"completion":"завершення",
"openai_rate_limit":"Перевищено ліміт частоти запитів до OpenAI",
"openai_invalid":"Неправильний запит до OpenAI",
"error":"Сталася помилка",
"try_again":"Будь ласка, спробуйте знову через деякий час",
"answer_with_chatgpt":"Відповідь за допомогою ChatGPT",
"ask_chatgpt":"Запитати ChatGPT",
"loading":"Завантаження...",
"function_unavailable_in_inline_mode": "Ця функція недоступна в режимі Inline"
},
"uz": {
"help_description": "Yordam xabarini ko'rsatish",
"reset_description": "Suhbatni qayta boshlang. Agar xohlasangiz, umumiy ko'rsatmalar bering (masalan, /reset siz foydali yordamchisiz)",
"image_description": "Tasvirni so'rov bo'yicha yaratish (masalan, /image mushuk)",
"tts_description": "Matnni ovozga aylantirish (masalan, /tts uyim)",
"stats_description": "Hozirgi foydalanilgan statistikani olish",
"resend_description": "Oxirgi xabarni qayta yuborish",
"chat_description": "Bot bilan suxbat!",
"disallowed": "Kechirasiz, sizga bu botdan foydalanish taqiqlangan. Siz manba kodini tekshirishingiz mumkin https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit": "Kechirasiz, siz foydalanish chegarasiga yetdingiz.",
"help_text": ["Men ChatGPT botman, men bilan gaplashing!", "Menga ovozli xabar yoki fayl yuboring, men uni siz uchun transkripsiya qilaman", "Ochiq manba: https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation": ["Hozirgi suhbat", "tarixdagi chat xabarlari", "tarixdagi suhbat tokenlari"],
"usage_today": "Bugungi foydalanish",
"usage_month": "Bu oydagi foydalanish",
"stats_tokens": "tokenlar",
"stats_images": "yaratilgan tasvirlar",
"stats_vision":"Tasvir belgilari tarjima qilindi",
"stats_tts": "ovozga aylangan belgilar",
"stats_transcribe": ["minutlar va", "soniyalar transkripsiya qilingan"],
"stats_total": "💰 Jami miqdor $",
"stats_budget": "Qolgan budjetingiz",
"monthly": " bu oy uchun",
"daily": " bugun uchun",
"all-time": "",
"stats_openai": "Shu oyda OpenAI hisobingizdan to'lov amalga oshirildi $",
"resend_failed": "Sizda qayta yuborish uchun hech narsa yo'q",
"reset_done": "Bajarildi!",
"image_no_prompt": "Iltimos, so'rov yozing! (masalan, /image mushuk)",
"image_fail": "Tasvir yaratish amalga oshmadi",
"vision_fail":"Tasvirni tarjima qilishda xatolik yuz berdi",
"tts_no_prompt": "Iltimos, matn kiriting! (masalan, /tts uyim)",
"tts_fail": "Ovoz yaratish amalga oshmadi",
"media_download_fail": ["Audio faylni yuklab olish amalga oshmadi", "Fayl hajmi katta emasligiga ishonch hosil qiling. (max 20MB)"],
"media_type_fail": "Qo'llab-quvvatlanmaydigan fayl turi",
"transcript": "Transkript",
"answer": "Javob",
"transcribe_fail": "Matnni transkripsiya qilib bo'lmadi",
"chat_fail": "Javob olish amalga oshmadi",
"prompt": "so'rov",
"completion": "yakunlash",
"openai_rate_limit": "OpenAI ta'rif chegarasidan oshib ketdi",
"openai_invalid": "OpenAI So'rov noto'g'ri",
"error": "Xatolik yuz berdi",
"try_again": "Birozdan keyin qayta urinib ko'ring",
"answer_with_chatgpt": "ChatGPT bilan javob berish",
"ask_chatgpt": "ChatGPTdan so'rash",
"loading": "Yuklanmoqda...",
"function_unavailable_in_inline_mode": "Bu funksiya inline rejimida mavjud emas"
},
"vi": {
"help_description":"Hiển thị trợ giúp",
"reset_description":"Đặt lại cuộc trò chuyện. Tùy ý chuyển hướng dẫn cấp cao (ví dụ: /reset Bạn là một trợ lý hữu ích)",
"image_description":"Tạo hình ảnh từ câu lệnh (ví dụ: /image cat)",
"tts_description":"Tạo giọng nói từ văn bản (ví dụ: /tts nhà của tôi)",
"stats_description":"Nhận số liệu thống kê sử dụng hiện tại của bạn",
"resend_description":"Gửi lại tin nhắn mới nhất",
"chat_description":"Trò chuyện với bot!",
"disallowed":"Xin lỗi, bạn không được phép sử dụng bot này. Bạn có thể kiểm tra mã nguồn tại https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"Rất tiếc, bạn đã đạt đến giới hạn sử dụng.",
"help_text":["Tôi là bot ChatGPT, hãy nói chuyện với tôi!", "Hãy gửi cho tôi một tin nhắn thoại hoặc tệp và tôi sẽ phiên âm nó cho bạn", "Mã nguồn mở tại https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["Cuộc trò chuyện hiện tại", "tin nhắn trò chuyện trong lịch sử", "mã thông báo trò chuyện trong lịch sử"],
"usage_today":"Sử dụng ngày hôm nay",
"usage_month":"Sử dụng trong tháng này",
"stats_tokens":"mã thông báo trò chuyện được sử dụng",
"stats_images":"hình ảnh tạo ra",
"stats_vision":"Dịch thông tin từ mã thông báo hình ảnh",
"stats_tts":"ký tự được chuyển đổi thành giọng nói",
"stats_transcribe":["phút và", "giây"],
"stats_total":"💰 Với tổng số tiền $",
"stats_budget":"Ngân sách còn lại của bạn",
"monthly":" cho tháng này",
"daily":" cho hôm nay",
"all-time":"",
"stats_openai":"Tháng này, tài khoản OpenAI của bạn đã bị tính phí $",
"resend_failed":"Bạn không có gì để gửi lại",
"reset_done":"Xong!",
"image_no_prompt":"Vui lòng cung cấp lời nhắc! (ví dụ: /image con mèo)",
"image_fail":"Không thể tạo hình ảnh",
"vision_fail":"Không thể dịch thông tin từ hình ảnh",
"tts_no_prompt":"Vui lòng cung cấp văn bản! (ví dụ: /tts nhà của tôi)",
"tts_fail":"Không thể tạo giọng nói",
"media_download_fail":["Không thể tải xuống tệp âm thanh", "Đảm bảo tệp không quá lớn. (tối đa 20 MB)"],
"media_type_fail":"Loại tập tin không được hỗ trợ",
"transcript":"Bản chép lời",
"answer":"Trả lời",
"transcribe_fail":"Không thể phiên âm văn bản",
"chat_fail":"Không nhận được phản hồi",
"prompt":"Lời nhắc",
"completion":"hoàn thành",
"openai_rate_limit":"Đã vượt quá giới hạn tỷ lệ OpenAI",
"openai_invalid":"OpenAI Yêu cầu không hợp lệ",
"error":"Một lỗi đã xảy ra",
"try_again":"Vui lòng thử lại sau một lúc",
"answer_with_chatgpt":"Trả lời với ChatGPT",
"ask_chatgpt":"Hỏi ChatGPT",
"loading":"Đang tải...",
"function_unavailable_in_inline_mode": "Chức năng này không khả dụng trong chế độ nội tuyến"
},
"zh-cn": {
"help_description":"显示帮助信息",
"reset_description":"重置对话。可以选择传递高级指令(例如/reset 你是一个有用的助手)",
"image_description":"根据提示生成图像(例如/image 猫)",
"tts_description":"将文本转换为语音(例如/tts 我的房子)",
"stats_description":"获取您当前的使用统计",
"resend_description":"重新发送最近的消息",
"chat_description":"与机器人聊天!",
"disallowed":"对不起,您不被允许使用该机器人。您可以查看源代码:https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"对不起,您已经达到了使用限制。",
"help_text":["我是一个ChatGPT机器人,请和我交谈!", "发送语音消息或文件,我会为您进行转录。", "源代码:https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["当前对话", "历史聊天消息", "历史聊天token"],
"usage_today":"今日使用情况",
"usage_month":"本月使用情况",
"stats_tokens":"使用的token",
"stats_images":"生成图像的数量",
"stats_vision":"图像令牌已解释",
"stats_tts":"转换为语音的字符",
"stats_transcribe":["分钟", "秒转录时长"],
"stats_total":"💰 总计金额 $",
"stats_budget":"您的剩余预算",
"monthly":"本月",
"daily":"今日",
"all-time":"",
"stats_openai":"本月您的OpenAI账户已使用 $",
"resend_failed":"没有消息需要重发",
"reset_done":"完成!",
"image_no_prompt":"请提供提示!(例如/image 猫)",
"image_fail":"生成图像失败",
"vision_fail":"图像解释失败",
"tts_no_prompt":"请提供文本!(例如/tts 我的房子)",
"tts_fail":"生成语音失败",
"media_download_fail":["下载音频文件失败", "请确保文件不要太大(最大20MB)"],
"media_type_fail":"不支持的文件类型",
"transcript":"转录",
"answer":"回答",
"transcribe_fail":"转录文本失败",
"chat_fail":"获取响应失败",
"prompt":"提示",
"completion":"补全",
"openai_rate_limit":"OpenAI请求频率超限",
"openai_invalid":"OpenAI请求无效",
"error":"发生错误",
"try_again":"请稍后再试",
"answer_with_chatgpt":"使用ChatGPT回答",
"ask_chatgpt":"询问ChatGPT",
"loading":"载入中...",
"function_unavailable_in_inline_mode": "此功能在内联模式下不可用"
},
"zh-tw": {
"help_description":"顯示幫助訊息",
"reset_description":"重設對話。可以選擇傳遞進階指令(例如 /reset 你是一個樂於助人的助手)",
"image_description":"根據提示生成圖片(例如 /image 貓)",
"tts_description":"將文字轉換為語音(例如 /tts 我的房子)",
"stats_description":"取得當前使用統計",
"resend_description":"重新傳送最後一則訊息",
"chat_description":"與機器人聊天!",
"disallowed":"抱歉,您不被允許使用此機器人。你可以在以下網址檢視原始碼:https://github.com/n3d1117/chatgpt-telegram-bot",
"budget_limit":"抱歉,已達到用量上限。",
"help_text":["我是一個 ChatGPT 機器人,跟我聊天吧!", "傳送語音訊息或檔案給我,我會為您進行轉錄", "開放原始碼在:https://github.com/n3d1117/chatgpt-telegram-bot"],
"stats_conversation":["目前的對話", "聊天訊息紀錄", "聊天 Token 紀錄"],
"usage_today":"今日用量",
"usage_month":"本月用量",
"stats_tokens":"Token 已使用",
"stats_images":"圖片已生成",
"stats_vision":"圖片令牌已解釋",
"stats_tts":"轉換為語音的字元",
"stats_transcribe":["分", "秒已轉錄"],
"stats_total":"💰 總計金額 $",
"stats_budget":"剩餘預算",
"monthly":"本月",
"daily":"今日",
"all-time":"",
"stats_openai":"本月您的 OpenAI 帳戶總共計費 $",
"resend_failed":"沒有訊息可以重新傳送",
"reset_done":"重設完成!",
"image_no_prompt":"請輸入提示!(例如 /image 貓)",
"image_fail":"圖片生成失敗",
"vision_fail":"圖片解釋失敗",
"tts_no_prompt":"請輸入文字!(例如 /tts 我的房子)",
"tts_fail":"語音生成失敗",
"media_download_fail":["下載音訊檔案失敗", "請確保檔案大小不超過 20MB"],
"media_type_fail":"不支援的檔案類型",
"transcript":"轉錄",
"answer":"回答",
"transcribe_fail":"轉錄文字失敗",
"chat_fail":"無法取得回應",
"prompt":"提示",
"completion":"填充",
"openai_rate_limit":"OpenAI 的請求數量已超過上限",
"openai_invalid":"OpenAI 的請求無效",
"error":"發生錯誤",
"try_again":"請稍後重試",
"answer_with_chatgpt":"使用 ChatGPT 回答",
"ask_chatgpt":"詢問 ChatGPT",
"loading":"載入中...",
"function_unavailable_in_inline_mode": "此功能在內嵌模式下不可用"
}
}