The official gpt4free repository | a collection of powerful language models
Written by @xtekky & maintained by @hlohaus
By using this repository or any code related to it, you agree to the legal notice. The author is not responsible for any copies, forks, or re-uploads made by other users, or for anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.
Install via pypi:

```sh
pip install -U g4f
```

Or pull and run the Docker image:

```sh
docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" hlohaus789/g4f:latest
```

Alternatively, install from source. Clone the GitHub repository:
```sh
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
```

Create and activate a virtual environment:

```sh
python3 -m venv venv
# Windows:
.\venv\Scripts\activate
# macOS / Linux:
source venv/bin/activate
```

Install the dependencies from `requirements.txt`:

```sh
pip install -r requirements.txt
```

Create a `test.py` file in the root folder and start using the repo; further instructions are below:

```python
import g4f
...
```
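For example, a minimal `test.py` could look like this (the model name and prompt are only illustrative):

```python
import g4f

# Let g4f pick a working provider for the requested model
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```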
If you have Docker installed, you can easily set up and run the project without manually installing dependencies.
First, ensure you have both Docker and Docker Compose installed.
Clone the GitHub repo:

```sh
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
```

Pull the Selenium image, then build and start the containers:

```sh
docker pull selenium/node-chrome
docker-compose build
docker-compose up
```
Your server will now be running at http://localhost:1337. You can interact with the API or run your tests as you would normally.
To stop the Docker containers, simply run:

```sh
docker-compose down
```
> [!NOTE]
> When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the `docker-compose.yml` file. If you add or remove dependencies, however, you'll need to rebuild the Docker image using `docker-compose build`.
The `g4f` Package:

```python
import g4f

g4f.debug.logging = True  # Enable debug logging
g4f.debug.check_version = False  # Disable automatic version checking
print(g4f.Provider.Bing.params)  # Print supported args for Bing

# Automatically select a provider for the given model

# Streamed completion
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for message in response:
    print(message, flush=True, end='')

# Normal response
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4,
    messages=[{"role": "user", "content": "Hello"}],
)  # Alternative model setting

print(response)
```
The classic completion endpoint works with these models:

```python
import g4f

allowed_models = [
    'code-davinci-002',
    'text-ada-001',
    'text-babbage-001',
    'text-curie-001',
    'text-davinci-002',
    'text-davinci-003'
]

response = g4f.Completion.create(
    model='text-davinci-003',
    prompt='say this is a test'
)

print(response)
```
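Models outside `allowed_models` are not expected to work with the completion endpoint; if you build on it, a simple guard makes that explicit (an illustrative sketch, not part of the g4f API):

```python
def create_completion(model: str, prompt: str) -> str:
    # Hypothetical helper: reject models the endpoint can't serve
    if model not in allowed_models:
        raise ValueError(f"{model} is not supported by g4f.Completion")
    return g4f.Completion.create(model=model, prompt=prompt)
```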
```python
import g4f

# Print all working providers
print([
    provider.__name__
    for provider in g4f.Provider.__providers__
    if provider.working
])

# Execute with a specific provider
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.Aichat,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for message in response:
    print(message)
```
Some providers use a browser to bypass bot protection. They rely on the Selenium WebDriver to control the browser. Browser settings and login data are saved in a custom directory. If headless mode is enabled, the browser windows are loaded invisibly. For performance reasons, it is recommended to reuse browser instances and close them yourself at the end:
```python
import g4f
from undetected_chromedriver import Chrome, ChromeOptions
from g4f.Provider import (
    Bard,
    Poe,
    AItianhuSpace,
    MyShell,
    PerplexityAi,
)

options = ChromeOptions()
options.add_argument("--incognito")
webdriver = Chrome(options=options, headless=True)
for idx in range(10):
    response = g4f.ChatCompletion.create(
        model=g4f.models.default,
        provider=g4f.Provider.MyShell,
        messages=[{"role": "user", "content": "Suggest me a name."}],
        webdriver=webdriver
    )
    print(f"{idx}:", response)
webdriver.quit()
```
To enhance speed and overall performance, execute providers asynchronously. The total execution time is then determined by the duration of the slowest provider's execution rather than the sum of all of them:
```python
import g4f
import asyncio

_providers = [
    g4f.Provider.Aichat,
    g4f.Provider.ChatBase,
    g4f.Provider.Bing,
    g4f.Provider.GptGo,
    g4f.Provider.You,
    g4f.Provider.Yqcloud,
]

async def run_provider(provider: g4f.Provider.BaseProvider):
    try:
        response = await g4f.ChatCompletion.create_async(
            model=g4f.models.default,
            messages=[{"role": "user", "content": "Hello"}],
            provider=provider,
        )
        print(f"{provider.__name__}:", response)
    except Exception as e:
        print(f"{provider.__name__}:", e)

async def run_all():
    calls = [
        run_provider(provider) for provider in _providers
    ]
    await asyncio.gather(*calls)

asyncio.run(run_all())
```
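Continuing the snippet above, you can check this claim by timing the batch; the wall-clock total should track the slowest single provider rather than the sum of all calls (a minimal sketch):

```python
import time

start = time.perf_counter()
asyncio.run(run_all())  # run_all() from the example above
elapsed = time.perf_counter() - start
print(f"\nAll providers finished in {elapsed:.1f}s")
```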
All providers support specifying a proxy and increasing the timeout in the create functions.
```python
import g4f

response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",
    # or socks5://user:pass@host:port
    timeout=120,  # in secs
)

print("Result:", response)
```
You can also set a proxy globally via an environment variable:

```sh
export G4F_PROXY="http://host:port"
```
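Assuming `G4F_PROXY` is read from the environment as the shell example implies, you can also set it from Python before using the library:

```python
import os

# Set the proxy before any g4f calls are made
os.environ["G4F_PROXY"] = "http://host:port"

import g4f  # imported after the variable is set, to be safe
```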
You can start the API server directly from Python:

```python
from g4f.api import run_api

run_api()
```
If you want to use the embedding function, you need a Hugging Face token. You can get one at Hugging Face Tokens; make sure your role is set to write. If you have your token, just use it instead of the OpenAI API key.
Run the server:

```sh
g4f api
```

or

```sh
python -m g4f.api.run
```
```python
import openai

# Set your Hugging Face token as the API key if you use embeddings;
# if you don't use embeddings, leave it empty
openai.api_key = "YOUR_HUGGING_FACE_TOKEN"  # Replace with your actual token

# Set the API base URL if needed, e.g., for a local development environment
openai.api_base = "http://localhost:1337/v1"

def main():
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )

    if isinstance(chat_completion, dict):
        # Not streaming
        print(chat_completion.choices[0].message.content)
    else:
        # Streaming
        for token in chat_completion:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)

if __name__ == "__main__":
    main()
```
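If you would rather not use the `openai` client, the same server can be called over plain HTTP. A minimal sketch with `requests`, assuming the server exposes the OpenAI-compatible `/v1/chat/completions` route implied by the `api_base` above:

```python
import requests

# Assumes a local g4f API server started as shown earlier
resp = requests.post(
    "http://localhost:1337/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```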
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Auth |
| --- | --- | --- | --- | --- | --- |
| bing.com | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ❌ |
| chat.geekgpt.org | `g4f.Provider.GeekGpt` | ✔️ | ✔️ | ✔️ | ❌ |
| gptchatly.com | `g4f.Provider.GptChatly` | ✔️ | ✔️ | ❌ | ❌ |
| liaobots.site | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ❌ |
| www.phind.com | `g4f.Provider.Phind` | ❌ | ✔️ | ✔️ | ❌ |
| raycast.com | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ | ✔️ |
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Auth |
| --- | --- | --- | --- | --- | --- |
| www.aitianhu.com | `g4f.Provider.AItianhu` | ✔️ | ❌ | ✔️ | ❌ |
| chat3.aiyunos.top | `g4f.Provider.AItianhuSpace` | ✔️ | ❌ | ✔️ | ❌ |
| e.aiask.me | `g4f.Provider.AiAsk` | ✔️ | ❌ | ✔️ | ❌ |
| chat-gpt.org | `g4f.Provider.Aichat` | ✔️ | ❌ | ❌ | ❌ |
| www.chatbase.co | `g4f.Provider.ChatBase` | ✔️ | ❌ | ✔️ | ❌ |
| chatforai.store | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ❌ |
| chatgpt.ai | `g4f.Provider.ChatgptAi` | ✔️ | ❌ | ✔️ | ❌ |
| chatgptx.de | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ❌ |
| chat-shared2.zhile.io | `g4f.Provider.FakeGpt` | ✔️ | ❌ | ✔️ | ❌ |
| freegpts1.aifree.site | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ❌ |
| gptalk.net | `g4f.Provider.GPTalk` | ✔️ | ❌ | ✔️ | ❌ |
| ai18.gptforlove.com | `g4f.Provider.GptForLove` | ✔️ | ❌ | ✔️ | ❌ |
| gptgo.ai | `g4f.Provider.GptGo` | ✔️ | ❌ | ✔️ | ❌ |
| hashnode.com | `g4f.Provider.Hashnode` | ✔️ | ❌ | ✔️ | ❌ |
| app.myshell.ai | `g4f.Provider.MyShell` | ✔️ | ❌ | ✔️ | ❌ |
| noowai.com | `g4f.Provider.NoowAi` | ✔️ | ❌ | ✔️ | ❌ |
| chat.openai.com | `g4f.Provider.OpenaiChat` | ✔️ | ❌ | ✔️ | ✔️ |
| theb.ai | `g4f.Provider.Theb` | ✔️ | ❌ | ✔️ | ✔️ |
| sdk.vercel.ai | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ❌ |
| you.com | `g4f.Provider.You` | ✔️ | ❌ | ✔️ | ❌ |
| chat9.yqcloud.top | `g4f.Provider.Yqcloud` | ✔️ | ❌ | ✔️ | ❌ |
| chat.acytoo.com | `g4f.Provider.Acytoo` | ✔️ | ❌ | ✔️ | ❌ |
| aibn.cc | `g4f.Provider.Aibn` | ✔️ | ❌ | ✔️ | ❌ |
| ai.ls | `g4f.Provider.Ails` | ✔️ | ❌ | ✔️ | ❌ |
| chatgpt4online.org | `g4f.Provider.Chatgpt4Online` | ✔️ | ❌ | ✔️ | ❌ |
| chat.chatgptdemo.net | `g4f.Provider.ChatgptDemo` | ✔️ | ❌ | ✔️ | ❌ |
| chatgptduo.com | `g4f.Provider.ChatgptDuo` | ✔️ | ❌ | ❌ | ❌ |
| chatgptfree.ai | `g4f.Provider.ChatgptFree` | ✔️ | ❌ | ❌ | ❌ |
| chatgptlogin.ai | `g4f.Provider.ChatgptLogin` | ✔️ | ❌ | ✔️ | ❌ |
| cromicle.top | `g4f.Provider.Cromicle` | ✔️ | ❌ | ✔️ | ❌ |
| gptgod.site | `g4f.Provider.GptGod` | ✔️ | ❌ | ✔️ | ❌ |
| opchatgpts.net | `g4f.Provider.Opchatgpts` | ✔️ | ❌ | ✔️ | ❌ |
| chat.ylokh.xyz | `g4f.Provider.Ylokh` | ✔️ | ❌ | ✔️ | ❌ |
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Auth |
| --- | --- | --- | --- | --- | --- |
| bard.google.com | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ✔️ |
| deepinfra.com | `g4f.Provider.DeepInfra` | ❌ | ❌ | ✔️ | ❌ |
| huggingface.co | `g4f.Provider.HuggingChat` | ❌ | ❌ | ✔️ | ✔️ |
| www.llama2.ai | `g4f.Provider.Llama2` | ❌ | ❌ | ✔️ | ❌ |
| open-assistant.io | `g4f.Provider.OpenAssistant` | ❌ | ❌ | ✔️ | ✔️ |
| Model | Base Provider | Provider | Website |
| --- | --- | --- | --- |
| palm | Google | `g4f.Provider.Bard` | bard.google.com |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 | Hugging Face | `g4f.Provider.H2o` | www.h2o.ai |
| h2ogpt-gm-oasst1-en-2048-falcon-40b-v1 | Hugging Face | `g4f.Provider.H2o` | www.h2o.ai |
| h2ogpt-gm-oasst1-en-2048-open-llama-13b | Hugging Face | `g4f.Provider.H2o` | www.h2o.ai |
| claude-instant-v1 | Anthropic | `g4f.Provider.Vercel` | sdk.vercel.ai |
| claude-v1 | Anthropic | `g4f.Provider.Vercel` | sdk.vercel.ai |
| claude-v2 | Anthropic | `g4f.Provider.Vercel` | sdk.vercel.ai |
| command-light-nightly | Cohere | `g4f.Provider.Vercel` | sdk.vercel.ai |
| command-nightly | Cohere | `g4f.Provider.Vercel` | sdk.vercel.ai |
| gpt-neox-20b | Hugging Face | `g4f.Provider.Vercel` | sdk.vercel.ai |
| oasst-sft-1-pythia-12b | Hugging Face | `g4f.Provider.Vercel` | sdk.vercel.ai |
| oasst-sft-4-pythia-12b-epoch-3.5 | Hugging Face | `g4f.Provider.Vercel` | sdk.vercel.ai |
| santacoder | Hugging Face | `g4f.Provider.Vercel` | sdk.vercel.ai |
| bloom | Hugging Face | `g4f.Provider.Vercel` | sdk.vercel.ai |
| flan-t5-xxl | Hugging Face | `g4f.Provider.Vercel` | sdk.vercel.ai |
| code-davinci-002 | OpenAI | `g4f.Provider.Vercel` | sdk.vercel.ai |
| gpt-3.5-turbo-16k | OpenAI | `g4f.Provider.Vercel` | sdk.vercel.ai |
| gpt-3.5-turbo-16k-0613 | OpenAI | `g4f.Provider.Vercel` | sdk.vercel.ai |
| gpt-4-0613 | OpenAI | `g4f.Provider.Vercel` | sdk.vercel.ai |
| text-ada-001 | OpenAI | `g4f.Provider.Vercel` | sdk.vercel.ai |
| text-babbage-001 | OpenAI | `g4f.Provider.Vercel` | sdk.vercel.ai |
| text-curie-001 | OpenAI | `g4f.Provider.Vercel` | sdk.vercel.ai |
| text-davinci-002 | OpenAI | `g4f.Provider.Vercel` | sdk.vercel.ai |
| text-davinci-003 | OpenAI | `g4f.Provider.Vercel` | sdk.vercel.ai |
| llama13b-v2-chat | Replicate | `g4f.Provider.Vercel` | sdk.vercel.ai |
| llama7b-v2-chat | Replicate | `g4f.Provider.Vercel` | sdk.vercel.ai |
| 🎁 Projects |
| --- |
| gpt4free |
| gpt4free-ts |
| Free AI API's & Potential Providers List |
| ChatGPT-Clone |
| ChatGpt Discord Bot |
| Nyx-Bot (Discord) |
| LangChain gpt4free |
| ChatGpt Telegram Bot |
| ChatGpt Line Bot |
| Action Translate Readme |
| Langchain Document GPT |
Run the `create_provider.py` script in your terminal:

```sh
python etc/tool/create_provider.py
```

Copy and paste the cURL command from your browser developer tools when prompted, and let the script generate the provider for you. Or create a provider manually, starting from this template:

```python
from __future__ import annotations

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider

class HogeService(AsyncGeneratorProvider):
    url = "https://chat-gpt.com"
    working = True
    supports_gpt_35_turbo = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        yield ""
```
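For illustration, a filled-in `create_async_generator` might stream the upstream response with `aiohttp`; the endpoint, payload, and chunk handling below are hypothetical and must be adapted to the real site's API:

```python
from __future__ import annotations

import aiohttp

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider

class HogeService(AsyncGeneratorProvider):
    url = "https://chat-gpt.com"
    working = True
    supports_gpt_35_turbo = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        async with aiohttp.ClientSession() as session:
            async with session.post(
                f"{cls.url}/api/chat",        # hypothetical route
                json={"messages": messages},  # hypothetical payload
                proxy=proxy,
            ) as response:
                response.raise_for_status()
                # Yield decoded chunks as they arrive so callers can stream
                async for chunk in response.content.iter_any():
                    yield chunk.decode(errors="ignore")
```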
If the website supports streaming, set `supports_stream` to `True`. Implement the request in `create_async_generator` and `yield` the response, even if it's a one-time response; do not hesitate to look at other providers for inspiration. Then register the provider in `g4f/Provider/__init__.py`:
```python
from .HogeService import HogeService

__all__ = [
    "HogeService",
]
```
Test your provider with a quick script (replace `PROVIDERNAME` with your provider's name):

```python
import g4f

response = g4f.ChatCompletion.create(
    model='gpt-3.5-turbo',
    provider=g4f.Provider.PROVIDERNAME,
    messages=[{"role": "user", "content": "test"}],
    stream=g4f.Provider.PROVIDERNAME.supports_stream,
)

for message in response:
    print(message, flush=True, end='')
```
A list of the contributors is available here.
The `Vercel.py` file contains code from vercel-llm-api by @ading2210, which is licensed under the GNU GPL v3.
Top 1 Contributor: @hlohaus
This program is licensed under the GNU GPL v3
xtekky/gpt4free: Copyright (C) 2023 xtekky
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
This project is licensed under GNU GPL v3.0.