Commit 280dfc39 authored by Timm Lehmberg
synching
parent fceed511
%% Cell type:markdown id: tags:
Testing the connection to a remote LLM
%% Cell type:code id: tags:
``` python
from llama_index.llms.groq import Groq
from llama_index.llms.openai import OpenAI
```
%% Cell type:code id: tags:
``` python
import os
os.environ['GROQ_API_KEY'] = '<your-groq-api-key>'
os.environ['OPENAI_API_KEY'] = '<your-openai-api-key>'
```
%% Cell type:markdown id: tags:
Task: choose a model, enter your API keys, try another model, change the temperature
%% Cell type:code id: tags:
``` python
# pass the model name and temperature to the LLM constructor (the API key is read from the environment)
llm = Groq(model="llama3-70b-8192", temperature=0.0)
#llm = OpenAI(model="gpt-3.5-turbo", temperature=0.0)
# complete the prompt
response = llm.complete("Warum ist die Banane krumm?")
print(response)
```
%% Output
Eine Frage, die viele Menschen schon einmal gestellt haben!
Die Banane ist krumm, weil sie während ihres Wachstumsprozesses auf dem Bananenbaum eine natürliche Krümmung entwickelt. Dies liegt an der Art und Weise, wie die Banane wächst und reift.
Banane sind eine Art von Frucht, die an der Spitze des Bananenbaums wächst. Die Frucht entwickelt sich aus einer Blüte, die sich an der Spitze eines langen Stiels befindet. Während die Banane wächst, wird sie von der Schwerkraft nach unten gezogen, was dazu führt, dass sie sich krümmt.
Es gibt mehrere Gründe, warum Bananen krumm sind:
1. **Schwerkraft**: Wie bereits erwähnt, wird die Banane von der Schwerkraft nach unten gezogen, was zu einer natürlichen Krümmung führt.
2. **Wachstumsprozess**: Die Banane wächst von der Spitze des Bananenbaums nach unten. Während sie wächst, entwickelt sie sich in einer spiraligen Form, was zu einer Krümmung führt.
3. **Zellstruktur**: Die Zellen in der Banane sind nicht gleichmäßig verteilt, was zu einer ungleichmäßigen Wachstumsrate führt. Dies kann zu einer Krümmung der Frucht führen.
4. **Evolutionäre Vorteile**: Die Krümmung der Banane kann auch evolutionäre Vorteile haben. Zum Beispiel kann die Krümmung der Frucht dazu beitragen, dass sie besser auf dem Boden liegt und weniger wahrscheinlich ist, dass sie herunterfällt.
Es ist wichtig zu beachten, dass nicht alle Bananen krumm sind. Einige Sorten, wie die Cavendish-Banane, die in vielen Supermärkten verkauft wird, sind relativ gerade. Andere Sorten, wie die Plantain-Banane, sind jedoch sehr krumm.
Ich hoffe, diese Antwort hat dir geholfen, die Frage zu beantworten, warum Bananen krumm sind!
%% Cell type:code id: tags:
``` python
# pass the model name and temperature to the LLM constructor (the API key is read from the environment)
llm1 = Groq(model="llama3-70b-8192", temperature=0.0)
llm2 = OpenAI(model="gpt-3.5-turbo", temperature=0.0)
# complete the prompt
response1 = llm1.complete("Warum ist die Banane krumm?")
response2 = llm2.complete("Warum ist die Banane krumm?")
prompt3 = ("Ich habe zwei Menschen gefragt, warum die Banane krumm ist. "
           f"Person 1 sagt: {response1} Person 2 sagt: {response2} "
           "Wer hat recht? Fasse beide Antworten zusammen und gib mir die Antwort.")
response3 = llm2.complete(prompt3)
print(response3)
```
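%% Cell type:markdown id: tags:
With more than two answers, string concatenation as above gets unwieldy. A small, hypothetical helper (plain Python, not part of llama_index) that assembles the same kind of comparison prompt from a list of answers:
%% Cell type:code id: tags:
``` python
# Hypothetical helper: build the "wer hat recht?" comparison prompt
# from a question and any number of answers.
def build_comparison_prompt(question, answers):
    parts = [f"Ich habe {len(answers)} Menschen gefragt: {question}"]
    for i, answer in enumerate(answers, start=1):
        parts.append(f"Person {i} sagt: {answer}")
    parts.append("Wer hat recht? Fasse die Antworten zusammen und gib mir die Antwort.")
    return " ".join(parts)

# toy answers instead of real LLM responses:
print(build_comparison_prompt("Warum ist die Banane krumm?", ["Antwort A", "Antwort B"]))
```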
%% Cell type:markdown id: tags:
# My first RAG
%% Cell type:code id: tags:
``` python
#!pip install llama_index
#!pip install llama_index.embeddings.huggingface
#!pip install llama_index.llms.groq
import os
from llama_index.core import VectorStoreIndex, Settings, SimpleDirectoryReader
from llama_index.core.embeddings import resolve_embed_model
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.llms.groq import Groq
```
%% Cell type:markdown id: tags:
Uncomment for log messages
%% Cell type:code id: tags:
``` python
# import logging
# import sys
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
%% Cell type:markdown id: tags:
Set environment variables for your API keys
%% Cell type:code id: tags:
``` python
os.environ['GROQ_API_KEY'] = '<your-groq-api-key>'
os.environ['OPENAI_API_KEY'] = '<your-openai-api-key>'
```
%% Cell type:markdown id: tags:
Choose your embedding model
%% Cell type:code id: tags:
``` python
#embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")
embed_model = OpenAIEmbedding()
```
%% Cell type:markdown id: tags:
load textual data from the directory, using the SimpleDirectoryReader connector<br>
see https://llamahub.ai/ for more data connectors
%% Cell type:code id: tags:
``` python
documents = SimpleDirectoryReader("./data/firststeps/").load_data()
# what is a document? print one to see ;-)
for i in documents:
    print(i)
```
%% Output
Doc ID: 34bd37c2-710c-43f8-972f-db8534def125
Text: Once upon a time there was a dear little girl who was loved by
every one who looked at her, but most of all by her grandmother, and
there was nothing that she would not have given to the child. Once she
gave her a little cap of red velvet, which suited her so well that she
would never wear anything else. So she was always called Little Red
Ridin...
Doc ID: 29b63baa-daea-41df-a7c5-c94aef20ace8
Text: Near a great desert there lived a poor woodcutter and his wife,
and his two children; the boy's name was Hansel and the girl's
Chantal. They had very little to bite or to sup, and once, when there
was great dearth in the land, the man could not even gain the daily
bread. As he lay in bed one night thinking of this, and turning and
tossing, he si...
%% Cell type:markdown id: tags:
Settings for the chat model and the embedding model
%% Cell type:code id: tags:
``` python
llm = Groq(model="llama3-70b-8192")
#llm = OpenAI(model="gpt-4o-mini")
Settings.llm = llm
Settings.embed_model = embed_model # see above
```
%% Cell type:markdown id: tags:
create a local index (note: this happens every time you run this script)
%% Cell type:code id: tags:
``` python
index = VectorStoreIndex.from_documents(documents)
```
%% Cell type:code id: tags:
``` python
query_engine = index.as_query_engine()
```
%% Cell type:code id: tags:
``` python
# Ask query and get response
response = query_engine.query("where does little red riding hood go?")
print(response)
```
%% Output
Little Red Riding Hood goes to her grandmother's house, which is located in the woods, to bring her a piece of cake and a bottle of wine.
%% Cell type:markdown id: tags:
By the way ... this would be an easy shell prompt loop to run queries (but without any context or chat capabilities):
%% Cell type:code id: tags:
``` python
# while True:
#     prompt = input("Enter a prompt (or 'exit' to quit): ")
#     if prompt == 'exit':
#         break
#     response = query_engine.query(prompt)
#     print(response)
```
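%% Cell type:markdown id: tags:
The commented loop above can be factored into a small helper that accepts any prompt-to-response callable, so the same loop works with a query engine, a chat engine, or a stand-in for testing (a sketch, not part of llama_index):
%% Cell type:code id: tags:
``` python
# Sketch: a reusable prompt loop. query_fn maps a prompt string to a
# response; read_input and write are injectable so the loop is testable.
def prompt_loop(query_fn, read_input=input, write=print):
    while True:
        prompt = read_input("Enter a prompt (or 'exit' to quit): ")
        if prompt == 'exit':
            break
        write(query_fn(prompt))

# to run it against the index built above:
# prompt_loop(query_engine.query)
```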
%% Cell type:markdown id: tags:
# My first RAG
%% Cell type:code id: tags:
``` python
#!pip install llama_index
#!pip install llama_index.core
#!pip install llama_index.core.embeddings
#!pip install llama_index.embeddings.huggingface
#!pip install llama_index.llms.groq
import os
from llama_index.core import VectorStoreIndex, Settings, SimpleDirectoryReader
from llama_index.core.embeddings import resolve_embed_model
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.llms.groq import Groq
```
%% Cell type:markdown id: tags:
Uncomment for log messages (maybe later, there's going to be a lot):
%% Cell type:code id: tags:
``` python
# import logging
# import sys
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
%% Cell type:markdown id: tags:
Set environment variables for your API keys
%% Cell type:code id: tags:
``` python
#os.environ['GROQ_API_KEY'] = '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'
#os.environ['OPENAI_API_KEY'] = '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'
```
%% Cell type:markdown id: tags:
Choose your embedding model\
(local or OpenAI)
%% Cell type:code id: tags:
``` python
# embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")
# embed_model = OpenAIEmbedding()
```
%% Cell type:markdown id: tags:
load textual data from directory, using the SimpleDirectoryReader connector<br>
check https://llamahub.ai/ for more opportunities to get your data in there
%% Cell type:code id: tags:
``` python
documents = SimpleDirectoryReader("./data/").load_data()
print(documents[0])
# for i in documents:
#     print(i)
```
%% Cell type:markdown id: tags:
Settings for the chat model and the embedding model
%% Cell type:code id: tags:
``` python
# llm = Groq(model="llama3-70b-8192")
llm = OpenAI(model="gpt-4o-mini")
Settings.llm = llm
Settings.embed_model = embed_model  # uncomment one embed_model choice above first
```
%% Cell type:markdown id: tags:
create local index (note: this happens every time you call this script)
%% Cell type:code id: tags:
``` python
index = VectorStoreIndex.from_documents(documents)
```
%% Cell type:markdown id: tags:
Initialize a query engine from the given index.
%% Cell type:code id: tags:
``` python
query_engine = index.as_query_engine()
```
%% Cell type:markdown id: tags:
Send the query and see the response:
%% Cell type:code id: tags:
``` python
response = query_engine.query("What was the name of Hansel's sister?")
print(response)
```
%% Cell type:markdown id: tags:
__In case you are feeling bored:__
* change the settings for the llm and check how they affect the output
* try your own files
* check the LlamaIndex documentation to learn how to
  * persist index data
  * use the llamaindex chat_engine instead of the query engine:
    https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/usage_pattern/
  * run in the shell with some sort of chat prompt, chat history, etc.
%% Cell type:markdown id: tags:
# LlamaParse Basic example
* for the web GUI, log in to https://cloud.llamaindex.ai/
* to obtain an API key, click on API Key --> [+ Generate New Key]
%% Cell type:code id: tags:
``` python
# !pip install llama_parse nest_asyncio
import os
from llama_parse import LlamaParse
# llama-parse is async-first, running the sync code in a notebook requires the use of nest_asyncio
import nest_asyncio
nest_asyncio.apply()
```
%% Cell type:markdown id: tags:
* to obtain an API key, refer to https://cloud.llamaindex.ai/, <br/> click on API Key --> [+ Generate New Key]
%% Cell type:code id: tags:
``` python
os.environ["LLAMA_CLOUD_API_KEY"] = "<your-llama-cloud-api-key>"
```
%% Cell type:markdown id: tags:
* Using simple parsing on a PDF file containing five pages of a comic book:
%% Cell type:code id: tags:
``` python
documents_simple = LlamaParse(result_type="markdown").load_data("./data/asterix.pdf")
```
%% Output
Started parsing the file under job_id d434103c-c37c-4807-85eb-76fc4e4fc2ee
.
%% Cell type:markdown id: tags:
* Using parsing instructions for better text extraction:
%% Cell type:code id: tags:
``` python
parsing_instruction = """The provided document is a comic book written in German. Most pages do NOT have a title.
It does not contain tables.
Try to reconstruct the dialogue spoken in a cohesive way. """
documents_instructed = LlamaParse(
    result_type="markdown", parsing_instruction=parsing_instruction
).load_data("./data/asterix.pdf")
```
%% Output
Started parsing the file under job_id 7e6ccec6-360a-413a-b90d-c5d2cb19e762
.
%% Cell type:code id: tags:
``` python
for page in documents_instructed:
    print(page.text)
# compare with the simple result:
# for page in documents_simple:
#     print(page.text)
```
%% Output
# Comic Page
An Adventure with
When it comes to wild boar hunting, Obelix is an expert. But reading is not his strong suit...
When he receives mail, he must find a way to decipher the message!
Hello, Rohrpostix!
Imagine:
“Come on, Idefix! Off to the wild boar hunt!”
“Another day without mail. Is there something for me?!”
“It’s addressed to you personally.”
“Woof!”
“A letter? Just for you! From Condate? Carved in pink marble, by the gods! FALBALA!”
“FALBALA WROTE TO ME! And have fun reading, heartbreaker.”
“Goodbye,”
“Pfff...”
“Uh!... No! Not today...”
“I... maybe I’ll just lie down!”
“You’d better go to Druid, Obelix. He can prescribe you a remedy!”
Obelix!
“Are you coming with me on the wild boar hunt?”
“?!”
“Strange!”
“You’re right, Asterix! This is the first time I’ve seen Obelix sick.”
“Can you do me a big favor, Miraculix?”
“!?”
# Comic Book Page
Ah... Well... So... Have you finally decided to teach me how to read, Miraculix?
Do you finally want to satisfy your reading hunger and not just always chase wild boars, you rascal?
Because of boys like you, future generations will still claim that the Gauls could neither read nor write.
!?
This is an old method I personally developed, with which you can quickly memorize the alphabet, Obelix.
Here, the first mnemonic: E for ESEL (donkey).
... Now R for REH (deer)...
A for Amsel (blackbird), D for Dolmen, etc... Pay close attention to the textbook!
Don't worry, I'm hanging on! Thank you, Miraculix!
The night has fallen, and the little Gallic village is sleeping. Only one light plays with the moon.
Just imagine, Idefix: Thanks to Miraculix's method, even I can read Falbala's letter.
L for Löwe (lion), I for Igel (hedgehog), E for Esel, B for Biber (beaver), E for Esel, R for Reh, O for Otter...
And in the early morning...
Oh wow! You don't look really refreshed, Asterix!
Let's see if the healing arts of the druid have helped poor Obelix.
What's wrong, Obelix? You can tell me.
I have a huge problem. I don't know what to do anymore.
This engraved tablet was sent to me by Falbala! That's why I asked Miraculix to teach me how to read.
I read exactly what is carved in the marble... Listen for yourself: “Blackbird, Lion, Lion, Donkey, Sow, Goose, Owl, Dove, Donkey, Goat, Owl, Mouse, Goose, Donkey, Beaver, Owl, Deer, Dove, Sow, Dove, Blackbird, Goose, Lion, Hedgehog, Donkey, Beaver, Donkey, Deer, Otter, Beaver, Donkey, Lion, Hedgehog, (I didn't understand something there), Fish, Blackbird, Lion, Beaver, Blackbird, Lion, Blackbird.”
Snore!
Snore!
I got this primer from him. I tried all night to figure out what Falbala is writing, but I didn't understand anything!
Uh... May I take a look at it?
It's quite simple! It says engraved: “Happy Birthday, dear Obelix.” Signed: “Falbala.”
Why didn't she send me a tablet? After all, we were born on the same day...
WHERE ARE YOU RUNNING OFF TO, OBELIX?
# Comic Book
# Game
It seems that Asterix and Obelix are hiding something from Majestix... The 7 differences that have sneaked into the second image have gone unnoticed by the chief. Find them!
Solution on page 27
# Asterix in Spain - Page 30
# Asterix in Spain
Pepe shows you his homeland: Spain
“You’re already letting us go? Ah… yes… I still have a long way to my garrison in Hispalis. If Caesar finds out that the hostage is back in Hispania, I’ll be ripe for the circus!”
“My only chance is to bring the boy back to Gaul without anyone finding out!”
“I advise you to take this cart here. It has plenty of food on it and it’s bad to get it on the way!”
“Hic! Hic! I don’t want to… hic… hold my breath!”
“Hic! Hic! The hiccups come from drinking wine.”
“Hold your breath!”
“Quick! Get a cart!”
“Ay! The business with the tourists is running splendidly this year!”
“You have strange streets here!”
“Yeah, man! But something will be done! Soon they will be excellent!”
......
%% Cell type:markdown id: tags:
##### Connecting to LLMs via API
%% Cell type:markdown id: tags:
* check API connections and available models
%% Cell type:code id: tags:
``` python
# List the available models
# get OpenAI API keys from https://platform.openai.com/api-keys
!curl https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"
```
%% Output
{
"object": "list",
"data": [
{
"id": "dall-e-2",
"object": "model",
"created": 1698798177,
"owned_by": "system"
},
{
"id": "gpt-4-1106-preview",
"object": "model",
"created": 1698957206,
"owned_by": "system"
},
{
"id": "tts-1-hd-1106",
"object": "model",
"created": 1699053533,
"owned_by": "system"
},
{
"id": "tts-1-hd",
"object": "model",
"created": 1699046015,
"owned_by": "system"
},
{
"id": "whisper-1",
"object": "model",
"created": 1677532384,
"owned_by": "openai-internal"
},
{
"id": "text-embedding-3-large",
"object": "model",
"created": 1705953180,
"owned_by": "system"
},
{
"id": "text-embedding-ada-002",
"object": "model",
"created": 1671217299,
"owned_by": "openai-internal"
},
{
"id": "gpt-4o-2024-05-13",
"object": "model",
"created": 1715368132,
"owned_by": "system"
},
{
"id": "gpt-4-0125-preview",
"object": "model",
"created": 1706037612,
"owned_by": "system"
},
{
"id": "gpt-4-turbo-preview",
"object": "model",
"created": 1706037777,
"owned_by": "system"
},
{
"id": "tts-1-1106",
"object": "model",
"created": 1699053241,
"owned_by": "system"
},
{
"id": "gpt-3.5-turbo-16k",
"object": "model",
"created": 1683758102,
"owned_by": "openai-internal"
},
{
"id": "chatgpt-4o-latest",
"object": "model",
"created": 1723515131,
"owned_by": "system"
},
{
"id": "gpt-4o-2024-08-06",
"object": "model",
"created": 1722814719,
"owned_by": "system"
},
{
"id": "gpt-3.5-turbo-1106",
"object": "model",
"created": 1698959748,
"owned_by": "system"
},
{
"id": "gpt-3.5-turbo-instruct-0914",
"object": "model",
"created": 1694122472,
"owned_by": "system"
},
{
"id": "gpt-4o",
"object": "model",
"created": 1715367049,
"owned_by": "system"
},
{
"id": "gpt-4-turbo-2024-04-09",
"object": "model",
"created": 1712601677,
"owned_by": "system"
},
{
"id": "gpt-4-turbo",
"object": "model",
"created": 1712361441,
"owned_by": "system"
},
{
"id": "gpt-4-0613",
"object": "model",
"created": 1686588896,
"owned_by": "openai"
},
{
"id": "gpt-3.5-turbo-0125",
"object": "model",
"created": 1706048358,
"owned_by": "system"
},
{
"id": "gpt-4",
"object": "model",
"created": 1687882411,
"owned_by": "openai"
},
{
"id": "text-embedding-3-small",
"object": "model",
"created": 1705948997,
"owned_by": "system"
},
{
"id": "gpt-3.5-turbo-instruct",
"object": "model",
"created": 1692901427,
"owned_by": "system"
},
{
"id": "gpt-3.5-turbo",
"object": "model",
"created": 1677610602,
"owned_by": "openai"
},
{
"id": "gpt-4o-mini-2024-07-18",
"object": "model",
"created": 1721172717,
"owned_by": "system"
},
{
"id": "gpt-4o-mini",
"object": "model",
"created": 1721172741,
"owned_by": "system"
},
{
"id": "babbage-002",
"object": "model",
"created": 1692634615,
"owned_by": "system"
},
{
"id": "davinci-002",
"object": "model",
"created": 1692634301,
"owned_by": "system"
},
{
"id": "dall-e-3",
"object": "model",
"created": 1698785189,
"owned_by": "system"
},
{
"id": "tts-1",
"object": "model",
"created": 1681940951,
"owned_by": "openai-internal"
}
]
}
%% Cell type:code id: tags:
``` python
# List the available models (Groq)
# get a Groq API key from https://console.groq.com/keys
!curl https://api.groq.com/openai/v1/models -H "Authorization: Bearer $GROQ_API_KEY"
```
%% Output
{"object":"list","data":[{"id":"llama-guard-3-8b","object":"model","created":1693721698,"owned_by":"Meta","active":true,"context_window":8192,"public_apps":null},{"id":"distil-whisper-large-v3-en","object":"model","created":1693721698,"owned_by":"Hugging Face","active":true,"context_window":448,"public_apps":null},{"id":"llava-v1.5-7b-4096-preview","object":"model","created":1725402373,"owned_by":"Other","active":true,"context_window":4096,"public_apps":null},{"id":"gemma-7b-it","object":"model","created":1693721698,"owned_by":"Google","active":true,"context_window":8192,"public_apps":null},{"id":"llama-3.1-8b-instant","object":"model","created":1693721698,"owned_by":"Meta","active":true,"context_window":131072,"public_apps":null},{"id":"llama3-70b-8192","object":"model","created":1693721698,"owned_by":"Meta","active":true,"context_window":8192,"public_apps":null},{"id":"whisper-large-v3","object":"model","created":1693721698,"owned_by":"OpenAI","active":true,"context_window":448,"public_apps":null},{"id":"gemma2-9b-it","object":"model","created":1693721698,"owned_by":"Google","active":true,"context_window":8192,"public_apps":null},{"id":"llama-3.1-70b-versatile","object":"model","created":1693721698,"owned_by":"Meta","active":true,"context_window":131072,"public_apps":null},{"id":"mixtral-8x7b-32768","object":"model","created":1693721698,"owned_by":"Mistral AI","active":true,"context_window":32768,"public_apps":null},{"id":"llama3-8b-8192","object":"model","created":1693721698,"owned_by":"Meta","active":true,"context_window":8192,"public_apps":null},{"id":"llama3-groq-8b-8192-tool-use-preview","object":"model","created":1693721698,"owned_by":"Groq","active":true,"context_window":8192,"public_apps":null},{"id":"llama3-groq-70b-8192-tool-use-preview","object":"model","created":1693721698,"owned_by":"Groq","active":true,"context_window":8192,"public_apps":null}]}
%% Cell type:markdown id: tags:
* Send a request via curl to the OpenAI API to generate embeddings for a given text input using the specified model.
<br/><i>(Note: Groq does not provide embedding models.)</i>
%% Cell type:code id: tags:
``` python
!curl https://api.openai.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model": "text-embedding-3-small", "input": "Ottos Mops kotzt"}'
# alternative input:
# -d '{"model": "text-embedding-3-small", "input": "Your string to vectorize here."}'
```
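%% Cell type:markdown id: tags:
The embeddings endpoint returns one vector of floats per input string; retrieval then boils down to comparing such vectors, most commonly via cosine similarity. A minimal pure-Python sketch (the toy vectors below are made up, not real API output):
%% Cell type:code id: tags:
``` python
import math

# toy 4-dimensional vectors standing in for real embedding output
v1 = [0.1, 0.3, -0.2, 0.8]
v2 = [0.05, 0.28, -0.15, 0.75]

def cosine_similarity(a, b):
    # dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(v1, v2))
```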
%% Cell type:markdown id: tags:
* Unsure which embedding models are available?
%% Cell type:code id: tags:
``` python
!curl https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY" | grep embed
```
%% Output
"id": "text-embedding-3-large",
"id": "text-embedding-ada-002",
"id": "text-embedding-3-small",
%% Cell type:markdown id: tags:
* Chat completion via API
%% Cell type:code id: tags:
``` python
!curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{ "model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello, how can I get a chat completion?"}], "max_tokens": 100 }'
```
%% Output
{
"id": "chatcmpl-A96L0c3EjuDQpcQuq4xHS5gD2PbGD",
"object": "chat.completion",
"created": 1726732678,
"model": "gpt-3.5-turbo-0125",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "You can get a chat completion by following these steps:\n\n1. Engage in meaningful conversation by asking questions and actively listening to the other person's responses.\n2. Stay attentive and responsive throughout the chat to keep the conversation going and maintain a connection.\n3. Show empathy and understanding towards the other person’s feelings and thoughts.\n4. End the chat on a positive note by expressing gratitude, summarizing key points, and providing any necessary follow-up information.\n5. Follow up with a polite closing statement",
"refusal": null
},
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 17,
"completion_tokens": 100,
"total_tokens": 117,
"completion_tokens_details": {
"reasoning_tokens": 0
}
},
"system_fingerprint": null
}
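%% Cell type:markdown id: tags:
The same chat-completions request can also be built without curl, using only the Python standard library; sending it requires a valid OPENAI_API_KEY in the environment (the request construction is sketched here, the actual call is left commented):
%% Cell type:code id: tags:
``` python
import json
import os
import urllib.request

# the same payload as the curl call above
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello, how can I get a chat completion?"}],
    "max_tokens": 100,
}
body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
    },
)
# uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```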