 
In this tutorial, we'll build a simple chat interface that lets users upload a PDF, retrieve its content using OpenAI's API, and display the responses in a chat-like interface built with Streamlit. We'll also use @pinata to upload and store the PDF files.
Before we continue, let's take a look at what we're building:
https://vimeo.com/1018294716?share=copy
Prerequisites:
Basic knowledge of Python
A Pinata API key (for uploading PDFs)
An OpenAI API key (for generating responses)
Streamlit installed (for building the UI)
Start by creating a new Python project directory:
mkdir chat-with-pdf
cd chat-with-pdf
python3 -m venv venv
source venv/bin/activate
pip install streamlit openai requests PyPDF2
Now, create a .env file in the root of your project and add the following environment variables:
PINATA_API_KEY=<Your Pinata API Key>
PINATA_SECRET_API_KEY=<Your Pinata Secret Key>
OPENAI_API_KEY=<Your OpenAI API Key>
Note that the OPENAI_API_KEY is a paid key and you need to manage it yourself.
So, before going any further, let's look at what Pinata is and why we're using it.

Pinata is a service that provides a platform for storing and managing files on IPFS (InterPlanetary File System), a decentralized, distributed file storage system.
Decentralized storage: Pinata helps you store files on IPFS, a decentralized network.
Easy to use: it offers user-friendly tools and APIs for file management.
File availability: Pinata keeps files accessible by "pinning" them on IPFS (see the sketch after this list).
NFT support: great for storing metadata for NFTs and Web3 applications.
Cost-effective: Pinata can be a cheaper alternative to traditional cloud storage.
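To make "pinning" a bit more concrete, here is a minimal sketch (not part of the tutorial's own code) that fetches a pinned file back from IPFS through Pinata's public gateway. The gateway URL pattern and the placeholder CID are assumptions used only for illustration:
import requests

# Minimal sketch: retrieve a pinned file from IPFS via Pinata's public gateway.
# The CID below is a placeholder; in this tutorial it comes from the upload step.
cid = "<IPFS hash (CID) of a pinned file>"
gateway_url = f"https://gateway.pinata.cloud/ipfs/{cid}"

response = requests.get(gateway_url, timeout=30)
if response.status_code == 200:
    # Save the retrieved bytes locally
    with open("downloaded.pdf", "wb") as f:
        f.write(response.content)
else:
    print(f"Could not fetch file: {response.status_code}")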
Let's create the required tokens by signing in:

The next step is to verify your registered email:

After verifying and logging in, generate the API keys:

After that, go to the API Keys section and create a new API key:

Finally, the keys are generated successfully.

Copy the generated values into your .env file:
OPENAI_API_KEY=<Your OpenAI API Key>
PINATA_API_KEY=dfc05775d0c8a1743247
PINATA_SECRET_API_KEY=a54a70cd227a85e68615a5682500d73e9a12cd211dfbf5e25179830dc8278efc
We'll use Pinata's API to upload PDFs and get a hash (CID) for each file. Create a file named pinata_helper.py to handle the PDF uploads.
import os  # Import the os module to interact with the operating system
import requests  # Import the requests library to make HTTP requests
from dotenv import load_dotenv  # Import load_dotenv to load environment variables from a .env file
# Load environment variables from the .env file
load_dotenv()
# Define the Pinata API URL for pinning files to IPFS
PINATA_API_URL = "https://api.pinata.cloud/pinning/pinFileToIPFS"
# Retrieve Pinata API keys from environment variables
PINATA_API_KEY = os.getenv("PINATA_API_KEY")
PINATA_SECRET_API_KEY = os.getenv("PINATA_SECRET_API_KEY")
def upload_pdf_to_pinata(file_path):
    """
    Uploads a PDF file to Pinata's IPFS service.
    Args:
        file_path (str): The path to the PDF file to be uploaded.
    Returns:
        str: The IPFS hash of the uploaded file if successful, None otherwise.
    """
    # Prepare headers for the API request with the Pinata API keys
    headers = {
        "pinata_api_key": PINATA_API_KEY,
        "pinata_secret_api_key": PINATA_SECRET_API_KEY
    }
    # Open the file in binary read mode
    with open(file_path, 'rb') as file:
        # Send a POST request to Pinata API to upload the file
        response = requests.post(PINATA_API_URL, files={'file': file}, headers=headers)
        # Check if the request was successful (status code 200)
        if response.status_code == 200:
            print("File uploaded successfully")  # Print success message
            # Return the IPFS hash from the response JSON
            return response.json()['IpfsHash']
        else:
            # Print an error message if the upload failed
            print(f"Error: {response.text}")
            return None  # Return None to indicate failure
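If you want to quickly test this helper on its own, you can append a small guard at the bottom of pinata_helper.py; the file name "sample.pdf" below is just a placeholder:
if __name__ == "__main__":
    # Quick manual test: upload a local PDF and print the returned CID.
    cid = upload_pdf_to_pinata("sample.pdf")  # Replace with a real PDF path
    print("CID:", cid)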
Step 3: Set up OpenAI
Next, we'll create a function that uses the OpenAI API to interact with the text extracted from the PDF. We'll use OpenAI's gpt-4o or gpt-4o-mini model for the chat responses.
Create a new file, openai_helper.py:
import os
from openai import OpenAI
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
# Initialize OpenAI client with the API key
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=OPENAI_API_KEY)
def get_openai_response(text, pdf_text):
    try:
        # Create the chat completion request
        print("User Input:", text)
        print("PDF Content:", pdf_text)  # Optional: for debugging
        # Combine the user's input and PDF content for context
        messages = [
            {"role": "system", "content": "You are a helpful assistant for answering questions about the PDF."},
            {"role": "user", "content": pdf_text},  # Providing the PDF content
            {"role": "user", "content": text}  # Providing the user question or request
        ]
        response = client.chat.completions.create(
            model="gpt-4",  # Use "gpt-4" or "gpt-4o mini" based on your access
            messages=messages,
            max_tokens=100,  # Adjust as necessary
            temperature=0.7  # Adjust to control response creativity
        )
        # Extract and return the content of the response message
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {str(e)}"
Now that our helper functions are ready, it's time to build the Streamlit app that uploads the PDF, gets responses from OpenAI, and displays the chat.
Create a file named app.py:
import streamlit as st
import os
import time
from pinata_helper import upload_pdf_to_pinata
from openai_helper import get_openai_response
from PyPDF2 import PdfReader
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
st.set_page_config(page_title="Chat with PDFs", layout="centered")
st.title("Chat with PDFs using OpenAI and Pinata")
uploaded_file = st.file_uploader("Upload your PDF", type="pdf")
# Initialize session state for chat history and loading state
if "chat_history" not in st.session_state:
    st.session_state.chat_history = []
if "loading" not in st.session_state:
    st.session_state.loading = False
if uploaded_file is not None:
    # Save the uploaded file temporarily (create the temp directory if needed)
    os.makedirs("temp", exist_ok=True)
    file_path = os.path.join("temp", uploaded_file.name)
    with open(file_path, "wb") as f:
        f.write(uploaded_file.getbuffer())
    # Upload PDF to Pinata
    st.write("Uploading PDF to Pinata...")
    pdf_cid = upload_pdf_to_pinata(file_path)
    if pdf_cid:
        st.write(f"File uploaded to IPFS with CID: {pdf_cid}")
        # Extract PDF content
        reader = PdfReader(file_path)
        pdf_text = ""
        for page in reader.pages:
            pdf_text += page.extract_text() or ""  # extract_text() can return None
        if pdf_text:
            st.text_area("PDF Content", pdf_text, height=200)
            # Allow user to ask questions about the PDF
            user_input = st.text_input("Ask something about the PDF:", disabled=st.session_state.loading)
            if st.button("Send", disabled=st.session_state.loading):
                if user_input:
                    # Set loading state to True
                    st.session_state.loading = True
                    # Display loading indicator
                    with st.spinner("AI is thinking..."):
                        # Simulate loading with sleep (remove in production)
                        time.sleep(1)  # Simulate network delay
                        # Get AI response
                        response = get_openai_response(user_input, pdf_text)
                    # Update chat history
                    st.session_state.chat_history.append({"user": user_input, "ai": response})
                    # Reset loading state
                    st.session_state.loading = False
            # Display chat history
            if st.session_state.chat_history:
                for chat in st.session_state.chat_history:
                    st.write(f"**You:** {chat['user']}")
                    st.write(f"**AI:** {chat['ai']}")
                # Auto-scroll to the bottom of the chat
                st.write("<style>div.stChat {overflow-y: auto;}</style>", unsafe_allow_html=True)
                # Add three dots as a loading indicator if still waiting for response
                if st.session_state.loading:
                    st.write("**AI is typing** ...")
        else:
            st.error("Could not extract text from the PDF.")
    else:
        st.error("Failed to upload PDF to Pinata.")
To run the app locally, use the following command:
streamlit run app.py
Our file has been successfully uploaded to the Pinata platform:

Pinata upload
PDF extraction
OpenAI interaction
The final code is available in this GitHub repository:
https://github.com/Jagroop2001/chat-with-pdf
That's all for this blog! Stay tuned for more updates and keep building amazing apps! 💻✨
Happy coding! 😊
Originally published at: https://dev.to/jagroop2001/building-a-chat-with-pdfs-using-pinataopenai-and-streamlit-3jb7