All Questions
61 questions
0 votes · 0 answers · 53 views
Streamlit LLM chatbot - Feedback not captured in LangSmith
I'm running a Streamlit app with an integrated LLM chatbot using LangChain, LangSmith and OpenAI. I'm trying to record thumbs-up feedback in the chatbot after each response from the chain and pass ...
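A minimal sketch of one way to wire this up, assuming the app's chain already exists (`chain` is a placeholder), capturing the run ID with collect_runs and attaching feedback through the LangSmith client; the feedback key "thumbs_up" is illustrative:

```python
import streamlit as st
from langchain_core.tracers.context import collect_runs
from langsmith import Client

client = Client()  # reads the LangSmith API key from the environment

if question := st.chat_input("Ask the chatbot"):
    # Capture the run ID of this invocation so feedback can be attached to it later.
    with collect_runs() as cb:
        st.session_state.last_answer = chain.invoke(question)  # `chain` is the app's existing chain
        st.session_state.last_run_id = cb.traced_runs[0].id

if "last_answer" in st.session_state:
    st.write(st.session_state.last_answer)
    # Send a thumbs-up to LangSmith for the run that produced the last answer.
    if st.button("👍"):
        client.create_feedback(
            run_id=st.session_state.last_run_id,
            key="thumbs_up",  # illustrative feedback key
            score=1,
        )
```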
0 votes · 1 answer · 545 views
How to implement a streaming response from a local Ollama LLM in a Streamlit app?
I'm a little confused by the documentation of the components. I need to stream output from a local language model into a streaming interface. I know there is a new method, st.write_stream, ...
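A minimal sketch, assuming ChatOllama from langchain_community; st.write_stream consumes a generator of text chunks, so the model's .stream() output is adapted to plain strings (the model name is illustrative):

```python
import streamlit as st
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3")  # illustrative local model name

if prompt := st.chat_input("Ask something"):
    with st.chat_message("assistant"):
        # .stream() yields message chunks; st.write_stream expects a generator of
        # text deltas, so yield each chunk's content as a plain string.
        def token_stream():
            for chunk in llm.stream(prompt):
                yield chunk.content

        full_response = st.write_stream(token_stream())
```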
0 votes · 0 answers · 16 views
How to trace the current run and attach feedback text to the run
I am working on an LLM application built on a RAG architecture using LangChain features, and I use Streamlit for my front end.
Now I want to use LangSmith to trace each answer and ...
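A minimal sketch of one approach: the app's RAG chain is a placeholder (`rag_chain`), the run ID is captured with collect_runs, and the free-form comment is attached with the LangSmith client (the key "user_comment" is illustrative):

```python
import streamlit as st
from langchain_core.tracers.context import collect_runs
from langsmith import Client

client = Client()

if question := st.chat_input("Ask a question"):
    with collect_runs() as cb:
        st.session_state.answer = rag_chain.invoke(question)  # `rag_chain` is the existing RAG chain
        st.session_state.run_id = cb.traced_runs[0].id        # ID of the trace in LangSmith

if "answer" in st.session_state:
    st.write(st.session_state.answer)
    feedback_text = st.text_area("Comment on this answer")
    if st.button("Submit feedback") and feedback_text:
        # Attach the free-form feedback text to the traced run as a comment.
        client.create_feedback(
            run_id=st.session_state.run_id,
            key="user_comment",  # illustrative key name
            comment=feedback_text,
        )
```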
0 votes · 0 answers · 78 views
LangChain-based Streamlit RAG app: why are the chunks and vector store recomputed on every question?
I'm building a Retrieval Augmented Generation (RAG) pipeline using LangChain, and I'm encountering an issue where my vectorstore seems to be recomputed every time I pass a new question to the pipeline. ...
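One common cause is rebuilding the index inside the per-question code path on every Streamlit rerun; a sketch of caching it with st.cache_resource, assuming FAISS and a HuggingFace embedding model (load_and_split is a hypothetical helper for loading and chunking the document):

```python
import streamlit as st
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

@st.cache_resource  # build the index once per process, not on every rerun/question
def build_vectorstore(pdf_path: str):
    docs = load_and_split(pdf_path)  # hypothetical helper: load the PDF and chunk it
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    return FAISS.from_documents(docs, embeddings)

vectorstore = build_vectorstore("my_docs.pdf")  # cached after the first call
retriever = vectorstore.as_retriever()
```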
0 votes · 0 answers · 20 views
Why is the error below displayed in my Streamlit app even after I installed langchain-community?
ModuleNotFoundError: Module langchain_community.vectorstores not found. Please install langchain-community to access this module. You can install it using pip install -U langchain-community
2024-07-31 ...
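This error usually means the package is missing from the environment that actually runs the app; a hedged sketch of the fix, assuming a local virtualenv or a requirements.txt on Streamlit Community Cloud:

```python
# Install into the same environment that runs the Streamlit app:
#   pip install -U langchain-community
# On Streamlit Community Cloud, add `langchain-community` to requirements.txt
# and redeploy instead of installing locally.

from langchain_community.vectorstores import FAISS  # should import cleanly once installed
```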
0 votes · 0 answers · 411 views
How to Access the Intermediate Steps of a LangChain ReAct Agent in Real Time While Waiting for the Final Response?
I am working with the ReAct agent from the LangChain library and I need to access and display the intermediate steps (interactions) in real time while waiting for the final response. Currently, my code ...
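One common approach is the StreamlitCallbackHandler, which renders each Thought/Action/Observation step into a container as the agent produces it; a sketch assuming an existing agent_executor:

```python
import streamlit as st
from langchain_community.callbacks import StreamlitCallbackHandler

if prompt := st.chat_input("Ask the agent"):
    # The handler draws each Thought/Action/Observation step into the given
    # container as the agent emits it, before the final answer arrives.
    st_callback = StreamlitCallbackHandler(st.container())
    result = agent_executor.invoke(  # `agent_executor` is the app's existing agent
        {"input": prompt},
        config={"callbacks": [st_callback]},
    )
    st.write(result["output"])
```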
0 votes · 0 answers · 190 views
My LLM application in Streamlit (using Python) takes a long time to generate the response
I am creating an LLM application using Ollama, LangChain, RAG and Streamlit, with Mistral as my LLM model from Ollama. However, after uploading the PDF file in Streamlit, it takes a very long time ...
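Slowness like this often comes from recreating models and re-embedding the PDF on every rerun; a sketch of caching the heavy objects with st.cache_resource, assuming Ollama and OllamaEmbeddings from langchain_community:

```python
import streamlit as st
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings

@st.cache_resource  # create the model clients once per process, not on every rerun
def get_llm():
    return Ollama(model="mistral")

@st.cache_resource
def get_embeddings():
    return OllamaEmbeddings(model="mistral")

llm = get_llm()
embeddings = get_embeddings()
# The uploaded PDF should likewise be chunked and embedded once (e.g. cached and
# keyed on the file's name or hash), not inside the per-question code path.
```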
0 votes · 0 answers · 211 views
Streamlit application multithreaded by default, but not synchronized
I am making a Streamlit application, a chatbot, that should refer to a list of chat messages.
For this purpose, I have stored those chat messages in a variable "messages" in the ...
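Module-level variables are shared by every session and thread, whereas st.session_state is scoped to a single browser session; a minimal sketch of keeping the messages there instead:

```python
import streamlit as st

# st.session_state is scoped to one browser session, so messages from different
# users or threads no longer overwrite each other the way a module-level list would.
if "messages" not in st.session_state:
    st.session_state.messages = []

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})

for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])
```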
0 votes · 1 answer · 160 views
I can't get any PDF uploads to be read
The app is supposed to read multiple PDFs, but I can't get even a single PDF to work because of this issue. Any help is appreciated.
I received the error:
AttributeError: 'bytes' object has no ...
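The truncated AttributeError is consistent with passing raw bytes where a file-like object is expected; a hedged sketch assuming a pypdf-style reader, wrapping the upload's bytes in BytesIO (the UploadedFile can usually be passed directly as well):

```python
import io

import streamlit as st
from pypdf import PdfReader  # assuming a pypdf/PyPDF2-style reader is in use

uploaded_files = st.file_uploader("Upload PDFs", type="pdf", accept_multiple_files=True)
for uploaded in uploaded_files or []:
    # PdfReader expects a path or a file-like object, not raw bytes, so wrap
    # the upload's bytes in BytesIO instead of passing uploaded.read() directly.
    reader = PdfReader(io.BytesIO(uploaded.getvalue()))
    text = "".join(page.extract_text() or "" for page in reader.pages)
    st.write(f"{uploaded.name}: extracted {len(text)} characters")
```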
0 votes · 0 answers · 170 views
(this Thought/Action/Action Input/Observation can repeat N times): agent enters an infinite loop after giving the answer in the terminal
I am building a LangChain CSV agent with Streamlit. It works well: upon upload of a CSV file, when a question is asked it gives an answer while entering the AgentExecutor chain.
I can see the answer in "...
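A hedged sketch of one common mitigation: cap the iterations and let the executor tolerate parsing errors. The parameter names assume that langchain_experimental's create_csv_agent forwards them to the underlying AgentExecutor:

```python
from langchain_experimental.agents import create_csv_agent

# Capping iterations and tolerating parsing errors are common ways to stop the
# Thought/Action loop from continuing after the final answer. Newer releases of
# langchain_experimental may also require allow_dangerous_code=True.
agent = create_csv_agent(
    llm,             # the app's existing LLM object
    "uploaded.csv",  # path to the uploaded CSV (illustrative)
    verbose=True,
    max_iterations=5,
    early_stopping_method="force",
    agent_executor_kwargs={"handle_parsing_errors": True},
)
response = agent.invoke({"input": question})  # `question` comes from the app
```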
0 votes · 0 answers · 93 views
IndexError: list index out of range. Embedding error
I am building a Q&A chatbot using LangChain. While uploading on Streamlit, it throws the error given below. The error points to the line
FAISS.from_documents(st.session_state.final_documents,st....
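This IndexError often surfaces when the document list handed to FAISS.from_documents is empty (for example, the splitter produced no chunks); a sketch of a guard that fails early with a readable message:

```python
import streamlit as st
from langchain_community.vectorstores import FAISS

docs = st.session_state.final_documents

# An empty document list is a frequent cause of "list index out of range" inside
# the embedding call, so fail early with a readable message instead.
if not docs:
    st.error("No document chunks were produced; check the loader/splitter step.")
    st.stop()

st.session_state.vectors = FAISS.from_documents(docs, st.session_state.embeddings)
```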
1 vote · 0 answers · 453 views
OpenAI APIConnectionError: Connection error
I consistently receive an openai.APIConnectionError: Connection error message when attempting to use the API while working on a project with LangChain and Streamlit.
I have tried debugging it by ...
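A minimal connectivity check outside Streamlit and LangChain can narrow down whether the key, network, or proxy is at fault; a sketch assuming the official openai client (the model name is illustrative):

```python
import os

from openai import OpenAI

# Bare-bones connectivity test, independent of Streamlit and LangChain.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
try:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp.choices[0].message.content)
except Exception as exc:  # APIConnectionError typically points at network/proxy/SSL settings
    print(f"Connection test failed: {exc}")
```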
2 votes · 2 answers · 201 views
How can I make a PDF chatbot that keeps the context of each access separate?
I’m a beginner user of Streamlit and I’ve created a chatbot with Gemini to ask questions in the context of .PDF files. The deploy was done in the Streamlit Community Cloud
The Github is here (app.py). ...
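st.session_state is already isolated per browser session, so keeping the chat history (and any chain memory) there, rather than in module-level variables or a shared cache, keeps each visitor's context separate; a sketch assuming the app's QA chain is called qa_chain:

```python
import streamlit as st

# Everything in st.session_state is isolated per browser session, so two visitors
# asking about different PDFs never see each other's history.
if "history" not in st.session_state:
    st.session_state.history = []

if question := st.chat_input("Ask about your PDF"):
    # `qa_chain` and its input/output keys are placeholders for the app's chain.
    result = qa_chain.invoke({"question": question, "chat_history": st.session_state.history})
    st.session_state.history.append((question, result))

for q, a in st.session_state.history:
    st.chat_message("user").write(q)
    st.chat_message("assistant").write(a)
```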
1 vote · 0 answers · 144 views
How can I use LangChain's ConversationChain?
if 'conversation_memory' not in st.session_state:
st.session_state.conversation_memory = ConversationBufferMemory(human_prefix="user", ai_prefix="ai")
memory = st....
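A minimal sketch of wiring ConversationChain to memory kept in st.session_state, assuming the classic langchain.chains API (deprecated in newer releases) and that `llm` and `user_input` already exist in the app:

```python
import streamlit as st
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

if "conversation_memory" not in st.session_state:
    st.session_state.conversation_memory = ConversationBufferMemory(
        human_prefix="user", ai_prefix="ai"
    )

# Reuse the session-scoped memory so the chain sees the previous turns on each rerun.
conversation = ConversationChain(llm=llm, memory=st.session_state.conversation_memory)
reply = conversation.predict(input=user_input)
st.write(reply)
```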
0 votes · 0 answers · 317 views
Issues with the ChromaDB vector store when deploying to Streamlit due to the sqlite3 version in the deployment environment
I'm deploying an app on Streamlit which uses ChromaDB as the vector store. The app works perfectly fine locally, but in the deployment environment it throws this error:
RuntimeError: Your system has an ...
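The widely shared workaround on Streamlit Community Cloud is to install pysqlite3-binary and swap it in for the stdlib sqlite3 before Chroma is imported; a sketch:

```python
# Put this at the very top of the entry-point script, before chromadb (or
# LangChain's Chroma wrapper) is imported. Requires `pysqlite3-binary` in requirements.txt.
__import__("pysqlite3")
import sys
sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")

import chromadb  # now sees the newer bundled SQLite instead of the system one
```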