NotImplementedError for vectorstore.add_documents using the new llama-text-embed-v2 embedding model. #29888
Labels
Ɑ: vector store
Related to vector store module
Example Code
embedding = PineconeEmbeddings(model="llama-text-embed-v2")
vectorstore = PineconeVectorStore(index_name=index_name, embedding=embedding)
vectorstore.add_documents(documents)
Error Message and Stack Trace (if applicable)
TypeError Traceback (most recent call last)
Cell In[6], line 10
7 uploader.add_documents(docset)
8 except Exception as e:
9 #print(docset)
---> 10 raise e
Cell In[6], line 7
5 for docset in documents:
6 try:
----> 7 uploader.add_documents(docset)
8 except Exception as e:
9 #print(docset)
10 raise e
File ~/gitrepos/repository/src/chunking/chunking_common.py:89, in BaseUploader.add_documents(self, documents)
88 def add_documents(self, documents):
---> 89 self._vectorstore.add_documents(documents)
File /opt/mambaforge/envs/llmagents/lib/python3.12/site-packages/langchain_core/vectorstores/base.py:286, in VectorStore.add_documents(self, documents, **kwargs)
284 texts = [doc.page_content for doc in documents]
285 metadatas = [doc.metadata for doc in documents]
--> 286 return self.add_texts(texts, metadatas, **kwargs)
287 msg = (
288     f"`add_documents` and `add_texts` has not been implemented "
289     f"for {self.__class__.__name__} "
290 )
291 raise NotImplementedError(msg)
File /opt/mambaforge/envs/llmagents/lib/python3.12/site-packages/langchain_pinecone/vectorstores.py:280, in PineconeVectorStore.add_texts(self, texts, metadatas, ids, namespace, batch_size, embedding_chunk_size, async_req, id_prefix, **kwargs)
278 chunk_ids = ids[i : i + embedding_chunk_size]
279 chunk_metadatas = metadatas[i : i + embedding_chunk_size]
--> 280 embeddings = self._embedding.embed_documents(chunk_texts)
281 vector_tuples = zip(chunk_ids, embeddings, chunk_metadatas)
282 if async_req:
283 # Runs the pinecone upsert asynchronously.
File /opt/mambaforge/envs/llmagents/lib/python3.12/site-packages/langchain_pinecone/embeddings.py:141, in PineconeEmbeddings.embed_documents(self, texts)
136 _iter = self._get_batch_iterator(texts)
137 for i in _iter:
138 response = self._client.inference.embed(
139 model=self.model,
140 parameters=self.document_params,
--> 141 inputs=texts[i : i + self.batch_size],
142 )
143 embeddings.extend([r["values"] for r in response])
145 return embeddings
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
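The TypeError on the last line can be reproduced in isolation. It suggests that `PineconeEmbeddings.batch_size` is left as `None` for this model (an inference from the traceback, not confirmed against the library source), so the slice bound `i + self.batch_size` fails:

```python
# Minimal reproduction of the failing expression in
# PineconeEmbeddings.embed_documents. For llama-text-embed-v2 the
# batch_size attribute apparently stays None, so computing the slice
# end index adds int and None.
texts = ["doc one", "doc two", "doc three"]
batch_size = None  # what the embeddings object appears to hold for this model

try:
    chunk = texts[0 : 0 + batch_size]
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'int' and 'NoneType'
```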
Description
I'm trying to use the new "llama-text-embed-v2" embedding model with langchain-pinecone. The error suggests that support for this model hasn't been implemented in LangChain yet. Documentation for the model:
https://docs.pinecone.io/models/llama-text-embed-v2
The model's parameters appear to be largely compatible with those of the existing default model served via the API, the key difference being support for larger chunk sizes.
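As an interim workaround, supplying a batch size explicitly may avoid the `None` arithmetic (this assumes `batch_size` is an accepted field on `PineconeEmbeddings`, as the traceback's `self.batch_size` suggests; the value 96 is an assumption borrowed from the multilingual-e5-large default, not a documented limit for this model). The batching loop in `embed_documents` then reduces to ordinary stride arithmetic:

```python
# Hypothetical workaround sketch (field name and value are assumptions):
# embedding = PineconeEmbeddings(model="llama-text-embed-v2", batch_size=96)

# With an integer batch_size, the slicing in embed_documents behaves like:
def batch_slices(n_texts: int, batch_size: int):
    """Yield (start, end) index pairs covering n_texts in batch_size steps."""
    for i in range(0, n_texts, batch_size):
        yield i, min(i + batch_size, n_texts)

print(list(batch_slices(10, 4)))  # -> [(0, 4), (4, 8), (8, 10)]
```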
System Info
langchain 0.3.19
langchain-pinecone 0.2.3