PyPDFium2Loader
This notebook provides a quick overview for getting started with the PyPDFium2 document loader. For detailed documentation of all DocumentLoader features and configurations, head to the API reference.
Overview
Integration details
| Class | Package | Local | Serializable | JS support |
|---|---|---|---|---|
| PyPDFium2Loader | langchain_community | ✅ | ❌ | ❌ |
Loader features
| Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables |
|---|---|---|---|---|
| PyPDFium2Loader | ✅ | ❌ | ✅ | ❌ |
Setup
Credentials
No credentials are required to use PyPDFium2Loader.
If you want automated, best-in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# import getpass
# import os

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
Install langchain_community and pypdfium2.
%pip install -qU langchain_community pypdfium2
%pip install -qq ../../../../dist/patch_langchain_pdf_loader*.whl
Note: you may need to restart the kernel to use updated packages.
Initialization
Now we can instantiate our loader object and load documents:
from langchain_community.document_loaders import PyPDFium2Loader
file_path = "./example_data/layout-parser-paper.pdf"
loader = PyPDFium2Loader(file_path)
Load
docs = loader.load()
docs[0]
Document(metadata={'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationdate': '2021-06-22T01:27:10+00:00', 'moddate': '2021-06-22T01:27:10+00:00', 'source': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'page': 0}, page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen\n1\n(), Ruochen Zhang\n2\n, Melissa Dell\n3\n, Benjamin Charles Germain\nLee\n4\n, Jacob Carlson\n3\n, and Weining Li\n5\n1 Allen Institute for AI\nshannons@allenai.org 2 Brown University\nruochen zhang@brown.edu 3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\nbcgl@cs.washington.edu 5 University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im\x02portant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica\x02tions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de\x02tection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti\x02zation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis· Deep Learning· Layout Analysis\n· Character Recognition· Open Source library· Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\n')
import pprint
pprint.pp(docs[0].metadata)
{'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 0}
Lazy Load
pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(pages)
        pages = []
len(pages)
6
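The commented index.upsert above stands in for any batched operation. As a minimal runnable sketch of that pattern, the loop below indexes each batch of 10 pages into langchain_core's in-memory vector store, using a deterministic fake embedding model so it runs without credentials (swap in a real embedding model for actual use):

from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

# Stand-in embedding model; replace with a real one in production.
vector_store = InMemoryVectorStore(DeterministicFakeEmbedding(size=256))

batch = []
for doc in loader.lazy_load():
    batch.append(doc)
    if len(batch) >= 10:
        vector_store.add_documents(batch)  # index a batch of 10 pages
        batch = []
if batch:
    vector_store.add_documents(batch)  # index the remaining pages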
print(pages[0].page_content[:100])
pprint.pp(pages[0].metadata)
LayoutParser: A Unified Toolkit for DL-Based DIA 11
focuses on precision, efficiency, and robustness
{'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 10}
The metadata attribute contains the following keys:
- source
- page (only in "page" mode)
- total_pages
- creationdate
- creator
- producer
Other metadata fields are specific to each parser. This information can be helpful, e.g. to categorize your PDFs, as in the sketch below.
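For instance, a small sketch that groups the loaded documents by their producer metadata (the same pattern works for any of the keys above):

from collections import defaultdict

# Group pages by the tool that produced the PDF.
docs_by_producer = defaultdict(list)
for doc in docs:
    docs_by_producer[doc.metadata.get("producer", "unknown")].append(doc)

for producer, producer_docs in docs_by_producer.items():
    print(producer, len(producer_docs))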
Splitting mode & custom pages delimiter
When loading the PDF file, you can split it in two different ways:
- By page
- As a single text flow
By default PyPDFium2Loader will split the PDF by page.
Extract the PDF by page. Each page is extracted as a langchain Document object:
loader = PyPDFium2Loader(
"./example_data/layout-parser-paper.pdf",
mode="page",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
16
{'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 0}
In this mode, the PDF is split by pages and each resulting Document's metadata contains the page number. But in some cases we might want to process the PDF as a single text flow (so we don't cut some paragraphs in half). In this case you can use the "single" mode:
Extract the whole PDF as a single langchain Document object:
loader = PyPDFium2Loader(
"./example_data/layout-parser-paper.pdf",
mode="single",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
1
{'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'total_pages': 16}
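Since the pages are no longer pre-split, you will often want to re-chunk this single flow yourself. A minimal sketch, assuming the langchain-text-splitters package is installed, that prefers paragraph boundaries when choosing split points:

from langchain_text_splitters import RecursiveCharacterTextSplitter

# Try paragraph breaks first, then line breaks, then whitespace.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=100,
    separators=["\n\n", "\n", " ", ""],
)
chunks = splitter.split_documents(docs)
print(len(chunks))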
Logically, in this mode, the 'page' metadata disappears. Here's how to clearly identify where pages end in the text flow:
Add a custom pages_delimitor to mark the end of each page in "single" mode:
loader = PyPDFium2Loader(
"./example_data/layout-parser-paper.pdf",
mode="single",
pages_delimitor="\n-------THIS IS A CUSTOM END OF PAGE-------\n",
)
docs = loader.load()
print(docs[0].page_content[:5780])
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen
1
(), Ruochen Zhang
2
, Melissa Dell
3
, Benjamin Charles Germain
Lee
4
, Jacob Carlson
3
, and Weining Li
5
1 Allen Institute for AI
shannons@allenai.org 2 Brown University
ruochen zhang@brown.edu 3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu 5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of important innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applications. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout detection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digitization pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis· Deep Learning· Layout Analysis
· Character Recognition· Open Source library· Toolkit.
1 Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2 [cs.CV] 21 Jun 2021
-------THIS IS A CUSTOM END OF PAGE-------
2 Z. Shen et al.
37], layout detection [38, 22], table detection [26], and scene text detection [4].
A generalized learning-based framework dramatically reduces the need for the
manual specification of complicated rules, which is the status quo with traditional
methods. DL has the potential to transform DIA pipelines and benefit a broad
spectrum of large-scale document digitization projects.
However, there are several practical difficulties for taking advantages of recent advances in DL-based methods: 1) DL models are notoriously convoluted
for reuse and extension. Existing models are developed using distinct frameworks like TensorFlow [1] or PyTorch [24], and the high-level parameters can
be obfuscated by implementation details [8]. It can be a time-consuming and
frustrating experience to debug, reproduce, and adapt existing models for DIA,
and many researchers who would benefit the most from using these methods lack
the technical background to implement them from scratch. 2) Document images
contain diverse and disparate patterns across domains, and customized training
is often required to achieve a desirable detection accuracy. Currently there is no
full-fledged infrastructure for easily curating the target document image datasets
and fine-tuning or re-training the models. 3) DIA usually requires a sequence of
models and other processing to obtain the final outputs. Often research teams use
DL models and then perform further document analyses in separate processes,
and these pipelines are not documented in any central location (and often not
documented at all). This makes it difficult for research teams to learn about how
full pipelines are implemented and leads them to invest significant resources in
reinventing the DIA wheel.
LayoutParser provides a unified toolkit to support DL-based document image
analysis and processing. To address the aforementioned challenges, LayoutParser
is built with the following components:
1. An off-the-shelf toolkit for applying DL models for layout detection, character
recognition, and other DIA tasks (Section 3)
2. A rich repository of pre-trained neural network models (Model Zoo) that
underlies the off-the-shelf usage
3. Comprehensive tools for efficient document image data annotation and model
tuning to support different levels of customization
4. A DL model hub and community platform for the easy sharing, distribution, and discussion of DIA models and pipelines, to promote reusability,
reproducibility, and extensibility (Section 4)
The library implements simple and intuitive Python APIs without sacrificing
generalizability and versatility, and can be easily installed via pip. Its convenient
functions for handling document image data can be seamlessly integrated with
existing DIA pipelines. With detailed documentations and carefully curated
tutorials, we hope this tool will benefit a variety of end-users, and will lead to
advances in applications in both industry and academic research.
LayoutParser is well aligned with recent efforts for improving DL model
reusability in other disciplines like natural language processing [8, 34] and computer vision [35], but with a focus on unique challenges in DIA. We show
LayoutParser can be applied in sophisticated and large-scale digitization projects
-------THIS IS A CUSTOM END OF PAGE-------
LayoutParser: A Unified Toolkit for DL-Based DIA 3
that require precision, efficiency, and robustness, as well as s
This could simply be \n, or \f to clearly indicate a page change, or <!-- PAGE BREAK --> for a marker that stays invisible when the text is rendered in a Markdown viewer.
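Because the delimiter is a plain string, you can also use it to recover page boundaries from the single text flow. A minimal sketch:

# Split the single text flow back into per-page strings.
page_texts = docs[0].page_content.split(
    "\n-------THIS IS A CUSTOM END OF PAGE-------\n"
)
print(len(page_texts))  # one entry per page of the original PDF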
Extract images from the PDF
You can extract images from your PDFs with a choice of three different solutions:
- RapidOCR (lightweight Optical Character Recognition tool)
- Tesseract (OCR tool with high precision)
- Multimodal language model
You can tune these functions to choose the output format of the extracted images among html, markdown or text.
The result is inserted between the last and the second-to-last paragraphs of text of the page.
Extract images from the PDF with RapidOCR:
%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers.pdf import (
convert_images_to_text_with_rapidocr,
)
loader = PyPDFium2Loader(
"./example_data/layout-parser-paper.pdf",
mode="page",
extract_images=True,
images_to_text=convert_images_to_text_with_rapidocr(format="html"),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the coordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.
<img alt="Coordinate
(x1, y1)
(X1, y1)
(x2,y2)
APIS
x-interval
tart
end
Quadrilateral
operation
Rectangle
y-interval
ena
(x2, y2)
(x4, y4)
(x3, y3)
and
textblock
Coordinate
transformation
+
Block
Block
Reading
Extra features
Text
Type
Order
coordinatel
textblockl
layout
same
textblock2
layoutl
The
A list of the layout elements" />
Be careful: RapidOCR is designed to work with Chinese and English, not other languages.
Extract images from the PDF with Tesseract:
%pip install -qU pytesseract
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers.pdf import (
convert_images_to_text_with_tesseract,
)
loader = PyPDFium2Loader(
"./example_data/layout-parser-paper.pdf",
mode="page",
extract_images=True,
images_to_text=convert_images_to_text_with_tesseract(format="text"),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the coordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.
Coordinate
textblock
x-interval
JeAsaqul-A
Coordinate
+
Extra features
Rectangle
Quadrilateral
Block
Text
Block
Type
Reading
Order
layout
[ coordinatel1 textblock1 |
'
“y textblock2 , layout1 ]
A list of the layout elements
The same transformation and operation APIs
Extract images from the PDF with a multimodal model:
%pip install -qU langchain_openai
Note: you may need to restart the kernel to use updated packages.
import os
from dotenv import load_dotenv
load_dotenv()
True
from getpass import getpass
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers.pdf import (
convert_images_to_description,
)
from langchain_openai import ChatOpenAI
loader = PyPDFium2Loader(
"./example_data/layout-parser-paper.pdf",
mode="page",
extract_images=True,
images_to_text=convert_images_to_description(
model=ChatOpenAI(model="gpt-4o", max_tokens=1024), format="markdown"
),
)
docs = loader.load()
print(docs[5].page_content)
Working with Files
Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use open to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.
As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to reuse a given parser regardless of how the data was loaded. You can use this strategy to analyze different files with the same parsing parameters.
from langchain_community.document_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import PyPDFium2Parser
loader = GenericLoader(
blob_loader=FileSystemBlobLoader(
path="./example_data/",
glob="*.pdf",
),
blob_parser=PyPDFium2Parser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen
1
(), Ruochen Zhang
2
, Melissa Dell
3
, Benjamin Charles Germain
Lee
4
, Jacob Carlson
3
, and Weining Li
5
1 Allen Institute for AI
shannons@allenai.org 2 Brown University
ruochen zhang@brown.edu 3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu 5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of important innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applications. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout detection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digitization pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis· Deep Learning· Layout Analysis
· Character Recognition· Open Source library· Toolkit.
1 Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2 [cs.CV] 21 Jun 2021
{'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'source': 'example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 0}
It is possible to work with files from cloud storage.
from langchain_community.document_loaders import CloudBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
loader = GenericLoader(
blob_loader=CloudBlobLoader(
url="s3:/mybucket", # Supports s3://, az://, gs://, file:// schemes.
glob="*.pdf",
),
blob_parser=PyPDFium2Parser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
API reference
For detailed documentation of all PyPDFium2Loader features and configurations, head to the API reference: https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFium2Loader.html
Related
- Document loader conceptual guide
- Document loader how-to guides