add pagination support to list #26

Open · wants to merge 1 commit into base: main
41 changes: 39 additions & 2 deletions pwclient/api.py
@@ -581,6 +581,28 @@ def _detail(
    data, _ = self._get(url)
    return json.loads(data)

@staticmethod
def _get_next_page(headers):
    link_header = next((data for header, data in headers if header == 'Link'), None)
    if link_header is None:
        return None

    rel = '; rel="next"'
ukleinek (Contributor) commented on May 22, 2025:

According to MDN's documentation about the Link header, this is overspecific: there may be parameters other than rel, and the next value may also be unquoted. I looked up a few implementations for parsing the Link header and found several different ones. All in all, this isn't trivial to implement if you also want it to be somewhat robust. My position here would be to use the implementation from the requests package rather than add another imperfect one.
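For reference, the requests parser mentioned in this comment can be used on its own; a minimal sketch (the URLs here are made up for illustration):

```python
# Sketch: parse a Link header with requests' built-in parser instead of
# hand-rolling one; it tolerates extra parameters and unquoted rel values.
from requests.utils import parse_header_links

header = ('<https://patchwork.example.com/api/patches/?page=2>; rel="next", '
          '<https://patchwork.example.com/api/patches/?page=7>; rel="last"')

links = parse_header_links(header)
next_url = next((link['url'] for link in links if link.get('rel') == 'next'), None)
print(next_url)  # https://patchwork.example.com/api/patches/?page=2
```

parse_header_links returns a list of dicts with a 'url' key plus any link parameters, so the rel lookup stays a one-liner.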

The author (Contributor) replied:

That was my original thought. However, it was pointed out that we really only have to deal with the Link header implementation from patchwork itself.


    url = next((link[:-len(rel)] for link in link_header.split(',') if link.endswith(rel)), None)
    if url is None:
        return None

    if not (url.startswith('<') and url.endswith('>')):
        return None

    parsed_link = urllib.parse.urlparse(url[1:-1])
    page = next((x for x in parsed_link.query.split('&') if x.startswith('page=')), None)
A reviewer (Contributor) commented:

You make an effort here to extract the page number just to reconstruct url[1:-1] in _list() below.
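A hypothetical variant addressing this point would return the next-page URL directly, so the caller can request it as-is instead of rebuilding it from the page number (next_page_url and the example URL are illustrative, not the PR's code):

```python
def next_page_url(link_header):
    # Return the URL inside the '; rel="next"' entry of a
    # patchwork-style Link header, or None if there is none.
    rel = '; rel="next"'
    for entry in link_header.split(','):
        entry = entry.strip()
        if entry.endswith(rel) and entry.startswith('<') and entry[:-len(rel)].endswith('>'):
            return entry[1 : -len(rel) - 1]  # strip '<', '>' and the rel suffix
    return None

header = '<https://pw.example.org/api/patches/?page=3>; rel="next"'
print(next_page_url(header))  # https://pw.example.org/api/patches/?page=3
```

This keeps the same patchwork-only assumptions as the PR's parser while removing the number-extract-then-rebuild round trip.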

    if page is None:
        return None

    return int(page[5:])

def _list(
    self,
    resource_type,
@@ -594,8 +616,23 @@ def _list(
        url = f'{url}{resource_id}/{subresource_type}/'
    if params:
        url = f'{url}?{urllib.parse.urlencode(params)}'
    data, _ = self._get(url)
    return json.loads(data)
    data, headers = self._get(url)

    items = json.loads(data)

    page_nr = self._get_next_page(headers)
    if page_nr is None:
        return items

    if params is None:
        params = {}
    params['page'] = page_nr

    items += self._list(resource_type, params,
                        resource_id=resource_id,
                        subresource_type=subresource_type)

    return items
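The recursive accumulation in _list() can equally be written as a loop, which avoids growing the call stack on very long listings. A standalone sketch with a stubbed fetch function (list_all and fetch are illustrative names, not pwclient API):

```python
import json

def list_all(fetch, params=None):
    # fetch(params) returns (json_text, next_page_number_or_None),
    # standing in for self._get() plus _get_next_page() in the diff above.
    items = []
    page = None
    while True:
        if page is not None:
            params = dict(params or {})  # copy so the caller's dict is untouched
            params['page'] = page
        data, page = fetch(params)
        items += json.loads(data)
        if page is None:
            return items
```

The loop fetches page after page until no next-page indication remains, concatenating the decoded results exactly as the recursive version does.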

# project
