depager

Trades REST API paging logic and request throttling for a channel and a for loop.

depager requires values conforming to the following interfaces:

/*
The `Page` interface must wrap server responses. This
allows pagers to calculate page sizes and iterate over
response aggregates.
*/
type Page[T any] interface {
  // Elems must return the items from the current page
  Elems() []T
}

// Client exposes the part of the client that depager understands.
type Client[T any] interface {
  // NextPage returns the next page or it returns an error
  NextPage(
    page Page[T],
    offset uint64, // item offset at which to start page
  ) (
    count uint64, // total count of all items being paged
    err error,
  )
}

And in return, depager provides the following:

type Pager[T any] interface {
  // Iter is intended to be used in a for-range loop
  Iter() <-chan T

  // LastErr must return the first error encountered, if any
  LastErr() error
}

Example

package main

import (
  "context"
  "fmt"
  "io"
  "log"
  "net/http"
  "net/url"
  "os"

  dp "idio.link/go/depager/v3"
)

type MyClient struct {
  pageSize uint64
  // more fields
}

func (c *MyClient) get(
  uri *url.URL,
) (head http.Header, body io.ReadCloser, err error) {
  // do things
  return
}

func (c *MyClient) pagify(
  pathURI *url.URL,
  first,
  last uint64,
) (uri *url.URL) {
  // glue path to base URI
  return
}

func (c *MyClient) Things(id int) dp.Pager[Thing] {
  // TODO validate; if used elsewhere, take boxed id instead
  pathURI, err := url.Parse(fmt.Sprintf("/pile/%d/things", id))
  if err != nil {
    log.Panicf("parse path: %v", err) // cannot happen: path is static
  }
  subClient := &MySubclient[Thing]{
    MyClient: *c,
    path:     pathURI,
  }

  // The page pool length determines the number of pages
  // that depager will buffer. All pages must have equal,
  // nonzero capacity. The capacity of the first page is
  // taken to be the page size for all requests.
  pagePool := make(chan Page[Thing], 4)
  for i := 0; i < cap(pagePool); i++ {
    tmp := MyAggregate[Thing](make([]Thing, 0, c.pageSize))
    pagePool <- &tmp
  }
  return dp.NewPager(subClient, pagePool)
}

type MySubclient[T any] struct {
  MyClient

  path *url.URL
}

func (c *MySubclient[T]) NextPage(
  page dp.Page[T],
  offset uint64,
) (totalItems uint64, err error) {
  /*
  Different APIs use different conventions. Simply map 
  `offset` to the apposite fields or headers with whatever
  semantics the server expects.

  Most days, the page size should be the largest that the
  server is willing to accommodate.
  */
  first := offset
  last := first + c.pageSize - 1

  uri := c.pagify(c.path, first, last)
  header, body, err := c.get(uri)
  if err != nil {
    return 0, err
  }
  defer body.Close()

  var src []T
  _ = header // parse header and body into src and totalItems, etc.

  dst := *page.(*MyAggregate[T])
  dst = dst[:min(cap(dst), len(src))]
  copy(dst, src)                // update values
  *page.(*MyAggregate[T]) = dst // update slice

  /*
  When returning the total count of all items to be paged,
  if the server API only provides you the total number of
  pages, simply calculate total*pageSize and return that.
  */
  return totalItems, nil
}

type MyAggregate[T any] []T

func (a *MyAggregate[T]) Elems() []T {
  return []T(*a)
}

type Thing struct {
  Id   int32  `json:"id"`
  Name string `json:"name"`
}

func main() {
  ctx, cancel := context.WithCancel(context.Background())
  defer cancel()
  client := NewMyClient(...)
  id := 82348
  pager := client.Things(id)
  for e := range pager.Iter() {
    // there could be many pages; stay responsive!
    if ctx.Err() != nil {
      break
    }
    _ = e // do stuff with each thing
  }

  // finally, check for errors
  if err := pager.LastErr(); err != nil {
    log.Printf("do stuff with things: %v", err)
    os.Exit(1)
  }
}

Contributing

This project will accept (merge/rebase/squash) all contributions. Contributions that break CI (once it is introduced) will be reverted.

For details, please see Why Optimistic Merging Works Better.

Why?

In a world of plentiful memory, application-level paging, when implemented over a buffered, flow-controlled protocol like TCP, becomes a needless redundancy, at best; at worst, it is a mindless abnegation of system-level affordances. But since we twisted the web into a transport layer, we no longer have the option of leveraging underlying flow control, and workarounds like QUIC only double down on this bizarre state of affairs.

Since I have no expectation that web-as-a-transport will disappear anytime soon, I may as well recapitulate the same basic flow control that TCP would provide us, if only we let it.
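The backpressure in question can be seen in miniature with a bounded channel: the producer blocks once the consumer falls behind, much as a TCP sender stalls when the receive window closes. This is only a sketch of the principle, not depager's internals:

```go
package main

import "fmt"

func main() {
	// A buffer of 2 bounds the number of unconsumed "pages" in
	// flight. Sends block while the buffer is full, so the
	// producer can never run ahead of the consumer by more than
	// the buffer size.
	pages := make(chan int, 2)
	go func() {
		for i := 1; i <= 5; i++ {
			pages <- i // blocks while two pages await consumption
		}
		close(pages)
	}()
	for p := range pages {
		fmt.Println("consumed page", p)
	}
}
```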