Trades REST API paging logic and request throttling for a channel and a `for` loop.
depager requires values conforming to the following interfaces:

```go
/*
The `Page` interface must wrap server responses. This
allows pagers to calculate page sizes and iterate over
response aggregates.
*/
type Page[T any] interface {
	// Elems must return the items from the current page
	Elems() []T
}

// Client exposes the part of the client that depager understands.
type Client[T any] interface {
	// NextPage returns the next page or an error
	NextPage(
		page Page[T],
		offset uint64, // item offset at which to start the page
	) (
		count uint64, // total count of all items being paged
		err error,
	)
}
```
And in return, depager provides the following:

```go
type Pager[T any] interface {
	// Iter is intended to be used in a for-range loop
	Iter() <-chan T

	// LastErr must return the first error encountered, if any
	LastErr() error
}
```
```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
	"os"

	dp "idio.link/go/depager/v3"
)

type MyClient struct {
	pageSize uint64
	// more fields
}

func (c *MyClient) get(
	uri *url.URL,
) (head http.Header, body io.ReadCloser, err error) {
	// do things
	return
}

func (c *MyClient) pagify(
	pathURI *url.URL,
	first,
	last uint64,
) (uri *url.URL) {
	// glue path to base URI
	return
}

func (c *MyClient) Things(id int) dp.Pager[Thing] {
	// TODO validate; if used elsewhere, take boxed id instead
	path := "/pile/%d/things"
	pathURI, err := url.Parse(fmt.Sprintf(path, id))
	if err != nil {
		log.Fatalf("parse path: %v", err)
	}
	subClient := &MySubclient[Thing]{
		MyClient: *c,
		path:     pathURI,
	}
	// The page pool length determines the number of pages
	// that depager will buffer. All pages must have equal,
	// nonzero capacity. The capacity of the first page is
	// taken to be the page size for all requests.
	pagePool := make(chan dp.Page[Thing], 4)
	for i := 0; i < cap(pagePool); i++ {
		tmp := MyAggregate[Thing](make([]Thing, 0, c.pageSize))
		pagePool <- &tmp
	}
	return dp.NewPager(subClient, pagePool)
}

type MySubclient[T any] struct {
	MyClient
	path *url.URL
}

func (c *MySubclient[T]) NextPage(
	page dp.Page[T],
	offset uint64,
) (totalItems uint64, err error) {
	/*
		Different APIs use different conventions. Simply map
		`offset` to the apposite fields or headers with whatever
		semantics the server expects.

		Most days, the page size should be the largest that the
		server is willing to accommodate.
	*/
	first := offset
	last := first + c.pageSize - 1
	uri := c.pagify(c.path, first, last)
	header, body, err := c.get(uri)
	if err != nil {
		return 0, err
	}
	// parse header and body into src, totalItems, etc.
	dst := *page.(*MyAggregate[T])
	dst = dst[:min(cap(dst), len(src))]
	copy(dst, src)                // update values
	*page.(*MyAggregate[T]) = dst // update slice
	/*
		When returning the total count of all items to be paged,
		if the server API only provides you the total number of
		pages, simply calculate total*pageSize and return that.
	*/
	return totalItems, nil
}

type MyAggregate[T any] []T

func (a *MyAggregate[T]) Elems() []T {
	return []T(*a)
}

type Thing struct {
	Id   int32  `json:"id"`
	Name string `json:"name"`
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	client := NewMyClient(...)
	id := 82348
	pager := client.Things(id)
	for e := range pager.Iter() {
		// there could be many pages; stay responsive!
		if ctx.Err() != nil {
			break
		}
		// do stuff with each thing
	}
	// finally, check for errors
	if err := pager.LastErr(); err != nil {
		log.Printf("do stuff with things: %v", err)
		os.Exit(1)
	}
}
```
This project will accept (merge/rebase/squash) all contributions. Contributions that break CI (once it is introduced) will be reverted.
For details, please see *Why Optimistic Merging Works Better*.
In a world of plentiful memory, application-level paging, when implemented over a buffered, flow-controlled protocol like TCP, becomes a needless redundancy, at best; at worst, it is a mindless abnegation of system-level affordances. But since we twisted the web into a transport layer, we no longer have the option of leveraging underlying flow control, and workarounds like QUIC only double down on this bizarre state of affairs.
Since I have no expectation that web-as-a-transport will disappear anytime soon, I may as well recapitulate the same basic flow control that TCP would provide us, if only we let it.