Created: 3 years, 5 months ago by Vadim Sh.
Modified: 3 years, 4 months ago
Reviewers: tandrii(chromium)
CC: chromium-reviews, infra-reviews+luci-go_chromium.org, maruel+w_chromium.org, tandrii+luci-go_chromium.org
Target Ref: refs/heads/master
Project: luci-go
Visibility: Public.
Description

Add 'dsset' structure.
It is a peculiar kind of datastore-backed set that is suitable for multiple
producers, single consumer unordered queues.
R=tandrii@chromium.org
BUG=
Patch Set 1 #
Total comments: 52
Patch Set 2 : simple changes #
Patch Set 3 : add submitted #
Patch Set 4 : cleanup after pop + batch friendly #
Patch Set 5 : more aggressive cleanup, better CanPop, comment nits #
Total comments: 3
Patch Set 6 : rebase #
Depends on Patchset:

Messages
Total messages: 8 (1 generated)
PTAL. Please concentrate on dsset.go; I probably will not be committing the rest of this CL. I'm adding the rest here just so you can see in detail how dsset will be used.
I need another pass through the code, but it makes good sense now + thanks for the good docs. Most comments are nits in comments for now.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/dsset/dsset.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:20: // Items added to set should have IDs unique in time (or at least unique for
1. I **feel**: "the" set, and "the" duration. But i don't know the rules :(
2. `unique in time` is weird and confusing, maybe drop `in time` and remove the parentheses around the OR clause.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:30: // These properties make dsset suitable for multiple producers/single consumer
nit: no need for a slash in MPSC

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:32: // unique in time.
ditto for "in time". How about "unique in each time window of TombstoneDelay"?

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:65: // Produces just call Add(...).
Producers

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:67: // A consumer need to run more elaborate algorithm that ensures atomicity of
consider: s/A consumer need/The consumer (there should only be one at a time) needs

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:68: // 'Pop' and takes care of cleanup up of the garbage. This requires a mix of
cleanup up -> cleanup OR cleaning up

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:119: // Its fields are intentionally private to force correct usage CleanupStorage
usage of

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:122: set string // parent set name
parent set ID, right?

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:127: // Add adds a bunch of items to the set.
s/set./set. idempotently.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:133: // If items with given keys are already in the set, or has been deleted
s/has/have

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:138: // TODO: batch + parallel
what's batch for? If # of items added is too large? and parallel is only useful for batches, right?

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:142: // deduplicated in 'List'. If added items has been popped already (they have
s/has/have

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:157: // as well as set of tombstones that points to items that were deleted
as well as A set

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:389: // Submit completes this pop operation but submitting changes to datastore.
s/but/by
I've finished review of dsset and its test, but have not reviewed the other files.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/dsset/dsset.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:226: // shards hosting same item (if Add call was retried)!
the same item

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:252: // CleanupStorage deletes entities used to store items under given tombstones.
nit: I'd call it GC :)

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:257: // This MUST be called before tombstones are popped. Failure to do so will make
is this worth enforcing? If so, then we could record CleanupStorage inside the dsset struct, and then pass it through Pop() (maybe in case of context only?).

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:347: // been in the set to being with. Use List (prior to starting the transaction)
to beGIN with

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:348: // to distinguish this two cases.
these

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:354: // Pop pop the item from the set and returns true if it was indeed
Pop pops

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:368: p.tombs[id] = tombstone{ID: id, Tombstoned: p.now}
Suppose List is called > TombstonesDelay seconds before pop; is it possible to have a double Pop(same) -> true through the interplay of 3 consumers s.t. we get this:

A, B, C = consumers

# ordered by time
A List() -> xyz
B List() -> xyz
transaction(B pop('x') -> true)
C List() -> {yz, old tombstones -> x}
C.GC('x')  # OK
transaction(C pop('y') -> true)  # also cleaned up old tombstone of x
# A wakes up...
transaction(
  A.Pop('y') => false,  # Good
  A.Pop('x') => true,   # Oops
)

Now, I realize that you've mentioned above that this is for a single consumer, however I doubt we can guarantee that at most 1 consumer is active at any point in time even with task queues. Assuming my concern is grounded, I guess we can prevent it by recording the time right before we do List() and then comparing it against p.now to ensure that the tombstones loaded would still contain tombstones corresponding to popped items from the list. Perhaps that can be done by returning a new context from List() which would then need to be passed to BeginPop. WDYT?

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:391: // The operation is unusable after this.
wdyt about adding a field "submitted"=false by default, setting it to true here, and panicking if it's true in the methods above?

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/dsset/dsset_test.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset_test.go:168: TombstonesDelay: time.Minute,
re: my comment about multiple consumers potentially consuming the same item twice if one of the consumers runs pop after a delay of at least TombstonesDelay since List(). So, I wonder what happens if s/time.Minute/time.Microsecond.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/cron/demo/main.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/cron/demo/main.go:190: func runTriage(c context.Context, jobID string) error {
IIUC each trigger job could also check currently pending + running requests (well, if it cares about the already triggered set)? (I think i'm finally grokking your README.md CL about policies && looking at prior events and emitting new ones)

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/tq/tq.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/tq/tq.go:361: httpReply(c, false, 500, "Fatal error: %s", err) // return 200 to stop retries
btw, why this?
I refactored CleanupStorage and Pop to be batch-friendly (they can batch together operations for different sets), since the Scheduler will be using two sets from a single transaction. This slightly reduces RPC costs.

I've also added best-effort garbage cleanup after Pop: we try to clean up the storage for popped items right away (if it fails, no big deal, we have the safety net in List). This significantly reduces the amount of garbage fetched by List(...) and allows increasing TombstonesDelay without sacrificing much efficiency, and increasing TombstonesDelay makes operations safer in general.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/cron/demo/main.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/cron/demo/main.go:190: func runTriage(c context.Context, jobID string) error {
On 2017/07/21 13:42:52, tandrii(chromium) wrote:
> IIUC each trigger job could also check currently pending + running requests
> (well, if it cares about the already triggered set)?

Only if it's fine with using stale data. It can't do a transactional trigger, since it would overflow the 1 QPS limit for transactions. (The core assumption of this design is that we are triggering faster than 1 QPS). We trigger with N QPS, but triage with 1 QPS, and triage is fully transactional.

> (I think i'm finally grokking your README.md CL about policies && looking at
> prior events and emitting new ones)

In the gitiles case, each trigger is an individual git commit. So if gitiles polled 10 new commits since the last known state, it will use

pendingTriggersSet(c, jobID).Add(c, []dsset.Item{all 10 commits, one by one, using SHA1 as a key})

and then post a single kickTriageTask(...) task to handle them.

At some later time (1-2 sec later), runTriage wakes up and transactionally processes the backlog of all triggers (all pending git commits). For example, it can collapse all of them into a single invocation, or it can kick a new invocation for each separate commit. Or it can split them evenly, or it can ask the engine to call 'runTriage' at some later time (to wait for more commits), or it can rate-limit the number of started invocations, or limit the total number of running invocations, etc. (This wide range of available actions is why I called it "policy" initially...).

(Delaying or rerunning triage later is not implemented in this prototype, but it is simple and introduces no new constraints).

Another example is a build that depends on more than 1 repository. We can have multiple gitiles pollers, all producing trigger events, that later get triaged in a similar way.

Finally, this scheme will even work reasonably well with Gerrit CL triggering (e.g. for the tricium or CQ case), since it has no fundamental 1 QPS limit for triggering events (so we can easily have >1 tryjob per second). During the triage phase it should just process the batch of requests at once, in a single transaction. So it will still be 1 QPS, but the transaction will just be fatter.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/dsset/dsset.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:20: // Items added to set should have IDs unique in time (or at least unique for
On 2017/07/18 15:55:14, tandrii(chromium) wrote:
> 1. I **feel**: "the" set, and "the" duration. But i don't know the rules :(
> 2. `unique in time` is weird and confusing, maybe drop `in time` and remove
> the parentheses around the OR clause.

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:30: // These properties make dsset suitable for multiple producers/single consumer
On 2017/07/18 15:55:15, tandrii(chromium) wrote:
> nit: no need for a slash in MPSC

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:32: // unique in time.
On 2017/07/18 15:55:14, tandrii(chromium) wrote:
> ditto for "in time". How about "unique in each time window of TombstoneDelay"?

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:65: // Produces just call Add(...).
On 2017/07/18 15:55:14, tandrii(chromium) wrote:
> Producers

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:67: // A consumer need to run more elaborate algorithm that ensures atomicity of
On 2017/07/18 15:55:15, tandrii(chromium) wrote:
> consider: s/A consumer need/The consumer (there should only be one at a time)
> needs

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:68: // 'Pop' and takes care of cleanup up of the garbage. This requires a mix of
On 2017/07/18 15:55:15, tandrii(chromium) wrote:
> cleanup up -> cleanup OR cleaning up

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:119: // Its fields are intentionally private to force correct usage CleanupStorage
On 2017/07/18 15:55:14, tandrii(chromium) wrote:
> usage of

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:122: set string // parent set name
On 2017/07/18 15:55:15, tandrii(chromium) wrote:
> parent set ID, right?

Yes. 'ID' used to be 'Name'; I changed it midway and forgot to update all references.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:127: // Add adds a bunch of items to the set.
On 2017/07/18 15:55:15, tandrii(chromium) wrote:
> s/set./set. idempotently.

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:133: // If items with given keys are already in the set, or has been deleted
On 2017/07/18 15:55:14, tandrii(chromium) wrote:
> s/has/have

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:138: // TODO: batch + parallel
On 2017/07/18 15:55:15, tandrii(chromium) wrote:
> what's batch for? If # of items added is too large?
>
> and parallel is only useful for batches, right?

Yes. There's a limit on the number of entities in a single PutMulti call. If we are over this limit, we need to run multiple PutMulti operations in parallel.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:142: // deduplicated in 'List'. If added items has been popped already (they have
On 2017/07/18 15:55:14, tandrii(chromium) wrote:
> s/has/have

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:157: // as well as set of tombstones that points to items that were deleted
On 2017/07/18 15:55:15, tandrii(chromium) wrote:
> as well as A set

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:226: // shards hosting same item (if Add call was retried)!
On 2017/07/18 17:54:18, tandrii(chromium) wrote:
> the same item

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:252: // CleanupStorage deletes entities used to store items under given tombstones.
On 2017/07/18 17:54:18, tandrii(chromium) wrote:
> nit: I'd call it GC :)

There are two "kinds" of GC: one removes itemEntity entities, another removes tombstones from tombstonesEntity. To avoid confusion, I decided to name the operation more explicitly.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:257: // This MUST be called before tombstones are popped. Failure to do so will make
On 2017/07/18 17:54:18, tandrii(chromium) wrote:
> is this worth enforcing? If so, then we could record CleanupStorage inside the
> dsset struct, and then pass it through Pop() (maybe in case of context only?).

Done, sort of.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:347: // been in the set to being with. Use List (prior to starting the transaction)
On 2017/07/18 17:54:18, tandrii(chromium) wrote:
> to beGIN with

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:348: // to distinguish this two cases.
On 2017/07/18 17:54:18, tandrii(chromium) wrote:
> these

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:354: // Pop pop the item from the set and returns true if it was indeed
On 2017/07/18 17:54:18, tandrii(chromium) wrote:
> Pop pops

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:368: p.tombs[id] = tombstone{ID: id, Tombstoned: p.now}
On 2017/07/18 17:54:19, tandrii(chromium) wrote:
> Suppose List is called > TombstonesDelay seconds before pop; is it possible to
> have a double Pop(same) -> true through the interplay of 3 consumers s.t. we
> get this:
>
> A, B, C = consumers
>
> # ordered by time
> A List() -> xyz
> B List() -> xyz
> transaction(B pop('x') -> true)
> C List() -> {yz, old tombstones -> x}
> C.GC('x')  # OK
> transaction(C pop('y') -> true)  # also cleaned up old tombstone of x
> # A wakes up...
> transaction(
>   A.Pop('y') => false,  # Good
>   A.Pop('x') => true,   # Oops
> )
>
> Now, I realize that you've mentioned above that this is for a single consumer,
> however I doubt we can guarantee that at most 1 consumer is active at any
> point in time even with task queues.

It is single consumer mostly because of the 1 QPS limit on transactions. I assume producers and consumers are processes that do 1 operation per second. In this case, the dsset structure allows ShardCount producers and 1 consumer.

The unstated assumption is that TombstonesDelay is sufficiently large. Large enough for all "in-flight" changes to safely land. In your example, "A List" + "A Pop" is one such in-flight change. It is assumed that the A process does its full cycle within a period << TombstonesDelay.

There's a tradeoff to make. A larger TombstonesDelay is safer, but it introduces more garbage that slows down List and Pop (need to filter through more stuff). But I realized there's a way to reduce this garbage by calling CleanupStorage for popped items right after they have been popped. This avoids storing O(TombstonesDelay*ProducersQPS) of garbage (assuming we Pop items as fast as they are produced). We'll still store O(TombstonesDelay*ProducersQPS) tombstones, but they are cheap.

> Assuming my concern is grounded, I guess we can prevent it by recording the
> time right before we do List() and then comparing it against p.now to ensure
> that the tombstones loaded would still contain tombstones corresponding to
> popped items from the list. Perhaps that can be done by returning a new
> context from List() which would then need to be passed to BeginPop. WDYT?

Yeah, it is a good idea. Done, sort of. I have a feeling it's not a very rigorous protection, but better than nothing. The situation you've described should not happen during normal usage (when TombstonesDelay is sufficiently large).

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:389: // Submit completes this pop operation but submitting changes to datastore.
On 2017/07/18 15:55:14, tandrii(chromium) wrote:
> s/but/by

Done.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:391: // The operation is unusable after this.
On 2017/07/18 17:54:19, tandrii(chromium) wrote:
> wdyt about adding a field "submitted"=false by default, setting it to true
> here, and panicking if it's true in the methods above?

I did it initially, but then removed it to simplify the code :-/ Added back.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/dsset/dsset_test.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset_test.go:168: TombstonesDelay: time.Minute,
On 2017/07/18 17:54:19, tandrii(chromium) wrote:
> re: my comment about multiple consumers potentially consuming the same item
> twice if one of the consumers runs pop after a delay of at least
> TombstonesDelay since List(). So, I wonder what happens if
> s/time.Minute/time.Microsecond.

It will all blow up. TombstonesDelay should be >> the time scale of regular processes.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/tq/tq.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/tq/tq.go:361: httpReply(c, false, 500, "Fatal error: %s", err) // return 200 to stop retries
On 2017/07/21 13:42:52, tandrii(chromium) wrote:
> btw, why this?

Forgot to undo before sending the CL :) Shouldn't be here.
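The "submitted" guard agreed on in the exchange above could look roughly like this minimal sketch; popOp and its methods here are hypothetical stand-ins for illustration, not the real dsset code:

```go
package main

import "fmt"

// popOp models a pop operation that must not be reused after Submit.
// This is the pattern discussed in the review: a 'submitted' flag set
// by Submit, with every method panicking once the flag is true.
type popOp struct {
	submitted bool
	popped    []string
}

func (p *popOp) checkNotSubmitted() {
	if p.submitted {
		panic("dsset: using PopOp after Submit")
	}
}

// Pop records the intent to pop an item (always succeeding in this toy).
func (p *popOp) Pop(id string) bool {
	p.checkNotSubmitted()
	p.popped = append(p.popped, id)
	return true
}

// Submit finalizes the operation; the popOp is unusable afterwards.
func (p *popOp) Submit() {
	p.checkNotSubmitted()
	p.submitted = true
}

func main() {
	op := &popOp{}
	op.Pop("x")
	op.Submit()
	defer func() {
		fmt.Println(recover()) // dsset: using PopOp after Submit
	}()
	op.Pop("y") // panics: operation already submitted
}
```

Turning misuse into a panic (rather than a silent no-op) matches the Go convention for API-contract violations, and makes a forgotten Submit-then-reuse bug fail loudly in tests.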
https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/cron/demo/main.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/cron/demo/main.go:190: func runTriage(c context.Context, jobID string) error {
On 2017/07/24 00:30:15, Vadim Sh. wrote:
> On 2017/07/21 13:42:52, tandrii(chromium) wrote:
> > IIUC each trigger job could also check currently pending + running requests
> > (well, if it cares about the already triggered set)?
>
> Only if it's fine with using stale data. It can't do a transactional trigger,
> since it would overflow the 1 QPS limit for transactions. (The core assumption
> of this design is that we are triggering faster than 1 QPS). We trigger with
> N QPS, but triage with 1 QPS, and triage is fully transactional.
>
> > (I think i'm finally grokking your README.md CL about policies && looking
> > at prior events and emitting new ones)
>
> In the gitiles case, each trigger is an individual git commit. So if gitiles
> polled 10 new commits since the last known state, it will use
>
> pendingTriggersSet(c, jobID).Add(c, []dsset.Item{all 10 commits, one by one,
> using SHA1 as a key})
>
> and then post a single kickTriageTask(...) task to handle them.
>
> At some later time (1-2 sec later), runTriage wakes up and transactionally
> processes the backlog of all triggers (all pending git commits). For example,
> it can collapse all of them into a single invocation, or it can kick a new
> invocation for each separate commit. Or it can split them evenly, or it can
> ask the engine to call 'runTriage' at some later time (to wait for more
> commits), or it can rate-limit the number of started invocations, or limit
> the total number of running invocations, etc. (This wide range of available
> actions is why I called it "policy" initially...).
>
> (Delaying or rerunning triage later is not implemented in this prototype, but
> it is simple and introduces no new constraints).
>
> Another example is a build that depends on more than 1 repository. We can
> have multiple gitiles pollers, all producing trigger events, that later get
> triaged in a similar way.
>
> Finally, this scheme will even work reasonably well with Gerrit CL triggering
> (e.g. for the tricium or CQ case), since it has no fundamental 1 QPS limit
> for triggering events (so we can easily have >1 tryjob per second). During
> the triage phase it should just process the batch of requests at once, in a
> single transaction. So it will still be 1 QPS, but the transaction will just
> be fatter.

Ah, so the policy is part of the triggerED job, not the triggerING one.

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
File scheduler/appengine/engine/dsset/dsset.go (right):

https://codereview.chromium.org/2981143002/diff/1/scheduler/appengine/engine/...
scheduler/appengine/engine/dsset/dsset.go:138: // TODO: batch + parallel
On 2017/07/24 00:30:17, Vadim Sh. wrote:
> On 2017/07/18 15:55:15, tandrii(chromium) wrote:
> > what's batch for? If # of items added is too large?
> >
> > and parallel is only useful for batches, right?
>
> Yes. There's a limit on the number of entities in a single PutMulti call. If
> we are over this limit, we need to run multiple PutMulti operations in
> parallel.

Acknowledged.

https://codereview.chromium.org/2981143002/diff/80001/scheduler/appengine/eng...
File scheduler/appengine/engine/cron/demo/main.go (right):

https://codereview.chromium.org/2981143002/diff/80001/scheduler/appengine/eng...
scheduler/appengine/engine/cron/demo/main.go:124: entity := &CronState{ID: id}
btw, why this change? I think the entity object should be GC-ed with the stack.

https://codereview.chromium.org/2981143002/diff/80001/scheduler/appengine/eng...
File scheduler/appengine/engine/dsset/dsset.go (right):

https://codereview.chromium.org/2981143002/diff/80001/scheduler/appengine/eng...
scheduler/appengine/engine/dsset/dsset.go:155: type Tombstone struct {
I think we don't actually need to expose the individual Tombstone struct (nor write much about it in the docs, even though it's interesting), but only a list of tombstones, say type Garbage []Tombstone. Now, I've just realized there is also a tombstone (lowercase) struct. That was a tad confusing. Perhaps that struct can be renamed to garbageItem?

https://codereview.chromium.org/2981143002/diff/80001/scheduler/appengine/eng...
scheduler/appengine/engine/dsset/dsset.go:424: // Must be called within same transaction that called BeginPop.
the same
Description was changed from

==========
Add 'dsset' structure.

It is a peculiar kind of datastore-backed set that is suitable for multiple
producers, single consumer unordered queues.

R=tandrii@chromium.org
BUG=
==========

to

==========
Add 'dsset' structure.

It is a peculiar kind of datastore-backed set that is suitable for multiple
producers, single consumer unordered queues.

R=tandrii@chromium.org
BUG=
==========