atheken

joined 2 years ago
[–] atheken@programming.dev 3 points 2 years ago* (last edited 2 years ago) (2 children)

At my last job, we used Sleet in combination with S3 and a CloudFront distribution with an authorization Lambda for pulling packages. I think the whole setup took about two hours, and it was rock solid.

This was necessary because we were using Octopus Deploy and were bumping into storage limits with its built-in feed.

We were a relatively small team with a relatively slow package publish rate (probably 10x a day).

The biggest issue with Sleet is that it's not going to support "pull-through," so you'll need to have multiple NuGet feeds configured.
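
If it helps, the "multiple feeds" part is just a couple of entries in NuGet.config; something like this (the private feed URL is a placeholder):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- public upstream; Sleet can't proxy it, so it stays listed alongside -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- private Sleet feed served via the CloudFront distribution (placeholder URL) -->
    <add key="internal" value="https://packages.example.com/index.json" />
  </packageSources>
</configuration>
```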

[–] atheken@programming.dev 2 points 2 years ago* (last edited 2 years ago)

There is no incentive to add the friction of gas or PoW to these types of systems.

The parties involved can have a shared log and private keys for signing entries. Party A provides a thing, and Party B signs an entry saying they were provided with the thing. Party A can wait for that signed entry before releasing the goods, etc. The problem with using a blockchain to track physical stuff is that handoffs are not instantaneous, so there's always lag between the real state of the world and what the log says. In practice, this may be a few seconds, and a human might wait for confirmation before physically granting access to a recipient.

To put it another way, the party doing the signing has no incentive to forge that they received an object from someone else, as that signature is effectively the fulfillment of the obligation. They're only going to sign an entry if they actually get the object.
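
A minimal sketch of what I mean, assuming Node's built-in crypto and Ed25519 keys (the entry shape and names are made up for illustration):

```ts
import { generateKeyPairSync, sign, verify } from "node:crypto";

// A log entry recording that Party B received an item from Party A.
interface LogEntry {
  item: string;
  receivedBy: string;
  timestamp: string;
}

// Party B's keypair; in practice each party holds its own private key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Party B acknowledges receipt by signing the serialized entry.
function signReceipt(entry: LogEntry): Buffer {
  return sign(null, Buffer.from(JSON.stringify(entry)), privateKey);
}

// Party A releases the goods only once the signature checks out.
function receiptIsValid(entry: LogEntry, signature: Buffer): boolean {
  return verify(null, Buffer.from(JSON.stringify(entry)), publicKey, signature);
}

const entry: LogEntry = {
  item: "pallet-42",
  receivedBy: "party-b",
  timestamp: new Date().toISOString(),
};
console.log(receiptIsValid(entry, signReceipt(entry))); // true
```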

[–] atheken@programming.dev 2 points 2 years ago* (last edited 2 years ago)

Sorry, I didn't mean to reference the detail member; I meant "extension members" as defined in the RFC.

In the RFC, they are outlined as top-level elements. In the version I proposed, they are bundled up inside an optional context member. This can make serialization and deserialization a little easier to implement in languages that support generics, without the need to subclass for the common elements. The RFC specifically defines "extension members" as optional; the key difference is that in what I was describing, they'd be bundled into one object rather than being siblings of the top-level members.

It also side-steps any future top-level reserved-keyword collisions by keeping "user-defined" members in a separate box.
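
As a rough sketch of the shape I mean (member names are from my earlier comment; the generic parameter is the point):

```ts
// One generic envelope covers every response; only `context` varies.
// No subclassing needed in languages with generics.
interface ApiResponse<TContext = Record<string, unknown>> {
  success: boolean;
  error_code?: number;
  message?: string;
  context?: TContext; // all user-defined members live here, so future
                      // top-level reserved words can't collide with them
}

// e.g. a 429 can carry backoff details without touching the envelope:
type RateLimitContext = { retry_after_seconds: number };
const throttled: ApiResponse<RateLimitContext> = {
  success: false,
  error_code: 429,
  message: "Too many requests",
  context: { retry_after_seconds: 30 },
};
```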

You seem to be laboring under the notion that this spec produces something that can be entirely negotiated by generic clients, but I don't see that at all. Even for "trivial" examples (multiple validation errors, or rate-limiting throttling), clients would need to implement specialized handlers, which is only vaguely touched upon by the need to have a "problem registry".

And, like it or not, how easy or messy it is for a downstream client to consume a result is actually an important part of API design. I don't see how the browser, JavaScript, and the Fetch API's behavior aren't relevant considerations when we're talking about extending HTTP with JSON responses.
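
To make that concrete, here's roughly what a client ends up writing even for the "trivial" cases; the problem type URIs and extension members are invented for illustration:

```ts
interface Problem {
  type: string;
  title: string;
  // extension members vary per problem type, which is exactly why a
  // generic client can't handle these without type-specific code
  [ext: string]: unknown;
}

async function handleProblem(problem: Problem): Promise<void> {
  switch (problem.type) {
    case "https://example.com/problems/rate-limited": {
      // requires knowing this type carries a backoff extension
      const wait = Number(problem["retry_after_seconds"] ?? 1);
      await new Promise((resolve) => setTimeout(resolve, wait * 1000));
      break;
    }
    case "https://example.com/problems/validation": {
      // requires knowing the shape of the per-field error extension
      console.warn(problem["errors"]);
      break;
    }
    default:
      console.error(problem.title);
  }
}
```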

Did you author this RFC? I don't exactly understand why you seem to be taking the criticism personally.

[–] atheken@programming.dev 2 points 2 years ago* (last edited 2 years ago) (1 children)

Thanks, and you are basically correct on both counts:

  • I supported an API that had a batch endpoint that could return a "partial success," and that was a mess.
  • I have been experimenting with standardized response elements, because the Fetch API doesn't throw on 4XX/5XX, so having one if check, rather than two, makes sense.

At this point, it's an experiment.

[–] atheken@programming.dev 2 points 2 years ago* (last edited 2 years ago) (3 children)

Context is whatever makes sense to provide to a consumer to help them debug or respond to the error - the same basic idea as the RFC's detail member. IMO, it can't easily be generalized. Some APIs may have context to provide, others may not. These could be validation errors in a structured format, or backoff timings in the case of a 429.

Success is something that you can sniff for after deserializing, as IIRC the Fetch API will not throw except for network errors, even in the event of a 4XX or 5XX.

Consider something like `if (!obj.error_code) { }` vs. `if (obj.success) { }`. Certainly, you could consolidate the error_code and success members, but with JavaScript's sloppy truthiness testing, including something like this as a standard part of all responses may make sense.
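
Put together, the consumption path looks something like this (the endpoint URL is a placeholder; the envelope is the one from my other comment):

```ts
interface ApiResponse<TContext = unknown> {
  success: boolean;
  error_code?: number;
  message?: string;
  context?: TContext;
}

async function getWidget(id: string): Promise<ApiResponse> {
  // fetch() only rejects on network failure; a 4XX/5XX still resolves,
  // so we branch once on the standardized `success` member after parsing.
  const res = await fetch(`https://api.example.com/widgets/${id}`);
  const body = (await res.json()) as ApiResponse;
  if (!body.success) {
    console.error(`${body.error_code}: ${body.message}`);
  }
  return body;
}
```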

[–] atheken@programming.dev 3 points 2 years ago* (last edited 2 years ago) (8 children)

I quickly skimmed this, and it looks kinda overwrought to me.

This is the format I’ve been using:

{
  success: bool,
  error_code: number,
  message: "human-centric error message",
  context: { /* optional, user-defined details */ }
}

[–] atheken@programming.dev 8 points 2 years ago* (last edited 2 years ago)

Breaking larger tasks down effectively removes uncertainty.

My general rule of thumb in planning is that any task estimated at longer than one day should be broken up.

An estimate longer than one day communicates that the person doing the estimate knows it's a large task but isn't super clear about the details. It also puts a boundary around how long someone waits before trying to re-scope:

A task that was expected to take one week but ends up going 2x slips by a week, while a task estimated at one day that takes 3x before a re-scope only loses two days.

You can pick up one or two days, but probably not one or two weeks.

[–] atheken@programming.dev 1 point 2 years ago (1 children)

Are P1/P2 priority designations? If so, how in the world can those be correlated with time/workload?

[–] atheken@programming.dev 3 points 2 years ago* (last edited 2 years ago) (2 children)

I mean, the inverse is probably more productive. Specify the observable behaviors you want and let the “AI” build the software.

[–] atheken@programming.dev 2 points 2 years ago* (last edited 2 years ago)

I shared the link partly because it's a useful utility for checking any public server's TLS config for vulnerabilities.
