leanpub, to random
@leanpub@mastodon.social avatar

Surviving Other People's APIs by Phil Sturgeon and Mike Bifulco is on sale on Leanpub! Its suggested price is $23.99; get it for $11.00 with this coupon: https://leanpub.com/sh/l7MWCth2

leanpub, to programming
@leanpub@mastodon.social avatar

NEW! A Leanpub Frontmatter Podcast Interview with Phil Sturgeon, Author of Surviving Other People's APIs | Watch here: https://youtu.be/KxDT3kXS82w

remixtures, to TechnicalWriting Portuguese
@remixtures@tldr.nettime.org avatar

: "Your API is nearing completion and it’s time to let the world know about it. This means that it is time to complete your API documentation effort. But, where should you start? How do you know if you covered everything that your decision makers and developers will need to select your API and get started successfully?

This article provides a checklist to help you identify the documentation you will need for launching your API. We will also include some things to consider post-launch as well to help you continue to improve your documentation."

https://bump.sh/blog/api-documentation-checklist?utm_source=linkedin&utm_campaign=doc-checklist

remixtures, to TechnicalWriting Portuguese
@remixtures@tldr.nettime.org avatar

: "Luckily there's a specification similar to OpenAPI but directed at defining event-driven APIs. I'm talking about AsyncAPI. It's a specification that lets you define an API that's asynchronous in nature. By using AsyncAPI you can define, among other things, the different topics where events will be dropped, and the shape of the messages that represent each topic.

And this is where things get interesting. The shape of messages or, in other words, its payload, can adhere to specific standards. Without messages, there's no way to communicate events. And, following standards helps to guarantee the correct publishing, transport, and consumption of messages. If messages don't follow any standards, it's hard for developers to understand the shape of the messages. In addition, it's easy for consumers to stop working because, suddenly, messages are being shared in a slightly different format.

Among different message standards, there's one particularly interesting to me. Apache Avro isn't just a theoretical standard. It's also a serialization format. You can say it's a competitor to JSON but specialized in working with event-driven APIs. In the same way you can use JSON Schema to define the shape of JSON data you have Avro Schema to help you specify what Avro message payloads look like."
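
For a concrete flavour of that last point, here is a minimal sketch of a hypothetical Avro schema for an event payload; the record and field names are invented for illustration, and note that Avro schemas are themselves written as JSON:

    {
      "type": "record",
      "name": "OrderPlaced",
      "namespace": "com.example.events",
      "fields": [
        { "name": "orderId", "type": "string" },
        { "name": "amountCents", "type": "long" },
        { "name": "placedAt",
          "type": { "type": "long", "logicalType": "timestamp-millis" } }
      ]
    }

An AsyncAPI message definition can then reference a schema like this, so consumers know ahead of time exactly what shape of payload to expect on a topic.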

https://apichangelog.substack.com/p/how-to-document-event-driven-api

CopernicusEU, to random
@CopernicusEU@respublicae.eu avatar

RT by @CopernicusEU: 📢Our second webinar dedicated to the Data Space Ecosystem 🇪🇺🛰️ is in less than two weeks!

It will provide an overview of the Application Programming Interfaces and other functionalities available on the platform.

🗓 28 May
🕑17:00 CEST

[2024-05-17 10:07 UTC]

alecm, to AdobePhotoshop

Zuckerman vs. Zuckerberg: why and how this is a battle of the public understanding of APIs, and why Zuckerman needs to lose and Meta needs to win

Imagine that you’re a cool, high-school, technocultural teenager; you’ve been raised reading Cory Doctorow’s “Little Brother” series, you have a 3D printer, a soldering iron, you hack on Arduino control systems for fun, and you really, really want a big strobe light in your bedroom to go with the music that you blast out when your parents are away.

So you build a stepper-motor with a wheel and a couple of little arms, link it to a microphone circuit which does an FFT of ambient sound, and hot-glue the whole thing to your bedroom lightswitch so that the wheel’s arms can flick the lightswitch on-and-off in time to the beat.

If you’re lucky the whole thing will work for a minute or two and then the switch will break, because it wasn’t designed to be flicked on-and-off ten times per second; or maybe you’ll blow the lightbulb. If you’re very unlucky the entire switch and wiring will get really hot, arc, and set fire to the building. And if you share, distribute, and encourage your friends to do the same then you’re likely to be held liable in one of several ways if any of them suffer cost or harm.

Who am I?

My name’s Alec. I am a long-term blogger and an information, network and cyber security expert. From 1992 to 2009 I worked for Sun Microsystems, from 2013 to 2016 I worked for Facebook, and today I am a full-time stay-at-home dad and part-time consultant. For more information please see my “about” page.

What does this have to do with APIs?

Before I begin I want to acknowledge the work of Kin Lane, The API Evangelist, who has been writing about the politics of APIs for many years. I will not claim that Kin and I share the same views on everything, but we appear to overlap perspectives on a bunch of topics and a lot of the discussion surrounding his work resonates with my perspectives. Go read his stuff, it’s illuminating.

So what is an API? My personal definition is broad but I would describe an API as any mechanism that offers a public or private contract to observe (query, read) or manipulate (set, create, update, delete) the state of a resource (device, file, or data).

In other words: a light switch. You can use it to turn the light on if it’s off, or off if it’s on, and maybe there’s a “dimmer” to set the brightness if the bulb is compatible; but light switches have their physical limitations and expected modes of use, and they need to be chosen or designed to fit the desired usage model and purpose.

Perhaps to some this definition sounds a little too broad, because it would literally include (e.g.) “in-browser HTML widgets and ‘submit’ buttons for deleting friendships” as an “API”; but the history of computing is rife with human-interface elements being repurposed as application-interfaces. Banking is a classic example: it was once fashionable to link new systems to old backend mainframes by using software that pretends to be a traditional IBM 3270 terminal, then screen-scraping the responses to queries which the new system “typed” into the terminal.

The modern equivalent for web-browsers is called Selenium WebDriver and is widely used by both automated software testers and criminal bot-farms, to name but two purposes.

So yes: the tech industry — or perhaps: the tech hacker/user community — has a long history of wiring programmable motors to light switches and hoping that their house does not catch on fire… but we should really aspire to do better than that… and that’s where we come to the history of eBay and Twitter.

History of Public APIs

In the early 2000s there was a proliferation of platforms that offered various services — “I can buy books over the internet? That’s amazing!” — and this was all before the concept of a “Public API” was invented.

People wanted to “add-value” or “auto-submit” or “retrieve data” from those platforms, or even to build “alternative clients”; so they examined the HTML, reverse-engineered the functions of Internal or Private APIs which made the platform work, wrote and shared ad-hoc tools that posted and scraped data, and published their work as hackerly acts of radical empowerment “on behalf of the users” … except for those tools which stole or misused your data.

Kin Lane describes in particular the launch of the Public APIs for eBay in November 2000 and for Twitter in September 2006; about the former he writes:

The eBay API was originally rolled out to only a select number of licensed eBay partners and developers. […] The eBay API was a response to the growing number of applications that were already relying on its site either legitimately or illegitimately. The API aimed to standardize how applications integrated with eBay, and make it easier for partners and developers to build a business around the eBay ecosystem.

link


…and regarding the latter:

On September 20, 2006 Twitter introduced the Twitter API to the world. Much like the release of the eBay API, Twitter’s API release was in response to the growing usage of Twitter by those scraping the site or creating rogue APIs.

link


…both of which hint at some issues:

  1. an ecosystem of ad-hoc tools that attempt to blindly and retrospectively track eBay’s own platform development would not offer standardisation across the tools that use those APIs, thereby actually limiting the potential for third-party client development; each tool would be working with different assumed “contracts” of behaviour that were never meant to be fixed or exposed to the public, and each would duplicate the others’ work
  2. proliferation of man-in-the-middle “services” that would act “on your behalf” — and with your credentials — on the Twitter and eBay platforms presented both a massive trust and security risk to the user (fraudulent purchases? fake tweets? stolen credentials?) and a consequent reputational risk to the platform

Why do Public APIs exist?

In short: to solve these problems. Kin Lane writes a great summary on the pros-and-cons of Public APIs and how they are used both to enable, but also to (possibly unfairly) limit, the power of third party clients that offer extra value to a platform’s users.

But at the most fundamental level: Public APIs exist in order to formalise adequate contractual means by which third parties can observe or manipulate “state” (e.g. user data, postings, friendships, …) on the platform.

By offering a Public API, the platform also frees itself to develop and use Private APIs which can service other or new aspects of platform functionality, and it is in a position to build and “ring-fence” the Public API service in the expectation that both heavy use and abuse will be submitted through it.

Similarly: the Private APIs can be engineered more simply to act like domestic light-switches: to be used in limited ways and at human speeds; it turns out that this can be important for matters like privacy and safety.

Third parties benefit from Public APIs by having a guaranteed set of features to work with, proper documentation of API behaviour, confidence that the API will behave in a way that they can reason about, and an API lifecycle-management process which enables them to make their own guarantees regarding their work.

What is the Zuckerman lawsuit?

First, let me start with a few references:

The shortest summary of the lawsuit that I have heard from one of its ardent supporters is that the lawsuit:

[…] seeks immunity from [the Computer Fraud and Abuse Act] and [the Digital Millennium Copyright Act] [for legal] claims [against third parties or users] for automating a browser [to use Private APIs to obtain extra “value” from a website] and [the lawsuit also] does not seek state mandated APIs, or, indeed, any APIs

(private communication)


To make a strawman analogy so that we can defend its accuracy:

Let’s build and distribute motors to flick lightswitches on and off to make strobe lights, because what’s the worst that could happen? And we want people to have a fundamental right to do this, because Section 230 says we have such a right. We won’t be requiring any new switches to be installed, we just want to be allowed to use the ones that are already there, so it’s easy and low-cost to ask for, and there’s no risk to us doing this. But we also want legal immunity just in case what we provide happens to burn someone’s house down.

In other words: a return to the ways of the early 2000s, where scraping data and poking undocumented Private APIs was an accepted way to hack extra value into a website platform. To a particular mindset — especially the “big tech is irredeemably evil” folk — this sounds great, because clearly Meta intentionally prevents your having full, automated remote control over your user data on the grounds that it’s terribly valuable to them, and their having it keeps you addicted, so it helps them make money.

And you know what? To a very limited extent I agree with that premise — or at least that some of the Facebook user-interface is unnecessarily painful to use.

E.g. I feel there is little (some, but little) practical excuse for the heavy user friction which Facebook imposes upon editing the “topics you may be interested in receiving adverts about”; but the way to address this is not to encourage a proliferation of browser plugins (of dubious provenance regarding privacy and regulatory compliance, let alone uncertain behaviour) which manipulate undocumented Private APIs.

Apart from any other reason, as alluded to above, Private APIs are built in the expectation of being used in a particular way — e.g. by humans, at a particular cadence and frequency — and on advanced platforms like Facebook they are engineered with those expectations enforced by rate limits, not only for efficiency but also for availability, security and privacy reasons.

This is something which I partially described in a presentation on behalf of Facebook at PasswordCon in 2014, but the short version is: if an API is expected to be used primarily by a human being, then for security and trust purposes it makes sense to limit it to human rates of activity.

If you start driving these Private APIs at rates which are inhuman — 10s or 100s of actions per second — then you should and will expect them to either be rate-limited, or else possibly break the platform in much the same way that flicking a lightswitch at such a rate would break that lightswitch or bulb.
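
To make the light-switch point concrete, here is a toy Rust sketch of the kind of per-user, fixed-window rate limit being described; the structure and numbers are invented for illustration, and real platforms layer far more sophisticated signals on top of this:

    use std::collections::HashMap;
    use std::time::{Duration, Instant};

    // Each user gets at most `max_per_window` actions per window;
    // anything driven faster than "human speed" is refused.
    struct RateLimiter {
        window: Duration,
        max_per_window: u32,
        counters: HashMap<String, (Instant, u32)>, // user -> (window start, count)
    }

    impl RateLimiter {
        fn allow(&mut self, user: &str) -> bool {
            let now = Instant::now();
            let entry = self.counters.entry(user.to_string()).or_insert((now, 0));
            if now.duration_since(entry.0) > self.window {
                *entry = (now, 0); // the old window has expired; start a new one
            }
            entry.1 += 1;
            entry.1 <= self.max_per_window
        }
    }

Drive allow() at tens of calls per second against, say, a five-per-minute budget and it starts refusing almost immediately, which is exactly the intended behaviour for an API expected to be used by a human.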

With this we can describe the error in one of the proponent’s claims: We aren’t requiring any new [APIs] to be installed, we just want to be allowed to use the ones that are already there — but if the Private API is neither intended nor capable of being driven at automated speeds then either something (the platform?) will break, or else there will be loud demands that the Private APIs be re-engineered to remove “bottlenecks” (rate limits) to the detriment of availability and security.

But if you will be calling for the formalisation of Private APIs to provide functionality, why are you not instead calling for an obligation upon the platform to provide a Public API?

Private APIs are not Public APIs, and Public APIs may demand registration

The general theme of the lawsuit is to demand that any API which a platform implements — even undocumented Private ones — should be legally treated as a Public API, open for use by third-party implementors, without any reciprocal obligation that the third-party client obtain an “API Key” to identify itself, or abide by particular behaviour or rate limits.

In short: all APIs, both Public and Private, should become “fair game” for third-party implementors, and the Platforms should have no business distinguishing between one third party and another, even if one or more of them is malicious.

This is a dangerous proposal. Platforms innovate new functionality and change their Private API behaviour at a relatively rapid speed, and there is currently nothing to prevent that; but if a true “right to use” for a Private API becomes somehow enshrined, what happens next?

Obviously: any behaviour which interferes with a public right-to-use is illegal, so it will therefore become illegal to change or remove Private APIs — or at the very least any attempt to do so will lead to claims of “anticompetitive behaviour” and yet more punitive lawsuits. The free-speech rights of the platform will be abridged by a compulsion to never change APIs, or to support legacy, publicly-used-yet-undocumented APIs forevermore.

So, again, why not cut this Gordian knot by compelling platforms to make available a Public API that supports the desired functionality? After all, even Mastodon obligates developers of third-party apps to register their apps before use; but somehow big platforms should accept any and all non-human usage of Private APIs without discrimination?

Summary

I don’t want to keep flogging this horse, so I am just going to try and summarise in a few bullets:

  1. Private APIs exist to provide functionality to directly support a platform; they are implemented in ways which reflect their expected (usually: human) modes of use, they are not publicly documented, they can come and go, and this is normal and okay
  2. Public APIs exist to provide functionality to support third-party value-add to a platform; they are documented and offer some form of public “contract” or guarantee of behaviour, capability, and reliability. They are often designed in expectation of automated or bulk usage.
  3. Private APIs do not offer such a public contract; they are not meant to be built upon other than by the platform itself. They are meant to be able to “go away” without fuss, but if their use is a guaranteed “right” then how can they ever be deprecated?
  4. If third parties want to start using Private APIs as if they were Public APIs then the Private APIs will probably need to be re-engineered to support the weight of automated or bulk usage; but if they are going to be re-engineered anyway, why not push for them to become Public APIs?
  5. If Private APIs are not re-engineered and their excessive automated use by third party tools breaks the platform, why should the tool-user or the tool-provider not be held at least partly responsible as would happen in any other form of intentional or unintentional Denial-of-Service attack?
  6. If some (in-browser) third party tools claim to be acting “for the public good” then presumably they will have no problem in identifying themselves in order to differentiate themselves from (in-browser) evil cookie-stealing malware and worms; but to differentiate themselves would require use of an API Key and a Public API — so why are the third-party tool authors not calling for the necessary Public APIs?

Just because an academic says “I wrote a script and I think it will work and that I [or one of your users] should be allowed to run it against your service without fear of reprisal even though [we] don’t understand how the back end system will scale with it” — that does not mean they should be permitted to do so willy-nilly, not against Facebook nor against your local community Mastodon instance.


https://alecmuffett.com/article/109757

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "We don’t have any way of validating whether the hashtags we selected captured all of the content on TikTok in the US about the war in Gaza. However, there are some signs we had decent coverage. Some of the hashtags were popular enough that we observed ‘trend chasing’, where creators will add many trending hashtags to unrelated posts. We also know our initial hashtag selection was overbroad because we had to remove some after our collection. This doesn’t mean we captured all relevant hashtags, of course! The research API lets you query data by hashtag, description, and user, so to a certain extent, you need to know what you’re looking for. The research API returns data about public videos, video comments, and user accounts. It allows fine-grained searching by keyword and hashtag, but there is no equivalent of the Twitter firehose. We are thinking through methods to estimate ‘top content’, and prior work from other researchers has attempted this for other platforms. Because of the very strong imbalance of views to content, to understand what (most) people are seeing, the top 1% of content appears to be a pretty good approximation. However, it clearly wouldn’t help researchers who wanted to study posting activity, either by itself or in connection to view activity."

https://cybersecurityfordemocracy.org/getting-to-know-the-tiktok-research-api

estherschindler, to random
@estherschindler@hachyderm.io avatar

Hive mind: What do you wish you knew before you started implementing public-facing APIs? (For an article, but I don’t have to quote anyone.)

remixtures, to TechnicalWriting Portuguese
@remixtures@tldr.nettime.org avatar

: "Why are SDKs so important to ensuring your product is tuned for developers?

It all comes down to helping your API consumers integrate faster. When people decide to integrate with your API, they have a job they want to get done. They think your API could be the fastest path to solving it. But too often, building an integration with the API is as painful (or even more painful) as the original job. That’s counterproductive, to put it mildly. There are a hundred things your users would rather be doing than reading your API docs and writing basic integration code. The less you require of them, the happier they’ll be. And SDKs are the best tool for making sure your API remains unobtrusive.

The definition of an SDK is straightforward: It’s a library that surrounds your API and handles the boring parts of the integrating process, such as:

  • Constructing an HTTP request
  • Managing an authentication token
  • Handling retries
  • Parsing paginated responses

More powerful SDKs will go beyond request and response-handling basics and provide type safety and hinting in the integrated development environment (IDE). This means users don’t have to open a docs page; they’ll get all the information and feedback they need directly in their coding environment. It doesn’t get more efficient than that."
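
As a sketch of what such a library does under the hood, here is a hypothetical Rust client, assuming the reqwest crate with its "blocking" feature enabled; the ExampleClient name and the /v1/widgets endpoint are invented for illustration. It covers three of the "boring parts" above: constructing the request, managing the auth token, and handling retries:

    use std::{thread, time::Duration};

    struct ExampleClient {
        http: reqwest::blocking::Client,
        base_url: String,
        token: String,
    }

    impl ExampleClient {
        fn list_widgets(&self) -> Result<String, reqwest::Error> {
            let mut delay = Duration::from_millis(200);
            let mut attempts = 0;
            loop {
                attempts += 1;
                let result = self
                    .http
                    .get(format!("{}/v1/widgets", self.base_url)) // construct the HTTP request
                    .bearer_auth(&self.token)                     // manage the auth token
                    .send()
                    .and_then(|resp| resp.error_for_status())
                    .and_then(|resp| resp.text());
                match result {
                    Ok(body) => return Ok(body),
                    Err(e) if attempts >= 3 => return Err(e), // give up after three tries
                    Err(_) => {
                        thread::sleep(delay); // handle retries with exponential backoff
                        delay *= 2;
                    }
                }
            }
        }
    }

The point is that the SDK user calls list_widgets() and never thinks about headers, tokens, or backoff at all.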

https://thenewstack.io/api-builders-must-sell-to-developers-or-die-slowly

remixtures, to TechnicalWriting Portuguese
@remixtures@tldr.nettime.org avatar

: "Finding the right balance between being too simple and too sophisticated isn't easy. You might get away with a generic onboarding how-to guide if your API focuses on one single feature (as OpenCage does). Otherwise, you need to craft different onboarding experiences for each one of the consumer use cases you want to support.

One thing that works for me is learning as much as I can from consumers before writing any API documentation. Then, I focus on the top use cases potential consumers are interested in. Since those will be the top entry points for most new API users, I prepare a tutorial for each. Each tutorial offers a safe environment where developers can easily sign up to use the API. Then, by following the steps in the tutorial they will end up implementing the integration that fulfills their use case."

https://apichangelog.substack.com/p/api-documentation-consumes-attention

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

: "I often make the point about API users that they fall into one of two buckets: the conceptual user (the dreamer) and the procedural user (the implementer). Breaking those two down is a blog post for another day, but essentially, this book is aimed at both, leaning more heavily toward the former.

Bruno embarked on his book-writing journey armed with a hefty dose of product thinking. He took the scenic route chatting it up with API aficionados, getting the lowdown of their challenges and triumphs. Turns out, we Product Managers are drowning daily in a sea of technical jargon without a life raft in sight.

If you’re anything like me back in the day, when I was a fresh-faced newbie diving headfirst into the API industry, you’ll relate. I’m talking fingers dancing across the keyboard like they were in some kind of turbocharged typing marathon during every single sit-down with architects, developers, and engineers. Seriously, the clickety clack of the keystrokes echoed as my own personal symphony: Reverie of Desperate Recall.

This book was born specifically for those navigating the waters of building an API product, whether they be product managers, architects, development managers, you name it. So it is for those readers that I would recommend reading this book." https://theapinerd.com/an-api-product-managers-honest-take-on-bruno-pedro-s-book-building-an-api-product-f01038ad2bc3

leanpub, to devops
@leanpub@mastodon.social avatar

Network Automation Crash Course https://leanpub.com/b/networkautomationcrashcourse by GitforGits | Asian Publishing House is the featured bundle on the Leanpub homepage! https://leanpub.com

matthew, to webdev
@matthew@opinuendo.com avatar

REST APIs ≠ JSON over HTTP! HTMX brings back true REST with hypermedia. Disappointed with your SPA complexity? Say goodbye to #JavaScriptFatigue and consider giving #HTMX a try. ➡️ https://netapinotes.com/tired-of-javascript-fatige-htmx-revives-restful-apis/ #webdev #APIs #hypermedia #hypertext
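
For the unfamiliar, the core of HTMX is a handful of HTML attributes. A minimal, hypothetical exchange might look like this (endpoint and ids invented); the server responds with an HTML fragment, i.e. hypermedia, rather than JSON:

    <!-- Clicking the button GETs /contacts/1/edit; the server responds
         with an HTML fragment that replaces the #contact-1 element. -->
    <div id="contact-1">
      <span>Jane Doe</span>
      <button hx-get="/contacts/1/edit" hx-target="#contact-1" hx-swap="outerHTML">
        Edit
      </button>
    </div>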

skinnylatte, to random
@skinnylatte@hachyderm.io avatar

Sometimes I get worlds colliding in my email when I read about activities and news about .

frankel, to random
@frankel@mastodon.top avatar

The first rule of #distributedsystems is "Don’t distribute your system". Designing distributed systems right is infamously hard for multiple reasons.

Imagine that a client sends a unique key along with each request. The server keeps track of key-request pairs: if it sees the same key again, it can return the stored response instead of executing the request a second time.

It’s precisely the idea behind the #IETF #specification The #Idempotency-Key HTTP Header Field.
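
A minimal Rust sketch of the server side of that idea (in-memory and single-threaded only; a real implementation would persist keys, expire them, and cope with concurrent retries):

    use std::collections::HashMap;

    // First time a key is seen, execute the request and store the response;
    // any retry carrying the same Idempotency-Key replays the stored
    // response instead of re-executing the request.
    struct IdempotencyCache {
        responses: HashMap<String, String>, // Idempotency-Key -> response body
    }

    impl IdempotencyCache {
        fn handle(&mut self, key: &str, execute: impl FnOnce() -> String) -> String {
            if let Some(cached) = self.responses.get(key) {
                return cached.clone(); // duplicate request: replay, don't re-execute
            }
            let response = execute(); // first sighting: actually do the work
            self.responses.insert(key.to_string(), response.clone());
            response
        }
    }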

https://blog.frankel.ch/fix-duplicate-api-requests/

#API #APIs

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "Speaking to the WSJ, beta testers of Meta’s new Content Library and Content Library API noted that the system is promising but has some glaring flaws. Some of the flaws mentioned include the inability to filter search by a specific geographical area, a cap on the number of search results that a query can return and a prohibition from downloading public posts, including messages posted by politicians. While researchers quoted in the story are skeptical about Meta’s commitment to fully developing these new data tools, a few conceded that it is still in the early days and that Meta might still fix some of these flaws.

Speaking of beta testers, researchers from the Social Media Lab, Anatoliy Gruzd and Philip Mai, were also invited last fall by Meta to test the new Content Library and the Content Library API. Our researchers agreed to be beta testers in the hope that they can help shape the development of these tools and make them as useful as possible for research purposes.

Here are their initial impressions of the tools. As you read the remainder of this blog post, please note that our researchers no longer have access to these tools and that Meta has made some changes since the Fall." https://socialmedialab.ca/2024/03/25/a-first-look-at-metas-new-content-library-and-content-library-api/

janriemer, to rust

JSON Patch

https://jsonpatch.com/

"JSON Patch is a format for describing changes to a document. It can be used to avoid sending a whole document when only a part has changed. When used in combination with the PATCH method, it allows partial updates for HTTP in a standards compliant way."

JSON Patch crate:

https://lib.rs/crates/json-patch
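
A small usage sketch with that crate, assuming serde_json alongside it; the document and operations are invented for illustration:

    use json_patch::{patch, Patch};
    use serde_json::{from_value, json};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let mut doc = json!({ "title": "Hello", "tags": ["rust"] });

        // A JSON Patch document: a JSON array in which each
        // operation describes one change to apply.
        let p: Patch = from_value(json!([
            { "op": "replace", "path": "/title", "value": "Hello, world" },
            { "op": "add", "path": "/tags/-", "value": "json-patch" }
        ]))?;

        patch(&mut doc, &p)?; // apply the partial update in place
        println!("{doc}");
        Ok(())
    }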

remixtures, to TechnicalWriting Portuguese
@remixtures@tldr.nettime.org avatar

: "It's interesting to notice that while "maintenance costs will have peaks whenever there’s a new release or a fix to the API code, support costs will keep growing over time until they become almost 100%."

Why does that happen? Design and implementation costs evaporate as soon as an API is "finished" and consumers are using it. After that, the only thing that still matters to consumers is that the API behaves as it should. If not, they'll look for ways to fix the challenges they're having, and that's where support comes in.

Unless consumers can get all the information they need from the API documentation. If the API documentation can be the preferred method consumers use to troubleshoot their integrations, then you'll end up spending less on support. There will be less hand-holding required as consumers can fix their issues by themselves." https://apichangelog.substack.com/p/two-ways-to-influence-business-growth

matthew, to RSS
@matthew@opinuendo.com avatar

Had a wonderful time chatting on Treblle's roundtable panel. The recording can be found here: https://www.youtube.com/watch?v=l5XeEnTRcIk

I'd like to thank @docpop for creating the awesome RSS shirt that attracted all the lovely attention. You can get yours from his shop: https://docpop.threadless.com/

itnewsbot, to random

ngrok unveils API gateway-as-a-service - Ngrok, provider of an ingress service that delivers traffic from developer platforms t... - https://www.infoworld.com/article/3713080/ngrok-unveils-api-gateway-as-a-service.html#tk.rss_all

frankel, to security
@frankel@mastodon.top avatar

I recently stumbled upon a list of 16 practices to secure your #API. In this two-post series, I’d like to describe how we can implement each item with #APISIX (or not).

https://blog.frankel.ch/secure-api-practices-apisix/1/
