Running a Unified PubSub Client in Production at Pinterest | by Pinterest Engineering | Pinterest Engineering Blog | Nov, 2023


Jeff Xiang | Software Engineer, Logging Platform

Vahid Hashemian | Software Engineer, Logging Platform

Jesus Zuniga | Software Engineer, Logging Platform

At Pinterest, data is ingested and transported at petabyte scale every day, bringing inspiration for our users to create a life they love. A central component of the data ingestion infrastructure at Pinterest is our PubSub stack, and the Logging Platform team currently runs deployments of Apache Kafka and MemQ. Over the years, operational experience has taught us that our customers and business would greatly benefit from a unified PubSub interface that the platform team owns and maintains, so that application developers can focus on application logic instead of spending precious hours debugging client-server connectivity issues. Value-added features on top of the native clients can also help us achieve more ambitious goals for dev velocity, scalability, and stability. For these reasons, and others detailed in our original PubSub Client blog post, our team decided to invest in building, productionalizing, and most recently open-sourcing PubSub Client (PSC).

In the 1.5 years since our previous blog post, PSC has been battle-tested at large scale at Pinterest with notably positive feedback and results. From dev velocity and service stability improvements to seamless migrations from the native client to PSC, we would like to share some of our findings from running a unified PubSub client library in production.

In a distributed PubSub environment, complexities related to client-server communication can often be hard blockers for application developers, and resolving them often requires a joint investigation between the application and platform teams. One of the core motivations driving our development of PSC was to hide these complexities from application developers so that the precious time spent debugging such issues can instead be used to focus on the application logic itself.


  • Full automation in PubSub service endpoint discovery
  • Estimated 80% reduction in time spent setting up new PubSub producers and consumers
  • Optimized client configurations managed by the platform team

How We Did It

Automated Service Discovery

PSC offers a simple and familiar solution to automate PubSub service endpoint discovery, hiding these complexities from application developers. Through the introduction of Resource Names (RNs), PubSub resources (e.g. topics) are now uniquely identified by an RN string that contains all the information PSC needs in order to establish a connection with the servers that host the resource in question. This is a similar concept to Internet URIs and Amazon ARNs. For example,


is an RN that tells PSC exactly which topic, cluster, region, and PubSub backend the client needs to connect to. Additionally, the protocol in front of the RN forms a complete Uniform Resource Identifier (URI), letting PSC know exactly how the connection should be established.
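
The idea can be sketched as parsing such a URI into its parts. The field order and separators below (`protocol:/rn:backend:region:cluster:topic`) are assumptions for illustration only, not PSC's actual RN grammar:

```java
// Sketch: splitting a hypothetical RN-style URI into its components.
// The format "protocol:/rn:backend:region:cluster:topic" is illustrative,
// not PSC's actual RN specification.
public class RnSketch {
    public final String protocol, backend, region, cluster, topic;

    public RnSketch(String uri) {
        // Separate the protocol ("how to connect") from the RN body ("what to connect to").
        String[] head = uri.split(":/", 2);
        protocol = head[0];
        String[] parts = head[1].split(":");
        // parts[0] is the "rn" marker; the rest identify the resource.
        backend = parts[1];
        region = parts[2];
        cluster = parts[3];
        topic = parts[4];
    }
}
```

Because every piece of connection information is carried in one string, there is nothing left for the application to hardcode.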

This simplification stands in stark contrast to some of the common pitfalls of using the native client, such as hardcoding potentially invalid hostname/port combinations, scattering SSL passwords across client configurations, and mistakenly connecting to a topic in the wrong region. With endpoint discovery fully automated and consolidated, client teams rarely, if ever, report the issues that used to require time-consuming investigations from our platform team.

Optimized Configurations and Monitoring

Prior to productionalizing PSC, application developers were required to specify their own client configurations. With this liberty came issues, notably:

  1. Some client-specified configurations could cause performance degradation for both client and server
  2. Application developers may have a limited understanding of each configuration and its implications
  3. The platform team had no visibility into which client configurations were being used

At Pinterest, PSC comes out-of-the-box for our users with client configurations that are optimized and standardized by the platform team, reducing the need for application developers to specify individual configurations that they would otherwise have had to research in depth during configuration and application tuning. Instead, application developers now focus on tuning only the configurations that matter to them, and our platform team has spent significantly less time investigating performance and connectivity issues caused by client misconfigurations.
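
The pattern described here — platform-owned defaults that applications override only where needed — can be sketched as a layered configuration merge. The key names and values below are illustrative, not PSC's actual configuration keys:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: platform-managed default client configs, selectively overridden
// by the application. Keys and values are illustrative only.
public class ConfigDefaultsSketch {
    static Map<String, String> platformDefaults() {
        Map<String, String> defaults = new HashMap<>();
        defaults.put("retries", "5");             // tuned by the platform team
        defaults.put("compression.type", "zstd"); // standardized across clients
        return defaults;
    }

    // Application overrides win; everything else falls back to platform defaults.
    static Map<String, String> effectiveConfig(Map<String, String> appOverrides) {
        Map<String, String> merged = new HashMap<>(platformDefaults());
        merged.putAll(appOverrides);
        return merged;
    }
}
```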

PSC takes it one step further with config logging. With psc.config.logging.enabled=true turned on, our platform team now has additional insight into the client configurations used across the PubSub environment in real time.

These features amount to not only significant dev velocity improvements but also gains in the stability and reliability of our PubSub services.


  • >80% reduction in Flink application restarts caused by remediable client exceptions
  • Estimated 275+ FTE hours / year saved in KTLO work by application and platform teams

How We Did It

Prior to PSC, client applications often encountered PubSub-related exceptions that resulted in application failures or restarts, severely impacting the stability of business-critical data jobs and increasing the KTLO burden for both platform and application teams. Moreover, many of these exceptions were resolvable via a client reset or even just a simple retry, meaning that the KTLO burden caused by these issues was unnecessarily large.

For instance, we noticed that out-of-sync metadata between client and server can occur during regular Kafka cluster maintenance and scaling operations such as broker replacements and rolling restarts. When client and server metadata go out of sync, the client starts to throw related exceptions, becomes unstable, and does not self-recover until it is reconstructed or reset. These auto-remediable issues threatened our ability to scale PubSub clusters efficiently to meet business needs, and caused significant KTLO overhead for all teams involved.

Automated Error Handling

To combat these risks, we implemented automated error handling within PSC. Positioned between the native client and application layers, PSC has the unique advantage of being able to catch and remediate known exceptions thrown by the backend client, all without causing disruption to the application layer.
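
The catch-and-remediate idea can be sketched as a small wrapper between the application and the native client. The retry policy, exception filtering, and reset hook below are simplified assumptions for illustration; PSC's actual remediation logic is internal to the library:

```java
import java.util.concurrent.Callable;

// Sketch: retry known-transient failures and "reset" (rebuild) the backend
// client before the final attempt, without surfacing the failure to the
// application layer unless all attempts are exhausted.
public class AutoRemediationSketch {
    public static <T> T callWithRemediation(Callable<T> call, Runnable resetClient,
                                            int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {      // in practice: only known-remediable exceptions
                last = e;
                if (attempt == maxAttempts - 1) {
                    resetClient.run();   // e.g. rebuild the client to refresh stale metadata
                }
            }
        }
        throw last; // non-remediable: surface to the application
    }
}
```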

With automated error handling logic implemented, we also ship PSC with it turned on by default, allowing all PubSub clients to run out-of-the-box with automated error handling managed by our platform team. Taking Flink-Kafka clients as an example, we have observed a greater than 80% reduction in job failures caused by remediable client exceptions after migrating them to PSC, all without any changes to our regular Kafka broker environment or scaling / maintenance activities:

Figure 1: More than 80% of remediable Flink job restarts were prevented via PSC

Thanks to automated error handling in PSC, we have been able to save more than 275 FTE hours per year in KTLO work across application and platform teams, driving significant improvements in the stability of client applications and the scalability of our PubSub environment. We are also actively adding to PSC's catalog of known exceptions and remediation strategies as we grow our understanding of these issues, as well as exploring options to take proactive instead of reactive measures to prevent such issues from occurring in the first place.


  • >90% of Java applications migrated to PSC (100% for Flink)
  • 0 incidents caused by migration
  • Full integration test suite and CICD pipeline

How We Did It

Feature and API Parity

Built with ease of adoption in mind, PSC comes with 100% feature and API parity with the native backend client version it supports. With PSC currently available for Kafka clients using Java, we have been able to migrate >90% of Pinterest's Java applications to PSC with minimal changes to their code and logic. In most cases, the only changes required in the application were:

  1. Replace the native client imports and references with the corresponding PSC ones
  2. Update the client configuration keys to match PSC's
  3. Remove all previous configurations related to service discovery / SSL and replace them with just the Resource Name (RN) string
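
Step 3 above can be illustrated as collapsing discovery and SSL settings into a single RN entry. All configuration key names in this sketch are hypothetical, not PSC's actual keys:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: endpoint and SSL settings in a native Kafka config are replaced
// by a single RN string, which carries the discovery information itself.
// Key names ("resource.name", etc.) are illustrative only.
public class MigrationSketch {
    static Map<String, String> toPscStyleConfig(Map<String, String> kafkaConfig, String rn) {
        Map<String, String> psc = new HashMap<>();
        for (Map.Entry<String, String> e : kafkaConfig.entrySet()) {
            String key = e.getKey();
            // Discovery and SSL settings are no longer needed: the RN covers them.
            if (key.equals("bootstrap.servers") || key.startsWith("ssl.")) {
                continue;
            }
            psc.put(key, e.getValue());
        }
        psc.put("resource.name", rn); // single source of truth for discovery
        return psc;
    }
}
```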

Simple, low-effort migrations enabled by feature and API parity have been a strong selling point for application teams to quickly and efficiently migrate their clients to PSC. We have observed 0 incidents so far with migrations and do not expect this number to increase.

Apache Flink Integration

To support the ever-growing share of clients using the Apache Flink data streaming framework, we have developed a Flink-PSC connector that enables Flink jobs to leverage the benefits of PSC. Given that around 50% of Java clients at Pinterest are on Flink, PSC integration with Flink was key to achieving our platform goal of fully migrating Java clients to PSC.

With Flink jobs, we had to ensure that migrations from Flink-Kafka to Flink-PSC were seamless, in that newly migrated Flink-PSC jobs must be able to recover from checkpoints generated by the pre-migration Flink-Kafka jobs. This is critical to Flink migrations because Flink jobs store offsets and a variety of other state-related information within the checkpoint files. This presented a technical challenge that required opening up the Flink-Kafka checkpoint files, understanding their contents, and understanding how the contents are processed by Flink source and sink operators. Ultimately, we were able to achieve 100% adoption of Flink-PSC at Pinterest with the following efforts:

  1. We implemented Kafka to PSC checkpoint migration logic within FlinkPscProducer and FlinkPscConsumer to ensure that state and offset information from the pre-migration Flink-Kafka checkpoint is recoverable by a Flink-PSC job
  2. We added a small amount of custom code in our internal release of the Flink-Kafka connector to ensure Flink-Kafka and Flink-PSC checkpoints are deemed compatible from the perspective of Flink's internal logic

Robust Integration Tests and CICD

With PSC in the active path for data ingestion and processing at Pinterest, we have taken extra care to ensure that it is robustly tested at all levels prior to releases, notably in integration testing and dev / staging environment testing. For this reason, PSC comes out-of-the-box with a full integration test suite that covers many common scenarios we have observed in our PubSub operational experience. Additionally, we have cataloged the public APIs within both PscConsumer and PscProducer to create a CICD pipeline that launches a PSC client application processing production-level traffic and touching all of the public APIs. Robust integration testing and CICD, alongside expansive unit test coverage, have been instrumental in building our confidence in PSC's ability to take on business-critical data workloads from day one.

Having been battle-tested at scale for over a year, PSC is now a core piece of the puzzle within Pinterest's data infrastructure. There is more work planned for the future, aiming to increase its technical capability and value to our business.

Error Handling Improvements

As PSC is onboarded to more client applications, we have begun to notice and catalog the variety of remediable errors that PSC currently cannot automatically resolve, and we are actively adding these capabilities to PSC with each new release. One example is detecting expiring SSL certificates so that a proactive client reset can be executed as certificate expiration approaches, loading a fresh certificate into the client's memory and preventing any interruption to a client using the SSL protocol.

Cost Attribution and Chargeback

PSC offers us the ability to track our clients, providing helpful information such as their attributed projects, hostnames, configurations, and more. One potential use case for this newfound visibility is to set up a chargeback framework for PubSub clients, so that platform teams are able to break down how their PubSub cost can be attributed to various client projects and teams.

C++ and Python

PSC is currently available in Java. To expand the scope of PSC, C++ support is being actively developed, while Python support is on the horizon.

PSC-Java is now open-sourced on GitHub under the Apache License 2.0. Check it out here! Feedback and contributions are welcome and encouraged.

The current state of PSC would not have been possible without significant contributions and support from Shardul Jewalikar and Ambud Sharma. Ping-Min Lin has also contributed significantly to the design and implementation of the project. Special thanks to the Logging Platform and Xenon Platform teams, Chunyan Wang, and Dave Burgess for their continuous guidance, feedback, and support.


Apache®️, Apache Kafka, Kafka, Apache Flink, and Flink are trademarks of the Apache Software Foundation.

To learn more about engineering at Pinterest, check out the rest of our Engineering Blog and visit our Pinterest Labs site. To explore and apply to open roles, visit our Careers page.