govindhtech · 3 months
How Memorystore for Redis Cluster Saved the Day at Character.AI
Google Memorystore
Character.AI’s mission is to provide an unmatched user experience, and the team believes that the key to maintaining user engagement is application responsiveness. Understanding the value of low latency and cost efficiency, they set out to optimise their cache layer, a vital part of the infrastructure supporting their service. Their desire for efficiency and their requirement for scalability prompted them to choose Google Cloud’s Memorystore for Redis Cluster.
First integration Thanks to Django’s flexible caching settings, Character.AI’s first attempt at using Memorystore for caching within their Django application was simple. They selected Memorystore for Redis (non-cluster) because it fully supported Redis and satisfied their needs across several application pods. Look-aside lazy caching greatly shortened response times; Memorystore now frequently returns results in the single-digit-millisecond range. This strategy, together with a high anticipated cache hit rate, ensured that their application remained performant and responsive even as they grew.
Look-aside caching When an application uses look-aside caching, it first checks the cache to determine whether the requested data is there. On a cache miss, the application falls back to a different datastore. This trade-off benefits from Memorystore’s high projected cache hit rate and typical single-digit-millisecond response time once the cache warms up.
In contrast to pre-loading all possible data beforehand, lazy loading loads data into the cache on a cache miss. Each data record has a configurable time-to-live (TTL) in the application logic, allowing data to expire from the cache after a set period. After it expires, the application lazily reloads the data into the cache. As a result, active user sessions see high cache hit rates, and subsequent sessions see fresh data.
Using a proxy to scale Character.AI became aware of the constraints of a single Memorystore instance as their user base increased. By using Twemproxy to establish a consistent hash ring, they were able to spread their operations across several Memorystore instances.
This was successful at first, but there were a few major problems. First, the team had to do extra work to manage, tune, and monitor Twemproxy, which added operational load and resulted in multiple outages as it failed to keep up with their demands. The main problem was that the hash ring was not self-healing, even though they still needed to tune the proxy for performance.
If one of the many target Memorystore instances was overloaded or undergoing maintenance, the ring would break and they would lose cache hit rate, putting excessive strain on the database. As the team scaled both the application and Twemproxy pods in their Kubernetes cluster, they also encountered severe TCP resource contention.
Scaling in software After determining that Twemproxy, despite its many advantages, was not scaling with them in terms of dependability, they moved the ring hashing directly into the application layer. By now, they had created three services and were scaling up their backend systems; two Python-based services shared a single implementation, while their Golang service had a separate one, so they had to maintain ring implementations across three services. Again, this was more effective than their earlier methods, but problems arose as they scaled up. The per-service ring implementations began to hit the roughly 65K maximum connections per instance, and maintaining lengthy lists of hosts became laborious.
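A consistent hash ring of the kind described above can be sketched in a few lines. This is an illustrative implementation, not Character.AI's actual code; virtual nodes smooth the key distribution across instances, and keys map to the nearest node clockwise on the ring.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent hash ring sketch for spreading keys across
    multiple cache instances (hostnames here are placeholder values)."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []                        # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):           # virtual nodes per real node
                h = self._hash(f"{node}#{i}")
                self.ring.append((h, node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_node(self, key):
        # Find the first ring position at or after the key's hash,
        # wrapping around to the start of the ring.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]
```

Note the failure mode the article describes: nothing here is self-healing. If one target instance goes down, every key that hashes to it simply misses until the ring is manually updated, which is exactly the operational burden Memorystore for Redis Cluster later removed.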
Moving to Memorystore for Redis Cluster One of the major milestones in their scaling journey came in 2023 when Google Cloud released Memorystore for Redis Cluster. By using sharding logic that is tightly integrated with the Redis storage layer, this service enabled them to simplify their design and do away with the need for external sharding solutions. Switching to Memorystore for Redis Cluster improved their application’s scalability and allowed their primary database layer to sustain steady loads and high cache hit rates as they grew.
Character.AI’s experience with Memorystore for Redis Cluster is a prime example of their dedication to using state-of-the-art technology to improve user experience. Thanks to Memorystore for Redis Cluster, they no longer need to worry about manually managing proxies or hash rings. Instead, they can rely on a fully managed system to provide scalability with minimal downtime and predictable low latencies. By shifting this administrative burden to Google Cloud, they are able to concentrate on their core competencies and user experience, enabling them to provide their global user base with revolutionary AI experiences. Character.AI is committed to staying at the forefront of the AI market by continually investigating new technologies and approaches to improve the performance and dependability of their application.
Overview of Memorystore for Redis Cluster Memorystore for Redis Cluster is a fully managed Redis service for Google Cloud.
By using this highly scalable, available, and secure Redis service, applications running on Google Cloud can achieve extreme performance without having to manage intricate Redis setups.
Important ideas and terminology
Resource organisation in a hierarchy Memorystore for Redis Cluster streamlines administration and management by organising the many resources used in a Redis deployment into a hierarchical structure, described below:
A Memorystore for Redis Cluster instance is made up of a collection of shards, each representing a portion of your keyspace. In a Memorystore cluster, each shard consists of one primary node and, optionally, one or more replica nodes. When replica nodes are added, Memorystore automatically distributes a shard’s nodes across zones to increase throughput and availability.
Instances Your data is contained in a Memorystore for Redis Cluster instance. When referring to a single Memorystore for Redis Cluster unit of deployment, the terms instance and cluster are synonymous. You need to provision enough shards for your Memorystore instance to serve the keyspace of your whole application.
Shards Your cluster consists of several shards, all of the same size.
Primary and replica nodes Every shard has a single primary node and zero, one, or two replica nodes. Replicas are evenly distributed across zones and provide increased read throughput and high availability.
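Provisioning a cluster with a chosen shard and replica count looks roughly like the following gcloud command. The cluster name, region, network path, and counts here are illustrative placeholder values, not from the article.

```shell
# Illustrative: create a Memorystore for Redis Cluster instance with
# 3 shards and 1 replica per shard (all names/values are placeholders).
gcloud redis clusters create my-cluster \
    --region=us-central1 \
    --shard-count=3 \
    --replica-count=1 \
    --network=projects/my-project/global/networks/default
```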
Redis version Memorystore for Redis Cluster is based on open-source Redis version 7.x and supports a subset of the full Redis command library.
Cluster endpoints Your client connects to the discovery endpoint on each instance and uses it to discover cluster nodes. Refer to Cluster endpoints for additional details.
Networking prerequisites Networking must be configured for your project before you can create a Memorystore for Redis Cluster instance.
Memorystore for Redis Cluster pricing For details on pricing in the available regions, visit the Pricing page.
Read more on Govindhtech.com
govindhtech · 6 months
Statsig Reaches 7.5M QPS In Memorystore for Redis Cluster
At Statsig, they are enthusiastic about trying new things. Their goal is to simplify the process of testing, iterating, and deploying product features for businesses so those businesses can obtain valuable insights into user behavior and performance.
With Statsig, a feature-flag, experimentation, and product analytics platform, users can make data-driven decisions to improve the user experience of their software and applications. It’s vital that the platform can process and update all of that data instantly. As the team grew beyond its original eight engineers running an open-source Redis cache, they looked for a managed database that could support their growing number of users and high query volume.
Redis Cluster mode Statsig was founded in 2021 to help businesses confidently ship, test, and manage software and application features. The business encountered bottlenecks and connectivity problems and realized it needed a fully managed Redis service that was scalable, dependable, and performant. Memorystore for Redis Cluster fulfilled all of the requirements: it offers higher queries-per-second (QPS) capacity along with high availability (a 99.99% SLA) and real-time analytics capabilities at a lower cost. This lets Statsig return its attention to its primary goal of creating a comprehensive product observability platform that optimizes impact.
Experimenting with a new database Their data store previously depended on a different cloud provider’s caching solution. Unfortunately, high costs, latency slowdowns, connectivity problems, and throughput bottlenecks prevented the desired value from being realized. As loads and demand increased, it became increasingly difficult to see how the previous system could continue to function sustainably.
Following in the footsteps of their own platform, they chose to experiment and search for a solution that would combine features, clustering capabilities, and strong efficiency at a lower cost. They selected Memorystore for Redis Cluster because it enabled them to achieve their objectives without sacrificing dependability or affordability.
Additionally, since the majority of their applications are stateless and can readily accommodate cache modifications, the transition to Memorystore for Redis Cluster was an easy and seamless opportunity to align their operations with their business plan.
Improving real-time data management Memorystore for Redis Cluster has developed into a vital tool for Statsig, offering strong scalability and flexibility for their operations. Its capabilities are put to use for read-through caching, real-time analytics, event and metric storage, and regional feature flagging. Memorystore also powers core Statsig console functionalities, including event sampling and deduplication in their streaming pipeline and real-time health checks.
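Event deduplication in a streaming pipeline is commonly built on Redis's atomic SET-if-not-exists pattern (SET key value NX EX ttl): the first worker to claim an event ID processes it, and duplicates are dropped. The sketch below illustrates this pattern only; the article does not describe Statsig's actual implementation, and a plain dict stands in for the Redis client so the example is self-contained.

```python
import time

# Stand-in for Redis; in production this would be a Memorystore for
# Redis Cluster client issuing SET ... NX EX commands.
_seen = {}  # dedup key -> expiry timestamp

def set_nx_ex(key, ttl):
    """Mimics `SET key 1 NX EX ttl`: returns True only for the first
    caller to set an unexpired key."""
    now = time.time()
    expiry = _seen.get(key)
    if expiry is not None and expiry > now:
        return False            # key already set and not yet expired
    _seen[key] = now + ttl
    return True

def should_process(event_id, ttl=3600):
    # First worker to claim the event ID wins; later duplicates are skipped.
    return set_nx_ex(f"dedup:{event_id}", ttl)
```

The TTL bounds memory use: dedup keys expire after the window in which duplicates can realistically arrive.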
The high availability (99.99% SLA) of Memorystore for Redis Cluster guarantees the dependable performance that is essential to their services. They can dynamically adjust the size of their cluster as needed, thanks to the ability to scale in and out with ease.
There is no denying the outcomes, as quantifiable advancements have been noted in several important domains:
Enhanced database speed They are more confident in the caching layer with Memorystore for Redis Cluster, as it supports more use cases and higher queries per second (QPS). To keep up with their customers as they grow, they can now easily handle 1.5 million QPS on average and up to 7.5 million QPS at peak.
Increased scalability Memorystore for Redis Cluster’s capacity to scale in or out with zero downtime has enabled them to support a variety of use cases and higher QPS, putting them in a position to expand their clientele and offerings.
Cost effectiveness and dependability They have managed to cut expenses significantly without sacrificing the quality of their services. Compared to the costs of running the same workloads on their previous cloud provider, the efficiency of their database running on Memorystore has resulted in a 70% reduction in Redis costs. Their requirements for processing data in real time have also benefited from Memorystore’s dependable performance.
Improved monitoring and administration Since switching to Memorystore, they no longer have to troubleshoot persistent Redis connection problems in their database management. Memorystore’s user-friendly monitoring tools let their developers concentrate on platform innovation rather than spending time troubleshooting database problems.
Integrated security An additional layer of security is always beneficial, and the smooth integration of Memorystore with Google Cloud VPC improves their security posture.
Memorystore for Redis Cluster Memorystore for Redis Cluster is a fully managed Redis solution for Google Cloud. By using this highly scalable, available, and secure Redis service, applications running on Google Cloud can achieve tremendous speed without having to maintain intricate Redis setups.
A Memorystore for Redis Cluster instance is made up of a collection of shards, each representing a portion of your keyspace. Each shard consists of one primary node and, optionally, one or more replica nodes. When replica nodes are added, Memorystore automatically distributes a shard’s nodes across zones to increase performance and availability.
Instances Your data is housed in a Memorystore for Redis Cluster instance. When referring to a single Memorystore for Redis Cluster unit of deployment, the words instance and cluster are synonymous. You need to provision enough shards for your Memorystore instance to serve the keyspace of your whole application.
Anticipating the future of Memorystore
They are confident that Memorystore for Redis Cluster will continue to be a key component of their services, a database they can rely on to support higher loads and increased demand, and they are expanding their use of it to enable smarter and faster product development as their customer base grows. In the future, they will continue to find new applications for Memorystore’s strong features and consistent scalability.