Unity Ads Performs 10M Tasks Per Second With Memorystore

Memorystore powers up to 10 million operations per second in Unity Ads.
Before moving off its own self-managed Redis infrastructure, Unity Ads, a mobile advertising company, was looking for a solution that would lower maintenance costs and scale better across a range of use cases. It found one in Memorystore for Redis Cluster, a fully managed service built for high-performance workloads, where Unity moved its workloads. Today, a single instance of its infrastructure can process up to 10 million Redis operations per second. The business now has a more dependable and scalable infrastructure, lower costs, and more time to devote to high-value work.
Handling one million operations per second is an impressive accomplishment for many companies, but it is routine at Unity Ads. Unity's mobile performance ads solution manages this volume of activity daily, serving ads to a wide network of mobile apps and games and showcasing what Redis clusters can reliably sustain. The extreme performance requirements come from the many database operations needed for real-time ad requests, bidding, and ad selection, as well as for updating session data and tracking performance metrics.
Google Cloud knows that this extraordinary demand requires a highly scalable and resilient infrastructure. That is where Memorystore for Redis Cluster comes in: it is built to handle the taxing demands of sectors where speed and scale are essential, such as gaming, banking, and advertising. This fully managed service consolidates heavy workloads into a single, high-performance cluster, delivering noticeably higher throughput and data capacity while preserving microsecond latencies.
Succeeding with Memorystore for Redis Cluster
Unity faced several issues with its prior do-it-yourself (DIY) setup before adopting Memorystore. For starters, it ran several types of self-managed Redis clusters, from Kubernetes operators to static clusters built from Terraform modules. These took a lot of work to scale and maintain, and they demanded specialized expertise. Unity frequently overprovisioned these DIY clusters primarily to minimize potential downtime. In the high-performance ad industry, however, where every microsecond and fraction of a penny matters, that expense and the time spent managing the infrastructure were unsustainable.
Memorystore presented a convincing solution for Unity Ads. Making the switch was easy because it blended in seamlessly with the existing configuration, and it cost about the same as the DIY solution without the management overhead. Since Unity was already a Google Cloud user, deeper integration with the platform was a further benefit.
The most crucial characteristic is scalability. The ability of Memorystore for Redis Cluster to scale with zero downtime is one of its best qualities. With a single click or command, customers can expand their clusters to handle terabytes of keyspace, easily adjusting to changing demand. Memorystore also has clever features that improve usability and dependability: the service manages replica nodes, placing them in zones other than their primaries to guard against outages, and automatically distributes nodes among zones for high availability. What would otherwise be a painstaking manual procedure is handled automatically.
All of this convinced Unity Ads to move its use cases, which included state management, distributed locks, a central valuation cache, and session data. The migration proceeded more smoothly than expected. By double-writing during the transition, Unity successfully completed its most important migration, the session data handling up to 1 million Redis operations per second. Even more impressive was the valuation cache migration, which served up to half a million requests per second (equivalent to over 1 million Redis operations per second) with little impact on service and took only 15 minutes to complete. The team also migrated Unity's distributed locks system, which prevents the same event from being processed twice, to Memorystore.
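The double-write pattern used during such a cutover can be sketched roughly as follows. The client classes and key names here are illustrative stand-ins, not Unity's actual code: writes go to both the legacy and new clusters while reads stay on the legacy cluster, and reads are flipped once the new cluster is warm.

```python
class FakeRedis:
    """Minimal in-memory stand-in for a Redis client (illustration only)."""
    def __init__(self):
        self.store = {}
    def set(self, key, value, ex=None):
        self.store[key] = value
    def get(self, key):
        return self.store.get(key)

class DoubleWriter:
    """Write to both old and new clusters; read from old until cutover."""
    def __init__(self, old, new):
        self.old, self.new = old, new
        self.read_from_new = False
    def set(self, key, value, ex=None):
        self.old.set(key, value, ex=ex)   # keep the legacy cluster current
        self.new.set(key, value, ex=ex)   # warm the new cluster in parallel
    def get(self, key):
        source = self.new if self.read_from_new else self.old
        return source.get(key)

old_cluster, new_cluster = FakeRedis(), FakeRedis()
client = DoubleWriter(old_cluster, new_cluster)
client.set("session:42", "active")
client.read_from_new = True            # flip reads once the new cluster is warm
print(client.get("session:42"))        # -> active
```

Because every write lands in both clusters, the read cutover can be flipped (and rolled back) at any moment without losing cache hits.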
Memorystore in operation: Unity Ads' experience
Using Memorystore for Redis Cluster has been transformative for Unity Ads. One of the most obvious benefits was greater infrastructure stability. Because the prior DIY Redis cluster ran on several layers of virtualization services, such as Kubernetes and cloud compute, where Unity lacked direct observability and control, it frequently hit erratic performance issues that were hard to diagnose.
Consider this CPU utilization graph of individual nodes from the previous self-managed setup (image credit: Google Cloud):
As the graph shows, CPU consumption across many nodes fluctuated widely and spiked frequently. Sustaining steady performance in these circumstances was challenging, particularly during periods of high traffic.
Unity also hit issues with its DIY Redis clusters during Kubernetes node-pool upgrades, which frequently result from automatic upgrades to a new Kubernetes version. As the following graph shows, p99 latency skyrocketed (image credit: Google Cloud):
Another big benefit is that Unity can now grow production capacity seamlessly with Memorystore. This graph shows its client metrics as it increased the cluster's size by 60%.
The operation rate stayed impressively constant throughout the procedure, with only very small variations. This degree of seamless scaling was a game changer for Unity, making it possible to adjust to shifting demand without compromising its services.
The data above demonstrates the tremendous performance and consistently low latency Unity has achieved with Memorystore. For an ad-serving platform where every microsecond matters, this performance level is essential.
Making a profit from innovation
Unity Ads' switch to Memorystore has brought notable operational improvements in addition to performance gains. The team no longer spends effort testing and fine-tuning its Redis clusters to prepare them for production.
On the business side, Unity anticipates that by sizing clusters properly and applying the relevant Committed Use Discounts, it can achieve cost savings on par with its prior DIY solution, particularly with the addition of single-zone clusters to cut networking expenses. For a price comparable to the prior self-managed Redis deployment, it now gets a fully managed, scalable, and far more dependable (99.99% SLA) solution with Memorystore.
Looking ahead, Memorystore has opened up new opportunities for system architecture. Unity is now considering a "Memorystore-first" strategy for many of its use cases. For instance, engineers frequently choose persistent database solutions like Bigtable when building crucial data systems because they don't want to take chances, even when the use case doesn't actually require persistence or strong consistency.
Databases like Bigtable are better suited to data that must persist for months to years, but some use cases only need durability for about an hour. Unity can save money and development time by avoiding such shortcuts and optimizing its shorter-lived (hours-to-days) data use cases with a hardened, dependable Redis Cluster solution like Memorystore. Overall, Memorystore's scalability and dependability let Unity confidently extend Redis across more of its infrastructure to cut costs and boost performance.
The ease with which persistence (AOF or RDB) can be enabled on Memorystore is another revolutionary advantage. Because Unity's Kubernetes DIY Redis cluster did not support persistence, its use cases were restricted to caching scenarios with transient data it could afford to lose. With Memorystore's one-click persistence, it can expand use cases and even mix use cases within the same cluster, which boosts utilization and reduces expenses.
Every second matters in business. By allowing the team to concentrate on core business innovation instead of infrastructure administration, Memorystore is helping Unity Ads remain competitive and deliver better outcomes for its publishers and advertisers.
Read more on Govindhtech.com
Vector Search In Memorystore For Redis Cluster And Valkey

With the arrival of vector search earlier this year, Memorystore for Redis Cluster became an ideal platform for generative AI use cases such as Retrieval Augmented Generation (RAG), recommendation systems, and semantic search. Why? Because of its exceptionally low-latency vector search: a single Memorystore for Redis instance can search over tens of millions of vectors at single-digit-millisecond latency. But what happens when you need to store more vectors than a single virtual machine can hold?
Google is introducing vector search on the new Memorystore for Redis Cluster and Memorystore for Valkey, which combines three exciting features:
1) Zero-downtime scalability (in or out);
2) Ultra-low latency in-memory vector search;
3) Robust high performance vector search over millions or billions of vectors.
Vector support for these Memorystore products, now in preview, lets you scale your cluster up to 250 shards and store billions of vectors in a single instance. Indeed, a single Memorystore for Redis Cluster instance can perform vector search over more than a billion vectors at single-digit-millisecond latency with greater than 99% recall! This scale enables demanding enterprise applications such as semantic search over a worldwide corpus of data.
Scalable in-memory vector search
Partitioning the vector index among the cluster's nodes is essential for both performance and scalability. Memorystore employs a local index partitioning technique: every node holds an index partition corresponding to the fraction of the keyspace stored locally. Because the OSS cluster protocol already shards the keyspace uniformly, each index partition is approximately equal in size.
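The uniform sharding mentioned here comes from Redis Cluster's fixed slot scheme: every key is hashed with CRC16 and assigned to one of 16,384 slots, which are divided among the shards. A minimal sketch of that mapping (ignoring Redis's `{hash tag}` rule, which lets related keys share a slot):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # Redis Cluster assigns each key to one of 16384 slots; slot ranges
    # are then divided among the shards, so keys spread uniformly.
    return crc16_xmodem(key.encode()) % 16384

slot = hash_slot("session:42")
print(0 <= slot < 16384)                # -> True (always a valid slot)
print(hash_slot("session:42") == slot)  # -> True (mapping is deterministic)
```

Because the mapping is deterministic and uniform, each node's local vector index covers roughly the same share of keys.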
With this architecture, index build times for all vector index types improve linearly as nodes are added. Furthermore, for a fixed number of vectors, adding nodes improves brute-force search performance linearly and Hierarchical Navigable Small World (HNSW) search performance logarithmically. All in all, a single cluster can support billions of searchable, indexable vectors while preserving fast index creation times and low search latencies at high recall.
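The scatter-gather behavior of local index partitioning can be illustrated with a toy brute-force search: each "shard" answers from its own partition, and the per-shard results are merged into a global top-k. The documents and vectors below are made up for illustration:

```python
import math

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Each "shard" holds a local index over its slice of the keyspace.
shards = [
    {"doc1": [1.0, 0.0], "doc2": [0.9, 0.1]},   # shard 0
    {"doc3": [0.0, 1.0], "doc4": [0.1, 0.9]},   # shard 1
]

def knn(query, k=2):
    # Scatter: each shard searches only its local partition...
    partials = []
    for shard in shards:
        partials.extend((key, cosine_sim(query, vec)) for key, vec in shard.items())
    # ...gather: merge the per-shard candidates into a global top-k.
    return sorted(partials, key=lambda kv: kv[1], reverse=True)[:k]

print(knn([1.0, 0.05]))   # doc1 and doc2 rank highest
```

Since each shard scans only its own fraction of the vectors, adding shards divides the per-node work, which is why brute-force throughput grows roughly linearly with node count.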
Hybrid queries
In addition to better scalability, Google is announcing support for hybrid queries on Memorystore for Valkey and Memorystore for Redis Cluster. Hybrid queries let you combine vector searches with filters on tag and numeric fields; Memorystore combines tag, numeric, and vector search to answer complex queries.
These filter expressions also support boolean logic, so you can combine multiple fields to narrow search results to only the relevant records. With this new functionality, applications can tailor vector search queries to their requirements, producing considerably richer results than before.
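Conceptually, a hybrid query applies the tag and numeric filters first and ranks only the surviving records by vector similarity. The following toy sketch (with invented field names, not Memorystore's API) mimics a RediSearch-style filter such as `@category:{game} @bid:[0.5 +inf]` followed by a KNN ranking:

```python
import math

# Tiny corpus with a tag field, a numeric field, and an embedding.
docs = {
    "ad1": {"category": "game", "bid": 1.2, "vec": [1.0, 0.0]},
    "ad2": {"category": "game", "bid": 0.4, "vec": [0.8, 0.2]},
    "ad3": {"category": "news", "bid": 2.0, "vec": [0.9, 0.1]},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def hybrid_search(query_vec, category, min_bid, k=1):
    # Boolean pre-filter: tag equality AND numeric range.
    survivors = [(key, d) for key, d in docs.items()
                 if d["category"] == category and d["bid"] >= min_bid]
    # Rank only the survivors by vector similarity.
    ranked = sorted(survivors, key=lambda kd: cosine(query_vec, kd[1]["vec"]),
                    reverse=True)
    return [key for key, _ in ranked[:k]]

print(hybrid_search([1.0, 0.0], "game", 0.5))   # ad3 fails the tag, ad2 the bid
```

Filtering before ranking is what keeps hybrid queries cheap: the expensive similarity computation runs only over the records that match the boolean predicate.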
OSS Valkey in the background
The Valkey key-value datastore has generated a lot of interest in the open-source community. As part of its commitment to making Valkey great, Google has coauthored a Request for Comments (RFC) and is collaborating with the open-source community to contribute its vector search capabilities to Valkey. The community alignment process begins with an RFC, and Google encourages feedback on its proposal and implementation. The main objective is to enable Valkey developers worldwide to use Valkey vector search to build incredible next-generation AI applications.
The quest for fast, scalable vector search is over
With the addition of fast and scalable vector search on Memorystore for Redis Cluster and Memorystore for Valkey, alongside the features already available on Memorystore for Redis, Memorystore now offers ultra-low-latency vector search across all of its most widely used engines. For generative AI applications that need reliable, consistently low-latency vector search, Memorystore will be difficult to beat. To experience the speed of in-memory search, get started today by creating a Memorystore for Valkey or Memorystore for Redis Cluster instance.
How Memorystore for Redis Cluster Saved the Day at Character.AI

Character.AI's mission is to provide an unmatched user experience, and the team believes that application responsiveness is key to keeping users engaged. Understanding the value of low latency and cost efficiency, they set out to optimize their cache layer, a vital part of the infrastructure supporting their service. Their desire for efficiency and their need for scalability led them to choose Google Cloud's Memorystore for Redis Cluster.
First integration
Thanks to Django's flexible caching settings, Character.AI's first attempt at using Memorystore for caching within their Django application was simple. They selected Memorystore for Redis (non-cluster) because it fully supported Redis and satisfied their needs across several application pods. Look-aside lazy caching greatly shortened response times; Memorystore now frequently returns results in the single-digit-millisecond range. This strategy, together with a high expected cache hit rate, kept the application performant and responsive even as they grew.
Look-aside caching
When an application uses look-aside caching, it first checks the cache to see if the requested data is there. If the data is not in the cache, the application falls back to another datastore. This trade-off pays off thanks to Memorystore's high expected cache hit rate and typical single-digit-millisecond response time once the cache warms up.
In contrast to pre-loading all possible data up front, lazy loading populates the cache on a cache miss. Each data record has a configurable time-to-live (TTL) in the application logic, so data expires from the cache after a set period. After expiry, the application lazily reloads the data into the cache. The result is fresh data in the next session and high cache hit rates during active user sessions.
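The look-aside, lazy-loading pattern with TTLs described above can be sketched like this; the `TTLCache` class is a tiny in-memory stand-in for Memorystore, and the key names and data are hypothetical:

```python
import time

class TTLCache:
    """In-memory stand-in for Memorystore: values expire after a TTL."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._data[key]          # lazily evict stale entries
            return None
        return value
    def set(self, key, value, ttl_seconds):
        self._data[key] = (value, time.monotonic() + ttl_seconds)

database = {"user:1": "Ada"}             # stands in for the primary datastore

def get_user(cache, user_id, ttl=3600):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:               # cache hit: fast in-memory path
        return cached
    value = database[key]                # cache miss: fall back to the database
    cache.set(key, value, ttl)           # lazy-load so the next read is a hit
    return value

cache = TTLCache()
print(get_user(cache, 1))   # miss -> loads from the database
print(get_user(cache, 1))   # hit  -> served from the cache
```

The TTL is the knob that trades freshness for hit rate: longer TTLs mean fewer database reads but staler data.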
Scaling with a proxy
Character.AI became aware of the constraints of a single Memorystore instance as their user base increased. They spread operations across several Memorystore instances by using Twemproxy to build a consistent hash ring.
This worked at first, but there were a few major problems. Their staff had to work harder to manage, tune, and monitor Twemproxy, which added operational load and led to multiple outages as it failed to keep up with demand. The main problem was that the hash ring was not self-healing, even as they still needed to tune the proxy for performance.
If one of the many target Memorystore instances was overloaded or undergoing maintenance, the ring would break and the cache hit rate would drop, putting excessive strain on the database. As the team scaled both the application and Twemproxy pods in their Kubernetes cluster, they also ran into severe TCP resource contention.
Scaling in software
After concluding that Twemproxy, despite its many advantages, was not scaling reliably with them, they moved ring hashing directly into the application layer. By then they had three services and were scaling up their backend systems: two Python-based services shared one implementation, while their Golang service had another, so they had to maintain ring implementations across three services. Again, this worked better than their earlier approaches, but problems arose as they scaled: the per-service ring implementations began to run into the 65K maximum connection limit, and maintaining long lists of hosts became laborious.
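An application-layer consistent hash ring of the kind described here might look roughly like the following sketch; the virtual-node count and host names are made up, and real implementations (like the ones Character.AI maintained) add replication and health checking on top:

```python
import bisect
import hashlib

class HashRing:
    """Consistent hash ring with virtual nodes (illustrative sketch)."""
    def __init__(self, hosts, vnodes=100):
        self._ring = []
        for host in hosts:
            # Virtual nodes smooth out the key distribution per host.
            for i in range(vnodes):
                self._ring.append((self._hash(f"{host}#{i}"), host))
        self._ring.sort()
        self._points = [p for p, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["redis-1", "redis-2", "redis-3"])
assignment = ring.node_for("session:42")
print(assignment in {"redis-1", "redis-2", "redis-3"})  # -> True
print(ring.node_for("session:42") == assignment)        # -> True (stable)
```

The appeal of consistent hashing is that adding or removing a host remaps only a small fraction of keys; its weakness, as the article notes, is that the ring itself is not self-healing when a target instance goes down.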
Moving to Memorystore for Redis Cluster
One of the major milestones in their scaling journey came in 2023, when Google Cloud released Memorystore for Redis Cluster. With sharding logic tightly integrated into the Redis storage layer, the service let them simplify their design and eliminate external sharding solutions. Switching to Memorystore for Redis Cluster improved the application's scalability and allowed the primary database layer to sustain steady loads and high cache hit rates as they grew.
Character.AI's experience with Memorystore for Redis Cluster exemplifies their dedication to using state-of-the-art technology to improve user experience. Thanks to Memorystore for Redis Cluster, they no longer need to manage proxies or hash rings manually; instead, they can rely on a fully managed system that provides scalability with minimal downtime and predictably low latencies. By shifting this administrative burden to Google Cloud, they can concentrate on their core competencies and user experience, enabling them to deliver revolutionary AI experiences to their global user base. Character.AI is committed to staying at the forefront of the AI market by continually investigating new technologies and approaches to improve the performance and dependability of their application.
Overview of Memorystore for Redis Cluster
Memorystore for Redis Cluster is a fully managed Redis service.
By using this highly scalable, available, and secure Redis service, applications running on Google Cloud can achieve extraordinary speed without the burden of managing intricate Redis setups.
Key concepts and terminology
Hierarchical resource organization
Memorystore for Redis Cluster streamlines administration and management by organizing the resources used in a Redis deployment into a hierarchical structure.
A Memorystore for Redis Cluster instance is made up of shards, each representing a portion of your keyspace. Each shard consists of one primary node and, optionally, one or more replica nodes. When replica nodes are added, Memorystore automatically distributes a shard's nodes across zones to increase throughput and availability.
Instances
Your data lives in a Memorystore for Redis Cluster instance. When referring to a single Memorystore for Redis Cluster unit of deployment, the terms instance and cluster are synonymous. You need to provision enough shards for your Memorystore instance to serve the keyspace of your whole application.
Shards
Your cluster contains several shards, all of the same size.
Primary and replica nodes
Every shard has a single primary node and zero, one, or two replica nodes. Replicas are evenly distributed across zones and provide increased read throughput and high availability.
Redis version
Memorystore for Redis Cluster is based on open-source Redis version 7.x and supports a subset of the full Redis command library.
Cluster endpoints
Your client connects to each instance's discovery endpoint, which it also uses to discover cluster nodes. Refer to Cluster endpoints for additional details.
Networking prerequisites
Before you can create a Memorystore for Redis Cluster instance, networking must be configured for your project.
Memorystore for Redis Cluster pricing
For details on pricing in the available regions, visit the Pricing page.
Statsig Reaches 7.5M QPS In Memorystore for Redis Cluster

At Statsig, the team is enthusiastic about trying new things. Their goal is to simplify the process of testing, iterating on, and deploying product features so that businesses can gain valuable insights into user behavior and performance.
With Statsig, a feature-flag, experimentation, and product analytics platform, users can make data-driven decisions to improve the user experience of their software and applications. It is vital that the platform can ingest and update all of that data instantly. As the team grew beyond its original eight engineers using an open-source Redis cache, they looked for a managed database that could support their growing user base and high query volume.
Redis Cluster mode
Statsig was founded in 2021 to help businesses confidently ship, test, and manage software and application features. As the business encountered bottlenecks and connectivity problems, it realized it needed a fully managed Redis service that was scalable, dependable, and performant. Memorystore for Redis Cluster fulfilled all of the requirements: higher queries-per-second (QPS) capacity, robust availability (99.99% SLA), and real-time analytics capabilities at a lower cost. This lets Statsig return its attention to its primary goal of building a comprehensive product observability platform that maximizes impact.
Experimenting with a new database
Their data store previously depended on a different cloud provider's caching solution. Unfortunately, high costs, latency slowdowns, connectivity problems, and throughput bottlenecks prevented them from realizing the desired value. As loads and demand increased, it became increasingly hard to see how the previous system could keep functioning sustainably.
True to their own platform's philosophy, they chose to experiment and search for a solution combining features, clustering capabilities, and strong efficiency at a lower cost. They selected Memorystore for Redis Cluster because it let them achieve their objectives without sacrificing dependability or affordability.
Additionally, since the majority of their applications are stateless and can readily accommodate cache changes, the move to Memorystore for Redis Cluster was an easy, seamless opportunity to align their operations with their business plan.
Improving real-time data management
Memorystore for Redis Cluster has become a vital tool for Statsig, offering strong scalability and flexibility for their operations. They put its capabilities to use for read-through caching, real-time analytics, event and metric storage, and regional feature flagging. Memorystore also powers core Statsig console functionality, including event sampling and deduplication in their streaming pipeline and real-time health checks.
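Event deduplication of the kind mentioned here is commonly built on Redis's `SET ... NX EX` semantics: the first writer of a key wins, and the TTL bounds the dedup window. A sketch with an in-memory stand-in for the Redis client (not Statsig's actual implementation; the key prefix and window are invented):

```python
import time

class FakeRedis:
    """Minimal in-memory stand-in supporting SET with NX and EX (illustration only)."""
    def __init__(self):
        self._data = {}
    def set(self, key, value, nx=False, ex=None):
        now = time.monotonic()
        entry = self._data.get(key)
        if entry is not None and (entry[1] is None or now < entry[1]):
            if nx:
                return None              # key still live: SET NX fails
        expires = now + ex if ex is not None else None
        self._data[key] = (value, expires)
        return True

def is_first_occurrence(client, event_id, window_seconds=3600):
    # SET key NX EX succeeds only for the first event in the window,
    # so duplicates arriving within the TTL are dropped.
    return client.set(f"dedupe:{event_id}", 1, nx=True, ex=window_seconds) is True

r = FakeRedis()
print(is_first_occurrence(r, "evt-123"))   # -> True  (first time seen)
print(is_first_occurrence(r, "evt-123"))   # -> False (duplicate suppressed)
```

Because the check-and-set is a single atomic command in real Redis, this works correctly even with many concurrent pipeline workers.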
The high availability (99.99% SLA) of Memorystore for Redis Cluster guarantees the dependable performance essential to their services, and the ability to scale in and out with ease lets them dynamically resize their cluster as needed.
The outcomes are undeniable, with measurable improvements in several important areas:
Enhanced database speed
With Memorystore for Redis Cluster, they are more confident in the caching layer, which now supports more use cases and higher QPS. To keep up with their customers as they grow, they can now easily handle 1.5 million QPS on average and up to 7.5 million QPS at peak.
Increased scalability
Memorystore for Redis Cluster's capacity to scale in or out with zero downtime has enabled them to support a variety of use cases and higher QPS, positioning them to expand their clientele and offerings.
Cost effectiveness and dependability
They have cut expenses significantly without sacrificing service quality. Compared to running the same workloads on their previous cloud provider, the efficiency of their database on Memorystore has produced a 70% reduction in Redis costs. Memorystore's dependable performance has also benefited their real-time data processing requirements.
Improved monitoring and administration
Since switching to Memorystore, they no longer have to troubleshoot persistent Redis connection problems in their database management. Memorystore's user-friendly monitoring tools let their developers concentrate on platform innovation rather than spending time troubleshooting database problems.
Integrated security
An additional layer of security is always beneficial. Memorystore's smooth integration with Google Cloud VPC improves their security posture.
Memorystore for Redis Cluster
Memorystore for Redis Cluster is a fully managed Redis solution for Google Cloud. By using this highly scalable, available, and secure Redis service, applications running on Google Cloud can gain tremendous speed without having to maintain intricate Redis setups.
A Memorystore for Redis Cluster instance is made up of shards, each representing a portion of your keyspace. Each shard consists of one primary node and, optionally, one or more replica nodes. When replica nodes are added, Memorystore automatically distributes a shard's nodes across zones to increase performance and availability.
Instances
Your data is housed in a Memorystore for Redis Cluster instance. When referring to a single Memorystore for Redis Cluster unit of deployment, the terms instance and cluster are synonymous. You need to provision enough shards for your Memorystore instance to serve the keyspace of your whole application.
Anticipating the future of Memorystore
They are confident that Memorystore for Redis Cluster will remain a key component of their services, a database they can rely on to support higher loads and increased demand, and they are expanding their use of it to enable smarter and faster product development as their customer base grows. Going forward, they will continue to find new applications for Memorystore's strong features and consistent scalability.