Memorystore for Redis Cluster
govindhtech · 10 months ago
Valkey 7.2 On Memorystore: Open-Source Key-Value Service
Google Cloud has launched Memorystore for Valkey, a 100% open-source key-value service.
The Memorystore team is happy to announce the preview launch of Valkey 7.2 support for Memorystore, giving users a high-performance, genuinely open-source key-value service.
Memorystore for Valkey
Memorystore for Valkey is a fully managed Valkey Cluster service for Google Cloud. By building on this highly scalable, reliable, and secure Valkey service, Google Cloud applications can achieve exceptional performance without having to manage intricate Valkey deployments.
To guarantee high availability, Memorystore for Valkey distributes (or “shards”) your data across primary nodes and replicates it to optional replica nodes. Because Valkey performs better on many smaller nodes than on fewer larger ones, this horizontally scalable architecture outperforms a vertically scaled one.
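The slot-based sharding described above follows the standard Redis/Valkey Cluster scheme: each key is mapped to one of 16,384 hash slots via a CRC16 checksum, and keys sharing a `{...}` hash tag land on the same shard. A minimal Python sketch of that mapping (illustrative only; real cluster clients such as redis-py implement this internally):

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the variant Redis/Valkey Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of 16384 cluster slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag only
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

Keys like `{user1000}.following` and `{user1000}.followers` hash to the same slot, which is how multi-key operations stay possible on a sharded cluster.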
Memorystore for Valkey is a game-changer for enterprises looking for high-performance data management solutions built on 100% open-source software. It was added to the Memorystore portfolio in response to customer demand, alongside Memorystore for Redis Cluster and Memorystore for Redis. From the console or gcloud, users can now quickly and simply create a fully managed Valkey Cluster, which they can then scale up or down to suit the demands of their workloads.
Thanks to its outstanding performance, scalability, and flexibility, Valkey has quickly gained popularity as an open-source key-value datastore. Valkey 7.2 provides Google Cloud users with a genuinely open source solution via the Linux Foundation. It is fully compatible with Redis 7.2 and the most widely used Redis clients, including Jedis, redis-py, node-redis, and go-redis.
Valkey is already being used by customers to replace their key-value software, and it is being used for common use cases such as caching, session management, real-time analytics, and many more.
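The caching use case mentioned above usually follows the cache-aside pattern: check the key-value store first and fall back to the slower source of truth on a miss. A small sketch, with a plain dict standing in for a Valkey/Redis client connection (names and the `load_fn` callback are illustrative):

```python
class CacheAside:
    """Cache-aside: read from the cache, populate it on a miss."""

    def __init__(self, cache, load_fn):
        self.cache = cache        # stand-in for a Valkey/Redis client
        self.load_fn = load_fn    # e.g. a database query
        self.misses = 0

    def get(self, key):
        value = self.cache.get(key)
        if value is None:
            self.misses += 1
            value = self.load_fn(key)
            # A real client would set a TTL, e.g. client.set(key, value, ex=3600)
            self.cache[key] = value
        return value
```

The same shape covers session management: the session ID is the key, and the expensive load is reconstructing session state.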
Memorystore for Valkey launches with all the GA capabilities of Memorystore for Redis Cluster, giving customers a nearly identical (and code-compatible) Valkey Cluster experience. Like Memorystore for Redis Cluster, Memorystore for Valkey provides RDB and AOF persistence, zero-downtime scaling in and out, single- or multi-zone clusters, out-of-the-box integrations with Google Cloud, ultra-low and dependable latency, and much more. Instances of up to 14.5 TB are also available.
Memorystore for Valkey, Memorystore for Redis Cluster, and Memorystore for Redis have an exciting roadmap of features and capabilities.
The momentum of Valkey
Just days after Redis Inc. moved Redis away from its open-source license, the open-source community launched Valkey in collaboration with the Linux Foundation in March 2024 (1, 2, 3). Since then, the community has worked with developers and businesses worldwide to propel Valkey to the forefront of key-value data stores and establish it as a premier open source software (OSS) project. Google Cloud is excited to participate in this community launch with partners and industry experts like Snap, Ericsson, AWS, Verizon, Alibaba Cloud, Aiven, Chainguard, Heroku, Huawei, Oracle, Percona, Ampere, AlmaLinux OS Foundation, DigitalOcean, Broadcom, Memurai, Instaclustr from NetApp, and numerous others, and fervently supports open source software.
Thanks to the support of thousands of enthusiastic developers and the former core OSS Redis maintainers who were not hired by Redis Inc., the Valkey community has grown into a thriving group committed to making Valkey the best open source key-value service available.
With more than 100 million unique active users each month, Mercado Libre is the biggest finance, logistics, and e-commerce company in Latin America. Diego Delgado, a Senior Software Expert at Mercado Libre, discusses Valkey:
“At Mercado Libre, we need to handle billions of requests per minute with minimal latency, which makes caching solutions essential. We are especially thrilled about the cutting-edge possibilities that Valkey offers, and we are excited to investigate its fresh features and contribute to this open-source endeavor.”
The best is yet to come
By releasing Memorystore for Valkey 7.2, Memorystore now offers more than just Redis Cluster, Redis, and Memcached. And Google Cloud is even more excited about Valkey 8.0’s revolutionary features. The first release candidate of Valkey 8.0 introduced major improvements in five important areas: performance, reliability, replication, observability, and efficiency. With a single click or command, users will be able to adopt Valkey 7.2 and later upgrade to Valkey 8.0. Additionally, Valkey 8.0 is compatible with Redis 7.2, just like Valkey 7.2, guaranteeing a seamless transition for users.
The performance improvements in Valkey 8.0 are possibly the most intriguing. Asynchronous I/O threading allows commands to be processed in parallel, which can let multi-core nodes run at more than twice the rate of Redis 7.2. On the reliability front, a number of improvements contributed by Google, such as replicating slot migration states, guaranteeing automatic failover for empty shards, and ensuring slot state recovery is handled, significantly increase the dependability of cluster scaling operations. With a plethora of further advancements across several dimensions (see the release notes), the anticipation for Valkey 8.0 is already fueling demand for Valkey 7.2 on Memorystore.
Similar to how Redis previously expanded capability through modules with restrictive licensing, the community is also speeding up the development of Valkey’s capabilities through open-source additions that complement and extend Valkey’s functionality. The capabilities covered by recently published RFCs (“Requests for Comments”) include vector search for extremely high-performance vector similarity search, JSON for native JSON support, and BloomFilters for high-performance, space-efficient probabilistic filters.
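The Bloom filter RFC mentioned above targets space-efficient probabilistic membership tests: false positives are possible, but false negatives are not. The core idea can be sketched in plain Python (purely illustrative; the actual Valkey module exposes server-side commands, not this API):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash functions set k bits per item."""

    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k positions by salting a cryptographic hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

The space savings come from never storing the items themselves, only bit positions, at the cost of a tunable false-positive rate.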
Sanjeev Mohan, former Gartner vice president and principal analyst at SanjMo, offers his viewpoint:
The advancement of community-led initiatives to offer feature-rich, open-source database alternatives depends on Valkey. The introduction of Valkey support in Memorystore is another illustration of Google’s commitment to offering truly open and accessible solutions for customers. In addition to helping developers looking for flexibility, Google’s contributions to Valkey also support the larger open-source ecosystem.
With all the innovation in Valkey 8.0, along with open-source improvements like vector search, JSON, and client libraries, it seems clear that Valkey is going to be a game-changer in the high-performance data management space.
Valkey is the secret to an OSS future
Take a look at Memorystore for Valkey right now, and use the console UI or a straightforward gcloud command to create your first cluster. Benefit from OSS Redis compatibility to easily port over your apps and scale in or out without any downtime.
Read more on govindhtech.com
govindhtech · 8 months ago
Unity Ads Performs 10M Tasks Per Second With Memorystore
Memorystore powers up to 10 million operations per second at Unity Ads.
Unity Ads
Unity Ads, a mobile advertising company, previously ran its own self-managed Redis infrastructure and was looking for a solution that would lower maintenance costs and scale better across a range of use cases. Unity moved those workloads to Memorystore for Redis Cluster, a fully managed service built for high-performance workloads. Today, a single instance of their infrastructure can process up to 10 million Redis operations per second. The business now has a more dependable and scalable infrastructure, lower expenses, and more time to devote to high-value work.
Handling one million operations per second is an impressive accomplishment for many companies, but it’s just routine at Unity Ads. Unity’s mobile performance ads solution readily manages this volume daily, feeding ads to a wide network of mobile apps and games and showcasing the reliable capabilities of Redis clusters. The numerous database operations required for real-time ad requests, bidding, and ad selection, as well as for updating session data and monitoring performance indicators, translate into extremely high performance requirements.
This extraordinary demand necessitates a highly scalable and resilient infrastructure. Enter Memorystore for Redis Cluster, which is built to manage the taxing demands of sectors where speed and scale are essential, such as gaming, banking, and advertising. This fully managed solution consolidates heavy workloads into a single, high-performance cluster, providing noticeably higher throughput and data capacity while preserving microsecond latencies.
Succeeding with Memorystore for Redis Cluster
Before adopting Memorystore, Unity faced several issues with their prior do-it-yourself (DIY) setup. For starters, they employed several types of self-managed Redis clusters, from Kubernetes operators to static clusters based on Terraform modules. These took a lot of work to scale and maintain, and they demanded specialized knowledge. Unity frequently overprovisioned these DIY clusters primarily to minimize possible downtime. In the high-performance ad industry, where every microsecond and fraction of a penny matters, this expense and the time required to manage the infrastructure were unsustainable.
Memorystore presented a compelling solution for Unity Ads. Making the switch was easy because it blended in seamlessly with their existing configuration, and it cost about the same as their DIY solution without the management overhead. Since they were already Google Cloud users, deeper integration with the platform was an added benefit.
The most crucial characteristic is scalability. The ability of Memorystore for Redis Cluster to scale with zero downtime is one of its best qualities. With just a click or command, customers can expand their clusters to handle terabytes of keyspace, easily adjusting to changing demands. Memorystore also has clever features that improve usability and dependability. The service manages replica nodes, placing them in zones other than their primaries’ to guard against outages, and automatically distributes nodes among zones for high availability. What would otherwise be a difficult manual procedure is made simple by this automation.
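The zone-spreading behavior described above can be illustrated with a toy placement function. This is purely a sketch of the idea under a simplified model; Memorystore's actual placement logic is internal to the service:

```python
def place_replicas(primary_zone: str, zones: list[str], replica_count: int) -> list[str]:
    """Assign replicas round-robin to zones other than the primary's,
    so a single-zone outage cannot take out an entire shard."""
    candidates = [z for z in zones if z != primary_zone]
    if not candidates:
        raise ValueError("need at least one zone besides the primary's")
    return [candidates[i % len(candidates)] for i in range(replica_count)]
```

The invariant worth noting is simply that no replica shares a zone with its primary, which is what turns a zonal outage into a failover instead of data unavailability.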
All of this led Unity Ads to migrate their use cases, which included state management, distributed locks, a central valuation cache, and session data. The migration went more smoothly than expected. By double-writing during the transition, they successfully completed their most important session data migration, which handled up to 1 million Redis operations per second. Even more impressive, their valuation cache migration, which served up to half a million requests per second (equivalent to over 1 million Redis operations per second), completed in only 15 minutes with little impact on service. The team also successfully migrated Unity’s distributed-locks system, used to prevent processing the same event twice, to Memorystore.
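The double-writing technique mentioned above can be sketched as a thin wrapper around two clients: every write lands in both the old and new clusters, while reads prefer the new cluster and fall back to the old one until it is fully warm. Plain dicts stand in for the two cluster connections here, and the class name is illustrative:

```python
class DoubleWriteClient:
    """Migration helper: write to both stores, read new-first with fallback."""

    def __init__(self, old_store, new_store):
        self.old = old_store  # stand-in for the legacy Redis cluster
        self.new = new_store  # stand-in for the Memorystore cluster

    def set(self, key, value):
        # Writes go to both sides so the new cluster fills up as traffic flows.
        self.old[key] = value
        self.new[key] = value

    def get(self, key):
        # Prefer the new cluster; fall back for keys written before the cutover.
        if key in self.new:
            return self.new[key]
        return self.old.get(key)
```

Once read traffic no longer hits the fallback path, the old cluster can be retired and the wrapper replaced by a direct client.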
Memorystore in operation: Unity Ads’ experience
Unity Ads has found using Memorystore for Redis Cluster transformative. One of the most obvious advantages was the greater stability of their infrastructure. Because their prior DIY Redis clusters ran on various tiers of virtualization, such as Kubernetes and cloud compute, where they lacked direct observability and control, they frequently ran into erratic performance issues that were challenging to diagnose.
Consider, for example, this CPU-utilization graph of specific nodes from their previous self-managed arrangement:
Image credit to Google Cloud
As you can see, CPU consumption across many nodes fluctuated widely and spiked frequently. It was challenging to sustain steady performance in these circumstances, particularly during periods of high traffic.
They also encountered issues with their DIY Redis clusters during Kubernetes node-pool upgrades, which frequently resulted from automatic upgrades to a new version of Kubernetes. As you can see, the p99 latency skyrocketed:
Image credit to Google Cloud
Another big benefit is that Unity can now scale production seamlessly thanks to Memorystore. This graph displays their client metrics as they increased the cluster’s size by 60%.
The operation rate was impressively constant throughout the procedure, with very little variation. This degree of seamless scaling was a game-changer for Unity, making it possible to adapt to shifting needs without compromising their offerings.
The data above shows the tremendous performance and consistently low latency Unity has been able to attain with Memorystore. For their ad-serving platform, where every microsecond matters, this performance level is essential.
Profiting from innovation
Unity Ads’ switch to Memorystore has produced notable operational enhancements in addition to performance gains. The team no longer spends effort testing and fine-tuning its Redis clusters to prepare them for production.
On the business side, Unity anticipates that by properly sizing clusters and applying the relevant Committed Use Discounts, it can achieve cost savings on par with its prior DIY solution, particularly with the addition of single-zone clusters to cut networking expenses. For a price comparable to its prior self-managed Redis deployment, Unity now gets a fully managed, scalable, and far more dependable (99.99% SLA) solution with Memorystore.
Looking to the future, Memorystore has opened new opportunities for system architecture. For many use cases, Unity is now considering a “Memorystore-first” strategy. For instance, engineers frequently choose persistent database solutions like Bigtable when building crucial data systems because they don’t want to take chances, even when the use case doesn’t actually require persistence and/or consistency.
Databases like Bigtable are better suited to data that must persist for months to years, but sometimes the use case only requires durability for about an hour. By avoiding such shortcuts and optimizing its shorter-lived (hours-to-days) data use cases with a hardened, dependable Redis Cluster solution like Memorystore, Unity can save money and development time. Overall, Memorystore’s scalability and dependability let Unity confidently extend the use of Redis across more of its infrastructure to reduce costs and boost performance.
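For the hours-long durability needs described above, the natural fit is a per-key time-to-live, which Redis and Valkey provide via `SET key value EX seconds`. A toy sketch of the semantics, with an injectable clock so the expiry behavior is easy to see (names are illustrative):

```python
import time

class TTLStore:
    """Minimal key-value store with per-key expiry, mimicking SET ... EX."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.data = {}  # key -> (value, expires_at)

    def set(self, key, value, ex: float):
        self.data[key] = (value, self.clock() + ex)

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self.data[key]  # lazy expiry on read, like Redis's passive expiration
            return None
        return value
```

Expiring keys server-side is what makes the "hours-to-days" tier cheap: storage is reclaimed automatically instead of requiring a cleanup job against a persistent database.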
The ease with which persistence (AOF or RDB) can be enabled on Memorystore is another revolutionary advantage. Because Unity’s DIY Redis clusters on Kubernetes did not support persistence, their use cases were restricted to caching scenarios with transient data they could afford to lose. With Memorystore’s one-click persistence, they can expand use cases and even mix use cases within the same cluster, which boosts utilization and reduces expenses.
Every second matters in this business. By allowing the team to concentrate on core business innovation instead of infrastructure administration, Memorystore is helping Unity Ads remain competitive and deliver better outcomes for its publishers and advertisers.
Read more on Govindhtech.com