#Google pubsub connector
infometryinc · 6 months
Text
Infometry's Google Pub/Sub Connector for Informatica
Infometry's Google Pub/Sub Connector for Informatica helps customers integrate Google Pub/Sub APIs with on-premises or cloud applications such as Salesforce and NetSuite. Users can use the connector to publish messages to and pull messages from Google Pub/Sub.
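For context, this is roughly what publishing and pulling look like against the underlying Google Pub/Sub API using Google's official Python client - a sketch of the APIs the connector wraps, not the connector itself (project, topic, and subscription IDs are placeholders, and the cloud calls require credentials):

```python
import json

def encode(event: dict) -> bytes:
    # Pub/Sub message payloads are raw bytes; JSON-encode the event first.
    return json.dumps(event).encode("utf-8")

def publish(project_id: str, topic_id: str, event: dict) -> str:
    # Requires the `google-cloud-pubsub` package and Google Cloud credentials.
    from google.cloud import pubsub_v1
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    future = publisher.publish(topic_path, encode(event))
    return future.result()  # server-assigned message ID

def pull(project_id: str, subscription_id: str, max_messages: int = 10) -> list:
    # Synchronously pull up to max_messages, acknowledge them, and decode.
    from google.cloud import pubsub_v1
    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path(project_id, subscription_id)
    resp = subscriber.pull(request={"subscription": sub_path,
                                    "max_messages": max_messages})
    ack_ids = [m.ack_id for m in resp.received_messages]
    if ack_ids:
        subscriber.acknowledge(request={"subscription": sub_path,
                                        "ack_ids": ack_ids})
    return [json.loads(m.message.data) for m in resp.received_messages]
```

The cloud-facing imports are kept inside the functions so the byte-encoding helper can be used (and tested) without the client library installed.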
yahoodevelopers · 5 years
Text
Bullet Updates - Windowing, Apache Pulsar PubSub, Configuration-based Data Ingestion, and More
By Akshay Sarma, Principal Engineer, Verizon Media & Brian Xiao, Software Engineer, Verizon Media
This is the first of an ongoing series of blog posts sharing releases and announcements for Bullet, an open-sourced lightweight, scalable, pluggable, multi-tenant query system.
Bullet allows you to query any data flowing through a streaming system, without having to store it first, through its UI or API. The queries are injected into the running system and have minimal overhead. Running hundreds of queries generally fits within the overhead of just reading the streaming data. Bullet requires running an instance of its backend on your data. This backend runs on common stream processing frameworks (Storm and Spark Streaming are currently supported).
The data on which Bullet sits determines what it is used for. For example, our team runs an instance of Bullet on user engagement data (~1M events/sec) to let developers find their own events to validate their code that produces this data. We also use this instance to interactively explore data, throw up quick dashboards to monitor live releases, count unique users, debug issues, and more.
Since open sourcing Bullet in 2017, we’ve been hard at work adding many new features! We’ll highlight some of these here and continue sharing update posts for future releases.
Windowing
Bullet used to operate in a request-response fashion - you would submit a query and wait for the query to meet its termination conditions (usually duration) before receiving results. For short-lived queries, say, a few seconds, this was fine. But as we started fielding more interactive and iterative queries, waiting even a minute for results became too cumbersome.
Enter windowing! Bullet now supports time and record-based windowing. With time windowing, you can break up your query into chunks of time over its duration and retrieve results for each chunk.  For example, you can calculate the average of a field, and stream back results every second:
[YouTube video: averaging a field with results streamed back every second]
In the above example, the aggregation is operating on all the data since the beginning of the query, but you can also do aggregations on just the windows themselves. This is often called a Tumbling window:
[image: Tumbling window]
With record windowing, you can get the intermediate aggregation for each record that matches your query (a Sliding window). Or you can do a Tumbling window on records rather than time. For example, you could get results back every three records:
[image: record-based Tumbling window]
Overlapping windows in other ways (Hopping windows) or windows that reset based on different criteria (Session windows, Cascading windows) are currently being worked on. Stay tuned!
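For reference, a window is declared inside the query JSON itself. The sketch below shows a query that emits a running average every second; the field names are approximate, so check the Bullet documentation for the exact schema of your version:

```json
{
  "aggregation": {
    "type": "GROUP",
    "attributes": {
      "operations": [
        {"type": "AVG", "field": "latency_ms", "newName": "avg_latency"}
      ]
    }
  },
  "window": {
    "emit": {"type": "TIME", "every": 1000},
    "include": {"type": "ALL"}
  },
  "duration": 60000
}
```

Switching the include type to cover only each window's own data would give the Tumbling behavior described above, while a record-based emit type would window on record counts instead of time.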
Apache Pulsar support as a native PubSub
Bullet uses a PubSub (publish-subscribe) message queue to send queries and results between the Web Service and Backend. As with everything else in Bullet, the PubSub is pluggable. You can use your favorite pubsub by implementing a few interfaces if you don’t want to use the ones we provide. Until now, we’ve maintained and supported a REST-based PubSub and an Apache Kafka PubSub. Now we are excited to announce supporting Apache Pulsar as well! Bullet Pulsar will be useful to those users who want to use Pulsar as their underlying messaging service.
If you aren’t familiar with Pulsar, setting up a local standalone is very simple, and by default, any Pulsar topics written to will automatically be created. Setting up an instance of Bullet with Pulsar instead of REST or Kafka is just as easy. You can refer to our documentation for more details.
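As an illustration of how little configuration this takes (the exact property names vary by release, so treat these keys as a sketch rather than the authoritative settings), pointing Bullet at a local Pulsar standalone might look like:

```yaml
# Start a local Pulsar standalone first: bin/pulsar standalone
# Then point Bullet's pluggable PubSub layer at it:
bullet.pubsub.class.name: "com.yahoo.bullet.pulsar.PulsarPubSub"
bullet.pubsub.pulsar.client.serviceUrl: "pulsar://localhost:6650"
bullet.pubsub.pulsar.request.topic.name: "persistent://public/default/bullet-queries"
bullet.pubsub.pulsar.response.topic.name: "persistent://public/default/bullet-responses"
```

Because Pulsar auto-creates topics by default, no further broker-side setup is needed for a local trial.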
Plug your data into Bullet without code
While Bullet works on any data source located in any persistence layer, you previously had to implement an interface to connect your data source to the Backend and convert it into a record container format that Bullet understands. For instance, your data might be located in Kafka and be in the Avro format. If you were using Bullet on Storm, you would perhaps write a Storm Spout to read from Kafka, deserialize, and convert the Avro data into the Bullet record format. This was the only interface in Bullet that required our customers to write their own code. Not anymore! Bullet DSL is a text/configuration-based format that lets users plug their data into the Bullet Backend without writing a single line of code.
Bullet DSL abstracts away the two major components for plugging data into the Bullet Backend: a Connector piece to read from arbitrary data sources, and a Converter piece to convert that data into the Bullet record container. We currently support and maintain a few of these - Kafka and Pulsar for Connectors, and Avro, Maps, and arbitrary Java POJOs for Converters. The Converters understand typed data and can even do a bit of minor ETL (Extract, Transform, and Load) if you need to change your data around before feeding it into Bullet. As always, the DSL components are pluggable, and you can write your own (and contribute it back!) if you need one that we don't support.
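A DSL configuration for the Kafka-plus-Avro example above might look roughly like this (class and property names are illustrative; consult the Bullet DSL documentation for the exact keys):

```yaml
# Read Avro records from Kafka and convert them to Bullet records - no code needed.
bullet.dsl.connector.class.name: "com.yahoo.bullet.dsl.connector.KafkaConnector"
bullet.dsl.connector.kafka.bootstrap.servers: "localhost:9092"
bullet.dsl.connector.kafka.topics: ["sensor-events"]
bullet.dsl.converter.class.name: "com.yahoo.bullet.dsl.converter.AvroBulletRecordConverter"
```

Swapping the data source or format then becomes a matter of changing the connector or converter class rather than rewriting a Spout.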
We appreciate your feedback and contributions! Explore Bullet on GitHub, use and help contribute to the project, and chat with us on Google Groups. To get started, try our Quickstarts on Spark or Storm to set up an instance of Bullet on some fake data and play around with it.
tellesposts · 3 years
Text
google pub sub
Infometry's Google Pub/Sub connector can publish messages to and pull messages from Google Pub/Sub, and allows users to create, update, and delete topics, subscriptions, and snapshots. You can perform read, insert, delete, and update operations on a Google Pub/Sub target. It requires Google Cloud account access and service account credentials.
https://www.infometry.net/products/google-cloud-connectors/google-pub-sub-connector/
infometry · 3 years
Link
The Google Pub/Sub connector can publish messages to and pull messages from Google Pub/Sub, and allows users to create, update, and delete topics, subscriptions, and snapshots. You can perform read, insert, delete, and update operations on a Google Pub/Sub target. It requires Google Cloud account access and service account credentials.
netmetic · 4 years
Text
Using IoT and Machine Learning for Predictive Maintenance: Managing Sensor Data with Google Cloud Dataflow and PubSub+
In a factory, any unplanned downtime is costly and can have disastrous ripple effects, so being able to develop a predictive maintenance solution with IoT sensors and machine learning is a no-brainer in the manufacturing industry. Predictive maintenance enables manufacturers to identify potential failures and monitor equipment so they can improve productivity and minimize asset downtime. It has been estimated that predictive maintenance could save manufacturers up to $630B globally by 2025.
Most manufacturing machinery maintenance strategies are based on guesswork – guessing when the machines will fail, guessing when it will be a good time to fix them, guessing how often they should be checked. Sometimes, manufacturing plants simply run the machines until they fail and replace them when they do.
This “strategy” is very costly, inefficient, and requires a great deal of labour. The best way to achieve the most effective return on investment for the machines is to be proactive, i.e. implement condition-based predictive maintenance using IoT sensors to perform three key functions:
Fault detection – recognize when something happens
Diagnostics – understand what happened
Prognostics – predict what will happen
Why IoT is the Key to Predictive Maintenance
To monitor the performance of manufacturing equipment in real-time, data from IoT sensors can be collected and analyzed to generate and populate prediction models. This goes far beyond maintenance inspections by manufacturing personnel, because small IoT devices can give you an accurate account of what is happening inside the equipment without the disruptions and downtime faced with a physical inspection.
IoT sensors monitor things like vibration, temperature, noise, pressure, oil levels, and other variables that are important to monitor and predict. As one can imagine, this data will come in a variety of formats and can be stored in many ways, locally or in the cloud.
Data exchange is at the heart of predictive maintenance. The data from the IoT sensors must be collected and shared with the appropriate systems so that action can be taken. A sustainable predictive maintenance system for collecting, sharing, and interpreting IoT sensor data will be a scalable one, and one that can be deployed across a variety of environments.
An Event-Driven Model for Real-Time Analysis
Streaming real-time data is an important aspect of predictive maintenance. The data from the equipment can be applied to a machine learning model so the output can be interpreted and acted upon. An event-driven model is ideal for streaming IoT sensor data because of the potential for a high volume of data, the diverse endpoints or networks the data could encounter, the different messaging patterns that could be used, and the requirement for low latency.
In an event-driven model for manufacturing predictive maintenance, events (a change of state) can be pushed from IoT devices to interested applications in a variety of environments in real-time. By enabling event-driven data flow, scalability, instant communication, and interoperability are made easier.
An event broker, or a network of event brokers (event mesh) can enable the real-time data from IoT sensors to move intelligently – filtering and routing information so it’s only sent to the applications and devices that need it.
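As a sketch of that filtering-and-routing idea (a simplified, hypothetical matcher, not Solace's actual wildcard rules, which are richer), hierarchical topic subscriptions can be modeled like this, where `*` matches exactly one topic level and `>` matches one or more trailing levels:

```python
def matches(subscription: str, topic: str) -> bool:
    """Hierarchical topic matching: '*' matches exactly one level,
    '>' (in the final position) matches one or more remaining levels."""
    sub_levels = subscription.split("/")
    topic_levels = topic.split("/")
    for i, level in enumerate(sub_levels):
        if level == ">":
            return len(topic_levels) > i  # at least one level left to consume
        if i >= len(topic_levels):
            return False
        if level not in ("*", topic_levels[i]):
            return False
    return len(sub_levels) == len(topic_levels)

def route(event_topic: str, subscriptions: dict) -> list:
    # Deliver an event only to subscribers whose subscription matches its topic.
    return [name for name, sub in subscriptions.items()
            if matches(sub, event_topic)]
```

So a dashboard subscribed to `plant1/>` sees every event from plant 1, while a model subscribed to `plant1/press/*/vibration` receives only vibration readings from the presses.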
Using GCP to Manage IoT Sensor Data for Predictive Maintenance
Google Cloud Platform (GCP) gives you unparalleled extensibility and scalability in managing data from your IoT sensors, especially when it comes to their machine learning capabilities. The manufacturing use case is ideal for showcasing how better productivity and innovation can be driven by monitoring machines and equipment more closely and developing a plan for predictive maintenance.
Google Cloud Dataflow is a fully managed data processing service. Dataflow provides automated provisioning and management of processing resources, as well as horizontal autoscaling of worker resources to maximize resource utilization.
When combined with an event mesh, Dataflow can stream vibration data, temperature, and time-in-use of manufacturing equipment all over the factory – from plant floors across WAN to Google Cloud Machine Learning to BigQuery.
Dataflow can automatically scale up or scale down based on demand and provide information to GCP’s machine learning algorithm to gain insight into fault tolerance levels and achieve predictive maintenance.
For machine learning to do its magic, it needs to “learn” about your environment. Based on the training period and the data collection it needs, you can then define the conditions under which fault detection, diagnostics, and prognostics are triggered.
PubSub+ and Event Mesh for Aggregation of IoT Sensor Data with Google Cloud Dataflow
PubSub+ is an excellent choice of event broker for the manufacturing IoT use case. It aggregates sensor data via a variety of open source protocols, making it easier to get data to where it is supposed to be – in real-time. A network of PubSub+ event brokers creates an event mesh, creating a dynamic infrastructure layer for distributing events among decoupled applications, cloud services, and devices. Integrating PubSub+ with Dataflow is done via PubSub+ Connector for Beam: I/O.
Customers can use Beam runners to stream real-time data between existing applications and GCP applications and services, like Cloud Machine Learning, Dataflow, BigTable, BigQuery, and TensorFlow​. This allows for real-time IoT sensor data to feed into predictive maintenance algorithms and machine learning already running in the GCP platform. On-demand scalability means that more sensors can be brought online and begin sharing data without added downtime – reducing development labour and with just-in-time flexibility using very little code.
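A rough sketch of such a streaming pipeline is below. It uses Beam's built-in Cloud Pub/Sub source as a stand-in for the Java-based PubSub+ Connector for Beam, and the project, subscription, and table names are placeholders:

```python
import json

def to_row(raw: bytes) -> dict:
    # Flatten a raw sensor event (JSON bytes) into a row for BigQuery / ML scoring.
    event = json.loads(raw)
    return {
        "machine_id": event["machine"],
        "vibration": float(event["vibration"]),
        "temperature": float(event["temperature"]),
        "observed_at": event["ts"],
    }

def build_pipeline(argv=None):
    # Requires `apache-beam[gcp]`; imported lazily so to_row stays dependency-free.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions
    pipeline = beam.Pipeline(options=PipelineOptions(argv, streaming=True))
    (
        pipeline
        | "ReadSensors" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/factory-sensors")
        | "Parse" >> beam.Map(to_row)
        | "Store" >> beam.io.WriteToBigQuery("my-project:factory.sensor_readings")
    )
    return pipeline
```

The parsing step is a plain function, so new sensor fields can be added without touching the I/O edges of the pipeline.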
Competitive Advantage & Lower Total Cost of Ownership
Of course, the main goal of manufacturing predictive maintenance is to establish continuous monitoring of manufacturing equipment and develop ongoing alerts and preventive follow-ups.
Taking advantage of how the PubSub+ event mesh can enable IoT sensor data to flow seamlessly from various sources to machine learning services is a surefire way to reduce total cost of ownership of expensive manufacturing equipment and gain that important competitive advantage. The event mesh takes care of all information feeding into Dataflow, with Dataflow auto-scaling and assigning appropriate resources. With PubSub+ as the event broker, events from IoT sensors are published, with alerts going back to IoT sensors and to plant floors for immediate action or preventive measures.
Being proactive with predictive maintenance and having a scalable and reliable system to capture, store, share, and analyze data from the sensors can:
Improve service life of equipment
Reduce machine and plant downtime
Reduce the cost of unplanned downtime
Achieve higher plant productivity
Achieve a low code/no code development approach
Enable proactive capacity planning
To learn more about the manufacturing and IoT use case, visit https://solace.com/use-cases/industries/manufacturing/. Get started with the PubSub+ Connector for Beam: I/O by watching the video below or clicking on this link:
The post Using IoT and Machine Learning for Predictive Maintenance: Managing Sensor Data with Google Cloud Dataflow and PubSub+ appeared first on Solace.
netmetic · 4 years
Text
Why You Should Event Enable Salesforce: A Taxi Tale
Salesforce is a goldmine of information about potential clients: names, addresses, contact information, licenses, even their favorite coffee order. Once you close the deal and the account is now a paying customer, it’s crucial that you get the information out of Salesforce and instantly spread it to the rest of your organization. In other words, you should event enable Salesforce.
Do it well and you enhance your initial customer experience with insights from the sales process.
Don’t do it well, and your newly minted customer must supply information you already know, or even worse, faces a delay while you onboard them.
Your best bet: combine Solace and Boomi to create beautiful event-driven integration that distributes Salesforce information in real-time and lets you easily add on new functionality. I will talk about how to event enable Salesforce in a minute. But first, let’s talk about Tommy the Taxi Driver:
Adopting a new strategy in a real-time world
Tommy works for NYC Modern Taxi Company. Solace is following the journey of NYC Modern Taxi Company, a traditional taxi company struggling to compete with ride sharing. Even though NYC Modern Taxi is completely made up (what, the name didn’t give it away?), they face many of the challenges I have encountered with customers:
Create a better experience for customer and employees by using real-time data.
Integrate legacy technology with innovative tech like IoT and connected devices
Speed up the process for getting innovative ideas to market and reduce development cost
NYC Modern Taxi company uses Solace in combination with Boomi to address many different use cases, from IoT to analytics to SaaS integration. Today’s challenge: encouraging Tommy the Taxi Driver to use the brand-new NYC Modern Taxi ridesharing app.
After some discussion with the taxi cab psychologists, the business side of NYC Modern Taxi Company developed a 4-step strategy:
Treat drivers like prospective clients: each driver is an “account” in Salesforce and has an assigned account rep
Each account rep has an unlimited coffee budget with which to woo drivers
Once the coffee does its job, the driver agrees to use the app. As quickly as possible, the driver gets onboarded into the operations database
The question now is: what’s the best way to make it work?
Dead ends: “annoying sibling” polling and REST calls
Figure 1: The Goal
NYC Modern Taxi Company knows that Salesforce contains a wealth of information. But they are concerned about how to get the information out of Salesforce and across the enterprise.
The first method is what highly technical folks call the “annoying sibling technique” — constantly polling the Salesforce REST API to see if there’s been a recent update.
Figure 2: The “annoying sibling” architectural pattern
However, that solution undermines one of the goals of the project: polling consumes resources and slows down information movement.
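A minimal sketch makes the waste concrete. Everything below is hypothetical stand-in code, not Salesforce’s actual API:

```python
def poll_once(fetch_accounts, last_seen):
    """One polling cycle against a (hypothetical) accounts endpoint.

    Returns (new_records, new_watermark, wasted). 'wasted' is True when
    the call found nothing new -- the common case, and the reason polling
    burns API quota and compute without moving any data.
    """
    accounts = fetch_accounts()
    fresh = [a for a in accounts if a["modified"] > last_seen]
    if not fresh:
        return [], last_seen, True
    return fresh, max(a["modified"] for a in fresh), False

# Simulate ten polling cycles against data that changed exactly once.
store = [{"id": "acct-1", "modified": 5}]
watermark, wasted_calls = 0, 0
for _ in range(10):
    fresh, watermark, wasted = poll_once(lambda: store, watermark)
    wasted_calls += wasted

print(wasted_calls)  # 9 -- nine of ten requests accomplished nothing
```

An event-driven flow inverts this: the one cycle that found data becomes a push from the source, and the other nine requests never happen.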
While that isn’t ideal, NYC Modern Taxi might be willing to try it. But even after they pestered the information out of Salesforce, NYC Modern Taxi would still need to get the information to the right place at the right time. They could connect multiple microservices using REST. But then the other requirements become challenging because:
REST makes it hard to push data to hundreds and potentially thousands of mobile apps used by drivers
Adding features down the line means changing existing code, manually adding in another endpoint, and potentially slowing everything down.
Point #2 is key because while version 1 has its scope locked in, imagine what version 2 could do:
Initiate a background check with a security company
Populate key fields in the accounting software that sends checks out to drivers (boring but helpful)
Analyze driver likes and dislikes using artificial intelligence. Key question: what makes a driver commit to using the app: free coffee consistently? Random acts of coffee? Playing hard to get with coffee?
Generate social media buzz using Tommy the Taxi Driver’s Instagram account
Integrate with coffee shop APIs for automatic entry of driver’s coffee orders as “favorites”
Yielding to Event-Driven Integration
Given the challenges involved, REST isn’t the right fit for NYC Modern Taxi. But what is the alternative to the annoying sibling/point-to-point architecture? The answer: event-driven integration with Solace and Boomi.
Rather than using brittle, tightly coupled REST calls, Solace distributes events throughout NYC Modern Taxi, which means:
Applications around NYC Modern Taxi know about business events faster by eliminating polling
Simultaneous connectivity to vast numbers of mobile apps and IoT devices through open standard messaging protocols like MQTT and/or AMQP
Adding new, innovative applications doesn’t affect existing components because the architecture is now loosely coupled. That means features can evolve quickly (and Tommy’s Instagram can rack up more “likes”)
Using Boomi, things get even better:
Huge numbers of easy-to-use connectors to things like Salesforce Platform Events (stay tuned for more), databases, email and other applications.
Access to the content of each event, for graphical mapping and content-based routing
Mapping out a Solution
Now that NYC Modern Taxi has a new architectural concept in mind (event-based integration), they’ve identified that determining where events are heading and what the content looks like can be a little difficult. Luckily, there is a solution. Solace PubSub+ Event Portal (or as some call it, the Google Maps of Microservices) discovers and presents the flow of events in a visual form.
Using the Event Portal, NYC Modern Taxi can map out how events flow through the enterprise and design what the schemas and topic strings will look like. That shared knowledge is important during the design phase of a project, but it’s also key during the implementation phase. You’ll see how Event Portal helps in that regard a little later.
Figure 3: The Salesforce portion of the NYC Modern Taxi Event-Driven Architecture
Bustin’ Events Straight Outta Salesforce
Before NYC Modern Taxi can be fully event-driven, they need to get events out of Salesforce.
The magic starts with triggers. Within Salesforce, every change to an account (including the ones for Tommy and other drivers) can fire a trigger written in a language called Apex.
Apex is a powerful language; it can execute logic, run more queries, make REST calls, and more. But this is where things really get interesting: Apex can produce Salesforce Platform Events.
Platform Events is an internal message bus that comes with Salesforce. It lets both internal and external applications listen for events. It’s not the most flexible event broker, since it’s focused solely on Salesforce events, but it gets the job done. It also has a snazzy logo that looks like an antenna. And it just so happens that Boomi has a built-in connector that brings Platform Events out of Salesforce and into NYC Modern Taxi’s integration stack.
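For a feel of what travels over that bus, here is a rough sketch of unpacking a Platform Event message as it might arrive at a subscriber over Salesforce’s streaming channel. The event name and payload fields are made up for this example, and the exact envelope shape is an assumption, not a spec:

```python
def extract_platform_event(message):
    """Pull the useful parts out of a Platform Event message: which
    channel it arrived on, its replay ID (used to resume a stream),
    and the business payload itself."""
    return {
        "channel": message["channel"],
        "replay_id": message["data"]["event"]["replayId"],
        "payload": message["data"]["payload"],
    }

# A made-up custom event, as an Apex trigger might have published it.
msg = {
    "channel": "/event/Driver_Signed_Up__e",
    "data": {
        "event": {"replayId": 12},
        "payload": {"Driver_Name__c": "Tommy", "City__c": "Brooklyn"},
    },
}

evt = extract_platform_event(msg)
print(evt["payload"]["Driver_Name__c"])  # Tommy
```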
Now the implementation is getting somewhere! The next logical step is to intelligently distribute the information from Salesforce to other interested applications. Enter Solace. The Solace PubSub+ Connector for Boomi can help with:
Transforming events from Salesforce into a standard format
Formulating a finer-grained topic string
Publishing events
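In code terms, the connector’s first two jobs could look something like this sketch. The envelope fields and topic scheme are invented for illustration; they are not Boomi’s or Solace’s actual formats:

```python
def to_standard_event(sf_record):
    """Transform a raw Salesforce-style record into a standard
    envelope the rest of the enterprise agrees on."""
    return {
        "type": "driver.onboarded",
        "driver_id": sf_record["Id"],
        "name": sf_record["Name"],
        "city": sf_record["BillingCity"].lower(),
    }

def topic_for(event):
    """Formulate a finer-grained, hierarchical topic string so each
    consumer can subscribe to exactly the slice it cares about."""
    return f"taxi/driver/onboarded/{event['city']}/{event['driver_id']}"

record = {"Id": "0015e000", "Name": "Tommy", "BillingCity": "Brooklyn"}
event = to_standard_event(record)
print(topic_for(event))  # taxi/driver/onboarded/brooklyn/0015e000
```

Publishing is then a single call to the broker with that topic and payload.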
From there, the event can go to any application within NYC Modern Taxi. Adding in more applications and devices is as simple as adding a subscription to get the information flow going. Mobile applications, data lakes, Instagram accounts, and more can be added without disturbing the existing code.
Down to the Nitty Gritty: Event Enable Salesforce
Now NYC Modern Taxi has an enterprise architecture and knows how events will get from Salesforce into the target system. The only thing left to do is build it.
Luckily, NYC Modern Taxi has a giant head start on implementation. When developers start building solutions in Boomi, the PubSub+ Connector gives them access to the schemas and topics the architects defined during the design phase in PubSub+ Event Portal. No more guessing about what the data looks like, or which topic to subscribe to. It’s all at your fingertips.
Hit the Event-Enabling Trifecta: Salesforce, Boomi and Solace
Reading about the implementation is one thing; getting to see Tommy’s Salesforce data speed through NYC Modern Taxi in real time is a whole ‘nother enchilada. Check out the CodeLab for this post: you’ll get free access to Boomi, Solace, and Salesforce and see the solution in action. Extra credit if you can connect it to Instagram!
We’d love to hear your feedback in our Solace Developer Community. Even better, showcase your solution there to share it with the community!
The post Why You Should Event Enable Salesforce: A Taxi Tale appeared first on Solace.
5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy
If you’re like most enterprises – 84 percent of them – you’re reconsidering your enterprise architecture and adopting a hybrid cloud strategy, combining on-premises systems with public clouds, private clouds, or a mix of each, as the below chart from a recent Flexera report illustrates.
The main benefit of hybrid clouds is agility. Enterprises need to be able to quickly adapt and redirect their IT to remain competitive. Hybrid cloud offers the best of all worlds — the cost optimization, agility, flexibility, scalability and elasticity benefits of public cloud, and the control, compliance, security and reliability of private cloud and on-premises environments.
For example, it’s unlikely that an enterprise will build and maintain big-data processing capabilities on premises or in a private cloud because they require a lot of resources and aren’t always needed, at least not to the same degree as other systems. Instead, they can use public cloud big data analytics resources, scaling up and down as necessary, while using a private cloud to ensure data security and keep sensitive big data behind the corporate firewall.
Solace has been working with many of its enterprise customers to modernize their architecture and make the most of their hybrid cloud strategy. What follows are five common use cases and how Solace can help.
1. Migrating existing on-premises functional workloads to cloud (a.k.a. lift-and-shift)
Many enterprises move on-premises IT workloads to the cloud to save money, gain flexibility, or improve security. The advantage of migrating existing applications over building from scratch is that applications can be moved quickly without having to re-architect them, but it still requires careful planning to ensure data sets are matched with the right handling systems in the new environment and applications have the resources they need to operate effectively.
For example, when you have systems on premises that are already set up to communicate with one another – say through an enterprise service bus – and then lift-and-shift some of them to the cloud, how does this work?
Because PubSub+ Event Broker works both on premises and in the cloud, your application’s event routing doesn’t have to be rewritten when the application is rehosted; just point it to the local event broker in the cloud, which will ensure all events are dynamically routed to where they need to go.
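That dynamic routing hinges on topic subscriptions. The sketch below implements a simplified version of hierarchical topic matching, where `*` matches exactly one level and `>` matches one or more trailing levels; Solace’s real wildcard rules are richer than this:

```python
def matches(subscription, topic):
    """Simplified hierarchical topic matching: '*' matches exactly one
    level, '>' matches one or more trailing levels."""
    sub, top = subscription.split("/"), topic.split("/")
    for i, part in enumerate(sub):
        if part == ">":
            return i < len(top)  # '>' must cover at least one level
        if i >= len(top) or (part != "*" and part != top[i]):
            return False
    return len(sub) == len(top)

# A broker forwards an event only to matching subscriptions, wherever
# the subscribers happen to be hosted.
subs = ["taxi/driver/>", "taxi/ride/requested/*"]
topic = "taxi/driver/onboarded/brooklyn"
print([s for s in subs if matches(s, topic)])  # ['taxi/driver/>']
```

Because each broker in the mesh propagates subscriptions to its peers, a rehosted application keeps receiving exactly the topics it asked for without any rewiring.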
2. Enhancing existing on-premises applications with cloud-native services
Organizations have invested in core systems of record for decades, many of which will never be suitable for cloud hosting. But many of those systems of record need to exchange data with services that include traditional datacenter resources, newer workloads deployed in the cloud, SaaS services, and a myriad of third-party services.
For example, an enterprise may want to stream data from a Kafka-based application to Google Cloud Platform for analytics. Solace has all the integration tools in place to make that possible. We have Kafka source and sink connectors to link your Kafka cluster to an on-premises Solace PubSub+ Event Broker, and once your data gets to the cloud, we offer an Apache Beam/Solace I/O connector so you can use the various Google data runners and repositories to get your data into the Google AI platform.
The same goes for other on-premises applications and cloud environments; we can securely connect to cloud native services like data lakes in GCP, AWS, and Azure with native integration between our event broker using REST, and have developed more robust connectors for Kafka, Apache Beam, Databricks, Apache Spark, and the AWS API Gateway. When combined with our event brokers – which support protocols including JMS, AMQP, MQTT, HTTP and WebSocket – we can connect just about any on-premises application to the most popular cloud native services in a low-code/no-code fashion.
3. Faster application development
As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process. Many use a public cloud to set up and run application development, because it’s simple and easy to use, so you can get started quickly. But once applications are ready to deploy in production, enterprises may move them back to the on-premises data center for data governance or cost reasons.
The hybrid cloud model makes it possible for an organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.
If your DevOps team is using cloud resources to build an application for speed, simplicity, and low cost, you can use PubSub+ Event Broker: Software or PubSub+ Event Broker: Cloud, our SaaS offering, in any public or private cloud environment.
And if you’re moving an application to an on-premises datacenter when going into production for security purposes, you can simply move the application without having to rewrite the event routing. It’s just like the lift-and-shift use case described above, but in reverse.
4. Enabling cloud-to-cloud integration
Many enterprises are using services from multiple cloud service providers to do things like avoid lock-in or attain different functional advantages from different cloud providers, because each offers different best-of-breed cloud-native services. For example, AWS is known for their cheap S3 bucket, Google is widely thought to be the leader in analytics, and Azure has easy-to-use IoT infrastructure. Some organizations may want to use a mix of these resources and will need to be able to easily exchange information between them all. Additionally, because cloud providers offer different capabilities in different regions of the world, and because of data residency requirements, international enterprises might need resources from multiple cloud providers that vary depending on the region.
If you’re adopting a multi-cloud architecture, a PubSub+ powered event mesh extends into all of the popular public clouds, both within their public compute resources, as well as within the virtual private clouds offered by those providers, either using our software or our SaaS offering. And as mentioned, we can connect to many of the popular cloud native services, like Databricks, Apache Spark, and others.
5. Hybrid cloud event-driven microservices
To be more agile and to better manage scalability, reliability and availability, many enterprise applications are moving from monolithic architectures, where single applications are responsible for all aspects of a workflow, to microservices, which decompose the monolithic applications into smaller chunks of code. Those microservices then notify each other of changes using events. Microservices can be located wherever makes the most sense, on premises, in public or private clouds, or in PaaS or IaaS environments. And, as with application development, microservice app development can often start within a cloud environment and then be migrated elsewhere for production.
As Gartner points out, “Event-driven architecture (EDA) is inherently intermediated and implementations of event-driven applications must use some technology in the role of an event broker.”* This means you absolutely need an event broker underpinning your event-driven microservices architecture to make it work.
If those microservices are distributed across cloud and on-premises environments, it makes sense to have a robust and scalable broker that can connect to those microservices no matter where they are hosted or how they are run – on prem, on public or private clouds, in Spring, Kubernetes, OpenShift, as a Dell Boomi Atom – the list goes on. In every case, Solace PubSub+ has you covered with native deployments or integrations that can all be connected with an event mesh, and will support the easy movement of microservices between hosting environments, as required.
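The decoupling the broker buys you shows up even in a toy in-memory sketch (real brokers add persistence, ordering, and delivery guarantees this leaves out):

```python
from collections import defaultdict

class ToyBroker:
    """In-memory stand-in for an event broker: publishers never know
    who, or how many, consumers exist."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

broker = ToyBroker()
received = []

# Two microservices come online independently; the publisher that emits
# the event below needs no change to accommodate either of them.
broker.subscribe("driver/onboarded", lambda e: received.append(("billing", e)))
broker.subscribe("driver/onboarded", lambda e: received.append(("analytics", e)))

broker.publish("driver/onboarded", {"driver": "Tommy"})
print([name for name, _ in received])  # ['billing', 'analytics']
```

Adding a third service is one more `subscribe` call, which is exactly what makes moving microservices between hosting environments cheap.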
Summary – Making the Most of Your Enterprise Architecture and Hybrid Cloud Strategy
In summary, hybrid and multi-cloud IT is now the norm for most large enterprises. But taking advantage of all the benefits of having data and applications on premises and in the clouds and sharing information between all the environments can be a tricky business. Thankfully, Solace has already done a lot of the heavy lifting for you and is always thinking of ways to make enterprise-wide event distribution as robust, secure and powerful as possible.
I’ve shared some of the most common hybrid cloud use cases our customers are asking us to address with our PubSub+ Platform. If you’d like to learn more about how to make the most of your enterprise architecture and hybrid cloud strategy, or have a specific example you’d like to discuss, we’d love to hear from you.
*Gartner, Innovation Insight for Event Brokers, Yefim Natis, Keith Guttridge, W. Roy Schulte, Nick Heudecker, Paul Vincent, 31 July 2018
The post 5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy appeared first on Solace.
5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy published first on https://jiohow.tumblr.com/
0 notes
netmetic · 4 years
Text
5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy
If you’re like most enterprises – 84 percent – are reconsidering their enterprise architecture and adopting a hybrid cloud strategy, combining on-premises systems with public clouds, private clouds, or a mix of each, as the below chart from a recent Flexera report illustrates.
The main benefit of hybrid clouds is agility. Enterprises need to be able to quickly adapt and redirect their IT to remain competitive. Hybrid cloud offers the best of all worlds — the cost optimization, agility, flexibility, scalability and elasticity benefits of public cloud, and the control, compliance, security and reliability of private cloud and on-premises environments.
For example, it’s unlikely that an enterprise will build and maintain big-data processing capabilities on premises or in a private cloud because they require a lot of resources and aren’t always needed, at least not to the same degree as other systems. Instead, they can use public cloud big data analytics resources, scaling up and down as necessary, while using a private cloud to ensure data security and keep sensitive big data behind the corporate firewall.
Solace has been working with many of its enterprise customers to modernize their architecture and make the most of their hybrid cloud strategy. What follows are five common uses cases and how Solace can help.
1. Migrating existing on-premises functional workloads to cloud (a.k.a. lift-and-shift)
Many enterprises move on-premises IT workloads to the cloud to save money, be more flexible or to improve security. The advantage of migrating existing applications over building from scratch is it allows applications to be moved quickly and easily without having to re-architect them, but it still requires a lot of planning to ensure data sets will be matched with handling systems in the new environment and applications have the resources they need to operate effectively.
For example, when you have systems on premises that are already set up to communicate with one another – say through an enterprise service bus – and then lift-and-shift some of them to the cloud, how does this work?
Because PubSub+ Event Broker works both on premises and in the cloud, your application’s event routing doesn’t have to be rewritten when the application is rehosted; just point it to the local event broker in the cloud, which will ensure all events are dynamically routed to where they need to go.
2. Enhancing existing on-premises applications with cloud-native services
Organizations have invested in core systems of record for decades, many of which will never be suitable for cloud hosting. But many of those systems of record need to exchange data with services that include traditional datacenter resources, newer workloads deployed in the cloud, SaaS services, and a myriad of third-party services.
For example, an enterprise may want to stream data from a Kafka-based application to Google Cloud Platform for analytics. Solace has all the integration tools in place to make that possible. We have Kafka source and sink connectors to link your Kafka cluster to an on-premises Solace PubSub+ Event Broker, and once your data gets to the cloud, we offer an Apache Beam/Solace I/O connector so you can use the various Google data runners and repositories to get your data into the Google AI platform.
The same goes for other on-premises applications and cloud environments; we can securely connect to cloud native services like data lakes in GCP, AWS, and Azure with native integration between our event broker using REST, and have developed more robust connectors for Kafka, Apache Beam, Databricks, Apache Spark, and the AWS API Gateway. When combined with our event brokers – which support protocols including JMS, AMQP, MQTT, HTTP and WebSocket – we can connect just about any on-premises application to the most popular cloud native services in a low-code/no code fashion.
3. Faster application development
As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process. They use a public cloud to set up and do application development, because it’s very simple and easy to use, so you can get started quickly. But once applications are ready to deploy in production, enterprises may move them back to the on-premises data center for data governance or cost reasons.
The hybrid cloud model makes it possible for an organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.
If your DevOps team is using cloud resources to build an application for speed, simplicity and low cost, you can use PubSub+ Event Broker: Software brokers or PubSub+ Event Broker: Cloud, our SaaS, in any public or private cloud environment.
And if you’re moving an application to an on-premises datacenter when going into production for security purposes, you can simply move the application without having to rewrite the event routing. It’s just like the lift-and-shift use case described above, but in reverse.
4. Enabling cloud-to-cloud integration
Many enterprises are using services from multiple cloud service providers to do things like avoid lock-in or to attain different functional advantages from different cloud providers, because each offers different best-of-breed cloud-native services. For example, AWS is known for their cheap S3 bucket, Google is widely thought to be the leader in analytics, and Azure has easy-to-use IoT infrastructure. Some organizations may want to use a mix of these resources and will need to be able to easily exchange information between them all. Additionally, because cloud providers offer different capabilities in different regions of the world, and because of data residency requirements, international enterprises might need resources from multiple cloud providers that varies depending on the region.
If you’re adopting a multi-cloud architecture, a PubSub+ powered event mesh extends into all of the popular public clouds, both within their public compute resources, as well as within the virtual private clouds offered by those providers, either using our software or our SaaS offering. And as mentioned, we can connect to many of the popular cloud native services, like Databricks, Apache Spark, and others.
5. Hybrid cloud event-driven microservices
To be more agile and to better manage scalability, reliability and availability, many enterprise applications are moving from monolithic architectures, where single applications are responsible for all aspects of a workflow, to microservices, which decompose the monolithic applications into smaller chunks of code. Those microservices then notify each other of changes using events. Microservices can be located wherever makes the most sense, on premises, in public or private clouds, or in PaaS or IaaS environments. And, as with application development, microservice app development can often start within a cloud environment and then be migrated elsewhere for production.
As Gartner points out, “Event-driven architecture (EDA) is inherently intermediated and implementations of event-driven applications must use some technology in the role of an event broker.”* This means you absolutely need an event broker underpinning your event-driven microservices architecture to make it work.
If those microservices are distributed across cloud and on-premises environments, it makes sense to have a robust and scalable broker that can connect to those microservices no matter where they are hosted or how they are run – on prem, on public or private clouds, in Spring, Kubernetes, OpenShift, as a Dell Boomi Atom – the list goes on. In every case, Solace PubSub+ has you covered with native deployments or integrations that can all be connected with an event mesh, and will support the easy movement of microservices between hosting environments, as required.
Summary – Making the Most of Your Enterprise Architecture and Hybrid Cloud Strategy
In summary, hybrid and multi-cloud IT is now the norm for most large enterprises. But taking advantage of all the benefits of having data and applications on premises and in the clouds and sharing information between all the environments can be a tricky business. Thankfully, Solace has already done a lot of the heavy lifting for you and is always thinking of ways to make enterprise-wide event distribution as robust, secure and powerful as possible.
I’ve shard some of the most common hybrid cloud use cases our customers are asking us to address with our PubSub+ Platform. If you’d like to learn more about how to make the most of your enterprise architecture and hybrid cloud strategy, or have a specific example you’d like to discuss, we’d love to hear from you.
*Gartner, Innovation Insight for Event Brokers, Yefim Natis, Keith Guttridge, W. Roy Schulte, Nick Heudecker, Paul Vincent, 31 July 2018
The post 5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy appeared first on Solace.
5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy published first on https://jiohow.tumblr.com/
0 notes
netmetic · 4 years
Text
5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy
If you’re like most enterprises – 84 percent – are reconsidering their enterprise architecture and adopting a hybrid cloud strategy, combining on-premises systems with public clouds, private clouds, or a mix of each, as the below chart from a recent Flexera report illustrates.
The main benefit of hybrid clouds is agility. Enterprises need to be able to quickly adapt and redirect their IT to remain competitive. Hybrid cloud offers the best of all worlds — the cost optimization, agility, flexibility, scalability and elasticity benefits of public cloud, and the control, compliance, security and reliability of private cloud and on-premises environments.
For example, it’s unlikely that an enterprise will build and maintain big-data processing capabilities on premises or in a private cloud because they require a lot of resources and aren’t always needed, at least not to the same degree as other systems. Instead, they can use public cloud big data analytics resources, scaling up and down as necessary, while using a private cloud to ensure data security and keep sensitive big data behind the corporate firewall.
Solace has been working with many of its enterprise customers to modernize their architecture and make the most of their hybrid cloud strategy. What follows are five common uses cases and how Solace can help.
1. Migrating existing on-premises functional workloads to cloud (a.k.a. lift-and-shift)
Many enterprises move on-premises IT workloads to the cloud to save money, be more flexible or to improve security. The advantage of migrating existing applications over building from scratch is it allows applications to be moved quickly and easily without having to re-architect them, but it still requires a lot of planning to ensure data sets will be matched with handling systems in the new environment and applications have the resources they need to operate effectively.
For example, when you have systems on premises that are already set up to communicate with one another – say through an enterprise service bus – and then lift-and-shift some of them to the cloud, how does this work?
Because PubSub+ Event Broker works both on premises and in the cloud, your application’s event routing doesn’t have to be rewritten when the application is rehosted; just point it to the local event broker in the cloud, which will ensure all events are dynamically routed to where they need to go.
2. Enhancing existing on-premises applications with cloud-native services
Organizations have invested in core systems of record for decades, many of which will never be suitable for cloud hosting. But many of those systems of record need to exchange data with services that include traditional datacenter resources, newer workloads deployed in the cloud, SaaS services, and a myriad of third-party services.
For example, an enterprise may want to stream data from a Kafka-based application to Google Cloud Platform for analytics. Solace has all the integration tools in place to make that possible. We have Kafka source and sink connectors to link your Kafka cluster to an on-premises Solace PubSub+ Event Broker, and once your data gets to the cloud, we offer an Apache Beam/Solace I/O connector so you can use the various Google data runners and repositories to get your data into the Google AI platform.
The same goes for other on-premises applications and cloud environments; we can securely connect to cloud native services like data lakes in GCP, AWS, and Azure with native integration between our event broker using REST, and have developed more robust connectors for Kafka, Apache Beam, Databricks, Apache Spark, and the AWS API Gateway. When combined with our event brokers – which support protocols including JMS, AMQP, MQTT, HTTP and WebSocket – we can connect just about any on-premises application to the most popular cloud native services in a low-code/no code fashion.
3. Faster application development
As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process. They use a public cloud to set up and do application development, because it’s very simple and easy to use, so you can get started quickly. But once applications are ready to deploy in production, enterprises may move them back to the on-premises data center for data governance or cost reasons.
The hybrid cloud model makes it possible for an organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.
If your DevOps team is using cloud resources to build an application for speed, simplicity and low cost, you can use PubSub+ Event Broker: Software brokers or PubSub+ Event Broker: Cloud, our SaaS, in any public or private cloud environment.
And if you’re moving an application to an on-premises datacenter when going into production for security purposes, you can simply move the application without having to rewrite the event routing. It’s just like the lift-and-shift use case described above, but in reverse.
4. Enabling cloud-to-cloud integration
Many enterprises are using services from multiple cloud service providers to do things like avoid lock-in or to attain different functional advantages from different cloud providers, because each offers different best-of-breed cloud-native services. For example, AWS is known for their cheap S3 bucket, Google is widely thought to be the leader in analytics, and Azure has easy-to-use IoT infrastructure. Some organizations may want to use a mix of these resources and will need to be able to easily exchange information between them all. Additionally, because cloud providers offer different capabilities in different regions of the world, and because of data residency requirements, international enterprises might need resources from multiple cloud providers that varies depending on the region.
If you’re adopting a multi-cloud architecture, a PubSub+ powered event mesh extends into all of the popular public clouds, both within their public compute resources, as well as within the virtual private clouds offered by those providers, either using our software or our SaaS offering. And as mentioned, we can connect to many of the popular cloud native services, like Databricks, Apache Spark, and others.
5. Hybrid cloud event-driven microservices
To be more agile and to better manage scalability, reliability and availability, many enterprise applications are moving from monolithic architectures, where single applications are responsible for all aspects of a workflow, to microservices, which decompose the monolithic applications into smaller chunks of code. Those microservices then notify each other of changes using events. Microservices can be located wherever makes the most sense, on premises, in public or private clouds, or in PaaS or IaaS environments. And, as with application development, microservice app development can often start within a cloud environment and then be migrated elsewhere for production.
As Gartner points out, “Event-driven architecture (EDA) is inherently intermediated and implementations of event-driven applications must use some technology in the role of an event broker.”* This means you absolutely need an event broker underpinning your event-driven microservices architecture to make it work.
If those microservices are distributed across cloud and on-premises environments, it makes sense to have a robust and scalable broker that can connect to those microservices no matter where they are hosted or how they are run – on prem, on public or private clouds, in Spring, Kubernetes, OpenShift, as a Dell Boomi Atom – the list goes on. In every case, Solace PubSub+ has you covered with native deployments or integrations that can all be connected with an event mesh, and will support the easy movement of microservices between hosting environments, as required.
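The broker-mediated decoupling described above can be sketched in a few lines of Python. The class below is a toy, in-memory stand-in for an event broker (not Solace's API or any real broker client), but it shows the key property: publishers never know who consumes their events, which is what lets individual microservices be moved between hosting environments without rewiring anything.

```python
from collections import defaultdict
from typing import Callable

class EventBroker:
    """Toy in-memory event broker: decouples publishers from subscribers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The publisher never knows who (or where) the consumers are.
        for handler in self._subscribers[topic]:
            handler(event)

broker = EventBroker()
audit_log = []

# Two independent "microservices" react to the same event.
broker.subscribe("order/created", lambda e: audit_log.append(("billing", e["id"])))
broker.subscribe("order/created", lambda e: audit_log.append(("shipping", e["id"])))

broker.publish("order/created", {"id": 42, "amount": 99.5})
print(audit_log)  # [('billing', 42), ('shipping', 42)]
```

With a real broker, the billing and shipping services could run on premises, in a public cloud, or in Kubernetes, and the publisher's code would be unchanged; only the broker's routing knows where events need to go.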
Summary – Making the Most of Your Enterprise Architecture and Hybrid Cloud Strategy
In summary, hybrid and multi-cloud IT is now the norm for most large enterprises. But taking advantage of all the benefits of having data and applications on premises and in the clouds and sharing information between all the environments can be a tricky business. Thankfully, Solace has already done a lot of the heavy lifting for you and is always thinking of ways to make enterprise-wide event distribution as robust, secure and powerful as possible.
I’ve shared some of the most common hybrid cloud use cases our customers are asking us to address with our PubSub+ Platform. If you’d like to learn more about how to make the most of your enterprise architecture and hybrid cloud strategy, or have a specific example you’d like to discuss, we’d love to hear from you.
*Gartner, Innovation Insight for Event Brokers, Yefim Natis, Keith Guttridge, W. Roy Schulte, Nick Heudecker, Paul Vincent, 31 July 2018
The post 5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy appeared first on Solace.
5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy published first on https://jiohow.tumblr.com/
netmetic · 5 years
Trading Firm Creates a Hybrid Event Mesh to Benefit from AI Cloud Services
It’s no secret that product marketers love to tell stories about customers using our products, but this story has me more excited than usual.
Tan T-Kiang (TK), Grasshopper’s Chief Investment Officer, seemed amused when I asked him several weeks ago:
“What business impact does a Solace-powered event-driven architecture have on your trading firm?”
(Photo: how Tan T-Kiang looks when I’m not asking him questions.)
Why was he so amused?
Because trading firms have been using event-driven architectures for decades! TK explained that investment banks couldn’t predict what applications they would need to share information with in the future. Ten years ago, he didn’t know that Grasshopper would want to use GCP’s BigQuery and Cloud Dataflow to make near real-time trading decisions and develop trading strategies.
But what he did know years ago is that he needed to build a trading platform that enabled different applications to share events across different environments; he needed the platform to be flexible and dynamic so that it could take on and adapt to new technologies as they evolved. So that’s what TK and his team did.
It paid off. In 2011, Grasshopper built an event distribution platform to exchange real-time market data and transactions across many exchanges in Singapore and Tokyo, and move data from a variety of applications and algorithms across asset classes.
TK and his team selected Solace’s messaging appliances because he had used them successfully at previous companies. He was introduced to Solace in 2007, and he thought our integration of middleware into a device was innovative. To TK, our PubSub+ appliances provided seamless data movement using open standards and protocols, and were capable of sharing events in real-time with applications that he hadn’t thought of yet.
Fast forward a few years and technology advanced dramatically.
Grasshopper wanted to move data between their on-premises locations and Ahab, their new cloud-native application. Ahab, an Apache Beam-based Java application, is a high-performance data processing pipeline that calculates the value of the order book—a dynamic list of buy and sell orders that dictates stock pricing—from real-time market data received from multiple stock exchanges. As the order book is the foundation for many trading and analytics applications, high performance was truly mission critical.
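To make "the value of the order book" more concrete, here is a simplified, generic sketch (not Grasshopper's Ahab pipeline; the prices and quantities are invented purely for illustration) of a limit order book that aggregates resting buy and sell orders per price level and derives the best bid, best ask, and spread that downstream pricing logic depends on.

```python
from collections import defaultdict

class OrderBook:
    """Toy limit order book: aggregates resting quantity at each price level."""
    def __init__(self):
        self.bids = defaultdict(int)  # price -> total buy quantity
        self.asks = defaultdict(int)  # price -> total sell quantity

    def add(self, side: str, price: float, qty: int) -> None:
        book = self.bids if side == "buy" else self.asks
        book[price] += qty

    def best_bid(self) -> float:
        return max(self.bids)  # highest price any buyer will pay

    def best_ask(self) -> float:
        return min(self.asks)  # lowest price any seller will accept

    def spread(self) -> float:
        return self.best_ask() - self.best_bid()

book = OrderBook()
for side, price, qty in [("buy", 99.5, 10), ("buy", 99.0, 25),
                         ("sell", 100.0, 5), ("sell", 100.5, 40)]:
    book.add(side, price, qty)

print(book.best_bid(), book.best_ask(), book.spread())  # 99.5 100.0 0.5
```

A production pipeline like Ahab applies this kind of calculation continuously across every symbol and exchange, at market-data rates, which is why it is built on a streaming framework rather than a batch loop like the one above.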
From there, Ahab ingests the results into BigQuery, GCP’s fully managed data processing and analysis tool, taking advantage of its availability and scalability. Though they had to make many other changes, the one thing they didn’t need to change was that their trading platform was built on Solace technology. (Let me be clear: I’m not saying Solace stood still. In addition to the appliances, enterprises can also take advantage of software brokers or our as-a-service offering.) Grasshopper ultimately used PubSub+ Event Brokers to create an event mesh – a modern messaging layer that can be deployed across every environment and component of their company – to stream events across them all.
How it works
Let’s drill down into the architecture at Grasshopper.
The Solace PubSub+ Event Brokers (in GCP and on-premises) create a hybrid cloud event mesh, allowing orders published to the on-premises brokers to flow easily and dynamically to the cloud PubSub+ brokers and SolaceIO connectors. The SolaceIO connector (running in Google Cloud Dataflow) consumes this market data stream and makes it available for further processing with Apache Beam running in Cloud Dataflow. The processed data can then be ingested into Cloud Bigtable, BigQuery, and other destinations, all in a pub/sub manner.
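The dynamic routing in this architecture is subscription-driven: each broker forwards an event only toward endpoints whose topic subscriptions match it. The sketch below illustrates hierarchical topic matching with wildcards (`*` for a single level, `>` for everything below a prefix). The topic names are invented for illustration, and the matching rules are a simplification of the general idea rather than an exact description of Solace's topic syntax.

```python
def topic_matches(subscription: str, topic: str) -> bool:
    """Hierarchical topic match: '*' matches one level, '>' the rest (illustrative)."""
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, s in enumerate(sub_levels):
        if s == ">":
            return len(top_levels) > i  # '>' must cover at least one remaining level
        if i >= len(top_levels) or (s != "*" and s != top_levels[i]):
            return False
    return len(sub_levels) == len(top_levels)

# An on-prem publisher emits an order event; cloud-side pipelines subscribe by pattern.
topic = "EQ/orders/SGX/D05"
subs = {
    "dataflow-ingest": "EQ/orders/>",      # all order events, every exchange
    "sgx-analytics":   "EQ/orders/SGX/*",  # only SGX symbols
    "fx-pipeline":     "FX/>",             # unrelated asset class, never matched
}
matched = [name for name, pattern in subs.items() if topic_matches(pattern, topic)]
print(matched)  # ['dataflow-ingest', 'sgx-analytics']
```

This is what lets new cloud consumers attach to the mesh later: they register a subscription, and matching events start flowing to them without any change to the on-premises publishers.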
“Why PubSub+?” you may ask. “Couldn’t Grasshopper use some other technology?” In TK’s own words:
“Solace provides the reliable, robust event distribution we need to exchange all kinds of real-time market data and transactions across many exchanges. Our decision to use PubSub+ to incorporate cloud-based machine learning into our system was easy given Solace’s hybrid cloud capabilities.”
Grasshopper has experienced many benefits already, including:
Improved speed and sophistication of their trading;
Increased on-demand scalability while retaining latency, performance and connectivity requirements; and,
Cost-effectively complementing the expertise of their algo/quant developer.
Solace PubSub+ can help you create a robust and dynamic event mesh as you shift some applications or workloads to the cloud. If you’d like to learn more about Solace PubSub+, PubSub+ Cloud, or a Solace-enabled event mesh, contact us.
Get the Grasshopper Success Story PDF
The post Trading Firm Creates a Hybrid Event Mesh to Benefit from AI Cloud Services appeared first on Solace.