# Hyperscale testing automation tools
Explore tagged Tumblr posts
clearskytest · 2 years ago
Text
Hyperscalable Testing Automation Tools - Clearsky
We're proud to offer our hyperscalable testing automation tools, designed for businesses of all sizes. Our cutting-edge solutions are aimed at streamlining your workflow and helping you automate the process of testing your software applications in a scalable manner. Get started with us today and see how we can make a difference!
0 notes
digitalmore · 4 months ago
Text
0 notes
govindhtech · 2 years ago
Text
Expanding Azure’s edge capabilities through partnerships
We are experiencing a time of rapid, profound change. Every industry is moving forward under the convergent pressures of the post-COVID-19 hybrid world we live in, a volatile financial environment, and the arrival of disruptive technology like AI. We observe customers using technology and innovating in response to these opportunities and challenges to stay competitive and strengthen their resilience to change. As consumers, many of us can relate to how severely global supply chain disruptions affected automakers, which had to retool their factories to produce and deliver vehicles based on real-time supply chain data.
Unlocking the capacity to make the best decisions is crucial, both in terms of company strategies and technological investments. The rate at which our customers are adapting to the complexity of the world around them is exciting. To make the best decisions, customers want cloud-native agility to harness data and draw insights across an increasingly globally distributed estate encompassing traditional data centers, hyperscale clouds, and distributed edge locations like factory floors and quick-service restaurants.
Through a portfolio of services, tools, and infrastructure, including Azure Arc, Microsoft Azure gives businesses the ability to bring exactly the right amount of cloud-native capability to wherever it is needed. For customers seeking an end-to-end Azure experience at the edge, Azure Stack HCI offers a thoroughly curated infrastructure stack that is Azure Arc-enabled by design.
Through partner cooperation, the cloud-to-edge experience can be made simpler
Microsoft works with infrastructure partners to provide dependable and trustworthy solutions that combine hardware, software, and cloud services to meet customer demands, from the smallest distributed site to the largest office. To deliver a system that is dependable, manageable, and secure, these components must be seamlessly integrated.
Building the solutions that satisfy one of our customers’ top demands today, bringing the power of the cloud out into the real world, requires the help of our partnerships. I recently mentioned some of the work we are doing with partners, such as our partnership with Dell Technologies to deliver the new Dell APEX Cloud Platform for Microsoft Azure, which offers a turnkey Azure Stack HCI experience beyond the scope of our current Validated Node and Integrated System solution categories. Because of its native integration with Azure, it satisfies our shared goal of streamlining the customer experience and giving users the flexibility to manage and analyze their data across geographically dispersed footprints.
I’m thrilled to announce the launch of Premier Solutions for Azure Stack HCI, which further demonstrates the success of these relationships. Premier Solutions is a new class of Azure Stack HCI products that offers customers a better operating experience, a quicker time to value, and more flexibility with as-a-service procurement choices. Developed in partnership with top partners including Dell Technologies and Lenovo, it combines the best of our technologies into a fully integrated, end-to-end edge infrastructure solution that seamlessly brings together hardware, software, and cloud. Every Premier Solution consists of:
• Optimal integration of hardware, software, and cloud services, so users spend less time maintaining infrastructure and more time innovating.
• Ongoing testing by Microsoft and our partners to guarantee improved reliability and fewer outages.
• End-to-end deployment routines that make it simple to deploy one cluster, a few clusters, or a thousand clusters regularly and consistently.
• Automated, collaboratively tested full-stack updates that apply seamlessly with a single click.
• Flexible purchasing options and a range of value-added services to lessen the difficulty of getting started.
• Global availability, providing a uniform solution throughout a customer’s worldwide estate.
Find the best fit for your company’s needs by learning more about each of the Azure Stack HCI solution categories and the options available. To provide the greatest flexibility and selection for practically any edge computing demand, our partners already provide more than 200 Azure Stack HCI solutions across our Integrated Systems and Validated Nodes categories in addition to Premier Solutions.
Learn how Azure Stack HCI with Arc support can aid in your digital transformation process
With the introduction of Dell APEX Cloud Platform for Microsoft Azure, the first Premier Solutions are now accessible in the Azure Stack HCI Catalog.
“Our collaboration with Microsoft around Dell APEX Cloud Platform for Microsoft Azure and the introduction of Premier Solutions for Azure Stack HCI deliver a completely new class of deeply integrated and automated solutions that simplify data and application management across the Azure public cloud, data centers, and edge environments,” said Sudhir Srinivasan, Senior Vice President of Data and Multicloud Solutions at Dell Technologies.
We intend to add more cutting-edge edge systems from Lenovo to our portfolio of Premier Solutions in the upcoming months. It’s one of the numerous ways we continue to develop and evolve alongside our partners to provide clients with solutions that are better able to address their particular problems.
“Lenovo is dedicated to enabling the intelligent transformation of our customers by streamlining the edge-to-cloud process. We are thrilled to offer turnkey ThinkAgile Premier Solutions that will assist organizations in smoothly unlocking insights from their data, wherever they need it, by leveraging Lenovo’s extensive array of AI-optimized edge-to-cloud servers as the foundation for collaborative solutions with Microsoft,” said Kamran Amini, vice president and general manager of Lenovo Infrastructure Solutions Group’s server, storage, and software-defined solutions division.
0 notes
analyticsindiam · 6 years ago
Text
Data Science & AI Trends In India To Watch Out For In 2020 | By Analytics India Magazine & AnalytixLabs
The year 2019 was great in terms of analytics adoption, as the domestic analytics industry witnessed significant growth. There has been a visible shift towards intelligent automation, AI and machine learning that is changing the face of all major sectors — right from new policies by the Indian Government to micro-adoption by startups and SMEs. While customer acquisition, investment in enterprise-grade data infrastructure and personalised products were some of the trends of 2018, this year our industry interactions suggested that the democratisation of AI and the push of AI into hardware and software are much talked about.

Our annual data science and AI trends report for 2020 aims to explore the key strategic shifts that enterprises are most likely to make in the coming year to stay relevant and intelligent. This year we collaborated with AnalytixLabs, a leading Applied AI & Data Science training institute, to bring out the key trends. Some of the key areas that have witnessed remarkable developments are deep learning, RPA and neural networks, which, in turn, are affecting all the major industries such as marketing, sales, banking and finance, and others.

Some of the most popular trends, according to our respondents, were the rise of robotic process automation, or hyper-automation, which has begun to use machine learning tools to work effectively. The rise of explainable AI is another exciting development the industry is likely to see climb the popularity charts in the coming year, along with the importance of saving data lakes and the rise of hyperscale data centres, among others. Other trends, like advancements in conversational AI and augmented analytics, are here to stay. Semantic AI, enterprise knowledge graphs, hybrid clouds, self-service analytics, real-time analytics and multilingual text processing were some of the other popular trends mentioned by the respondents which are likely to be on the rise in 2020.
01. The Rise Of Hyper-Automation
"2019 has seen rising adoption of Robotic Process Automation (RPA) across various industries. Intelligence infused in automation through data science and analytics is leading to an era of hyper-automation that enables optimization and modernisation. It is cost-effective too but may have risks. 2020 will see enterprises evaluating risks and control mechanisms associated with hyper-automation."Anjani Kommisetti, Country Manager, India & SAARC
"Hyper-automation uses a combination of various ML, automation tools and packaged software to work simultaneously and in perfect sync. These include RPA, intelligent business management software and AI, to take the automation of human roles and organizational processes to the next level. Hyper automation requires a mix of devices to support this process to recreate exactly where the human employee is involved with a project, after which it can carry out the decision-making process independently."Suhale Kapoor, Executive VP & Co-founder, Absolutdata
"Automation is going to increase in multitudes. Over 30% of data-based tasks will become automated. This will result in higher rates of productivity and analysts will have broader access to data. Automation will additionally assist decision makers to take better decisions for their customers with the help of correct analytics." Vishal Shah, Head of Data Sciences, Digit Insurance 02. Humanized Artificial Intelligence Products
"We will see AI getting deeper into Homes and lifestyle and Human Interaction would begin to increase in the coming year. This means a reliable AI Engine. We have already seen some voice based technology making a comfortable place in homes. Now, with Jio Fiber coming home and Jio disrupting telecom sector it will be interesting to see how the data can be leveraged to improve/ develop devices that are more human than products." Tanuja Pradhan, Head- Special Projects, Consumer Insights & New Commerce Analytics, Jio
"Rise of AI has been sensationalised in the media as a battle between man and machine and there are numerous numbers flying around on impact on job loss for millions of workers globally. However, only less than 10% of roles will really get automated in the near future. Most of the impact is rather on non-value-added tasks which will free-up time for humans to invest in more meaningful activities. We are seeing more and more companies releasing this now and investing in reskilling workforce to co-exist with and take advantage of technology." Abhinav Singhal, Director, tk Innovations, Thyssenkrupp
"The effects of data analysis on vast amounts of data have now reached a tipping point, bringing us landmark achievements. We all know Shazam, the famous musical service where you can record sound and get info about the identified song. More recently, this has been expanded to more use cases, such as clothes where you shop simply by analyzing a photo, and identifying plants or animals. In 2020, we’ll see more use-cases for “shazaming” data in the enterprise, e.g. pointing to a data-source and getting telemetry such as where it comes from, who is using it, what the data quality is, and how much of the data has changed today. Algorithms will help analytic systems fingerprint data, find anomalies and insights, and suggest new data that should be analyzed with it. This will make data and analytics leaner and enable us to consume the right data at the right time."Dan Sommer, Market Intelligence Lead, Qlik 03. Advancements in Natural Language Processing & Conversational AI
"Data Scientists form backbone of organisation’s success and employers have set the bar high while hiring these unicorns. With voice search and voice assistants becoming the next paradigm shift in AI, organisations are now possessing a massive amount of audio data, which means those with NLP skills have an edge over others. While this has always been a part of data science, it has gained more steam than ever due to the advancements in voice searches and text analysis for finding relevant information from documents." Sourabh Tiwari, CIO, Meril Group of Companies
"NLP is becoming a necessary element for companies looking to improve their data analytics capabilities by enhancing visualized dashboards and reports within their BI systems. In several cases, it is facilitating interactions via Q&A/chat mediums to get real-time answers and useful visualizations in response to data-specific questions. It is predicted that natural-language generation and artificial intelligence will be standard features of 90% of advanced business intelligence platforms including those which are backed by cloud platforms. Its increasing use across the market indicates that, by bringing in improved efficiency and insights, NLP will be instrumental in optimizing data exploration in the years to come." Suhale Kapoor, Executive VP & Co-founder, Absolutdata
"The advent of transformers for solving sequence-to-sequence tasks has revamped natural language processing and understanding use-cases, dramatically. For instance, BERT framework built using transformers is widely being tapped onto, for development of natural language applications like Bolo, demonstrating the applicability of AI for education. AI in education is here to stay." Deepika Sandeep, Practice Head, AI & ML, Bharat Light & Power
"2019 was undeniably the year of Personal assistants. Though Google assistant and Siri have seen many winters since their launch but 2019 saw Amazon Alexa and Google home making way into our personal space and in some cases have already become an integral part of some households. Ongoing research in the area of computational linguistics will definitely changes the way we communicate with machines in the coming years." Ritesh Mohan Srivastava, Advanced Analytics Leader, Novartis 04. Explainable Artificial Intelligence (XAI)
"Decisions and predictions made by artificial intelligence are becoming complex and critical especially in areas of fraud detection, preventive medical science and national security. Trusting a neural network has become increasingly difficult owing to the complexity of work. Data scientists train and test a model for accuracy and positive predictive values. However, they hesitate to use it in areas of fraud detection, security and medicine. Models inherently lack transparency and explanation on what is made or why something can go wrong. Artificial intelligence can no longer be a black box and data scientists need to understand the impact, application and decision the algorithm is making. XAI will be an exciting new trend in 2020. Its model agnostic nature allows it to be applied to answer some critical questions in data science." Pramod Singh, Chief Analytics Officer & VP, Envestnet|Yodlee
"Another area that is taking shape in the last few years is Explainable AI. While the data science community is divided on how much explainability should be built into ML models, top level decision makers are extremely keen to get as much of an insight as possible into the so-called AI mind. As the business need for explainability increases people will build methods to peep into the AI models to get a better sense of their decision making abilities. Companies will also consider surfacing some such explanations to their users in an effort to build more user confidence and trust in the company’s models. Look out for this area in the next 5 years." Bhavik Gandhi, Sr. Director, Data Science & Analytics, Shaadi.com 05. Augmented Analytics & Artificial Intelligence
"Augmented Analytics is the merger of statistical and linguistic technology. It is connected to the ability to work with Big Data and transform them into smaller usable subsets that are more informative. It makes use of Machine Learning and Natural Language Processing algorithms to extract insights. Data Scientists spend 80% of their time in Data Collection and Data Preparation. The final goal of augmented analytics is to completely replace this standard process with AI, taking care of the entire analysis process from data collection to business recommendations to decision makers." Kavita D. Chiplunkar, Head, Data Science, Infinite-Sum Modelling Inc.
"Augmented Assistance to exploit human-algorithm synergy will be a big trend in the coming years. While the decision support systems have been around for a long time, we believe that advancements in Conversation systems will propel the digital workers in a totally different realm. We witnessed early progress in ChatOps in 2019 but 2020 should see development of similar technology for diverse personas like Database Admin, Data Steward and Governance Officers." Sameep Mehta, Senior Manager, Data & AI Research, IBM Research India
"With the proliferation of AI-based solutions comes the need to show how they deliver value. This is giving rise to the evolution of explainable “white box” algorithms and the development of frameworks that allow for the encoding of domain expertise and a strong emphasis on data storytelling." Zabi Ulla S, Sr. Director Advanced Analytics, Course5 Intelligence
"The increasing amount of big data that enterprises have to deal with today – from collection to analysis to interpretation – makes it nearly impossible to cover every conceivable permutation and combination manually. Augmented analytics is stepping in to ensure crucial insights aren’t missed, while also unearthing hidden patterns and removing human bias. Its widespread implementation will allow valuable data to be more widely accessible not just for data and analytics experts, but for key decision-makers across business functions." Suhale Kapoor, Executive Vice President & Co-founder, Absolutdata 06. Innovations In Data Storage Technologies "Data explosion increases every year and 2019 was no different. But to manage this ever-increasing data SDS saw an exponential rise, not just to attain agility but also make data more secure, that again has been a boon to SMEs. 2020 will see SME/ SMB sectors rising in the wave of intelligent transformation to make intelligent choices and reducing the total cost of ownership." Vivek Sharma, MD India, Lenovo DCG
"Hyperscale data centre construction has dominated the data centre industry in 2019 and provided enterprises with an opportunity to adopt Data Centre Infrastructure Management (DCIM) solutions, befitting their modern business and environment. With the help of DCIM solutions, 2020 will see enterprises designing smart data centres enabling operators to integrate proactive sustainability and efficiency measures." Anjani Kommisetti, Country Manager, India & SAARC, Raritan
"There is a rise of new innovations in data collection and storage technologies that will directly impact how we do store, process and do data science. These graphical database systems will greatly expedite data science model building, scale analytics at rapid speed and provides greater flexibility, allowing users to insert new data into a graph without changing the structure of the overall functionality of a graph." Zabi Ulla S, Sr. Director, Advanced Analytics, Course5 Intelligence
"Data Science and Data Engineering are working more closely than ever. And T-shaped data scientists are very popular! With an increasing need for data scientists to deploy their algorithms and models, they need to work closely with engineering teams to ensure the right computation power, storage, RAM, streaming abilities etc are made available. A lot of organisations have created multi-disciplinary teams to achieve this objective." Abhishek Kothari, Co founder, Flexi Loans 07. Data Privacy Getting Mainstream
"Consumers have finally matured to the need for robust data privacy as well as data protection in the products they use, and app developers cannot ignore that expectation anymore. In 2020, we can expect much more investment towards this facet of the business as well as find entirely new companies coming up to cater to this requirement alone." Shantanu Bhattacharya, Data Scientist, Locus
"Data security will be the biggest challenging trend. Most AI-driven businesses are in nascent stages and have grown too fast. Businesses will have to relook at data security and build safer and robust infrastructure. Data and Analytics industry will face this biggest challenge in 2020 due to lack of orientation of data security in India. Focus has been on growth and 2020 will get the focus on sustaining this growth by securing data and building sustainability." Dr Mohit Batra, Founder & CEO, Marketmojo.com
"As governments start to dive deeper into data & technology, more & more sensitive information will be unearthed. More importantly, we see an increasing trend in collaboration between governments and private sector, for design & delivery of public goods. To make the most of this phase of innovation, it will be critical for governments at all levels to not only articulate how it sees the contours of data sharing and usage (in India, we currently have a draft Personal Data Protection Bill) but also how these nitty-gritties are embedded in the day to day working of the governments and decision makers." Poornima Dore, Head, Data-driven Governance, Tata Trusts 08. Increasing Awareness On Ethical Use Of Artificial Intelligence
"The analytics community is starting to awaken to the profound ways our algorithms will impact society, and are now attempting to develop guidelines on ethics for our increasingly automated world. The EU has developed principles for ethical AI, as has the IEEE, Google, Microsoft, and other countries and corporations including OECD. We don’t have the perfect answers yet for concerns around privacy, biases or its criminal misuse, but it’s good to see at least an attempt in the right direction." Abhinav Singhal, Director, tk Innovations, Thyssenkrupp
"Artificial Intelligence comes with great challenges, such as AI bias, accelerated hacking, and AI terrorism. The success of using AI for good depends upon trust, and that trust can only be built over time with the utmost adherence to ethical principles and practices. As we plough ahead into the 2020s, the only way we can realistically see AI and automation take the world of business by storm is if it is smartly regulated. This begins with incentivising further advancements and innovation to the tech, which means regulating applications rather than the tech itself. Whilst there is a great deal of unwarranted fear around AI and the potential consequences it may have, we should be optimistic about a future where AI is ethical and useful." Asheesh Mehra, Co-founder and Group CEO of AntWorks 09. Quantum Computing & Data Science
"Quantum computers perform calculations based on the probability of the state of an object before it is measured- rather than just microseconds- which means that they have the potential to process more data exponentially compared to conventional computers. In a quantum system, the qubits or quantum bits store much more data and can run complex computations within seconds. Quantum computing in data science can allow companies to test and refine enormous data for various business use cases. Quantum computers can quickly detect, analyze, integrate and diagnose patterns from large scattered datasets." Vivek Zakarde, Segment Head- Technology (Head BI & DWH), Reliance General Insurance Company Ltd.
"While still in the very nascent stages quantum computing holds a promise that no one can ignore. The ability to do 10000 years of computations in 200 seconds coupled with the exabytes of data that we generate daily can allow data scientists to train massive super complex models that can accomplish complex tasks with human or superhuman levels of accuracy. 8-10 years down the line we would be seeing models being trained on quantum computers and for that we need AI that works on quantum computers and this area will grow a lot in the coming years."Bhavik Gandhi, Sr Director, Data Science & Analytics, Shaadi.com 10. Saving The Data Lakes
"While Data Lakes may have solved the problem of data centralization, they in turn become an unmanaged dump yard of data. As the veracity of data becomes a suspect, analytics development has slowed down. Pseudo-anonymization to check the quality of incoming data, strong governance and lineage processes to ensure integrity and a marketplace approach to consumption would emerge as the next frontier for enterprises in their journey of being data-driven. Further, smart data discovery will enable uncovering of patterns and trends to maximize organizations’ ROI by breaking information silos." Saurav Chakravorty, Principal Data Scientist, Brillio
"Data Lake will become more mainstream as the technology starts maturing and getting consolidated. External data will become as one of the main data sources and Data Lake will be the de-facto choice in forming a base for a single customer view. It will help in improving the customer journey thereby increasing efficiency." Vishal Shah, Head of Data Science, Digit Insurance Download the complete report Data_Science_AI_Trends_India_2020_Analytics_India_MagazineDownload Read the full article
0 notes
stephenlibbyy · 6 years ago
Text
Cumulus Networks 4th-Generation open, modern networking for applications of the future
The dynamics of IT are changing, especially when it comes to the demands on the network. As many have predicted, big data, mobile and the Internet of Things are putting significant and ever-increasing pressure on the network. Most networks and legacy management tools, therefore, are unprepared for the added stress placed on already-fragile infrastructures while the rest of the data center has sped ahead.
As more and more data is created and transferred between resources, the network must be increasingly resilient, dynamic and agile to adjust to application demands accordingly. As data and applications become increasingly distributed, there is an inherent architectural dependence on the interconnect, which enables these resources to work in concert to deliver application workloads. That interconnect, the network, must undergo its own transformation to meet the new needs of a modern network.
Our founders at Cumulus Networks recognized the challenges that were mounting nearly a decade ago and set out to build a more modern network, one that is modeled from the web-scale giants including Google, Amazon, and Facebook to better address applications of the future.
I’m very happy to announce that Cumulus Networks is now enabling customers to meet modern network challenges with our 4th-Generation open, modern software, designed to run and operate modern data center and campus networks that are simple, open, agile, resilient and scalable.
With nearly 10 years of focused development, our 4th-Generation release, Cumulus Linux 4.0, extends our open source leadership in networking that began with contributions such as ONIE, VRF, FRRouting, Prescriptive Topology Manager (PTM), and EVPN, among others. In addition, we’ve made cutting-edge enhancements across the board that span the latest in L2/L3 connectivity; NetDevOps practices including automation, simulation, CI/CD and Infrastructure-as-Code (IaC); as well as usability, operations and visibility/troubleshooting.
Most recently, we’ve added enhancements in Linux, including kernel updates and support for SwitchDev, an open source in-kernel abstraction model that provides a standardized way to program switch ASICs and speed development time. In addition, Cumulus Linux simulation capabilities are unequaled in the industry, allowing our customers to leverage native integration with automation tools to enable high-fidelity, 1-to-1 network simulation across both data center and campus with almost no size limits. This unique feature allows operators to easily validate network configurations and eliminate the risk of error.
Cumulus Linux 4.0 is our most reliable, robust and performant version and is a highly competitive and mature alternative to proprietary networks. It offers a host of cutting edge features that move our customers beyond legacy layer-2 networks into Layer 3 network virtualization with EVPN, EVPN multicast-based replication, and EVPN Multihoming that enables MLAG architectures without legacy interswitch links. Plus, Cumulus Linux enables simplified data center configurations with smart data center defaults and advanced automation as we begin to introduce full data model-driven configurations.
In addition, 4.0 includes support for the widest range of hardware platforms: 134 platforms across 14 ASICs. Newly added support includes Mellanox’s Spectrum-2 chipset for faster performance, Broadcom’s Qumran chipset for deep buffering at the top of rack, Facebook’s Minipack, an open, modular chassis with a single 12.8 Tbps chip for high-density spine deployments, and additional campus networking platforms with Dell.
Spectrum-2 platforms are the latest addition to the Cumulus hardware supported switches and are a combination of extremely fast and extremely smart ASIC technology. With its rich set of features and innovative capabilities, including increased flexibility, port density, and What Just Happened (WJH) visibility into hardware packet drops, Spectrum-2 is optimized for cloud, hyperscale, big data, artificial intelligence, financial, storage applications and more.
Cumulus Linux is now compatible with over 130 switch platforms, allowing maximized choice and flexibility to customers.
Also included in our 4th-Generation announcement is Cumulus NetQ 2.4. In this latest release of NetQ and NetQ Cloud, Cumulus has added a number of new capabilities and support for ecosystem partner features, including Mellanox’s “What Just Happened” (WJH). WJH is an advanced streaming telemetry technology that provides real-time visibility into problems in the network, going beyond conventional telemetry solutions by providing actionable details on abnormal network behavior. WJH provides visibility into hardware packet drops for any reason, including buffer congestion, incorrect routing, ACL or layer 1 problems. When the single-box WJH capabilities are combined with the analytics engine of Cumulus NetQ, you now have the ability to hone in on any loss, anywhere in the fabric, from a single management console.
Cumulus NetQ with Mellanox WJH provides the ability to not only view any current or historic drops and specific drop reasons, but also the ability to identify any flow or endpoints and pin-point exactly where communication is failing in the network. The NetQ event analytics engine will identify any changes in the network that triggered the packet drops, reducing mean time to innocence or repair.
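As a rough illustration of what triaging streamed drop events by reason and location looks like, here is a minimal sketch; the record fields below are hypothetical and do not reflect the actual NetQ or WJH schema.

```python
# Minimal sketch: triaging streamed drop events by reason and switch.
# The record fields below are hypothetical, not the actual NetQ/WJH schema.
from collections import Counter

events = [
    {"switch": "leaf01",  "reason": "buffer_congestion", "flow": "10.0.0.5->10.0.1.9"},
    {"switch": "leaf01",  "reason": "acl_deny",          "flow": "10.0.0.7->10.0.2.3"},
    {"switch": "spine02", "reason": "buffer_congestion", "flow": "10.0.0.5->10.0.1.9"},
]

drops_by_reason = Counter(e["reason"] for e in events)
drops_by_switch = Counter(e["switch"] for e in events)
print("drops by reason:", dict(drops_by_reason))
print("drops by switch:", dict(drops_by_switch))
```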
NetQ/NetQ Cloud 2.4 offers new features including Snapshot and Compare and Threshold Crossing Alerts (TCA). TCA, combined with NetQ’s existing CPU/memory/interface monitoring features, provides a faster way to get to the root cause of network issues by alerting IT when critical resource limits have been reached.
In addition, NetQ/NetQ Cloud 2.4 includes enhancements to clustering, high availability, workflows and the NetQ GUI. Finally, NetQ 2.4 offers the new Follow the MAC feature, which is very powerful when used in a virtualization or container environment.
Summary:
As closed/proprietary vendors struggle to innovate or integrate technology into complex existing network architectures, the resulting pressure of increasingly demanding applications, snowballing complexity, vendor lock-in, high cost of innovation, high margins, lack of flexibility and deteriorating customer satisfaction continues to erode customer confidence and their perception of the derived value.
Conversely, there is a similar set of factors that are increasing customer confidence and value in favor of open, simple, disaggregated and modern network architectures. These capabilities enable interoperability and increased flexibility and choice, while delivering supply chain freedom with uniform operating models and lower overall TCO.
The transition to open networking, led by Cumulus Networks and our ecosystem partners, is accelerating network transformation, exactly like we’ve seen for server and storage architectures already. The transition to open networking is inevitable.
If you’d like to learn more, you’ll find a full set of collateral, videos, web pages and a press release below. But don’t take my word for it: I would suggest you try Cumulus Linux and NetQ for yourself by taking a test drive.
4th-Generation Cumulus Networks Press Release
Mellanox Spectrum-2 and WJH Overview Video
Cumulus Linux Web Page
Cumulus Linux Datasheet
Cumulus Linux Solution Overview
EVPN-PIM Blog Part 1
EVPN-PIM Blog Part 2
Cumulus NetQ and NetQ Cloud Web Page
Cumulus NetQ and NetQ Cloud Datasheet
To join our live webinar with Amit Katz, Mellanox Technologies, and Pete Lumbis, Cumulus Networks, entitled, “Network Wide Streaming Telemetry“ on Wednesday, November 20th at 9 am Pacific, please register here.
If you would like to see Cumulus Linux 4.0 and NetQ/NetQ Cloud 2.4 in action, don’t hesitate to contact your sales team for a demo. Please let me know if you have any comments or questions here, or via Twitter at @CicconeScott.
0 notes
multi-cloud-strategy · 5 years ago
Text
As the multi-cloud space continues to mature and become a mainstream component of enterprise IT environments, CIOs must have a clear picture of business objectives, constraints and deliverables. It is also necessary to understand that multi-cloud is not a solution to every problem that enterprise IT teams face. Also, since there is no single, all-encompassing approach for all organizations, each company will need to build their own multi-cloud roadmap for their unique business needs.
 At the same time, organizations need to follow some best practices, to ensure long term success of their multi-cloud strategy. Here are 10 important practices that enterprises should adhere to while defining, implementing and managing their multi-cloud environment.
 Map Workloads to Cloud Services
Mapping workloads is possibly the most critical step in creating a robust multi-cloud strategy. It ensures that the right infrastructure components and cloud services are allocated or provisioned to the right business need. It also enables IT teams to define effective SLAs, depending on specific needs around data privacy, availability/uptime, latency, rapid scalability, real-time streaming, batch processing, heavy-duty compute, etc.
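As a toy illustration of workload-to-service mapping, the sketch below encodes a few placement rules; the categories, workload attributes and thresholds are assumptions for illustration, not a standard.

```python
# Toy illustration: mapping workload requirements to a cloud service category.
# The rules, attributes and categories are assumptions, not an industry standard.
def map_workload(workload: dict) -> str:
    if workload.get("data_residency") == "on_prem_only":
        return "private cloud / on-premise"
    if workload.get("latency_ms", 1000) < 10:
        return "edge or colocation close to users"
    if workload.get("burst_scaling"):
        return "public cloud (auto-scaling IaaS/PaaS)"
    return "public cloud (standard IaaS)"

workloads = [
    {"name": "payments-db",     "data_residency": "on_prem_only"},
    {"name": "iot-ingest",      "latency_ms": 5},
    {"name": "batch-reporting", "burst_scaling": True},
]
for w in workloads:
    print(w["name"], "->", map_workload(w))
```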
 Incorporate Hybrid Cloud Concepts
Current conversations around multi-cloud and hybrid cloud concepts have been somewhat disjointed. However, any sustainable multi-cloud strategy needs to consider as many IT delivery models as possible – including public / private clouds, hosting services, DCs, Hyper Converged Infrastructure (HCI) and Hyperscale DCs.
 Streamline Vendor Management
The fundamental premise of the multi-cloud concept is that it involves a wide spectrum of technology vendors – for DCs, colocation services, cloud infrastructure, SaaS applications, mobile apps, application development companies, QA / testing teams, SOCs / NOCs and managed service providers. In a multi-cloud setup, vendor management runs the risk of becoming disjointed, often departmentalized, resulting in a loss of control and increased business risks.
 Centralize IT Governance
Enterprises need to leverage a strong Cloud Management Platform that enables teams to provision / de-provision cloud services, auto scale (new VMs), orchestrate services, monitor traffic and track performance parameters like latency, availability, etc. While cloud-based applications and cloud services are the easiest to govern using a Cloud Management Platform, an optimized multi-cloud environment would eventually bring on-premise systems, colocated infrastructure and DCs under a common management platform.
Drive Usability and Adoption
As traditional IT environments transform into dynamic multi-cloud ecosystems, organizations will need to put in place strong change management initiatives to drive adoption. Also, IT teams must ensure that user behaviours and expectations are met in a fast-changing multi-cloud setup.
 Create a Robust Integration Framework
The integration scenario in on-premise setups is complex as it is. In a multi-cloud environment, the complexity increases further due to a number of additional integration points between on-premise systems and data stores and third-party cloud-based applications and services. Integrating applications on the same cloud infrastructure is less complex. However, aggregating data across different cloud platforms and on-premise legacy systems often requires custom APIs and integration tools.
 Benchmark Service Levels
Many organizations have, over the years, ended up creating multi-vendor, multi-location IT infrastructure and service relationships with highly non-standard SLAs. This makes it extremely challenging to provide a uniform set of business services consistently to business stakeholders. While implementing a multi-cloud roadmap, CIOs need to ensure that they have created a single, consistent and benchmarked set of SLAs for all resources (on-premise and cloud). The vendor consolidation step mentioned earlier goes a long way in implementing standard service levels across the enterprise.
 Build Consistent Security Policies
Data privacy and security will become a core area of concern in a multi-cloud environment. With a diverse set of IT resources in use, keeping your enterprise perimeter (including applications, data sources, users and endpoints) secure will become significantly more complex. IT decision makers need to centralize and standardize security policies across the enterprise and may need to partner with Managed Security Service Providers (MSSPs) to unify their security environment.
 Redefine Your DR Strategy
While implementing a DR strategy for multi-cloud environments, enterprises need to address three distinct challenges.
 • DR during migration: The first challenge is during migration of existing systems and on-premise workloads to cloud environments. This is generally a period of uncertainty and requires meticulous planning to ensure uptime and business continuity.
 • DR for multi-cloud environment: Current DR set ups in organizations are designed for traditional on-premise systems. Multi-cloud environments increase the complexity of the IT environment at many levels – due to a large number of dynamic parameters (scale, nature of workload, data type, geographical coverage), deployment models (SaaS, IaaS), infrastructure services (public cloud, private cloud, hosting, etc.) and cloud service providers (Netmagic, AWS, MS Azure, others).
• DR for new requirements (CI/CD): As multi-cloud environments are extremely scalable and adaptable, the DR setup needs to have the ability to adapt in equal measure. Having a continuous integration and continuous delivery approach (a standard part of DevOps environments) is useful to handle fast-changing IT needs.
 Leverage Analytics for Continuous Improvement
With process automation, strong integration and the use of Cloud Management Platforms, a multi-cloud environment will generate a large amount of data around performance, availability, downtime, resource utilization, traffic patterns, usage trends and correlations. This gives CIOs a great opportunity to go beyond traditional network monitoring, generate powerful insights from vast amounts of data, and use these insights to enhance performance.
While many public cloud vendors provide their own analytics and dashboards for network visibility, organizations will need to build a unified view of all IT resources, irrespective of vendor. One way to do this is to use APIs to connect various data sources and create consolidated dashboards. Some Cloud Management Platforms offer extensive pre-built capabilities to do this.
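A minimal sketch of the API-driven consolidation described above is shown below; the endpoint URLs and response fields are placeholders for illustration, not real provider APIs.

```python
# Minimal sketch: pulling utilisation metrics from several (hypothetical) provider
# APIs and merging them into one view. URLs and field names are placeholders.
import requests

SOURCES = {
    "cloud_a": "https://example-cloud-a.invalid/api/metrics",
    "cloud_b": "https://example-cloud-b.invalid/api/metrics",
    "on_prem": "https://dcim.example.internal/api/metrics",
}

def collect(sources: dict) -> list:
    rows = []
    for name, url in sources.items():
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        for item in resp.json():  # assumed shape: [{"resource": ..., "cpu": ...}, ...]
            rows.append({"provider": name, **item})
    return rows

if __name__ == "__main__":
    unified = collect(SOURCES)
    print(f"collected {len(unified)} metric rows across {len(SOURCES)} providers")
```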
 While many of the above processes seem effort intensive and time consuming, they are critical to the successful development and growth of your multi-cloud environment. It may not be possible to achieve all these goals simultaneously, but companies should identify a few low-hanging fruits to begin with – e.g., workload mapping, incorporating hybrid cloud concepts and streamlining vendor management. For the more complex needs, working with a leading Managed Service Provider like Netmagic will help companies navigate many initial challenges and bring a high level of process maturity to their operations.
0 notes
compassdatacenters · 5 years ago
Text
Dispelling Data Center Automation Myths
With a global pandemic underway and so many of us doing jobs, learning, gaming, and streaming entertainment from home, a lot of attention is being paid to data centers and the need for more of them. High demand will drive growth; growth will demand innovation. All signs point to that innovation coming in the form of advanced data center automation.
Analyst firm IDC recently surveyed 400 data center professionals. About one-third of respondents said they are investing in automation tools to address equipment and application challenges*. Logically, investments in tools like data center infrastructure management (DCIM) and technology asset management (TAM) software would follow a decade of heavy investment in hardware. Taking it to the next level, we're in the early stages of layering on automation and artificial intelligence to support more remote monitoring, improved performance, and more uptime.
Despite these benefits, the words "automation" and "artificial intelligence" set off mental alarm bells for a lot of people. They shouldn't. An automated data center isn't a new concept, it's just evolving to cover more aspects of the data center environment. Here's what to expect and why "automation" is not a thing to be feared, but actually something to embrace.
Myth #1: Automation and artificial intelligence will eliminate jobs.
Automation doesn't mean robots are taking over anytime soon. People aren't going anywhere. Jobs won't be eliminated, though a fundamental shift will occur in how data center folks perform their functions and in management solutions. Data center automation has the potential to make existing jobs more focused, meaningful, and free of tedium and stress over potential small missteps or missed opportunities. Automating the boring, repetitive functions so server admins are free to work on more challenging tasks has a lot of upsides.
Advancing technology applied to data centers is most likely going to create more jobs. You still need engineers, technicians, and consultants, and quite possibly more of them, to design the logic, build and install the systems, train operators, fine-tune the programs and, well into the future, service the systems.
Myth #2: We need automation to make up for the dearth of talented people to staff data centers.
Even with automation, we will still need people. (See Myth #1.) And with automation, we have more attractive jobs to offer and new management solutions and applications. Data center automation has the potential to be a huge draw and attract a more diverse group of prospects to jobs that have predominately been filled by men.
The more automation, machine learning, and AI become fundamental tools used in data center careers, the more attractive the industry becomes to young graduates. The same IDC survey of 400 data center IT and facilities professionals found that many data center operators (about 35%) are hiring additional IT staff to manage new equipment and application challenges…not the other way around. Advanced technology can be parlayed into a recruiting advantage.
Myth #3: Automation is imminent.
The largest, most competitive players are dipping their toes into the data center automation arena now. These well-funded trendsetters will rely more quickly and more heavily on automation and AI to bring new, hyperscale facilities online and staff them efficiently. Through these deployments, technology will be delivered and tested. Adoption of these data center tools in smaller centers will roll out over 5 to 10 years. This process won't happen overnight. It will take time.
The road ahead
Automation and AI applied to data centers is an exciting prospect with a lot of upside and the potential to improve the way we run data centers today. The pandemic definitely prompted the industry to re-evaluate operational processes and procedures in search of ways to streamline or extend teams to service other functions or facilities, as well as data center management. It has fast-tracked conversations on and adoption of data center automation tools to service more capacity, and opened the door to what's next: a future full of expansion, with the tools to make it work.
* IDC's 2019 Datacenter Operational Survey: Key Findings and Implications for Multitenant and Colocation Datacenter Providers
Sudhir Kalra
SVP of Global Operations
Sudhir Kalra serves as Compass' SVP of Global Operations. Prior to joining Compass, Mr. Kalra served as Executive Director, Global Head of Enterprise Data Centers for Morgan Stanley. Prior to Morgan Stanley, Sudhir was Director, Corporate Real Estate and Services - Global Head of Engineering and Critical Systems at Deutsche Bank where he was responsible for mission-critical support of a real estate portfolio comprised of over 30 million square feet. Mr. Kalra began his career in technical roles at Securities Industry Automation Corporation supporting mission-critical operations for the NYSE and American Stock Exchange. Sudhir holds a BEEE from City University of NY and an MSEE from NYU-Poly University.
0 notes
ehteshamuniverse · 5 years ago
Text
DevOps Market Opportunity Assessment, Future Estimations and Key Industry Segments Poised for Strong Growth in Future 2023 | Impact of COVID-19
Market Highlights
Market Research Future (MRFR) expects the DevOps market 2020 to advance at a remarkable rate between 2018 and 2023 (review period), as a result of the surging dependence of enterprises on cloud-based solutions. We will provide COVID-19 impact analysis with the report, offering an in-depth review of the market following the coronavirus disease outbreak.
COVID-19 Analysis
The COVID-19 impact has led to various enterprises going totally digital in the space of a few months. Even as the pandemic is sweeping the world, the need to deliver reliable and good quality services and software at a fast pace has become crucial. Following the lockdowns imposed by governments, enterprises of every size across diverse industries are deploying some version of DevOps, with the use of a broad range of tools as well as best practices. Post SARS-CoV-2, more and more companies are leveraging DevOps to arrive at sound decisions while analyzing the risks posed by digital products.
The novel coronavirus has emerged as a lucrative opportunity for the market, as more and more companies are now following basic principles that include bold actions taken with a solid understanding of the challenges or risks, high focus on a holistic approach along with speed and flexibility, which is mostly possible with the adoption of DevOps. A fortified DevOps strategy is also helping organizations in delivering better quality software to the end users at faster pace. Especially since the COVID-19 outbreak, DevOps has not only emerged as a valuable commodity for end users but has also significantly benefited organizations to a large extent.
Primary Drivers and Key Restraints
The rapid digitization of enterprises with automated operations, rising uptake of cloud technologies, increasing consumption of agile frameworks, and the need for enhanced communication between IT teams for better operational efficiency can induce growth of the DevOps market. Organizations are progressively using DevOps tools and services to deliver more advanced software, bring down the time for marketing, boost productivity, streamline workflows and reduce the costs of software delivery, maintenance and development.
Containerization is a trend that is bolstering the market growth, as it helps simplify the use of software across organizations. Containers make it easier to deliver software in a uniform, self-contained package and facilitate automated deployment of these applications. Platform-as-a-service, or PaaS, is hailed as another trend that can add to the strength of the DevOps market, as it is a more cost-effective and efficient option for running a service.
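As a small illustration of the containerization point above, here is a minimal sketch using the Docker SDK for Python; it assumes a local Docker daemon and the docker package are available, and the image and command are purely illustrative.

```python
# Minimal sketch: running a throwaway container via the Docker SDK for Python.
# Assumes a local Docker daemon and the `docker` package; image/command are illustrative.
import docker

client = docker.from_env()

# The same image behaves identically on a laptop, a CI runner, or a production
# host, which is the uniformity the containerization trend describes.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,
)
print(output.decode().strip())
```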
The MRFR report throws light on a few collaborations between companies that have fostered the market expansion for DevOps. To cite a reference, in January 2020, XebiaLabs and CollabNet VersionOne collaborated to build a DevOps platform for vendors, to enable them to offer their customers end-to-end management features as well as the visibility required to offer secure software solutions. Even as countries are struggling to find a COVID-19 breakthrough, the market is deemed to perform relatively well, given the increasing prevalence of digitization, the restrictions on physical mobility across IT companies due to lockdowns, and the consequent surge in the use of cloud services.
Segmentation:
The DevOps industry share has been considered for solution, deployment, organization size, as well as industry verticals.
The primary solutions described in the market research include lifecycle management, analytics, monitoring and performance management, testing & development, delivery & operations management, and more.
The deployment-based segments are on-premise and on-cloud. The on-cloud deployment segments are private cloud, hybrid cloud and public cloud, which are seeing higher adoption post COVID-19.
The organization size ranges outlined are large enterprises and small and medium-sized enterprises or SMEs.
The industry verticals that majorly deploy DevOps include media & entertainment, information and telecommunication technology enabled services (ITES), manufacturing, BFSI, government & public, healthcare, retail, and others.
Regional Study
The regional study of the market covers Europe, Asia Pacific or APAC, North America, and the Rest of the World or RoW.
MRFR reckons North America to be a prominent growth pocket, armed with a strong economy and the knack for fast adoption of the latest technologies. The high concentration of renowned companies, especially in the United States (U.S) is also deemed to be a growth booster in the regional market. Other governing factors that favor the market include the high uptake of DevOps solutions in IT and telecommunications, retail and finance industries and the rapid consumption of hybrid cloud solutions by organizations.
Moving ahead, it is projected that the APAC market can achieve the fastest growth rate in the following years, as a result of the mounting demand for digital services, rising use of mobile devices and massive spending on advancements in IT infrastructure. The surging need for automated software in India, Japan, Singapore and China also warrants incredible market growth in the region. A number of SMEs are surfacing in these countries that are making immense demand for DevOps solutions to streamline their business operations.
Top Players
The top contenders in the DevOps industry include Clarive (Spain), TO THE NEW (India), Docker, Inc., Cisco Systems, Inc., VersionOne, Inc., Red Hat, Inc. (the U.S.), RapidValue (the U.S.), Google, Inc., IBM Corporation, Oracle Corporation, Chef, Inc., Micro Focus (the U.K.), Clarizen Inc, Perforce (the U.S.), XebiaLabs (the U.S.), CA Technologies (the U.S.), GitLab (the U.S.), Amazon Web Services, Inc., Puppet Labs, Inc. (the U.S.), Hewlett Packard Enterprise Development LP (the U.S.), Atlassian (Australia), Rackspace (the U.S.), CollabNet (the U.S.), Microsoft Corporation, HashiCorp (the U.S.), EMC Corporation, CFEngine (the U.S.), Electric Cloud (the U.S.), Cigniti (India), OpenMake Software (the U.S.), and more.
Related Reports:
https://ehteshamtech.kinja.com/hyperscale-data-center-industry-application-solutions-1844213690?rev=1593490430393
https://ehteshamtech.kinja.com/risk-analytics-market-latest-innovations-and-top-player-1844213734?rev=1593490890393
https://ehteshamtech.kinja.com/traffic-management-market-size-analysis-emerging-oppo-1844213773?rev=1593491270398
https://ehteshamtech.kinja.com/disaster-recovery-as-a-service-market-opportunities-gr-1844213819?rev=1593491730394
https://ehteshamtech.kinja.com/streaming-analytics-market-global-trends-and-industry-s-1844213870?rev=1593492216392
0 notes
clearskytest · 2 years ago
Link
Looking for testing automation tools that can scale with your business? Hyperscalable's testing automation solutions help you save time and money on quality assurance. Get the best tools and expert support, and see how our scalability helps you grow. Get the performance, reliability, and scalability you need with hyperscalable testing automation tools. Reduce costs and time-to-market with innovative solutions that accelerate testing processes and provide AI-powered insights to optimize test environments.
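As a generic illustration of what scaling out independent checks can look like (not Clearsky's tooling), here is a minimal sketch that fans health checks across a worker pool; the endpoints are placeholders.

```python
# Minimal generic sketch: fanning independent smoke checks out across workers so a
# suite scales with more endpoints. URLs are placeholders, not a real service.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

ENDPOINTS = [f"https://service-{i}.example.invalid/health" for i in range(20)]

def check(url):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return url, resp.status == 200
    except Exception:
        return url, False

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(check, ENDPOINTS))

failed = [u for u, ok in results if not ok]
print(f"{len(results) - len(failed)}/{len(results)} endpoints healthy")
```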
0 notes
holytheoristtastemaker · 5 years ago
Link
Fears about hardware shortages, staff absenteeism and how to keep sites up and running with social distancing have dogged the datacentre sector since the start of the pandemic, so how is the sector faring?  About 75% of all companies have seen their supply chains disrupted by the Covid-19 coronavirus, according to the US-based Institute of Supply Management (ISM), with many already bracing to take a hit or adjusting revenue targets downwards.
But datacentre operators have mostly not been among them. This is despite lead times for certain kit reportedly lengthening from weeks to months, in some cases. Meanwhile, initially bullish sales forecasts for Chinese-made hardware, such as servers or switches, may be revised down for some time, for a range of reasons not all to do with Covid-19.
Growth in the SD-WAN software market, targeting managed services provision, is still predicted. Datacentre traffic has been surging because of increased remote working on distributed communications across porous platforms – with all the security concerns for customer businesses that this entails.
But there could be further issues down the track. Omdia has forecast delays to emerging tech infrastructure deployments such as 5G, and around logistics, transportation, packaging and testing of kit – affecting construction projects, for example.
With somewhat mixed signals, it is tough to predict how far operators should be scrambling to reshape themselves for a “new normal” post-Covid-19. However, Devan Adams, principal cloud and datacentre switching analyst at Omdia, has confirmed that purchasing behaviour changes are “inevitable”, with “increased demand for internet bandwidth unable to compensate” for the pandemic’s negative impact.
Supply issues could continue
Jennifer Cooke, research director for cloud to edge datacentre trends and strategies at IDC, tells Computer Weekly that supply could yet be delayed in the next few months, although it seems that larger, multi-tenant datacentres, which may find sourcing easier, can still get what they need.
“What datacentres are seeing, however, is a spike in demand for remote monitoring tech,” says Cooke. “Colocation providers that invested in these platforms are seeing customers log in and stay logged in for a lot longer, relying on remote monitoring tools when being there in person is difficult or impossible.”
IDC analysis so far notes that while most supply chain organisations have already activated business continuity plans, these have mostly been designed for short-term, localised disruption. This suggests that business continuity planning could have to ramp up, targeting better visibility of supply chain capabilities at both ends, as well as the overall risk backdrop – making short-term adjustments where possible, and acquiring more external data through third parties.
Alternative supply sources should be located to guard against future production shutdowns, logistical constraints or custom disruptions. This might be about developing surge capacity and alternative transport options, for example sea instead of air, or substituting products – running scenarios where possible to identify potential pain points. Luckily, operators massively expanded overall datacentre capacity between 2017 and 2019, and sufficiently optimised space, power and central connectivity has been available so far.
That said, IDC reckons it could take a global effort to mitigate delays to datacentre construction. Facilities and last-mile bandwidth support to all end locations should be scrutinised. On the other hand, Covid-19 could accelerate the shift to service-provider built or operated colocation and cloud facilities, with overall power and capacity expected to expand by 8-10% over the next five years.
Monitoring amid social distancing
Andy Lawrence, executive director of research at the Uptime Institute, highlights ongoing skill shortages amid a complex picture around cloud migrations and risk perceptions that could negatively affect growth forecasts. Dealing with these issues will be key to successful future-proofing of the datacentre equipment supply chain.
“Covid-19’s impact has been about the things you would expect – dividing people into shifts, staff shortages, people having to be off work for self-isolation – but also deferring maintenance, which has caused a lot of worries,” says Lawrence. “Even missing one service on a generator can affect your warranties, and even your permit.
“Industry is moving slowly that way anyway, but everyone we’ve spoken to says they’ll do more remote monitoring in future. One long-term change is having fewer people on site, instead of visitors all the time, all different companies, all there every day – a real issue in this crisis.”
Yet datacentres are expected to continue in an expansionary mode, for suppliers that get the situation in hand by sourcing appropriately amended, updated documentation and advice on maintenance delays.
“Everyone we’ve spoken to says they’ll do more remote monitoring in future”
Andy Lawrence, Uptime Institute
With more automation, remote monitoring and condition-based monitoring, unexpected failures should become less likely. Instead of quarterly technician visits, say, remote sensors and monitors could be checking data against an analytics program, perhaps with artificial intelligence, that reveals the likelihood of failure over the next 90 days.
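To make the idea concrete, the snippet below is a minimal, purely illustrative sketch of such a condition-based check: it scores recent sensor readings with a toy model and flags assets whose estimated 90-day failure probability crosses a threshold. The asset names, inputs and coefficients are hypothetical and not drawn from any vendor's product.

```python
# Purely illustrative sketch of a condition-based monitoring check: score
# recent sensor readings with a toy model and flag assets whose estimated
# 90-day failure probability crosses a threshold. All names, inputs and
# coefficients are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class AssetReading:
    asset_id: str
    avg_temp_c: float      # average inlet temperature over the reporting window
    vibration_mm_s: float  # RMS vibration
    runtime_hours: float   # hours since the last service visit

def failure_probability_90d(r: AssetReading) -> float:
    # Toy logistic model; a real deployment would train these coefficients
    # on historical failure data rather than hard-coding them.
    score = -6.0 + 0.08 * r.avg_temp_c + 0.9 * r.vibration_mm_s + 0.0004 * r.runtime_hours
    return 1.0 / (1.0 + math.exp(-score))

def flag_for_service(readings, threshold=0.5):
    # Return the assets a technician should look at within the next 90 days.
    return [r.asset_id for r in readings if failure_probability_90d(r) >= threshold]

if __name__ == "__main__":
    fleet = [
        AssetReading("gen-01", avg_temp_c=32.0, vibration_mm_s=1.1, runtime_hours=4200),
        AssetReading("gen-02", avg_temp_c=41.5, vibration_mm_s=3.8, runtime_hours=9100),
    ]
    print(flag_for_service(fleet))  # ['gen-02']
```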
As long as the security strategies keep pace in a more remote, automated, cloud-based world, datacentre operations could be well positioned for a post-Covid-19 future, says Lawrence.
“Datacentre management is religious about this kind of continuity,” he adds. “The level of thinking they do is extraordinary – they really take a forensic engineering kind of view of every problem.
“I’ve been to datacentres where they have rooms with beds in, two weeks’ food supply and that kind of thing. They are ready for fires, floods and famine, because what they’re paid for is to keep the thing running at all times.
“They keep critical parts on site, they usually have contracts that enable the key components to be delivered fast, and obviously they have disaster recovery plans to move workloads to another site in extremis. No one has ever seen an issue like this before, so they had to think on the fly, but they’ve done a good job on the whole.”
Dual sourcing to stage comeback
Also, says Lawrence, some of the very large hyperscale datacentres, building as quickly and cheaply as possible with just-in-time manufacturing techniques, are talking about reintroducing dual sourcing for things like IT facilities and some spares.
This practice had been gradually abandoned over the last 20 years in favour of very tight supplier relationships, but it could potentially reduce the risks that come with over-reliance on certain suppliers or segments of a supply chain. Might this be good news for some of the smaller suppliers out there?
“Yes, but what you hear people talking about and what people actually end up doing can differ,” says Lawrence. “It all has costs involved. Once the emergency is over, quite often you might get that they’re not going to make that change they’ve thought about. It’s cheaper to have everything identical – but they are quite wary of that.”
Standardisation in the datacentre has been talked about for decades, he says, but has mostly been around the IT side, for racks, servers, and so on. When it comes to instrumentation, UPS (uninterruptible power supply) or integration of software components, cooling systems or building management software, suppliers haven’t really wanted to open up.
“They’re quite proprietary,” says Lawrence. “It usually takes either disruptive new suppliers or very powerful buyers to drive change, because they tend to go their own way and they tell the suppliers what they want. So if that happens, it will probably come through the likes of Google or Amazon.”
Read more about coronavirus and its impact on datacentres
Since the roll-out of social distancing measures by governments across Europe, businesses have had to adapt swiftly to having their largely office-based workforce transform into one that is almost exclusively working remotely.
Ensuring their IT infrastructure is equipped to cope with the sudden surge in employees accessing on-premise and cloud applications from remote locations in response to the Covid-19 coronavirus outbreak is a challenge facing many CIOs right now.
Datacentre resiliency think-tank issues 18-page guidance to help operators protect staff during the coronavirus pandemic while keeping their facilities ticking over.
Lawrence agrees that the shift to more remote working has largely worked well, not least by smoothing out daily workload patterns that previously saw large cyclical peaks and troughs of demand. But new risks must be accounted for and customers assured of service delivery.
This means service-level agreements (SLAs) and contracts should all be looked at again, and with the customer’s requirements more in mind, rather than the defence of the service provider. Non-critical systems could become “must-haves” as a result.
“An example might be a ticketing system at an airline,” says Lawrence. “It wasn’t built to be 24/7 critical, but now if you can’t use it, nothing happens. People will start to look at the end-to-end risk of that system. They may need to demonstrate that more. They won’t want people coming in all the time to check.
“So, definitely, everyone needs to be looking at staffing issues. Think about operating with two teams, like at a nuclear plant, with sophisticated processes for changing from one team to another, and being able to do that for ever.”
View from the coalface
Suppliers may be reluctant to provide comment that could be construed as guidance. However, Gabriel Bonilha, Europe, Middle East and Africa (EMEA) professional services manager at datacentre infrastructure provider Vertiv, largely agrees with Lawrence on many points.
He says: “The global supply chain has focused a lot on efficiency, but now we must pair it with very high levels of resiliency. We all want the best quality, price and lead times – not much news there. These challenges will ease after the pandemic, but how long will it take? And what about the next crisis to come along? No one can say.”
Considering where to localise manufacturing and assembly might help, along with continuous improvement across critical infrastructure servicing and maintenance, expanding and optimising to keep up with demand. The industry will need to do better around spares, logistics and dependencies – even though that is likely to require heavier investment in some areas. With that in mind, Bonilha notes that the current pandemic is, in fact, an opportunity to strike the right balance between resiliency and efficiency.
Volker Ludwig, senior EMEA datacentres vice-president at tech services provider NTT, says his firm has not seen “long delays” in terms of infrastructure component delivery due to Covid-19 in EMEA. However, because datacentres are the “backbone of digitisation”, he says it is crucial that long lead times and expected build-out trajectories are well planned. Teams or partners must be able to reach the locations in question, spare parts and maintenance must be available, and adaptability must be built in.
“To manage risk, we work with a minimum of two, or ideally three, different suppliers per critical infrastructure component, such as generators, UPS and chillers, to make sure we are not dependent,” says Ludwig.
“Covid-19 has also accelerated digitisation in terms of how datacentre operators work. Where there were lots of people on-site, in particular for testing in commissioning, we now have very few staff present. Most are working remotely and participating through video.”
Paul Hohnsbeen, vice-president for EMEA IBX operations at global interconnection and datacentre firm Equinix, echoes many of these sentiments.
“Covid-19 has accelerated digitisation in terms of how datacentre operators work”
Volker Ludwig, NTT
He reiterates that the pandemic has piled on the pressure, accelerating digital trends from remote working to virtual events, online streaming and purchasing. All these must be underpinned by shoring up critical datacentre operations, infrastructure and services, spurring change to working protocols and “creative planning” to keep operations going, he says. Support services need to be redesigned not only for sudden capacity expansion, but to guide customers managing their digital infrastructure.
“Datacentre operators also need a plan of action for when customers readdress levels of usage as the world returns to a new kind of normal,” says Hohnsbeen. “But what that will look like is yet to be seen.
“Support and advice from regional and national agencies have proven invaluable and we have also seen increased amounts of useful information being freely shared by industry bodies and other datacentre operators. We must continue to share knowledge to assist with the new set of challenges that customers will face during the recovery phase.”
Mark Daly, director at datacentre services provider Digital Realty, says the industry should benefit from “valuable geopolitical learnings” about equipment sourcing and location in the short, medium and longer term – and he suggests that embedding sustainability and “greener” sourcing into specifications should become more top of mind.
Adapt, but stay flexible
“Sourcing critical items locally instead of in a lower-cost country may become more popular,” says Daly. “At the same time, the importance of having multiple sources for critical items has been highlighted.
“Post-Covid-19, a company not managing its supply chain will see longer lead times. Equally, multiple parties could be competing for the same resources, either from an equipment or resource perspective.”
Daly points out that many datacentre organisations could be ramping up at the same time, with similar pressures from deferred activity. “Working closely with all parties in advance of that ramp-up will be incredibly important,” he says.
Beyond the world of datacentres specifically, the Hackett Group has been looking closely at coronavirus impacts on business. Any supply chain company post-Covid-19, the consultancy suggests, should, in the near term, optimise the design of its supply networks, as well as identify alternative supply scenarios incorporating multi-year cost and capacity modelling.
Hackett also recommends more focus on common, interoperable platforms or technologies that can easily be adopted for different use cases or locations.
“Automate core processes, including order management, planning and scheduling; standardise processes across geographies,” it says.
Final configurations should be delayed where possible, with outsourcing options ready to step into the breach, says the consultancy. Relationships should be analysed and managed to understand and enable rapid changes in business demand, with longer-term plans for different scenarios developed, even in the face of uncertainty.
But Hackett adds a caveat that is equally valid for datacentre operators: “No battle plan survives contact with the enemy.”
0 notes
kayawagner · 7 years ago
Text
NVIDIA Launches GPU-Acceleration Platform for Data Science, Volvo Selects NVIDIA DRIVE
Big data is bigger than ever. Now, thanks to GPUs, it will be faster than ever, too.
NVIDIA founder and CEO Jensen Huang took the stage Wednesday in Munich to introduce RAPIDS, accelerating “big data, for big industry, for big companies, for deep learning,” Huang told a packed house of more than 3,000 developers and executives gathered for the three-day GPU Technology Conference in Europe.
Already backed by Walmart, IBM, Oracle, Hewlett-Packard Enterprise and some two dozen other partners, the open-source GPU-acceleration platform promises 50x speedups on the NVIDIA DGX-2 AI supercomputer compared with CPU-only systems, Huang said.
The result is an invaluable tool as companies in every industry look to harness big data for a competitive edge, Huang explained as he detailed how RAPIDS will turbo-charge the work of the world’s data scientists.
“We’re accelerating things by 1000x in the domains we focus on,” Huang said. “When we accelerate something 1000x in ten years, if your demand goes up 100 times your cost goes down by 10 times.”
Over the course of a keynote packed with news and demos, Huang detailed how NVIDIA is bringing that 1000x acceleration to bear on challenges ranging from autonomous vehicles to robotics to medicine.
Among the highlights: Volvo Cars had selected the NVIDIA DRIVE AGX Xavier computer for its next generation of vehicles; King’s College London is adopting NVIDIA’s Clara medical platform; and startup Oxford Nanopore will use Xavier to build the world’s first handheld, low-cost, real-time DNA sequencer.
Big Gains for GPU Computing
Huang opened his talk by detailing the eye-popping numbers driving the adoption of accelerated computing — gains in computing power of 1,000x over the past 10 years.
“In ten years time, while Moore’s law has ended, our computing approach has resulted in a 1000x increase in computing performance.” Huang said. “It’s now recognized as the path forward.”
Huang also spoke about how NVIDIA’s new Turing architecture — launched in August — brings AI and computer graphics together.
Turing combines support for next-generation rasterization, real-time ray-tracing and AI to drive big performance gains in gaming with NVIDIA GeForce RTX GPUs, visual effects with new NVIDIA Quadro RTX pro graphics cards, and hyperscale data centers with the new NVIDIA Tesla T4 GPU, the world’s first universal deep learning accelerator.
One Small Step for Man…
With a stunning demo, Huang showcased how our latest NVIDIA RTX GPUs — which enable real-time ray-tracing for the first time — allowed our team to digitally rebuild the scene around one of the lunar landing’s iconic photographs, that of astronaut Buzz Aldrin clambering down the lunar module’s ladder.
The demonstration puts to rest the assertion that the photo can’t be real because Buzz Aldrin is lit too well as he climbs down to the surface of the moon while in the shadow of the lunar lander. Instead the simulation shows how the reflectivity of the surface of the moon accounts for exactly what’s seen in the controversial photo.
“This is the benefit of NVIDIA RTX, using this type of rendering technology we can simulate light physics and things are going to look the way things should look,” Huang said.
…One Giant Leap for Data Science
Bringing GPU computing back down to Earth, Huang announced a plan to accelerate the work of data scientists at the world’s largest enterprises.
RAPIDS open-source software gives data scientists facing complex challenges a giant performance boost. These challenges range from predicting credit card fraud to forecasting retail inventory and understanding customer buying behavior, Huang explained.
Analysts estimate the server market for data science and machine learning at $20 billion. Together with scientific analysis and deep learning, this pushes up the value of the high performance computing market to approximately $36 billion.
NVIDIA CEO Jensen Huang shows off a hand-drawn chart by a data scientist showing how GPU acceleration makes him more productive.
Developed over the past two years by NVIDIA engineers in close collaboration with key open-source contributors, RAPIDS offers a suite of open-source libraries for GPU-accelerated analytics, machine learning and, soon, data visualization.
RAPIDS has already won support from tech leaders such as Hewlett-Packard Enterprise, IBM and Oracle as well as open-source pioneers such as Databricks and Anaconda, Huang said.
“We have integrated RAPIDS into basically the world’s data science ecosystem, and companies big and small, their researchers can get into machine learning using RAPIDS and be able to accelerate it and do it quickly, and if they want to take it as a way to get into deep learning, they can do so,” Huang said.
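For a sense of what this looks like in practice, here is a rough, hedged sketch of a RAPIDS-style workflow: load data into a GPU DataFrame with cuDF and fit a model with cuML. The file name and column names are made up for illustration, and exact APIs can vary between RAPIDS releases.

```python
# Rough sketch of a RAPIDS-style workflow: load data into a GPU DataFrame
# with cuDF and fit a model with cuML. The file name and column names are
# hypothetical, and exact APIs can vary between RAPIDS releases.
import cudf
from cuml.cluster import KMeans

# Read a CSV straight into GPU memory (pandas-like API).
transactions = cudf.read_csv("transactions.csv")  # hypothetical input file

# Pandas-style feature engineering, executed on the GPU.
features = transactions[["amount", "items", "minutes_on_site"]].fillna(0)

# Fit a clustering model on the GPU without copying data back to the host.
model = KMeans(n_clusters=8, random_state=0)
labels = model.fit_predict(features)

transactions["segment"] = labels
print(transactions.groupby("segment")["amount"].mean())
```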
Bringing Data to Your Drive
Huang also outlined the strides NVIDIA is making with automakers, announcing that Swedish automaker Volvo has selected the NVIDIA DRIVE AGX Xavier computer for its vehicles, with production starting in the early 2020s.
DRIVE AGX Xavier — built around our Xavier SoC, the world’s most advanced — is a highly integrated AI car computer that enables Volvo to streamline development of self-driving capabilities while reducing total cost of development and support.
The initial production release will deliver Level 2+ automated driving features, going beyond traditional advanced driver assistance systems. The companies are working together to develop automated driving capabilities, uniquely integrating 360-degree surround perception and a driver monitoring system.
The NVIDIA-based computing platform will enable Volvo to implement new connectivity services, energy management technology, in-car personalization options, and autonomous drive technology.
It’s a vision that’s backed by a growing number of automotive companies, with Huang announcing Wednesday that, in addition to Volvo Cars, Volvo Trucks, tier one automotive components supplier Continental, and automotive technology companies Veoneer and Zenuity have all adopted NVIDIA DRIVE AGX.
Jensen also showed the audience a video of how, this month, an autonomous NVIDIA test vehicle, nicknamed BB8, completed a jam-packed 80-kilometer, or 50-mile, loop in Silicon Valley without the need for the safety driver to take control — even once.
Running on the NVIDIA DRIVE AGX Pegasus AI supercomputer, the car handled highway entrance and exits and numerous lane changes entirely on its own.
From Hospitals Serving Millions to Medicine Tailored Just for You
AI is also driving breakthroughs in healthcare, Huang explained, detailing how NVIDIA Clara will harness GPU computing for everything from medical scanning to robotic surgery.
He also announced a partnership with King’s College London to bring AI tools to radiology, and deploy it to three hospitals serving 8 million patients in the U.K.
In addition, he announced that NVIDIA Clara AGX — which brings the power of Xavier to medical devices — has been selected by Oxford Nanopore to power its personal DNA sequencer MinION, which promises to drive down the cost and drive up the availability of medical care that’s tailored to a patient’s DNA.
A New Computing Era
Huang finished his talk by recapping the new NVIDIA platforms being rolled out — the Turing GPU architecture; the RAPIDS data science platform; and DRIVE AGX for autonomous machines of all kinds.
Then he left the audience with a stunning demo of a nameless hero being prepared for action by his robotic assistants — before he returned to catch his robots bopping along to K.C. and the Sunshine Band, joined in the fun, and came back on stage with a quick caveat.
“And I forgot to tell you everything was done in real time,” Huang said. “That was not a movie.”
The post NVIDIA Launches GPU-Acceleration Platform for Data Science, Volvo Selects NVIDIA DRIVE appeared first on The Official NVIDIA Blog.
0 notes
unixcommerce · 7 years ago
Text
The Future of E-Commerce Platforms
The competition in the e-commerce market is fiercer than ever, as brands wrangle to outdo rivals by deploying the latest techniques and practices technology can offer. However, it’s hard to predict an industry-leader for a longer duration with the future of e-commerce constantly shifting.
This fast-paced evolution of e-commerce has not only expanded the digital footprint of online brands but also served as an impetus to accelerate the performance of shopping carts and increase revenues for online merchants.
Retailers aren’t the only ones affected. Platform developers are also facing the challenge of meeting the demands of multi-channel users. All these criteria along with the expectation of improved delivery times, customer service and greater product selection will define the future of e-commerce platforms.
1. Personalization & Customer Experience
E-commerce personalization and enhanced user experience will remain as the leading fundamentals of e-commerce. Customer purchase decisions will be influenced by a combination of showrooming and webrooming; product demos as well as the unique in-store experience being offered by retailers. Hence, e-commerce platforms will evolve continuously to offer the great versatility and depth-of-choice that comes with online shopping, along with the option of in-store purchasing, collection, or returns.
e-commerce platforms will evolve continuously to offer the great versatility and depth-of-choice
Creating a perfect profile of your customers is essential to fabricate a hyper-personalized shopping experience. For many years, retailers have relied on “online-to-offline,” or O2O business tactics that include online ads, coupons, and other enticements to nudge customers into the sales funnel. However, as consumers are growing more mindful, the O2O is slowly deteriorating and we’re starting to see what might be described as an “O2O 2.0” approach.
Walmart Inc. stores in China now allow shoppers to pay for their purchases via WeChat, a multi-purpose Chinese messaging, social media, and mobile payment application. Developed by Tencent, WeChat analyzes the data on consumers’ shopping habits and preferences to suggest shopping lists, coupons, and other items.
2. Integration of AI Systems
Rapidly changing and improving technology will define the next big step for the online-retail industry, which is the full automation of the processes across e-commerce platforms. AI systems integrated with e-commerce platforms can run algorithms to determine the optimum conditions for the sales procedure, highest converting design, etc. for every unique online shop. By using algorithms to effectively run tests, optimize settings and repeat the process on loop, retailers can maximize their web store capabilities and yield higher conversions.
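As a loose illustration of that “run tests, optimize, repeat” loop, the sketch below uses a simple epsilon-greedy strategy to route traffic toward the highest-converting storefront variant. The variant names and simulated conversion rates are invented for the example; a production system would plug in real traffic and analytics instead.

```python
# Minimal sketch of an automated "test, optimize, repeat" loop: an
# epsilon-greedy strategy that gradually routes more traffic to the
# highest-converting storefront variant. Variant names and the simulated
# conversion rates are purely illustrative.
import random

VARIANTS = ["layout_a", "layout_b", "layout_c"]
TRUE_RATES = {"layout_a": 0.021, "layout_b": 0.034, "layout_c": 0.027}  # unknown in practice

shown = {v: 0 for v in VARIANTS}
converted = {v: 0 for v in VARIANTS}

def choose_variant(epsilon=0.1):
    # Explore occasionally, otherwise exploit the best observed rate so far.
    if random.random() < epsilon or all(n == 0 for n in shown.values()):
        return random.choice(VARIANTS)
    return max(VARIANTS, key=lambda v: converted[v] / shown[v] if shown[v] else 0.0)

for _ in range(50_000):  # one simulated visitor per iteration
    v = choose_variant()
    shown[v] += 1
    if random.random() < TRUE_RATES[v]:  # stand-in for a real conversion event
        converted[v] += 1

for v in VARIANTS:
    print(v, shown[v], round(converted[v] / shown[v], 4))
```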
It is also anticipated that visual content will play a more important role in buying decisions. While internet giants like Google and eBay have already launched their own versions of visual search—which are still very much in their infancy—retailers like West Elm are also capitalizing on the latest AI technologies to add similar functionality to their stores.
In future stores will allow shoppers to input their height, weight, complexion, favorite color etc, and then suggest clothing purchases based on those results.
Digital marketing and e-commerce gurus predict that e-commerce platforms will integrate artificial intelligence and machine learning technologies into the shopping experience. This will give retailers more control over the buying process by gathering and storing information about shoppers’ buying habits. In future stores will allow shoppers to input their height, weight, complexion, favorite color etc, and then suggest clothing purchases based on those results. Retailers could use augmented reality to allow customers to try on clothes virtually and further suggest other clothing items like shoes or trousers to round out a complete outfit.
3. Measurement Across All Devices
Owing to the abundance of available devices, consumers are now actively engaging on multiple devices at once. This means that e-commerce platforms must create solutions that can help retailers to engage with customers on all fronts. Though a lot of e-commerce platforms like Magento and WooCommerce already provide extensions that can easily create a native mobile app for your store, a seamless mobile checkout is still a challenge for many platforms.
M-commerce has already achieved the trillion dollar mark
M-commerce has already achieved the trillion dollar mark and will slowly overtake e-commerce in the near future. Moreover, updates from Google mean that the mobile version of your website will soon become a starting point for what Google includes in their index, and an important ranking factor. So in future, we anticipate e-commerce platforms will work on providing more innovative mobile-friendly solutions for retailers.
4. The Decline of Monolithic Platforms
Traditional e-commerce platforms are inflexible and do not support features for performing dedicated tasks. To further define the problem, the issues can be broken down into three broad categories:
The first issue is the waste of resources that comes with buying a powerful server to handle the load from seasonal shoppers, but which may otherwise sit dormant during the rest of the year.
Secondly, servers in a certain physical location may not be able to provide the performance and speed to customers in another country. This can be a major setback to efforts in converting global customers.
Lastly, by housing all required servers in one location, they become more vulnerable to online attacks, server crashes, and numerous other issues—especially if the servers lack a backup. It can lead to major complications and tarnish brand reputation as well as loss of income.
To enhance customer experience retailers need to incorporate all sorts of customer analytics into their offerings. The efficiency ratio of this procedure varies depending on the platform and presents its challenges.
Converting a monolithic web application into smaller and simpler services…increases your website’s efficiency and its ability to scale
Custom-built platforms can successfully address these issues, but it can be a daunting task requiring a big team of highly-experienced developers working on the development and continuous optimization. Converting a monolithic web application into smaller and simpler services not only increases your website’s efficiency and its ability to scale, but will also allow you to react more quickly to change.
Back in 2011, Best Buy broke down its monolithic website into separate web services. This immediately benefited both the company and its customers. (However, this can be an expensive option for small retailers, who more than likely will not be able to justify these costs.)
5. Using Hyperscale Computing
Hyperscale is not only cost-effective and provides more space for innovation, but it allows retailers to explore different solutions for individual services. Moreover, retailers will have more freedom in managing the expenses and will be freed from the need of making a permanent commitment. Retailers will be able to focus on development in areas that highlight their strengths and attract customers in a highly competitive market.
There is no debate that cloud computing has helped e-commerce entrepreneurs to save both time and resources. It has opened the world to consumers and online retailers. Walmart has spent more than five years and millions of dollars just to build its own internal cloud network; this clearly indicates its determination to grab a bigger slice of online shopping and is an inspiration for all online retailers to quickly move to cloud computing, not only to increase their sales but to improve their in-store operations as well.
What the Future Holds
New technologies and the latest products are increasingly changing the manner in which most consumers shop online. Innovative devices, such as Google Home, are decreasing the number of steps required for completing a purchase. Consumers can create a wish list using Google Home and directly place their orders without even launching a web browser or other apps.
the most elaborate solution may not necessarily be the most effective
Social media channels have also become a big part of the online retail process. They have proven effective means of advertising products according to demographics and specific customer behavior. More importantly, customers can use the social media channels to gain direct access to the e-commerce platform. The future of these integration tools seems to suggest that soon customers may even be able to purchase by simply selecting a product image displayed on a social media channel.
While the complexities of e-commerce continue to increase, retailers are starting to learn the most elaborate solution may not necessarily be the most effective.
Reducing the e-commerce platform into manageable sections and utilizing consumer data to better develop functions to address specific customer behavior are approaches which will set retailers on the track to prepare for the near future of e-commerce.
  Featured image via DepositPhotos.
Source: https://www.webdesignerdepot.com
The post The Future of E-Commerce Platforms appeared first on Unix Commerce.
0 notes
ixiacom · 8 years ago
Text
Ixia to Showcase Advanced Security and Visibility Solutions at Black Hat USA 2017
CALABASAS, CA – July 19, 2017— Ixia, a leading provider of network test, visibility, and security solutions, will be demonstrating a wide range of technology advancements and interoperability at Black Hat USA 2017 to be held July 22nd - 27th  at Mandalay Bay in Las Vegas.
Attendees at this year’s Black Hat event can meet Ixia subject matter experts at Booth #208, on level one, to learn how to:
gain visibility into all traffic, which necessitates the ability to decrypt traffic that uses ephemeral key encryption mandated by the new TLS 1.3 standard 
resolve application performance bottlenecks, trouble shoot problems, and improve data center automation, as well as better utilize network analysis and security tools in hyperscale and microscale data centers
block up to 80 percent of malicious traffic, including Botnets and Ransomware, and the core principles for developing an appropriate resistance against increasingly complex ransomware threats
prepare for General Data Protection Regulation (GDPR)
boost network protection without negatively impacting performance
Ixia has 20 years of experience testing networks. Along the way, the company has witnessed a dramatic increase in network attack surfaces. Organizations must grapple with new cloud technologies, mobile devices on the network, a host of new compliance and monitoring tools, and the rise of the Internet of Things. Ixia analyzes the impacts of these new trends to help enterprises understand the shifting environment of network security, how security reach is expanding, new threats, and new areas of vulnerability. Download the Ixia 2017 Security Report to learn more.
Ixia and the Ixia logo are trademarks or registered trademarks of Ixia in the United States and other jurisdictions. All other trademarks used herein are the property of their respective owners.
Connect with Ixia via: LinkedIn, Twitter, Ixia Blog, or YouTube
Ixia Media Contact:
Denise Idone, Director of Corporate Communications Email: [email protected] Office: (631) 849-3500 Mobile: (516) 659-7049
0 notes
stephenlibbyy · 7 years ago
Text
Comparing Upgrade Strategies with Cumulus Linux
You’ve been running your Cumulus Linux network for a while, and everything has been running perfectly. Cumulus Linux has sliced your bread, you’ve gotten a promotion because your boss can’t believe how successful the project was, and the cost savings are being felt across the organization. Your company has even been able to fire the accountant because Cumulus Linux has surprisingly also done your taxes for the coming year, and in general everything is going swimmingly with your open networking.
So what now? Is our story over? Well, not exactly: enterprise networks have long lifespans. Hyperscalers typically operate on a refresh cycle of three to five years. For them, anything over three years old is considered tech debt; anything over five years old is considered a critical fault point. Your typical enterprise network may be around even longer than that. It is very common in this timespan for the needs of the applications to change, requiring the network to change too. This often requires support for newer features at some point in the lifecycle of the equipment.
While the scenario above is quite rosey, (Hey – this is our blog after all!) the reasons for wanting to upgrade are many and varied. New features, bug fixes, software end-of-life timelines and operational consistency are some of the many things that could drive the need to perform an upgrade. Understanding the upgrade options that exist in Cumulus Linux is helpful to being able to drive your internal processes in the most efficient way possible and reap the maximum benefit of web-scale networking.
Running a Linux distribution like Cumulus offers two very clear and very different paths to perform an upgrade. These paths are not unique to Cumulus and are consistent with many Linux distros:
In-place Upgrade (Package based upgrade)
Binary Upgrade (Clean Installation)
Binary upgrades are the cleanest option by far and we’re going to support that assertion with various data throughout this post.
But before we get to our comparison, let’s set the stage and talk through our options.
In-Place Upgrades
The ability to perform in-place upgrades is a pretty handy feature. With this method, as long as switches have access to the internet or a mirror of the Cumulus Linux repository, the switch can be upgraded to the most current release of software. Of course, you’ll need to steer traffic away from the node prior to the upgrade by using BGP Graceful Shutdown, BGP AS Prepends, or perhaps max-metric, but we’re focusing on the mechanics of the upgrade process in this blog so we won’t weigh ourselves down with the specifics of routing changes. We’ll save that instead for Part 2 of this article, where we dig into the operational components.
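As a rough sketch of what the in-place path looks like when driven from an automation host, the snippet below wraps the usual Debian-style package commands over SSH. The hostname is hypothetical, passwordless sudo is assumed, and you should confirm the exact upgrade procedure against the release notes for your target version.

```python
# Minimal sketch, assuming an automation host with SSH access and passwordless
# sudo on the switch. It drives a package-based (in-place) upgrade using the
# standard Debian/apt pattern; confirm the exact procedure for your release.
import subprocess

SWITCH = "leaf01"  # hypothetical switch hostname

def ssh(cmd: str, check: bool = True) -> None:
    # Run a single command on the switch over SSH; raise if it fails.
    subprocess.run(["ssh", SWITCH, cmd], check=check)

def in_place_upgrade() -> None:
    ssh("sudo apt-get update")        # refresh the package index from the repository or mirror
    ssh("sudo apt-get -y upgrade")    # pulls the latest release in the current train
    ssh("sudo reboot", check=False)   # a reboot is always required; the SSH session drops here

if __name__ == "__main__":
    in_place_upgrade()
```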
Pros
Configurations do not need to be backed-up – Common sense says you should ALWAYS back up your configurations and we would never go against that mindset, especially when it is so easy. However, when performing the in-place upgrade, after the upgrade your configurations will still be present on the switch which makes putting the switch back into service a bit more simple.
Rollback to a previous version is easy – If things don’t go so hot on the new version of software, the rollback process is extremely easy: perform the snapper rollback command and reboot to go back to right where you were (see the rollback sketch at the end of this section). This is very convenient in a lab scenario for testing purposes.
Automation not required – Generally, organizations choose this option because they like the idea of not having to worry about restoring the configuration after the upgrade. However, when an organization is leveraging automation for configuration management, restoring a configuration is trivially easy, so while this is a plus, I would say that choosing not to embrace automation is not a good long-term strategy.
Cons
Requires a reboot – Every in-place upgrade requires a reboot, without exception. The switch may not prompt you to tell you so, depending on your software version; however, a reboot is always implied and cannot be skipped – so don’t try.
Always takes you to current release in the current train – This is a more obvious pain point if you’re upgrading from the Cumulus repository: there is no method to specify a version. The upgrade will always take you to the latest release in the train, and not all organizations desire to be on the latest and greatest software, for a number of reasons. This might then require the organization to set up a mirror of the Cumulus repository, which adds extra administrative challenges and skill-set requirements and should be avoided unless the team has some prior experience with this sort of thing.
Requires switches to have access to the Internet or to a mirror of the Cumulus Repository – Whether the switches are upgrading from the Internet or from a mirror of the repository, the packages must come from somewhere. In many organizations it is not possible, for security reasons, to allow switches direct access to the internet so they might have to configure the switches to use a proxy. If that is not possible the last option is to mirror the Cumulus repository which again requires more organizational effort.
Leaves you in an inconsistent state if something goes wrong in the upgrade – Let me preface this bullet with the caveat that this is pretty rare. As there is more complexity in the in-place upgrade process there is a higher likelihood that something could go wrong. If things should go sideways at any point during the installation of the 10-50 packages involved in most in-place upgrades, the switch would remain in an inconsistent state until it could be analyzed, likely by a human operator, and corrected. In my opinion, complexity should be avoided wherever possible in life.
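And here is a matching sketch of the rollback path mentioned in the Pros list above: list the snapper snapshots, roll back to the pre-upgrade one, and reboot. The hostname and snapshot number are hypothetical, and snapper behaviour can differ between Cumulus Linux releases, so treat this as an illustration rather than a procedure.

```python
# Minimal rollback sketch, assuming snapper snapshots were taken around the
# upgrade. Hostname and snapshot number are hypothetical; verify snapper
# behaviour on your Cumulus Linux release before relying on this.
import subprocess

SWITCH = "leaf01"  # hypothetical switch hostname

def ssh(cmd: str, check: bool = True) -> None:
    subprocess.run(["ssh", SWITCH, cmd], check=check)

def rollback(snapshot_number: int) -> None:
    ssh("sudo snapper list")                         # inspect the available snapshots first
    ssh(f"sudo snapper rollback {snapshot_number}")  # revert to the chosen pre-upgrade snapshot
    ssh("sudo reboot", check=False)                  # boot back into the rolled-back state

if __name__ == "__main__":
    rollback(42)  # hypothetical snapshot number captured before the upgrade
```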
Binary Upgrades
With Cumulus, every time we release a new version of software we also release a matching binary image download. The binary image allows the operator to move to whatever version is contained in the binary. The binary image will completely overwrite whatever version of Cumulus Linux may already be installed on the switch.
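A comparable sketch of the binary path is below: back up the configuration, point the installer at a self-contained image over HTTP, and reboot so the clean install runs. The onie-install flags follow the commonly documented Cumulus pattern, but the image URL, hostname and backup paths are assumptions for this example; check the documentation for your release before using anything like it.

```python
# Minimal sketch of a clean (binary) install driven from an automation host.
# The image URL, hostname and backup paths are assumptions; the onie-install
# flags follow the commonly documented Cumulus pattern, so double-check them
# against the documentation for your release.
import os
import subprocess

SWITCH = "leaf01"                                              # hypothetical switch hostname
IMAGE_URL = "http://images.example.com/cumulus-linux-4.4.bin"  # hypothetical image location

def ssh(cmd: str, check: bool = True) -> None:
    subprocess.run(["ssh", SWITCH, cmd], check=check)

def binary_upgrade() -> None:
    # Back up configuration first: the clean install wipes it (see the Cons below).
    os.makedirs("backups", exist_ok=True)
    subprocess.run(["scp", f"{SWITCH}:/etc/network/interfaces", "backups/interfaces"], check=True)
    subprocess.run(["scp", f"{SWITCH}:/etc/frr/frr.conf", "backups/frr.conf"], check=True)

    ssh(f"sudo onie-install -a -i {IMAGE_URL}")  # stage the image so the installer runs on next boot
    ssh("sudo reboot", check=False)              # the clean install happens during this reboot

if __name__ == "__main__":
    binary_upgrade()
```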
Pros
Flexibility to move to ANY release from any other release – Binaries are totally self-contained. They have everything needed to move from any release to any other release. They even include ASIC firmwares in some cases in order to support new features. Using binary images provides maximum flexibility in your release planning, including the ability to move between major versions which is another advantage not available to in-place upgrades at this time.
Clean installation process is easy to understand – Package-based upgrades like the in-place upgrade have a lot of moving parts as many packages are being upgraded serially one after another to take you from version to version. However, the moving parts in the binary upgrade are relatively simple. Clearing the disk and installing all packages (concurrently) will bring you to the desired release. This is inherently more simple which is generally good.
Takes about the same amount of time as an in-place upgrade – With the binary upgrade, the packages are all installed concurrently rather than serially as in the in-place upgrade, and remember that the in-place upgrade requires a reboot too; factoring both in, the difference in time between the two methods is typically a matter of a few minutes. Since network software upgrades are performed fairly rarely, waiting a minute or two longer is not really a big deal.
Single workflow to harden – upgrades look just like new turn-ups – This is a subtle difference; however, when an organization has committed to using both the in-place upgrade process for existing nodes and the binary process for newly deployed nodes, there are twice as many processes to harden and qualify in the environment. This is another complexity argument: why harden two processes when you could spend your time working on other things, like delivering value to the business?
Always leaves you in a well-known state at the end of the upgrade – If a switch has had an in-place upgrade performed ten times, there is a higher likelihood that something will differ between it and a switch that has had a clean install at every pass. These kinds of insidious differences also tend to be hard to identify and troubleshoot, so it is best to avoid them.
Cons
Requires a place to store the binary image which is accessible by the switch – There is a good bit of flexibility here as to where the image can be stored: a web server (preferred), a TFTP server, or an FTP server. Where the image is located does not matter so much as that it is accessible to the switches. You might already have a place where you store images like this for other vendors, which makes this pretty easy to address.
Requires configurations to be put back in place after the upgrade – Alright, here is the elephant in the room. So you’ve done a clean install but after the install your switch needs to be reconfigured before it can be put back into service. This is an easy element to address if you’re using Zero Touch Provisioning (ZTP) and automation tools to put your configuration back in place, so if you’re not using these forms of automation… why not? Check out our last blog post on how to use ZTP and automation tools together to do just this.
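To show how ZTP and automation take the sting out of that reconfiguration step, here is a hedged sketch of a minimal provisioning script that pulls backed-up configuration files from a central server and reapplies them after a clean install. The server URL, switch name and file paths are hypothetical; the marker comment is the flag ZTP conventionally looks for in a provisioning script, but verify the exact requirements against your release's documentation.

```python
#!/usr/bin/env python3
# CUMULUS-AUTOPROVISIONING
# Hedged sketch of a ZTP-style provisioning script that restores configuration
# after a clean install. The config server URL, switch name and file paths are
# hypothetical; check your release's ZTP documentation for exact requirements.
import subprocess
import urllib.request

CONFIG_SERVER = "http://provisioning.example.com"  # hypothetical config backup store
SWITCH_NAME = "leaf01"                             # hypothetical switch identity

def fetch(path: str, dest: str) -> None:
    # Pull a previously backed-up configuration file from the provisioning server.
    urllib.request.urlretrieve(f"{CONFIG_SERVER}/{SWITCH_NAME}/{path}", dest)

def main() -> None:
    fetch("interfaces", "/etc/network/interfaces")             # interface configuration (ifupdown2)
    fetch("frr.conf", "/etc/frr/frr.conf")                     # routing configuration (FRR)
    subprocess.run(["ifreload", "-a"], check=True)             # apply the restored interface config
    subprocess.run(["systemctl", "restart", "frr"], check=True)

if __name__ == "__main__":
    main()
```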
Let’s Bring It Home
With Cumulus Linux you have lots of upgrade options, but the best option is the binary upgrade, as it provides the simplest and most consistent workflow to move from release to release. The in-place upgrade options will likely change and become even more robust in future versions of Cumulus Linux; however, that won’t change the nature of the complexity involved in the process. Binary upgrades are never going to disappear and are a safe long-term bet. In all cases, regardless of which upgrade mechanism is most appealing to you or your organization, there is a bit of setup you must do in your environment to get ready to perform the upgrade. Tune in for our next blog piece in this series on how to prepare for the upgrade, how to steer traffic away from the node, and how to address multichassis link aggregation (MLAG/CLAG) and dual-connected hosts.
This piece is co-written with the Cumulus Global Support Services (GSS) organization to leverage all the countless learnings in the trouble tickets seen by our support engineers.
The post Comparing Upgrade Strategies with Cumulus Linux appeared first on Cumulus Networks engineering blog.
0 notes
clearskytest · 2 years ago
Link
Hyperscale testing automation tools are designed to help organizations maximize their testing efficiency and effectiveness. They allow teams to quickly and accurately test their applications and infrastructure at scale, while providing high levels of automation and flexibility. These tools can be used to automate the testing of multiple applications or services, ensuring that all components are tested with the same rigor and accuracy.
0 notes