# Hub and Spoke Data Mesh
Implementing Data Mesh on Databricks: Harmonized and Hub & Spoke Approaches
Explore the Harmonized and Hub & Spoke Data Mesh models on Databricks. Enhance data management with autonomous yet integrated domains and central governance. Perfect for diverse organizational needs and scalable solutions. #DataMesh #Databricks
#Autonomous Data Domains #Data Governance #Data Interoperability #Data Lakes and Warehouses #Data Management Strategies #Data Mesh Architecture #Data Privacy and Security #Data Product Development #Databricks Lakehouse #Decentralized Data Management #Delta Sharing #Enterprise Data Solutions #Harmonized Data Mesh #Hub and Spoke Data Mesh #Modern Data Ecosystems #Organizational Data Strategy #Real-time Data Sharing #Scalable Data Infrastructures #Unity Catalog
What Makes MPLS and SD-WAN-Based Office Networks Different
Multi-protocol label switching (MPLS) or software-defined wide area network (SD-WAN)? Most business owners in Saskatoon who must choose between the two want to know which will work better for their office network. To make the right choice, you first need to understand their differences.
In short, MPLS is a dedicated, carrier-managed circuit, while SD-WAN is a virtual overlay that does not depend on any single physical link. Because of this, MPLS is better at preventing packet loss. The virtual overlay in SD-WAN, on the other hand, allows users to combine multiple network links such as LTE, broadband, and MPLS, offering more flexibility.
Compare Costs
Companies that use MPLS networks typically employ a hub-and-spoke model with separate MPLS connections to link their remote offices to the main data center. As a result, all data, processes, and even cloud-based services require routing back to the data center for processing and redistribution. This setup is often not cost-effective.
SD-WAN reduces costs by connecting multiple locations through private, distributed data exchanges and control points. This configuration allows users improved and more direct access to the cloud and networking services they require.
Issues with Security
The internal backbone of the service provider acts as a secure and managed link between the company’s data center and branch offices, a benefit well-known to companies using MPLS. Most public internet services do not offer this level of security.
However, it's important to note that MPLS does not inherently inspect the data it carries. A firewall must be installed at one end of the link to scrutinize all traffic for malware and other security threats.
Most SD-WAN systems face the same challenge and therefore need enhanced security measures; integrating security tools after deployment only complicates the process.
To prevent these issues, skilled network installation service providers incorporate SD-WAN connectivity directly into a next-generation firewall (NGFW) device. Each connection, therefore, features dynamic, meshed virtual private network (VPN) capabilities to secure data in transit. Several security tools, including firewalls, antivirus software, intrusion prevention systems, web filtering, and anti-malware, are employed to thoroughly scrutinize data.
How the Network Operates
In terms of performance, MPLS offers a consistent and fixed bandwidth level, which is advantageous for businesses. However, today's network traffic has variable speed demands. Thus, when businesses opt for an MPLS solution, there may be periods when expensive bandwidth remains underused, and times when MPLS may strain network connectivity due to the ever-increasing volume of data transmitted through new apps and devices.
SD-WAN can identify the types of apps in use and adjust bandwidth accordingly. It also sets up multiple parallel links to facilitate load sharing. Another advantage is failover, which involves switching to an alternate link if one connection fails. This feature ensures that latency-sensitive applications continue to perform optimally.
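To make the failover idea concrete, here is a minimal, hypothetical sketch of the kind of decision an SD-WAN edge makes: probe each available underlay link and steer traffic to the healthiest one, falling back automatically when a probe fails. The link names and probe targets below are placeholders, and real SD-WAN appliances use much richer signals (loss, jitter, application identification) than a single TCP connect time.

```python
import socket
import time

# Hypothetical probe targets, one per underlay link (broadband, LTE, MPLS).
LINKS = {
    "broadband": ("203.0.113.10", 443),
    "lte":       ("198.51.100.20", 443),
    "mpls":      ("192.0.2.30", 443),
}

def probe(addr, timeout=1.0):
    """Return TCP connect latency in milliseconds, or None if the link is down."""
    start = time.monotonic()
    try:
        with socket.create_connection(addr, timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None

def pick_link():
    """Choose the lowest-latency link that answered; None means a total outage."""
    results = {name: probe(addr) for name, addr in LINKS.items()}
    alive = {name: ms for name, ms in results.items() if ms is not None}
    return min(alive, key=alive.get) if alive else None

if __name__ == "__main__":
    best = pick_link()
    print(f"steering latency-sensitive traffic over: {best}")
```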
When MPLS May Be Preferable to SD-WAN
There are instances where MPLS, despite being considered an older technology by many, might be a more suitable choice than SD-WAN.
MPLS remains an effective method for establishing secure and isolated connections for apps, transactions, and data that need to be kept separate from public internet networks. It offers a high level of data security. Both MPLS and SD-WAN can also be used together to manage various online activities.
If you are undecided between SD-WAN and MPLS, do not hesitate to consult network service providers. They understand how to leverage each technology in different scenarios and can help you choose the best option for your digital-first office.
Author Bio:
Layer 3 is a Managed IT company in Saskatoon that supports businesses with their IT needs. The team has extensive expertise in cloud solutions, network setup and security, emergency recovery, data backups, web design, and development. Please visit their website to learn more about the team.
Can you set up a VPN over Hamachi?
Hamachi VPN setup
A Comprehensive Guide to Setting Up Hamachi VPN
Hamachi VPN, developed by LogMeIn, is a popular choice for creating secure virtual private networks (VPNs) over the internet. Whether you're looking to enhance your online privacy, access region-restricted content, or securely connect remote devices, setting up Hamachi VPN is a straightforward process. Here's a step-by-step guide to help you get started:
Download and Install Hamachi: Begin by downloading the Hamachi software from the official website. Follow the installation instructions provided, ensuring that you have administrative privileges on your device.
Create a Network: Once installed, launch the Hamachi application and sign in with your credentials or create a new account if you don't have one. After logging in, click on the "Network" tab and select "Create a new network." Choose a name and password for your network, as these will be used to connect other devices.
Invite Members: With your network created, you can now invite others to join. Share the network ID and password with the individuals you wish to connect. They will need to download and install Hamachi on their devices and join the network using the provided credentials.
Configure Network Settings: Within the Hamachi interface, you have the option to customize various network settings, including network type (mesh or hub-and-spoke), network permissions, and more. Adjust these settings according to your specific requirements.
Connect and Secure: Once all desired members have joined the network, Hamachi will establish secure connections between the devices, allowing them to communicate privately over the internet. You can now enjoy the benefits of a secure VPN, including encrypted data transmission and enhanced privacy.
Manage and Monitor: Hamachi provides tools for managing and monitoring your VPN network. From the Hamachi dashboard, you can view network status, monitor connected devices, and troubleshoot any issues that may arise.
By following these simple steps, you can set up a Hamachi VPN quickly and easily, enabling secure communication and collaboration across your devices, regardless of their physical location.
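For readers who prefer the command line, Hamachi also ships a CLI client on Linux, and the same steps can be scripted. The sketch below is illustrative only: it assumes the `hamachi` command is installed and that the subcommand names (`login`, `create`, `list`) match your client version, and the network name and password are placeholders.

```python
import subprocess

NETWORK = "my-team-net"               # hypothetical network name
PASSWORD = "use-a-strong-password"    # placeholder; pick your own

def hamachi(*args):
    """Run a hamachi CLI subcommand and return its output (usually requires root)."""
    result = subprocess.run(["hamachi", *args], capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    hamachi("login")                      # bring the client online
    hamachi("create", NETWORK, PASSWORD)  # create the network (step 2 above)
    print(hamachi("list"))                # show networks and connected peers
```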
Virtual Private Network configuration with Hamachi
Setting Up a Virtual Private Network with Hamachi: A Comprehensive Guide
In today's digital landscape, ensuring the security and privacy of your online activities is paramount. One effective solution that individuals and businesses alike turn to is setting up a Virtual Private Network (VPN). Among the various options available, Hamachi stands out as a reliable choice due to its ease of use and versatility. In this guide, we'll walk you through the process of configuring a VPN with Hamachi step by step.
First and foremost, you'll need to download and install the Hamachi software on all the devices you intend to connect to the VPN. Hamachi supports multiple operating systems, including Windows, macOS, and Linux, making it accessible to a wide range of users.
Once installed, launch the Hamachi application and create a new network. Choose a name for your network and set a strong password to ensure security. Share these credentials with the individuals or devices you want to grant access to your VPN.
Next, invite members to join your network by sending them an invitation via email or directly through Hamachi using their IP addresses. They'll need to download and install Hamachi if they haven't already and then enter the network name and password you provided.
With all members connected to the VPN, you can now begin securely sharing files, accessing remote resources, or playing games over the internet as if you were on the same local network. Hamachi employs encryption to safeguard your data, ensuring that your online activities remain private and protected from prying eyes.
In conclusion, configuring a Virtual Private Network with Hamachi is a straightforward process that offers enhanced security and privacy for your online endeavors. By following the steps outlined in this guide, you can enjoy the benefits of a VPN without the complexity typically associated with setup and maintenance.
Hamachi VPN installation guide
Setting up a Hamachi VPN is a straightforward process that allows users to securely connect devices over the internet. Follow this step-by-step guide to install and configure Hamachi on your computer:
Download Hamachi: Visit the official website of Hamachi and download the software compatible with your operating system (Windows, macOS, or Linux).
Install Hamachi: Double-click on the downloaded file and follow the on-screen instructions to install the software on your computer.
Create a Network: Launch Hamachi and click on the 'Create a new network' button. Enter a network ID and password for secure access.
Join a Network: If you want to join an existing network, ask the network administrator for the network ID and password. Enter this information in Hamachi to join the network.
Configure Network Settings: Customize network settings such as network type, network description, and member permissions to meet your specific requirements.
Connect Devices: Install Hamachi on all devices you want to connect to the VPN network. Enter the network ID and password to establish a secure connection between the devices.
Test the Connection: Verify the connection by accessing shared files, printers, or playing games over the Hamachi VPN network.
By following these simple steps, you can easily set up and configure Hamachi VPN on your devices to enjoy secure communication and data transfer over the internet.
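If you want to script the final verification step, a short reachability check like the one below can confirm that a peer answers over the VPN. The peer address is a placeholder; substitute the virtual IP Hamachi assigns to the remote device (commonly an address in the 25.x.x.x range on recent clients).

```python
import platform
import subprocess

PEER = "25.0.0.1"  # placeholder: the Hamachi virtual IP of the remote device

def reachable(host, count=3):
    """Ping a host and return True if the ping command reported success."""
    flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", flag, str(count), host], capture_output=True)
    return result.returncode == 0

if __name__ == "__main__":
    ok = reachable(PEER)
    print(f"{PEER} is {'reachable' if ok else 'NOT reachable'} over the Hamachi network")
```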
Setting up VPN using Hamachi
Setting up a VPN (Virtual Private Network) using Hamachi is a convenient way to ensure secure and private internet connections. Hamachi is a popular VPN application that allows users to create a virtual network that functions similarly to a local area network (LAN). This enables users to connect multiple devices over the internet and access shared files, printers, and other resources securely.
To set up a VPN using Hamachi, the first step is to download and install the Hamachi application on all the devices that will be part of the virtual network. Once installed, users need to create a network by either joining an existing network or creating a new one. Each network is protected by a unique password, ensuring that only authorized users can join.
After creating the network, users can easily add devices to the network by sharing the network ID and password. Once all devices are connected, users can securely share files, play games, or access remote desktop services as if they were on the same local network.
Hamachi offers a free version with limited features that are suitable for personal use. For businesses or users requiring more advanced features, there is a paid version available that offers enhanced security and network management options.
In conclusion, setting up a VPN using Hamachi is a simple and effective way to create a secure virtual network for personal or business use. With its easy-to-use interface and robust security features, Hamachi is an excellent choice for anyone looking to establish a private connection over the internet.
Hamachi VPN tutorial
The Complete Guide to Setting Up Hamachi VPN: A Step-by-Step Tutorial
In today's digitally interconnected world, ensuring secure and private communication over the internet has become a top priority for individuals and businesses alike. Virtual Private Networks (VPNs) play a crucial role in achieving this by creating encrypted tunnels for data transmission. One such VPN solution that has gained popularity is Hamachi.
Hamachi, developed by LogMeIn, offers a simple yet effective way to establish a virtual private network over the internet. Whether you want to securely access files from your home computer while traveling or play games with friends over a secure network, Hamachi can fulfill your needs. Here's a step-by-step tutorial to help you set up Hamachi VPN:
Download and Install Hamachi: Start by downloading the Hamachi software from the official website. Follow the on-screen instructions to install it on your computer.
Create a Network: Launch the Hamachi application and sign in with your account credentials. Click on the "Network" menu and select "Create a new network." Enter a name and password for your network.
Invite Members: Share the network ID and password with the individuals you want to join your network. They'll need to install Hamachi and join the network using the provided credentials.
Configure Network Settings: Customize your network settings as per your requirements. You can adjust security settings, enable/disable chat, and manage member permissions.
Start Using Hamachi: Once all members have joined the network, you can start using Hamachi to securely communicate, share files, or play games over the encrypted connection.
Troubleshooting: In case you encounter any issues during setup or usage, refer to the Hamachi documentation or seek assistance from online forums and communities.
By following these simple steps, you can harness the power of Hamachi to create a secure virtual private network for various purposes. Stay connected and protected with Hamachi VPN.
What is a VPN (Virtual Private Network) and How Does It Work?
A virtual private network (VPN) is programming that creates a safe, encrypted connection over a less secure network, such as the public internet. A VPN uses tunneling protocols to encrypt data at the sending end and decrypt it at the receiving end. To provide additional security, the originating and receiving network addresses are also encrypted.
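The core idea of encrypting at the sending end and decrypting at the receiving end can be illustrated in a few lines of Python. This toy sketch uses the third-party cryptography package with a pre-shared symmetric key; real VPN protocols negotiate keys per session and encrypt entire IP packets rather than a single application payload.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In a real VPN the key is negotiated per tunnel (e.g. via IKE); here it is shared up front.
key = Fernet.generate_key()
tunnel = Fernet(key)

payload = b"GET /payroll HTTP/1.1"       # traffic entering the tunnel
ciphertext = tunnel.encrypt(payload)     # what an eavesdropper on the public network sees
print(ciphertext[:40], b"...")

recovered = tunnel.decrypt(ciphertext)   # decrypted at the receiving end of the tunnel
assert recovered == payload
```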
VPNs are used to provide remote corporate employees, gig economy freelance workers and business travelers with access to software applications hosted on proprietary networks. To gain access to a restricted resource through a VPN, the user must be authorized to use the VPN app and provide one or more authentication factors, such as a password, security token or biometric data.
VPN apps are often used by individuals who want to protect data transmissions on their mobile devices or visit web sites that are geographically restricted. Secure access to an isolated network or website through a mobile VPN should not be confused with private browsing, however. Private browsing does not involve encryption; it is simply an optional browser setting that prevents identifiable user data, such as cookies, from being collected and forwarded to a third-party server.
How a VPN works
At its most basic level, VPN tunneling creates a point-to-point connection that cannot be accessed by unauthorized users. To actually create the VPN tunnel, the endpoint device needs to be running a VPN client (software application) locally or in the cloud. The VPN client runs in the background and is not noticeable to the end user unless there are performance issues.
The performance of a VPN can be affected by a variety of factors, among them the speed of users' internet connections, the types of protocols an internet service provider may use and the type of encryption the VPN uses. In the enterprise, performance can also be affected by poor quality of service (QoS) outside the control of an organization's information technology (IT) department.
VPN protocols
VPN protocols ensure an appropriate level of security to connected systems when the underlying network infrastructure alone cannot provide it. There are several different protocols used to secure and encrypt users and corporate data. They include:
IP security (IPsec)
Secure Sockets Layer (SSL) and Transport Layer Security (TLS)
Point-To-Point Tunneling Protocol (PPTP)
Layer 2 Tunneling Protocol (L2TP)
OpenVPN
Types of VPNs
Network administrators have several options when it comes to deploying a VPN. They include:
Remote access VPN
Remote access VPN clients connect to a VPN gateway server on the organization's network. The gateway requires the device to authenticate its identity before granting access to internal network resources such as file servers, printers and intranets. This type of VPN usually relies on either IP Security (IPsec) or Secure Sockets Layer (SSL) to secure the connection.
Site-to-site VPN
In contrast, a site-to-site VPN uses a gateway device to connect an entire network in one location to a network in another location. End-node devices in the remote location do not need VPN clients because the gateway handles the connection. Most site-to-site VPNs connecting over the internet use IPsec. It is also common for them to use carrier MPLS clouds rather than the public internet as the transport for site-to-site VPNs. Here, too, it is possible to have either Layer 3 connectivity (MPLS IP VPN) or Layer 2 (virtual private LAN service) running across the base transport.
Mobile VPN
In a mobile VPN, a VPN server still sits at the edge of the company network, enabling secure tunneled access by authenticated, authorized VPN clients. Mobile VPN tunnels are not tied to physical IP addresses, however. Instead, each tunnel is bound to a logical IP address. That logical IP address sticks to the mobile device no matter where it may roam. An effective mobile VPN provides continuous service to users and can seamlessly switch across access technologies and multiple public and private networks.
Hardware VPN
Hardware VPNs offer a number of advantages over the software-based VPN. In addition to enhanced security, hardware VPNs can provide load balancing to handle large client loads. Administration is managed through a Web browser interface. A hardware VPN is more expensive than a software VPN. Because of the cost, hardware VPNs are a more realistic option for large businesses than for small businesses or branch offices. Several vendors, including Irish vendor InvizBox, offer devices that can function as hardware VPNs.
VPN appliance
A VPN appliance, also known as a VPN gateway appliance, is a network device equipped with enhanced security features. Also known as an SSL (Secure Sockets Layer) VPN appliance, it is in effect a router that provides protection, authorization, authentication and encryption for VPNs.
Dynamic multipoint virtual private network (DMVPN)
A dynamic multipoint virtual private network (DMVPN) is a secure network that exchanges data between sites without needing to pass traffic through an organization's headquarter virtual private network (VPN) server or router. A DMVPN essentially creates a mesh VPN service that runs on VPN routers and firewall concentrators. Each remote site has a router configured to connect to the company’s headquarters VPN device (hub), providing access to the resources available. When two spokes are required to exchange data between each other -- for a VoIP telephone call, for example -- the spoke will contact the hub, obtain the necessary information about the other end, and create a dynamic IPsec VPN tunnel directly between them.
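The control flow can be pictured with a toy registry: each spoke registers its current address with the hub, and when one spoke needs to reach another it asks the hub where to find it, then builds the tunnel directly. The sketch below is only a conceptual model of that resolution step (roughly the role NHRP plays in DMVPN), not working tunnel code; all names and addresses are hypothetical.

```python
class Hub:
    """The hub only brokers addresses; data then flows spoke-to-spoke."""
    def __init__(self):
        self.registry = {}                  # spoke name -> public address

    def register(self, name, address):
        self.registry[name] = address

    def resolve(self, name):
        return self.registry.get(name)


class Spoke:
    def __init__(self, name, address, hub):
        self.name, self.address, self.hub = name, address, hub
        hub.register(name, address)         # spoke registers with the hub on startup

    def call(self, other_name):
        peer = self.hub.resolve(other_name)  # ask the hub, like an NHRP resolution request
        if peer is None:
            return f"{other_name} is unknown to the hub"
        return f"{self.name}: built dynamic tunnel straight to {other_name} at {peer}"


hub = Hub()
branch_a = Spoke("branch-a", "198.51.100.7", hub)
branch_b = Spoke("branch-b", "203.0.113.9", hub)
print(branch_a.call("branch-b"))            # e.g. a VoIP call going spoke-to-spoke
```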
VPN Reconnect
VPN Reconnect is a feature of Windows 7 and Windows Server 2008 R2 that allows a virtual private network connection to remain open during a brief interruption of Internet service. Usually, when a computing device using a VPN connection drops its Internet connection, the end user has to manually reconnect to the VPN. VPN Reconnect keeps the VPN tunnel open for a configurable amount of time so when Internet service is restored, the VPN connection is automatically restored as well. The feature was designed to improve usability for mobile employees.
Security limitations of a virtual private network explained
Any device that accesses an isolated network through a VPN presents a risk of bringing malware to that network environment unless there is a requirement in the VPN connection process to assess the state of the connecting device. Without an inspection to determine whether the connecting device complies with an organization's security policies, attackers with stolen credentials can access network resources, including switches and routers.
Security experts recommend that network administrators consider adding software-defined perimeter (SDP) components to their VPN infrastructure in order to reduce potential attack surfaces. The addition of SDP programming gives medium and large organizations the ability to use a zero trust model for access to both on-premises and cloud network environments.
Source:
https://searchnetworking.techtarget.com/definition/virtual-private-network
LogMeIn Hamachi download unmanaged
As I've covered in my previous article about setting up a home server, a centralized location for your data can prove to be a huge benefit to yourself and the overall productivity and collaboration of your family. A home server can help you to make all of your data accessible from within your local network, but what about when you want to access your data from afar? This is where the concept of VPNs, or Virtual Private Networks, comes into play. In basic terms, a VPN allows you to link multiple machines via the internet, and thus access all of your data and information from the server. The concept of VPNs is not a new one in the least; the concept has been around since the heyday of dial-up internet, and is the basis for the technology that allows employees to "telecommute". Traditionally, VPNs take a great deal of time, effort, and resources to set up, and are not typically conceived as being reasonable for a home user to set up. However, with a zero-configuration VPN service such as LogMeIn's Hamachi product, setting up a VPN and accessing your server's contents from afar becomes a trivial task that can be completed within a matter of minutes.
The first step in configuring Hamachi is to access your server either physically or via Remote Desktop or VNC. Once you are logged into your server, navigate your web browser to the Hamachi site and select the "Get Started" button. LogMeIn offers two versions of Hamachi: a managed version and an unmanaged version. The managed version has a bit more features, including an online control panel. This tutorial will cover the managed edition, as this version gives you more control over your VPN. To begin the process, click the "Create Account" button below the "Managed" product. The next screen you will be presented with will ask for a bit of information to establish your account. It's a pretty simple form, and should only take a few seconds to fill out. On the screen that follows, you will want to select the "Start Creating Networks" button. This will allow you to establish the name and description of the network, as well as choose the infrastructure type of your network. While LogMeIn Hamachi offers three network types (Mesh, Hub-and-Spoke, and Gateway), most people will want to use either the "Mesh" or the "Hub-and-Spoke".
Cyber Punk-y stuff
I watched the Cyberpunk 2077 trailer and couldn't help getting hyped. Felt like writing this.
'Get the fuck behind that building,' Her eyes were frantic. Sclera with the cream of aged ivory contrasted with pitch black pupils. No irises. 'Move, you fuckwit,' She hissed. We moved, behind a concrete monolith. 'They didn't see us, and there's only two of them,' I shot back. We could take out two. Easily. We already had taken out many pairs. 'One of them has an e-synapse jammer,' 'Yeah, we hit him first,' 'Are you fucking retarded, it's broad daylight,' 'You have those legs for a reason,' Childish stubbornness on my part. 'He vaguely points that thing in our direction- both of us are fried!' She was right. Looking back, she was entirely in the right. In the moment, I was convinced we could take them on. But Ammi won by yanking me further into the shadow. She pushed into the darkness, through alleyways and around corners. I followed, closely. The warm weather was sticky humid. The thin pants I wore were a good choice. Briskly, we walked further into darkness. We passed no one. This area had been emptied out. Evacuated. It was being cleaned. Some pretend plague had struck. Pest control, in reality. The silence was almost suffocating. The crunch of dirt on asphalt under our shoes was the only thing I could hear. We slunk around the dark, disturbing nothing other than the ground. Every alleyway intersection we reached we cleared, looking down the intersecting road then walking past. At first. Time was running thin. We needed to reach the old hub fast. Unexpected patrols had slowed us down. 'Hey!' Like this one. Ammi gasped and slunk behind a generator. I spun on the heels of my shoes. 'Hello,' I spoke slowly and shakily, raising my arms. It was only one 'cleaner', somewhat down the alley. I interlocked my fingers and rested them on the base of my skull. I flicked my thumb over a raised bump in the skin of my neck. 'No need to do that,' the patroller called back. The LED on his shoulder reflected off of the smooth black steel helmet he wore. 'Okay,' I lowered my arms to my side. 'This suburb is restricted. Head back now,' He was still walking towards us. 'Oh, oh is it, a-a friend of mine said to meet him here,' 'He was lying. No one is supposed to be here,' the patroller's voice was deep, commanding, 'Virus,' He carried a heavy-looking assault rifle. It sat, slightly bouncing on the kevlar mesh suit of the company soldier. Things were taking longer than they should have. He stopped in his tracks and raised the rifle to eye level, quickly. 'Who's that,' He jerked the gun to where Ammi was hidden. 'She-she's my girlfriend,' I stammered. 'Look at me!' he shouted at Ammi. To pull attention back to me I yelped. 'Don't!' His rifle swung back to me, 'she- she's got anxiety, an attack will just make her sick, please,' The patrolman kept the rifle trained on my head. I could feel sweat dripping down my forehead. 'Walk,' the command was followed by Ammi and me. We turned back the way we came, Ammi being careful to not let the patroller see her clearly. He followed us, not too closely. He didn't take the exact route. Ammi kept slightly ahead of me and I was able to follow her lead. Until I felt a click in my brain. A tiniest switch. Couldn't explain how it felt if I tried. I tripped on a stone, on purpose. 'Walk!' the patroller wasn't coming closer. Fuck. Time to try this. From a belly-down position, I pushed up, onto my feet which I then used to spin to face the patroller. He was startled, and had stopped aiming at me or Ammi while we walked. One chance.
While he was raising his gun, I kicked the stone at him, missing completely. Ammi took the opportunity to pounce, using her metallic muscles to reach the patroller in one leap. In the air, she procured a shiv and gracefully glided towards the armed man with point outstretched. She landed on her mark, and stabbed him in the gut. Trying to miss anything vital, and holding him down. She smashed the helmet, exposing his face. I arrived at the patrolman's body moments later. With precision uncharacteristic of me, I yanked off the metal covering that sat behind the thin kevlar fiber of his suit. In a small port, just below the solar plexus, sat what I was looking for: the company locator. A device, like the USBs of yesteryear, but on a right angle downwards to make the design more space-efficient and ergonomic. Quickly, I pulled it out of the plug and scrunched up my left sleeve. With no more than a split second, I had plugged the device back into a port, on my arm. A blinking strip of light on the locator didn't miss a beat. The patrolman gargled angrily, and Ammi retorted with a swift punch to the nose. She then quickly replaced the shiv into his neck. No more gargling. Panting, I stumbled back to a building and sat down, resting on it. My head ached from the sudden movement and the new locator device. Ammi dragged the body to a skip dumpster and placed him in and closed it. 'That was much longer than before, Tauno,' Ammi was stern. 'Yeah, it's getting overwhelmed. I don't think it's going to be able to work next time.' The terminal chip in my arm was able to detect the locator's path and predict where the soldier was going to be told where to go. Even if he didn't. It would then send false GPS info back into the system, and according to the monitor, nothing was up. But it was doing this hundreds of times a second, for three locators now. To maintain a steady speed of data falsification and transmission, the chip needed to slow all its other processes down. With no more time to spare we headed off.
The altercation had happened a lot closer to the hub than I had thought. We entered the old mall through a back entrance. Had looting not occurred, the glass of the door may not have been shattered, preventing our entrance. We just had to hope that the looters hadn't found what we were looking for. The smooth white ceramic tiles reflected the small amount of light bouncing in from the street. The sun was setting. We had been out here too long. We found the old staff door. It wasn't hidden, a deep green against the harsh white of the tiles. The door was dented and the handle looked like someone had taken a few serious attempts at breaking it. Ammi walked up and gently tried the handle. Nothing. She looked back with a little smirk, 'Had to give it a shot,'. The door was fucked beyond a keycard's use. But not beyond mechanical limbs. Looters rarely had metal arms or legs. Police and company soldiers would be swarming the hotspots - getting caught stealing AND being a 'borg? That was most certainly doom. While not illegal to be a cyborg, it was illegal to go to backstreet bodyshops. And nobody I or Ammi knew could afford half-decent legal metal. Ammi made scrap metal of the door, giving it a hearty boot at the latch. It loudly clanged against a matte grey steel wall. We were deep enough in the mall that stealth wasn't a matter. Just had to be sure that we were quick was all. Down three doors, stairs on the left and second door on the right. This door was open anyway. That was worrying, but could also have meant nothing. We entered the old hub. My hands were held in tight fists, Ammi kept her shiv hidden, but still within a split second's grab. A soft buzzing from one of the corners let us know the place was powered. Evacuated suburbs were always cut from the power grid. Soft blue light from screens washed over us as we searched the apartment-sized area. It was the hub for all activities we used to do - illicit or not. Seeing it not busy with people was odd. 'I found the safe, would the code be in here?' Ammi said. 'Surely not,' I snorted. 'Now we need a "Construct",' That was the next item on our list that Ammi had written. She gave it a short description as well. It was a small box, with a few switches on a face with a small lens, sitting on one of the higher shelves on the wall. 'I got it,' I said. Out of curiosity, I flicked one of the switches. Out of the lens, some light bled. Looking in, the box certainly seemed bigger than its physical form. A thin grid of blue lines seemed to be about half a metre away inside the box. In front of that was a flickering wireframe of a blank face. This was old tech for sure. I called Ammi over and stood her about half a metre away and looked through the box. Disappointingly it didn't scan her face. Ammi laughed when I told her what I tried. She grabbed the machine and flicked the other switches around and peered into the box. Standing static, I noticed her finger sliding around on one of the other faces of the box. She looked up from the box and blinked a couple times at me, 'It's old tech, but check it,' She paused, proudly, 'I used some of these before,' I may have snatched it back, but I was engrossed with the idea of this old machinery. Instead of a blank face, it seemed to show a specific face; whose, I couldn't say. There was some log of speech to the right of the face. Ammi snatched it back before I could read and looked in for a few moments.
'It says if we hook it up, it knows the code to the safe,' 'How?' I was in disbelief. 'It's kinda like an AI, it knows some stuff,' We hooked it up and it opened the safe. Inside sat our riches. A shit ton of cocaine, some MDMA and a small amount of heroin. But behind that was a small mountain of speed - street gold. We were rich. As long as no one else came looking for the goods.
Why You Need to Look Beyond Kafka for Operational Use Cases, Part 4: Streaming with Dynamic Event Routing
Preface
Over the last few years Apache Kafka has taken the world of analytics by storm, because it’s very good at its intended purpose of aggregating massive amounts of log data and streaming it to analytics engines and big data repositories. This popularity has led many developers to also use Kafka for operational use cases which have very different characteristics, and for which other tools are better suited.
This is the fourth in a series of four blog posts in which I’ll explain the ways in which an advanced event broker, specifically PubSub+, is better for use cases that involve the systems that run your business. The first three covered the need for filtration and in-order delivery, the importance of flexible filtering, and the finer points of security. My goal is not to dissuade you from using Kafka for the analytics use cases it was designed for, but to help you understand the ways in which PubSub+ is a better fit for operational use cases that require the flexible, robust and secure distribution of events across cloud, on-premises and IoT environments.
Summary
One of the main goals of making applications event-driven is allowing data to move more freely between them. Decoupling producers and consumers, i.e. letting them interact in an asynchronous manner, frees data from the constraints of synchronous point-to-point connections so it can be used by multiple applications in parallel, no matter where those applications are deployed.
As organizations adopt event-driven architecture, there naturally comes a question on the desired scope of an event. How can they prevent silos of events that limit their usefulness while maintaining some governance, or visibility, into the flow of data throughout the organization? This is where an event mesh comes in.
Gartner says an event mesh “provides optimization and governance for distributed event interactions. Event-driven computing is central to the continuous agility of digital business. The distributed optimized network of event brokers facilitated by the event mesh infrastructure aims to enable a continuous digital business native experience.”1
With that in mind, let’s look at the technologies Solace and Kafka use to implement an event mesh. First, we need to break down the above definitions into a set of components or features that we can compare: support for open standards, security, scalability through flexible subscriptions, governance, WAN optimization, and dynamic message routing.
Analyst Intellyx describes the role an event mesh plays as follows: “Event mesh plays a critical role as an IT architecture layer that routes events from where they are produced to where they need to be consumed – regardless of the system, cloud, or protocols involved.“
Kafka’s answer to this list is event and event-metadata replication using Apache MirrorMaker. MirrorMaker was developed as part of the Apache Kafka project to provide geo-replication between clusters. Other tools like Apache MirrorMaker 2, Confluent Replicator and LinkedIn Brooklin have evolved Kafka data replication to address shortcomings such as:
the need to restart to make changes to whitelists or blacklists for filtering;
the need for a complete stoppage in traffic flow on any rebalance due to topic or partition changes;
the lack of topic metadata replication, such as partition count and offset;
the lack of an automated ability to prevent replication loops.
All these tools rely on point-to-point data replication, so they fall short of what is required to implement an easily manageable, intelligent event mesh across your enterprise.
Solace, on the other hand, was designed to be the foundation of an enterprise-wide event mesh for demanding use cases such as those found in capital markets, and has a full set of built-in features that address all of the requirements, including the dynamic routing of events or static bridging of events between clusters. From early on, Solace has been bringing the types of dynamic routing features that make the internet work to the event layer of your enterprise. This enterprise-wide event routing capability with dynamic subscription propagation is what makes Solace’s solution truly an event mesh.
Below we will compare and contrast the technical details of these two approaches and apply each technology to a series of use cases, comparing functionality and ease of use.
Technical Details
Kafka’s replication model is a lot like database replication, i.e. based on reading the transaction log of a source database and replaying it to a destination database. The foundational technology is a process that reads from a source database and writes to a destination. Kafka does the same thing; the replication process reads from a remote cluster and publishes to a local cluster.
Reference: https://blog.cloudera.com/a-look-inside-kafka-mirrormaker-2/
This approach relies on configuration files that dictate which events are replicated between source and destination clusters, and has no real concept of meshed brokers, just a series of broker point-to-point links.
Solace’s dynamic message routing (DMR) capability takes an approach much more like IP routing, which is more flexible, hierarchical and scalable than database replication. Solace consumer subscriptions are propagated across the routing control links to attract events across the data links. In this model there are no intermediary replication processes and no external configuration files that need to be kept in sync with application needs – event brokers connect directly to each other. By simply adding subscriptions, applications can receive filtered events from anywhere in the event mesh, no matter where they are produced.
Reference: https://ift.tt/3jqsosX
Static vs Dynamic Routing
Kafka replication tools forward events between clusters based on static routes, which are regex-based whitelists and blacklists. When replication tool providers mention “dynamic routes”, they are talking about being able to edit the whitelist or blacklist without restarting the replication tool, i.e. dynamically editing the static route lists, but manual configuration is still required. The effect of static routing is that data will be replicated to a remote cluster based on administratively set rules, not on the actual data needed within the remote cluster. Events matching the whitelist will be replicated whether or not the end consumers have changed their subscription sets. Data might be replicated when it is no longer required at the remote site. Likewise, other data that is required might not be replicated if the whitelist is not kept up to date in real time.
The disadvantage is that there needs to be cross-functional coordination between the application teams and the replicator process administrator to ensure the correct data is being replicated from wherever it is being produced. It is difficult to tell whether data is being replicated (an expensive function) but never consumed by any application. There is a place for static routes in simple topologies, for example a hub-and-spoke topology where all events from a spoke event broker are sent to a single hub. These routes should be simple to implement and simple to understand.
Solace Dynamic Message Routing allows client subscriptions to be dynamically routed across the event mesh of Solace HA clusters. This is similar to IP routing and allows the injection of client subscriptions to determine if published events will be streamed to a remote HA cluster. So only the data that is required by the end consumers will be routed. If the consumers want additional events, the addition of new subscriptions locally will be propagated across the event mesh and attract events no matter where they are originally published. If the consumer no longer needs the events, they can remove their subscriptions and the removal will also be propagated throughout the brokers in the mesh.
This pruning back of subscriptions reduces the number of events flowing between clusters. This dynamic subscription propagation also supports message exchanges that require more dynamic event flows – such as request/reply, market data, gaming odds and some IoT event flow use cases. The dynamic nature of subscription propagation means there are no config files to keep in sync with client demands because the intelligent event mesh learns which subscribers are interested in what events and automatically propagates that throughout the event mesh. The advantage is that this allows the consumer demand to be the driver behind which events transit the event mesh.
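The behaviour described above can be sketched with a toy two-broker mesh: a subscription added at one broker is propagated to its peer, and only events matching a propagated subscription are forwarded across the simulated WAN link, while removing the subscription prunes the flow back. This is a conceptual illustration only (it uses shell-style wildcards as a stand-in for real topic syntax), not Solace’s implementation or API.

```python
from fnmatch import fnmatch

class Broker:
    def __init__(self, name):
        self.name = name
        self.local_subs = set()     # subscriptions added by clients connected here
        self.remote_subs = set()    # subscriptions propagated from the peer broker
        self.peer = None

    def link(self, other):
        self.peer, other.peer = other, self

    def subscribe(self, pattern):
        self.local_subs.add(pattern)
        self.peer.remote_subs.add(pattern)      # propagation over the routing control link

    def unsubscribe(self, pattern):
        self.local_subs.discard(pattern)
        self.peer.remote_subs.discard(pattern)  # pruning back reduces WAN traffic

    def publish(self, topic):
        delivered = [p for p in self.local_subs if fnmatch(topic, p)]
        forwarded = any(fnmatch(topic, p) for p in self.remote_subs)
        print(f"{self.name}: {topic} -> local {delivered or 'none'}, "
              f"{'forwarded over WAN' if forwarded else 'kept local'}")

on_prem, cloud = Broker("on-prem"), Broker("cloud")
on_prem.link(cloud)

cloud.subscribe("orders/new/*")        # cloud app declares interest
on_prem.publish("orders/new/eu")       # matches a propagated subscription: forwarded
on_prem.publish("orders/shipped/eu")   # no remote interest: stays local
cloud.unsubscribe("orders/new/*")
on_prem.publish("orders/new/eu")       # interest removed: kept local again
```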
Limitation of Coarse Topics & Filters
Beyond the limitation of file-based static routes in Kafka defining which events are routed between clusters, Kafka’s coarse topics limit the ability to filter out the exact events needed per cluster and makes it very difficult to rebalance loads across clusters. There is no way in Kafka to replicate a subset of a Topic without adding data filtering applications to filter and re-publish – which have the disadvantages explained in my previous blog <Flexible topic and subscriptions>.
With Solace there are many different options on how to filter and route topics that are based on best practices. The fine-grained filtering allows filtering from application domain all the way down to ranges of event senders or event objects. Filtering might be location based or even object types. The point is with the event topic being descriptive of the event type and the event properties, any of these values can be used to make filtering decisions so only the events the consumer wants will be sent across to the remote cluster, and without the need to write, deploy and manage data filtering applications.
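As a rough illustration of level-based filtering, the sketch below implements a simplified matcher in the spirit of hierarchical topic wildcards, where `*` stands for exactly one level and a trailing `>` for one or more remaining levels. PubSub+ wildcard semantics are richer than this (for example, prefix matching within a level), so treat it as a conceptual model, and note that the example topics are hypothetical.

```python
def matches(subscription: str, topic: str) -> bool:
    """Simplified hierarchical match: '*' = exactly one level, trailing '>' = 1+ levels."""
    sub_levels, topic_levels = subscription.split("/"), topic.split("/")
    for i, s in enumerate(sub_levels):
        if s == ">":
            return i < len(topic_levels)        # '>' must cover at least one level
        if i >= len(topic_levels):
            return False
        if s != "*" and s != topic_levels[i]:
            return False
    return len(sub_levels) == len(topic_levels)

# Filter an order stream down to one product, regardless of region or customer.
events = [
    "ols/order/new/v1/eu/product42/cust007",
    "ols/order/paid/v1/us/product42/cust314",
    "ols/order/new/v1/us/product99/cust007",
]
wanted = "ols/order/*/v1/*/product42/>"
print([e for e in events if matches(wanted, e)])   # keeps only the product42 events
```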
Consistent Topics Throughout the Event Mesh
In order to keep events from looping between replicators, Kafka replication processes need to either prepend the source cluster name to the event topic (MirrorMaker 2) or add additional headers (Confluent Replicator) to enable poison-reverse blacklists, so an event can’t be forwarded into a cluster from which it originated. In more complex topologies where events transit through an intermediary cluster, headers or prepended parts of the topic will be stacked. For example, for an event transiting clusters A -> B -> C, the record in C will have two provenance headers, one for B and one for A, or the topic will be: B.A.OriginalTopic. Depending on the replication tool, it may be up to the consumer to deal with the modified topic.
Like IP routing, Solace’s dynamic message routing has a real set of routing protocols with rules on event streaming that naturally prevent things like looping. These protocols understand mesh topologies and streams as they move events from source cluster to destination cluster, and onward to the end consumer. This means producers and consumers see the event mesh as a truly single event streaming system and do not need to consider source and destination cluster impacts or limitations on topics. Consumers can influence where events stream within an event mesh by simply adding subscriptions in a new location and removing like subscriptions from an existing location.
Visibility Into Where Data is Being Published and Consumed
Whether events are distributed via a series of dispersed Kafka bridging configurations or via Solace dynamic message routing, it becomes difficult to understand exactly where events are being distributed. This visualization is a necessary step to gaining better data governance. Solace provides an event portal that offers this level of insight for both Solace and Kafka installations.
Detailed Use Case
Cloud first for new applications
A common application modernization strategy is to cap expansion of applications in private datacenters and look at new applications or use cases as candidates to lead the move to public cloud infrastructure. This strategy typically leads to the requirement for hybrid cloud event distribution as the new cloud-based applications will likely need some of the resources or data presently used by existing on-premise applications and systems of record. In modern event driven systems, events are used and re-used across several applications and are not stove-piped to specific applications or use-cases.
Let’s build on the example I started using in my last blog post and show how to extend the architecture by adding new use cases in a cloud-first manner.
In this scenario, a large online retailer generates a raw feed of product order events that carry relevant information about each order through statuses New, Reserved, Paid, and Shipped, and that data is accessible so developers can build downstream applications and services that tap into it.
The architecture looks like this:
Now that this system is implemented we want to extend the data use by the following new cloud apps:
Implement a real-time analysis for specific products. When flash sales are implemented on specific products, real-time analysis on those products is required to determine the effectiveness of the sale and how the sale affects buying trends. Call this service Product.
Implement a real-time analysis for specific purchasers. When trouble tickets are injected into the system previous activity is important but also the ability to track real-time activity as the customer tries to complete an order with assistance. Call this service Customer.
What this would look like from an architectural point of view:
How this would be done with Solace Dynamic Message Routing (DMR)
As you have seen from my flexible topic blog, the topic structure for such a case would look like this:
The public-cloud based services would add the subscriptions for the explicit customers or products that need to be analysed. This would allow for only the exact data required to be automatically drawn across the WAN and into the public cloud without needing to know where the publisher is. Also, it is very easy to have access controls to prevent certain products or customer info published across the WAN link. Any results could be published back on a new topic: ols/analysis/{verb}/{version}/{location}/{productId}/{customerId}.
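Assuming a topic scheme along those lines, each cloud service only has to compose its subscription strings from the identifiers it cares about and publish its results on the analysis topic. The sketch below shows that composition with hypothetical identifiers; nothing about where the on-premises publisher runs needs to be known or configured.

```python
def product_subscription(product_id: str) -> str:
    # attract every order event for one product, regardless of verb, location or customer
    return f"ols/order/*/v1/*/{product_id}/>"

def customer_subscription(customer_id: str) -> str:
    # attract every order event for one customer, regardless of product
    return f"ols/order/*/v1/*/*/{customer_id}"

def analysis_topic(verb, version, location, product_id, customer_id) -> str:
    # topic on which results would be published back into the mesh
    return f"ols/analysis/{verb}/{version}/{location}/{product_id}/{customer_id}"

print(product_subscription("product42"))
print(customer_subscription("cust007"))
print(analysis_topic("flash-sale", "v1", "eu", "product42", "cust007"))
```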
Let’s look at how this would be implemented and maintained during the life cycle of the applications.
Solace dynamic message routing is freely bundled into the Solace PubSub+ Event Broker. Dynamic message routing needs to be configured once when setting up the inter-broker connections to form an event mesh, this video describes how this is done. There is simple configuration in one place to ensure the WAN links are secure and compressed. ACLs that would apply to who can produce which events and who can consume which events would remain the same. The ACLs would however have to be applied to the brokers where the producers and consumers connect.
From this point as each application subscribes to exactly the events it requires the events are transmitted over the WAN and delivered to them securely, in published order, without loss. There is no topic remapping as loop prevention is done for all topologies. This means the application would subscribe exactly as it would if it was in a shared datacenter.
As applications evolve, they can change subscriptions to attract more or fewer events as required. There is no intermediary application that needs to be kept in sync. The filtering and WAN transmission is done directly on the broker.
As new applications come online, they add their own subscriptions. If they require the same events that are required by existing applications connected to this broker, then a single copy of the data will be sent across the WAN. Inversely, any results sent back from cloud-based applications will also be a single copy across the WAN, which minimizes cloud egress costs.
The cloud-based applications can publish results for consumption anywhere in the event mesh as dynamic message routing is inherently bi-directional.
There are two major steps or components.
Broker in private datacenter routes events to local applications based on locally added subscriptions and to the remote cloud broker based on subscriptions added by remote applications.
Cloud based broker streams events to cloud based applications and events from cloud applications to private data center based on subscriptions.
Expanding beyond a simple point-to-point pair of sites
At this point we have simply expanded a solution that was in a private data center into one that has a single public cloud extension to allow things like burst absorption. But this could lead to new requirements for multi-cloud or multiple private-site distribution of events, to do things like move the customer connection point closer to the customer's geo-location. This has been a growing trend; “93 percent of enterprises have a multi-cloud strategy; 87 percent have a hybrid cloud strategy” from the Flexera 2020 State of the Cloud Report.
As the complexity of the overall topology increases, the simplicity of the Solace solution really begins to show increased advantages.
In this example we split where the new Product Service application is running and added a third site that handles surge in customer connections.
There are now three major steps or components.
No significant change from (1) in the previous diagram.
Cloud based broker routes events to cloud based applications and events from cloud applications to the private data center. Note here that the Product Service is split across two sites. This means that a subset of events would need to be routed to each site based on specific topic filters.
As we add the fourth location that splits the consumers, say for Black Friday burst handling, you can see the power of fine-grained topics. All the Consumers 1…N send NEW purchase events into the event mesh, and the life cycle events for the specific consumer are sent back to that specific consumer. This means that Broker 1 and Broker 3 need to have the fine grain subscriptions in place so that when a back-end service publishes an event, it is routed to the correct place across the event mesh. There is no need for per-application or per-consumer configuration to maintain in the event mesh. No need to republish events and have the logic to know which events are needed in the remote location configured in the re-publish applications. Though this example was done with Customers it also applies to other Applications, for example we could split the “Inventory Service” across multiple locations in the event mesh and route a subset of the “NEW”, “APPROVED” and “SHIPPED” events to each scaled group of the application by changing the subscriptions the applications apply for an active-active deployment, for example.
How this would be done with Kafka MirrorMaker (event replication)
As seen in the filtering blog post, the options are to send the entire contents of all required coarse topics to the remote cloud for the newly added cloud-based services and have them filter out the data they do not need or have a data filtering application in the source cluster work as an intermediary.
Having all data sent to the new cloud application may not be feasible for security reasons, but even if it is possible, complete data replication would cause larger-than-needed WAN costs and a requirement to over-provision the cloud Kafka cluster relative to the actual requirements. This over-provisioning might increase the cloud provider costs to the point where they outweigh the benefits of moving to the cloud in the first place.
Alternatively, data filtering applications could do additional filtering and re-publish required data onto new topics. But the data-filtering pattern has its own issues. Because the republished data needs to be sent on new topics, it multiplies the number of topics and partitions needed, which degrades Kafka broker performance. Next, this data-filtering pattern adds operational complexity to the overall solution, creates chain dependencies and is detrimental to application agility. Finally, creating a new stream as a subset of the original stream to compensate for the lack of filtering increases data management complexity, creates “golden source of truth” issues and can reduce reuse. More is not always better.
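For comparison, the data-filtering intermediary described above is roughly the following kind of application. The sketch uses the kafka-python client with hypothetical broker addresses, topic names and field names: it consumes a coarse `orders` topic, keeps only one product's records, and republishes them on a narrower topic that a replication tool such as MirrorMaker 2 could then be whitelisted to copy to the cloud. It is one more component to build, deploy and keep in sync with downstream needs.

```python
import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

BOOTSTRAP = "localhost:9092"          # placeholder broker address
WANTED_PRODUCT = "product42"          # what the cloud-side Product service needs

consumer = KafkaConsumer(
    "orders",                                          # coarse source topic
    bootstrap_servers=BOOTSTRAP,
    group_id="product42-filter",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Consume everything, keep only one product's events, and republish them on a
# narrower topic that the replication tool is configured to copy to the cloud.
for record in consumer:
    event = record.value
    if event.get("productId") == WANTED_PRODUCT:
        producer.send("orders.product42", event)
```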
Taking the diagram that showed the data-filtering solution from my filtering blog post (below), let’s look at how this would be implemented and maintained during the life cycle of the applications.
MirrorMaker 2 or some other Kafka data replication tool will need to be installed, configured and engineered for performance. It will also have to be maintained to ensure whitelists and blacklists are correct based on which applications are deployed where and which events they need from where.
Data-filtering service may need to be built, deployed and managed to ensure correct filtering is done. This might require security policy related filtering to ensure only the correct data is re-published to the cloud. A decision on whether to replicate or filter would need to be determined for each new application.
Applications and whitelists/blacklists will have to be updated to handle any topic prefixing or manipulations that MirrorMaker 2 imposes to prevent looping of events. The tools used to coordinate, communicate and track this for deployment workflows would likely need to be developed.
As applications evolve, source publishers, data-filtering applications and end applications need to be coordinated to use data. As the original publisher enriches its data, what happens to all the data-filter applications; do they need to be enhanced as well? What happens when the downstream applications need a change to the data they receive? They cannot simply add a new subscription; they need to figure out whether they now need new source data, enhanced republished data or a new data-filter republish. Likewise, simply removing a subscription will not prune back the data flow. This shows how much the data-filtering pattern causes chain dependencies and is detrimental to application agility.
As new applications are added to the cloud platform and new attempts to reuse data that exists, the cycle of determining if all events from a topic need to be sent to the cloud applications then filter in the application or if data-filtering is required before WAN transmission continues. Keep in mind, as discussed in the filtering and in order blog post the requirement to keep order across events means that all the related events need to be in the same Topic/Partition. This could cause a large amount of data being published across the WAN even if it is not all required. The additional data-filtering does have its challenges and can cause duplicate copies of events. Let’s say one application needs all data published for product1 which is republished to topic “product1”, and another application needs all data published related to consumer2 which is republished to topic “consumer2”. Any data that is related to both product1 and consumer2 would be republished to both topics and sent multiple times across the WAN.
Any results that need to be published back would require yet another managed cluster of MirrorMaker2 on the local datacenter.
There are five major steps or components:
The broker in the private datacenter (local broker) routes events to local applications based on locally added subscriptions and sends all events to the re-publish application for further filtering before they cross the WAN.
(Optionally) The data-filter application consumes events and re-publishes the specific subsets of data needed by remote applications on new topics. As remote application data requirements change, the data-filter application needs to be updated to suit. Though complex, it is the preferred pattern because it avoids the cost of replicating all events to the cloud broker and then simply discarding the events that are not required.
MirrorMaker consumes the republished events from the on-premises Kafka cluster and publishes them into the remote cloud Kafka cluster, where they are consumed by the remote cloud applications. This means the path from producer to consumer is:
The original event is produced from the producer to Kafka (replicated at least twice for redundancy, written to disk three times). This is three copies of the event on the network.
The original event is consumed by the data-filter application and re-published to Kafka (replicated at least twice for redundancy, written to disk three times). This is four more copies of the event on the network.
The republished event is consumed by MirrorMaker and published to the remote Kafka cluster (replicated at least twice for redundancy, written to disk three times). This is four more copies of the event on the network.
Re-published event consumed by remote application. This is one more copy of the event on the network.
In total: twelve transmitted copies of the event consuming network, CPU and memory resources, and nine copies of the event written to disk (see the accounting sketch after this list).
Remote Broker receives any events produced by remote applications.
The local MirrorMaker consumes remote events and re-publishes them into the local broker.
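The copy counting above can be written out explicitly. This is only back-of-the-envelope accounting, assuming a replication factor of 3 (leader plus two replicas) on every cluster write, as described in the steps:

# Accounting for one event traversing:
# producer -> local Kafka -> data-filter -> local Kafka -> MirrorMaker -> cloud Kafka -> consumer
REPLICAS = 3

network_hops = {
    "produce to local Kafka":          1 + (REPLICAS - 1),      # send + replica fan-out = 3
    "data-filter consume + republish": 1 + 1 + (REPLICAS - 1),  # consume + produce + fan-out = 4
    "MirrorMaker consume + republish": 1 + 1 + (REPLICAS - 1),  # consume + produce + fan-out = 4
    "cloud application consume":       1,
}

network_copies = sum(network_hops.values())  # 3 + 4 + 4 + 1 = 12
disk_copies = 3 * REPLICAS                   # three cluster writes, three replicas each = 9
print(network_copies, disk_copies)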
Expanding beyond a simple point-to-point pair of sites
Again, we are going to expand past the simple point-to-point pairing of Kafka clusters into a larger topology, moving the Product Service application into multiple locations and adding surge capacity for consumer connections.
There are now seven major steps or components.
No significant change from (1) in the previous diagram.
The data-filter application consumes all events and re-publishes the specific subsets of data needed by remote applications on new topics. As remote application data requirements change, the data-filter application needs to be updated to match. Here is where we see the first major issue: the data-filter application now needs to be aware of how the Product Service is distributed across the cloud and sub-divide the traffic to suit the instances of the application. It needs to publish part of the traffic on one topic that reaches the Product Service on the left and the rest on another topic that reaches the Product Service on the right. This is a static configuration that must be adjusted whenever the distribution of the application changes (a sketch of this static split follows this list).
No significant change from (3) in the previous diagram.
No significant change from (4) in the previous diagram.
The only significant change from (5) in the previous diagram is the number of MirrorMakers required. Remember that these MirrorMakers consume from one remote cluster and produce into one local cluster. This means that for every new remote cluster you need at least one remote MirrorMaker replication app to consume events, and you also need to add MirrorMaker replication application instances to every cluster that needs to consume the events the new cluster produces. This breaks the requirement to cap resources in the original datacenter: as you expand into new regions or public clouds, you have to add resources to the original datacenter just to receive events.
As customers connected to the new location (for burst handling) add new order events to the system, an additional MirrorMaker is required to consume these events and re-publish them to the original Kafka cluster to be forwarded to the backend services.
Order life-cycle events are published back to the specific consumer via the remote MirrorMaker and broker. This points out the second major issue. If the broker cannot uniquely address individual clients because of topic limitations, it becomes difficult to distribute clients across different entry points into the event system as shown in the diagram. Either all life-cycle events must be delivered to every "Data-Filter Protocol-Translate" process, which then discards the events not intended for its connected client, or the events must be pre-processed by a data-filter application that delivers the correct subset of client events to the correct customer-facing protocol-translation application. The former wastes WAN resources; the latter requires all clients of a given type to connect to the same location so they can be grouped. Dividing clients into groups with a single entry point into the eventing system limits where clients can connect and undermines the flexibility the design was meant to provide.
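As referenced in step (2) above, here is a rough sketch of the static split the data-filter has to maintain once the Product Service runs in more than one cloud location; all names are invented for illustration and would need editing every time the deployment changes.

# Static region split maintained by the data-filter (illustrative names only).
REGION_TOPICS = {
    "cloud-west": "product-requests.west",  # Product Service instance on the left
    "cloud-east": "product-requests.east",  # Product Service instance on the right
}
PRODUCT_TO_REGION = {
    "product1": "cloud-west",
    "product2": "cloud-east",
    # every new product or rebalanced instance means another manual entry here
}

def target_topic(event: dict) -> str:
    region = PRODUCT_TO_REGION.get(event.get("product"), "cloud-west")
    return REGION_TOPICS[region]

print(target_topic({"product": "product2"}))  # 'product-requests.east'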
Other use cases
Though this blog focuses on cloud migration and on deploying data close to where it is needed after analytics, there are several other reasons replication is required for Kafka clusters; this Cloudera blog lists a few more.
Isolation or legal and compliance: the general idea is that a group of consumers may need to be physically separated from sensitive data, yet some amount of data is needed in both the more secure and less secure clusters, so replication occurs with tight policies controlling which data is replicated. Solace has finer-grained policy controls on data access and also has VPN namespace-like separation that allows fencing of data within the broker.
Disaster recovery: one of the most common enterprise use cases for cross-cluster replication is guaranteeing business continuity in the presence of cluster- or datacenter-wide outages. This requires the applications – the producers and consumers of the Kafka cluster – to fail over to the replica cluster. It is not discussed in this blog because of the complexity of the subject, but I may write a disaster-recovery blog in the future.
Visibility is key to understanding the flow of data through your Streaming event system
No matter what technology you use to build distributed event interactions, the fundamental requirement to efficiently stream data to where it is required, in a form it can be consumed, is a difficult problem to solve without a toolset to design, create, discover, catalog, share, visualize, secure and manage all the events in your enterprise. "The top challenge in cloud migration is understanding application dependencies" (Flexera, 2020 State of the Cloud Report). This obviously includes access to the data an application requires, and it is why the advent of the event mesh has led Solace to build an event portal to solve the problem of optimizing event re-use. The product, called PubSub+ Event Portal, fosters the re-use of events by making them discoverable and visible while ensuring good data governance across the enterprise, even as event distribution grows and the topologies become more complex.
Conclusion
A simple "replicate everything" approach might economically satisfy all of your event distribution requirements going forward, but if it does not, the problem of event distribution tends to fall to individual application teams. Applications will work around inflexible event distribution by replicating and republishing data. This leads to wasted resources, fragile and complex eventing systems, and data lineage/management complexities, so it is important to understand your event distribution requirements.
An event mesh is optimized to limit the consumption of network resources while offering the most flexible solution to foster agile application development and deployment. The goal is to let producers and consumers connect anywhere in the event mesh and produce and consume the data they require and are entitled to.
As applications move to distributed event interactions, policies and management tools will be required to allow application teams to discover and understand the events they need to produce and consume in order to fulfill their distributed functions.
1 Source: Gartner, "The Key Trends in PaaS and Platform Architecture", 28 February 2019, Yefim Natis, Fabrizio Biscotti, Massimo Pezzini, Paul Vincent
Source: Intellyx, "Event Mesh: Event-driven Architecture (EDA) for the Real-time Enterprise", Nov 2019, Jason
The post Why You Need to Look Beyond Kafka for Operational Use Cases, Part 4: Streaming with Dynamic Event Routing appeared first on Solace.
Text
SD-WAN: solutions reaching maturity
The principle and advantages of SD-WAN are better understood today: "Software Defined" applied to wide area networks (WAN) simplifies network management through a mechanism that dynamically prioritizes traffic flows.
It automatically chooses the best available links, which is particularly useful for direct Internet access that does not pass through the company's datacenter. These Internet links can be aggregated according to predetermined criteria and a trade-off between the best cost and the best possible quality for each type of data.
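As a rough illustration of the kind of per-flow decision an SD-WAN edge makes automatically, here is a small sketch that scores links on latency, loss and cost; the metrics, weights and link names are invented for illustration and do not reflect any particular vendor's algorithm.

# Illustrative per-flow link selection; all numbers are made up.
LINKS = {
    "mpls":      {"latency_ms": 18, "loss_pct": 0.0, "cost": 3.0},
    "broadband": {"latency_ms": 35, "loss_pct": 0.5, "cost": 1.0},
    "lte":       {"latency_ms": 60, "loss_pct": 1.5, "cost": 2.0},
}

def score(link: dict, realtime: bool) -> float:
    # Real-time flows (voice/video) weight latency and loss; bulk flows weight cost.
    if realtime:
        return link["latency_ms"] + 100 * link["loss_pct"]
    return 10 * link["cost"] + link["latency_ms"] / 10

def pick_link(realtime: bool) -> str:
    return min(LINKS, key=lambda name: score(LINKS[name], realtime))

print(pick_link(realtime=True))   # likely 'mpls' for voice
print(pick_link(realtime=False))  # likely 'broadband' for bulk Office 365 traffic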
Contrary to what was claimed three or four years ago, SD-WAN does not necessarily drive costs down, even though it helps retire MPLS links that remain expensive – links which often keep their reason for being, at least for certain flows.
"Software Defined" brings flexibility; it often combines MPLS lines or leased lines with guaranteed bandwidth with public Internet connections.
Selecting the best radio links
The first benefit of SD-WAN is to select the best available channel or channels for each site concerned, at satisfactory throughput.
The current trend is to add the option of radio links, with LTE/4G (and soon 5G). These radio links can be used at initial turn-up but also as backup, where ADSL connections can be very poor.
The principle of a typical SD-WAN architecture. Source: ETSI
Aggregating xDSL links also has a benefit that is borne out in the field: gaining bandwidth by going from 2 Mbps to more than 10 Mbps for a modest extra cost.
"This advantage makes perfect sense when you want to access the public cloud – for example when you have chosen Microsoft Office 365. It remains possible to mix these links with MPLS connections," observes Olivier Mercier, head of SD-WAN at Cisco France.
Little by little, the reluctance of network managers has faded: it is possible to send well-controlled data flows directly out to the Internet, without transiting a datacenter and with a sufficient level of security.
All SD-WAN vendors offer firewall capabilities, encrypted VPN tunnels, strong authentication procedures, and so on.
Once in place, quality can still be improved by paying for a faster GTR or GTI service (guaranteed restoration or intervention times) depending on the requirements of particular remote sites.
SD-WAN: simplifying and automating deployments
As Gartner sums it up, the idea is to simplify and automate the management, configuration and orchestration of the wide area network while reducing the number of appliances.
Pioneering users also report fast deployment and commissioning. Another benefit that holds up: the promise of "zero touch" – little or no manual intervention.
SD-WAN is therefore particularly cost-effective for branch-office network architectures, all the more so because the equipment deployed – known as CPE, Customer Premises Equipment – requires very little maintenance.
IT teams can thus adjust network capacity site by site, in a granular way.
From a central point, SD-WAN can also address the needs of IoT (Internet of Things) collection networks, particularly in the industrial sector, by grafting on low-bitrate networks (LP-WAN).
The main SD-WAN vendors. Source: Quadran Knowledge Solutions 2018-2023
The push to the cloud, according to… Velocloud
Data-plane functions and orchestration tend to be handled in the cloud, so as to offer direct, optimized access to cloud or on-premises resources. Hence the success of a pioneer like Velocloud (acquired by VMware).
A few minutes are enough to deploy by activating the SD-WAN Edge from the cloud. Automatic detection and monitoring of WAN links makes configuration easier across all remote sites.
A hybrid SD-WAN architecture in the cloud. Source: Velocloud VMware
VeloCloud's SD-WAN is integrated with VMware's NSX Data Center and NSX Cloud, with a VMware SD-WAN Orchestrator for centralized management and VMware SD-WAN Edges for remote sites.
Cisco: two offerings with Meraki and Viptela
Cisco also benefits from two recent acquisitions: Meraki and Viptela.
Meraki's SD-WAN offering primarily targets LAN/Wi-Fi customers: the same interface is used to graphically build SD-WAN configurations and manage everything. It is a feature of the MX firewalls. As with VMware, the tool is provisioned in the cloud, where orchestration is hosted.
Cisco's other offering, Viptela, targets relatively complex environments, with broader integration and functionality, several levels of routing, and classic "hub & spoke" or "mesh" topologies.
Cisco's various SD-WAN options
It is now possible to enable Viptela features on ISR 1100 and 4000 routers.
Cisco also offers a similar virtualized option based on appliances (vEdge 100 and vEdge 1000), which work with the same management interface (vManage).
And overall management of SD-WAN sites will very soon be integrated into Cisco DNA Center, the network control and automation platform.
Orange Business Services: flexibility first
It is this Cisco Viptela offering that Orange Business Services (OBS) has mainly chosen – even though it initially teamed up with Riverbed (for less complex configurations) and also offers the Cisco Meraki solution.
The new SD-WAN offering from Orange Business Services (based on Cisco Viptela) is called Flexible SD WAN.
It received the award for best service of the year at the World Communication Awards last October in London. It has one sizeable reference customer: the Siemens group, which chose to migrate 1,500 sites in 94 countries that were previously connected over MPLS.
Another reference: the Dutch group Weener Plastic, with 24 sites in around fifteen countries (ERP, Office 365; hence a cloud orientation).
Orange Business Services offers to add a security building block (Zscaler), but the customer may prefer Fortinet, Palo Alto or Checkpoint.
A solution of virtualized SD-WAN gateways based on so-called uCPE devices ("u" for universal) is also offered. "These are highly scalable solutions inspired by the cloud," stresses Franck Morales, vice-president of marketing and connectivity at Orange Business Services.
Juniper Networks, going it alone
Facing Cisco is Juniper Networks, which chose to develop its SD-WAN offering in-house.
The equipment maker offers gateways to many carriers and cooperates with carrier-integrators. "They have been shaken up in their traditional managed-services model," notes Michael Melloul, head of systems engineering.
Juniper Networks also offers SD-LAN and SD-Branch – in other words, a focus on the enterprise campus – from a unified portal providing visibility all the way down to the Wi-Fi networks.
"The CPE becomes an infrastructure platform where the customer can connect all of their applications to reach the remote network. All partnership models are possible." An on-premises solution is in the works, integrating central orchestration (CSO) in an as-a-service mode.
HPE Aruba: consolidate everything on a single CPE?
At HPE Aruba, the focus is also on the branch dimension, "because customers dream of being able to consolidate everything on a single CPE: router, Wi-Fi controller, firewall," observes Vincent Blavet, technical lead at HPE Aruba France.
On the WAN side, an orchestrator is offered to automate the configuration of encrypted VPN tunnels, profile settings and dynamic route selection, including radio links (LTE/4G). "It is up to the customer to determine whether or not a flow is real-time."
The other important issue is deployment automation (cf. "zero touch provisioning").
On campuses, the trend is toward the "colorless" approach (no differentiation between cables, and therefore between the connected devices); through profiling, the various possible links are specified in advance, which also opens the door to IoT connectivity (connected objects).
Orchestration, with a policy server, is handled by ClearPass at the campus level.
And toward the datacenter, the "large" Aruba 7200 controllers take over to open up to the cloud – AWS, Microsoft Azure… – with the option of mixing SD-WAN and MPLS.
Since mid-2018, the new 7800 controllers have become highly versatile gateways directly open to SD-WAN.
Dell EMC: the choice of uCPE equipment
At Dell EMC, SD-WAN is treated as one application among others at remote sites, with options for security, file sharing, load balancing, and so on.
Hence the priority given to consolidating devices, the uCPEs. These are universal server appliances, dubbed "VEP".
Running Linux, they consolidate all of these applications under VMware virtualization, at a cost ranging from roughly €2,000 to €5,000.
The new generation of Dell EMC CPE for SD-WAN
The manufacturer says it is agnostic: it cooperates with software vendors including VersaNetworks (targeting mainly carrier-integrators) and VMware Velocloud (targeting enterprises), as well as Silver Peak. For security, the main partner is Palo Alto.
Dell EMC is preparing a second, even more powerful uCPE range, capable of supporting more than ten VMs (virtual machines).
Yves Pellemans' insight:
SD-WAN: the holy grail of agility and cost reduction?!
Text
Ethereum Studio ConsenSys to Trim Workforce amid Bear Market Woes
Just as Steemit had done in the previous month, ConsenSys, the Ethereum-based studio, will be laying off about 13% of its total workforce as part of a move to ‘restructure’ the company.
The company, which was founded by billionaire Joseph Lubin, co-founder and early investor in Ethereum, is now looking to reorganize in a market that is currently witnessing all-time lows on major cryptocurrencies. Lubin sent a letter out to the employees at ConsenSys where he outlined his vision for the company, and a new path, which he coined “ConsenSys 2.0.”
The letter reads:
“ Excited as we are about ConsenSys 2.0, our first step in this direction has been a difficult one: we are streamlining several parts of the business including ConsenSys Solutions, spokes, and hub services, leading to a 13% reduction of mesh members. Projects will continue to be evaluated with rigor, as the cornerstone of ConsenSys 2.0 is technical excellence, coupled with innovative blockchain business models.”
Before the company decided to undergo this restructuring, it was involved in well over 50 projects, and its staff strength was about 1,200. The firm explained that this decision, as well as the roadmap for its next course of action, will put it in a strategic position for growth “as the blockchain community matures.”
In some quarters, this restructuring is seen as little more than a means for the Ethereum studio to stay afloat, as it is one of the hundreds of companies affected, in one way or another, by the current bear market. ConsenSys disclosed that it is changing its business scope to focus on more sustainable projects while putting some of its less productive ones on hold.
However, in spite of this reduction in its workforce, ConsenSys is still making big moves to grow its influence in the cryptocurrency market, pending when the bearish run ends. The firm joined venture capitalist firm Two Sigma to lead an $8 million investment round into Trustology, a London-based digital asset custody solution for institutional investors.
Trustology is best known for developing TrustVault, a crypto asset management platform that stores private keys in "tamperproof, programmable hardware security modules hosted in secure data centers, with encrypted backups in the cloud."
With this funding, it is expected that Trustology will be able to develop new products, support a wide range of digital asset classes, and expand its products and services into international markets as well.
The post Ethereum Studio ConsenSys to Trim Workforce amid Bear Market Woes appeared first on BTC News Today.
Text
Best Mesh WiFi Routers 2018
https://www.netspotapp.com/best-mesh-wifi-routers-2018.html
If the WiFi signal in your home is a constant source of problems, a WiFi mesh network might just be the right solution for you. In this article, we take a closer look at what this latest wireless technology solution for home networks offers, and we also recommend three mesh routers to help you create your own mesh network and solve your WiFi signal problems once and for all.
What Is a Mesh WiFi Network?
A traditional home WiFi network relies on a single wireless router to provide all devices on the network with Internet access. The problem with this network topology is the fact that even the fastest wireless router can only cover a limited area before its signal becomes so weak that it can no longer be considered usable.
A common solution is a WiFi extender, which is a device that receives a weak signal from a router, amplifies it, and creates a new WiFi network to broadcast the amplified signal. As impractical as it is for home users to manually switch between multiple WiFi networks when moving from room to room, WiFi extenders are even less workable for businesses and institutions that need to cover a large area or multiple floors with a single wireless network.
A mesh WiFi network overcomes the limits of WiFi extenders by dynamically and non-hierarchically organizing nodes on the network to cooperate with one another and deliver data from and to clients in the most efficient manner.
The typical WiFi mesh network consists of a single mesh router connected to the Internet with a wire and several satellite mesh routers, called nodes, that talk to one another to expand the wireless network to every corner of your home.
To achieve the best results when creating a mesh WiFi network, individual nodes in the network should be placed far enough from one another so they broaden the signal, but close enough so they can still communicate with at least one other node in the mesh WiFi network.
What Are the Best Mesh WiFi Routers?
Even the best WiFi routers that rely on the traditional hub-and-spoke network topology are unable to cover very large areas with a strong and stable wireless signal unless they are assisted by WiFi extenders, which significantly complicates things and often confuses users, who have to manually switch between wireless networks depending on their current location.
The best mesh WiFi routers, on the other hand, can cover as large an area as necessary to provide all network users with a strong and stable Internet connection regardless of where they are. We have selected three mesh WiFi routers from the leaders in this market segment to help you overcome the restrictions of your existing WiFi router with a single purchase.
Google WiFi
The Google mesh WiFi system is a great way to get started with mesh networking without any headaches. The system is available in several sizes to cover everything from small apartments to large mansions. According to Google, a single Google router should be able to cover 500-1500 square feet, depending on how many walls and other thick obstacles it has to deal with. Two Google mesh routers should comfortably cover 1500-3000 square feet, which is approximately the size of a regular home, and three Google mesh routers should cover up to 4500 square feet, or the size of a large home.
We have yet to see a Google WiFi review say that this mesh system is difficult to set up. Google has made the installation of the Google WiFi so easy that anyone can do it without any help. The companion app for mobile devices provides helpful guidance when you first plug in the router, and it also features a simple control center that allows you to prioritize devices on your network, see what’s connected, turn on parental controls, and more.
Eero Mesh WiFi
The Eero Mesh WiFi system is much more unobtrusive than most other similar systems currently available, but it’s just as capable. Eero doesn’t come with any limitation to how many connections it can support or how many nodes it can include. Thanks to its TrueMesh technology, your WiFi signal can hop between individual Eero nodes to travel down hallways or go around walls. This technology was developed using machine learning and data collected from real Eero users, and its benefits are noticeable, as just about any Eero reviewer would tell you.
Connecting the Eero system to the Internet could hardly be any simpler. The base unit has two Ethernet ports for wired connection to the unit and to one other device, such as a TV or computer. The base unit can be extended with an unlimited number of Eero Beacons, which plug directly into any outlet and easily expand your Eero mesh WiFi system to cover every corner of your home.
NETGEAR Orbi
The NETGEAR Orbi is a tri-band mesh WiFi system that can deliver a fast Internet experience from the attic to the basement. The Orbi WiFi system comes pre-configured and works right out of the box. You can start with a single Orbi node and expand according to your needs. We recommend you use a WiFi analyzer app such as NetSpot to measure how far your Orbi mesh network reaches. That way, you know exactly whether you really need to spend money on another Orbi.
Compared to Google WiFi and Eero, the Orbi system is up to 100 percent faster, delivering 222 Mbps compared to the average of just 71 Mbps of other mesh WiFi systems. NETGEAR also claims that the Orbi system is able to cover more square feet with two devices than other mesh systems cover with three. The superior performance of the Orbi system can be attributed to its patented tri-band WiFi technology, which establishes a dedicated backhaul connection that helps maximize internet speeds for 4K streaming and connections to multiple devices.
Source: https://www.netspotapp.com/best-mesh-wifi-routers-2018.html
Link
Acutelearn is a leading training company that provides corporate, online and classroom training on various technologies like AWS, Azure, Blue prism, CCNA, CISCO UCS, CITRIX Netscaler, CITRIX Xendesktop, Devops chef, EMC Avamar, EMC Data Domain, EMC Networker, EMC VNX, Exchange Server 2016, Hyper-V, Lync server, Microsoft windows clustering, Netapp, Office 365, Openspan, RedHat openstack, RPA, SCCM, vmware nsx 6.0, vmware vrealize, vmware vsphere, windows powershell scripting. For more information reach us on +917702999361 / 371 www.acutelearn.com
CCNA Course content:
Cisco Certified Network Associate (200-125) Network Fundamentals Compare and contrast OSI and TCP/IP models Compare and contrast TCP and UDP protocols Describe the impact of infrastructure components in an enterprise network Firewalls Access points Wireless controllers
Describe the effects of cloud resources on enterprise network architecture Traffic path to internal and external cloud services Virtual services Basic virtual network infrastructure
Compare and contrast collapsed core and three-tier architectures Compare and contrast network topologies Star Mesh Hybrid Select the appropriate cabling type based on implementation requirements Apply troubleshooting methodologies to resolve problems Perform and document fault isolation Resolve or escalate Verify and monitor resolution Configure, verify, and troubleshoot IPv4 addressing and subnetting Compare and contrast IPv4 address types Unicast Broadcast Multicast Describe the need for private IPv4 addressing Identify the appropriate IPv6 addressing scheme to satisfy addressing requirements in a LAN/WAN environment Configure, verify, and troubleshoot IPv6 addressing Configure and verify IPv6 Stateless Address Auto Configuration Compare and contrast IPv6 address types Global unicast Unique local Link local Multicast Modified EUI 64 Autoconfiguration Anycast LAN Switching Technologies Describe and verify switching concepts MAC learning and aging Frame switching Frame flooding MAC address table Interpret Ethernet frame format Troubleshoot interface and cable issues (collisions, errors, duplex, speed) Configure, verify, and troubleshoot VLANs (normal/extended range) spanning multiple switches Access ports (data and voice) Default VLAN Configure, verify, and troubleshoot interswitch connectivity Trunk ports Add and remove VLANs on a trunk DTP, VTP (v1&v2), and 802.1Q Native VLAN Configure, verify, and troubleshoot STP protocols STP mode (PVST+ and RPVST+) STP root bridge selection Configure, verify and troubleshoot STP related optional features PortFast BPDU guard Configure and verify Layer 2 protocols Cisco Discovery Protocol LLDP Configure, verify, and troubleshoot (Layer 2/Layer 3) EtherChannel Static PAGP LACP Describe the benefits of switch stacking and chassis aggregation Routing Technologies Describe the routing concepts Packet handling along the path through a network Forwarding decision based on route lookup Frame rewrite Interpret the components of a routing table Prefix Network mask Next hop Routing protocol code Administrative distance Metric Gateway of last resort Describe how a routing table is populated by different routing information sources Admin distance Configure, verify, and troubleshoot inter-VLAN routing Router on a stick SVI Compare and contrast static routing and dynamic routing Compare and contrast distance vector and link state routing protocols Compare and contrast interior and exterior routing protocols Configure, verify, and troubleshoot IPv4 and IPv6 static routing Default route Network route Host route Floating static Configure, verify, and troubleshoot single area and multi-area OSPFv2 for IPv4 (excluding authentication, filtering, manual summarization, redistribution, stub, virtual-link, and LSAs) Configure, verify, and troubleshoot single area and multi-area OSPFv3 for IPv6 (excluding authentication, filtering, manual summarization, redistribution, stub, virtual-link, and LSAs) Configure, verify, and troubleshoot EIGRP for IPv4 (excluding authentication, filtering, manual summarization, redistribution, stub) Configure, verify, and troubleshoot EIGRP for IPv6 (excluding authentication, filtering, manual summarization, redistribution, stub) Configure, verify, and troubleshoot RIPv2 for IPv4 (excluding authentication, filtering, manual summarization, redistribution) Troubleshoot basic Layer 3 end-to-end connectivity issues WAN Technologies Configure and verify PPP and MLPPP on WAN interfaces using local authentication Configure, verify, and troubleshoot PPPoE 
client-side interfaces using local authentication Configure, verify, and troubleshoot GRE tunnel connectivity Describe WAN topology options Point-to-point Hub and spoke Full mesh Single vs dual-homed Describe WAN access connectivity options MPLS Metro Ethernet Broadband PPPoE Internet VPN (DMVPN, site-to-site VPN, client VPN) Configure and verify single-homed branch connectivity using eBGP IPv4 (limited to peering and route advertisement using Network command only) Describe basic QoS concepts Marking Device trust Prioritization Voice Video Data Shaping Policing Congestion management Infrastructure Services Describe DNS lookup operation Troubleshoot client connectivity issues involving DNS Configure and verify DHCP on a router (excluding static reservations) Server Relay Client TFTP, DNS, and gateway options Troubleshoot client- and router-based DHCP connectivity issues Configure, verify, and troubleshoot basic HSRP Priority Preemption Version Configure, verify, and troubleshoot inside source NAT Static Pool PAT Configure and verify NTP operating in a client/server mode Infrastructure Security Configure, verify, and troubleshoot port security Static Dynamic Sticky Max MAC addresses Violation actions Err- disable recovery Describe common access layer threat mitigation techniques 802.1x DHCP snooping Nondefault native VLAN Configure, verify, and troubleshoot IPv4 and IPv6 access list for traffic filtering Standard Extended Named Verify ACLs using the APIC-EM Path Trace ACL Analysis tool Configure, verify, and troubleshoot basic device hardening Local authentication Secure password Access to device Source address Telnet/SSH Login banner Describe device security using AAA with TACACS+ and RADIUS Infrastructure Management Configure and verify device-monitoring protocols SNMPv2 SNMPv3 Syslog Troubleshoot network connectivity issues using ICMP echo-based IP SLA Configure and verify device management Backup and restore device configuration Using Cisco Discovery Protocol or LLDP for device discovery Licensing Logging Timezone Loopback Configure and verify initial device configuration Perform device maintenance Cisco IOS upgrades and recovery (SCP, FTP, TFTP, and MD5 verify) Password recovery and configuration register File system management Use Cisco IOS tools to troubleshoot and resolve problems Ping and traceroute with extended option Terminal monitor Log events Local SPAN Describe network programmability in enterprise network architecture Function of a controller Separation of control plane and data plane Northbound and southbound APIs
Address: Acutelearn Technologies, Flat No 80 & 81, 4th floor, Above Federal Bank Building, Besides Cafe coffee day Lane, Madhapur, Hyderabad-500081
Link
Dynamic Multipoint Virtual Private Network (DMVPN) is a network solution for those that have many sites that need access to either a hub site or to each other. It was designed by Cisco to help reduce the complexities in configuring and supporting a full mesh of VPNs between sites. There are other vendors that now support DMVPN, but Cisco is where it started.
Benefits of using DMVPN
The dynamic component of DMVPN is that a portion of the VPN tunnels does not have to be pre-configured on all endpoints. DMVPN allows for the possibility of dynamic spoke-to-spoke communication, once the spokes have made contact with the hub or hubs.
It was intended to be used in a hub-and-spoke configuration (with the possibility of redundant hubs). DMVPN is based on RFC-based solutions: Generic Routing Encapsulation (GRE RFC 1701), Next Hop Resolution Protocol (NHRP RFC 2332) and Internet Protocol Security (IPSec, there are multiple RFCs and standards).
The main idea is to reduce the configuration on the hub(s) router and push some of the burden onto the spoke routers. Using the NHRP to register the spokes to the hub, the spoke can then use the hub as a resolution server to be able to build dynamic tunnels to other spokes.
What if it doesn’t work?
There are several moving parts here to look at: base configuration of the tunnel interfaces (and the basic connectivity), the registration of the spokes to the hub(s), and IPSec.
First off, the tunnel interfaces have to be configured with a source interface or address to create the tunnel, known as the public address relative to DMVPN. The addressing can be either IPv4 or IPv6, but the addressing for the source of the tunnel interfaces must be reachable by the other routers. Whether it’s spoke to hub or hub to spoke, using a ping or traceroute is the best way to verify connectivity.
The configuration may require IPSec, but try the tunnels without it. Ping the tunnel interface address, known as the private address. If the tunnels work without IPSec but don’t work with it, jump to troubleshooting IPSec. If the tunnels aren’t able to pass traffic without IPSec, then start looking at the basic configuration of the tunnel and the next hop resolution protocol.
The configuration of the hub is minimal, typically, relative to NHRP; most is done on the spoke routers. Make sure the mapping statements are correct—ip(v6) nhrp map "private-address" "public-address". Also, make sure the next-hop server command points to the hub's tunnel (private) address—ip(v6) nhrp nhs "private-address"—which the map statement then resolves to the hub's public address.
interface Tunnel123
 ip address 192.168.123.1 255.255.255.0
 ip nhrp map 192.168.123.2 10.1.1.2
 ip nhrp map multicast 10.1.1.2
 ip nhrp network-id 1
 ip nhrp nhs 192.168.123.2
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint

In this example, the 192.168.123.0/24 address space is the private addressing and 10.1.1.x is the public.
The router will let you misconfigure these commands with incorrect addresses and give no errors. You can use the show ip nhrp nhs detail command to check whether the spoke-to-server registration is successful.
R1#show ip nhrp nhs detail
Legend: E=Expecting replies, R=Responding, W=Waiting
Tunnel123:
192.168.123.2  RE  priority = 0  cluster = 0  req-sent 2  req-failed 0  repl-recv 2 (00:03:13 ago)
On the next hop server, you can verify that the registration was successful with the show ip nhrp command.
R2#show ip nhrp
192.168.123.1/32 via 192.168.123.1
   Tunnel123 created 00:04:30, expire 01:55:29
   Type: dynamic, Flags: unique registered
   NBMA address: 10.1.1.1
192.168.123.3/32 via 192.168.123.3
   Tunnel123 created 00:04:30, expire 01:55:29
   Type: dynamic, Flags: unique registered
   NBMA address: 10.1.1.3
If the intent is to allow the spoke to dynamically form tunnels, but they aren’t formed, check to ensure the shortcut forwarding is enabled on the spokes in question. The interface command ip nhrp shortcut is needed to enable this shortcut forwarding. Try doing a traceroute from one spoke to another and see if the hub shows up as an intermediate hop. If so, then shortcut forwarding is not enabled.
Another consideration with DMVPN is the registration process. This process is initiated by the spokes, not the hub. If the hub router reloads or if the tunnel interface goes down and comes back up, you may have to shut down or “no shutdown” the spoke routers interfaces. This issue has been resolved in 15.2 code and anything more recent.
If communication works without IPSec, but doesn’t with IPSec configured, it’s time to troubleshoot the IPSec configuration. The policies for phase 1 (key exchange) and phase 2 (transformation of the data) have to be the same between the hub router(s) and spokes. There can be different policies for specific spokes, but that would require different tunnel interfaces. Check the key exchange by using the show crypto isakmp policy command on the routers in question.
R1#sh crypto isakmp policy
Global IKE policy
Protection suite of priority 10
        encryption algorithm:   Three key triple DES
        hash algorithm:         Secure Hash Standard 2 (256 bit)
        authentication method:  Pre-Shared Key
        Diffie-Hellman group:   #2 (1024 bit)
        lifetime:               86400 seconds, no volume limit
To verify that phase 1 is successful, use the show crypto isakmp sa command.
R1#sh crypto isakmp sa detail
Codes: C - IKE configuration mode, D - Dead Peer Detection
       K - Keepalives, N - NAT-traversal
       T - cTCP encapsulation, X - IKE Extended Authentication
       psk - Preshared key, rsig - RSA signature
       renc - RSA encryption
IPv4 Crypto ISAKMP SA

C-id  Local       Remote      I-VRF  Status  Encr  Hash    Auth  DH  Lifetime  Cap.
1002  10.1.1.1    10.1.1.3           ACTIVE  3des  sha256  psk   2   23:58:43
      Engine-id:Conn-id = SW:2
1001  10.1.1.1    10.1.1.2           ACTIVE  3des  sha256  psk   2   23:58:42
      Engine-id:Conn-id = SW:1

IPv6 Crypto ISAKMP SA
If phase 1 looks good, check that the transform sets are consistent by comparing the output of the show crypto ipsec transform-set command on the hub and spoke routers.
R1#show crypto ipsec transform-set
Transform set default: { esp-aes esp-sha-hmac }
   will negotiate = { Transport, },

Transform set MyTS: { ah-sha256-hmac }
   will negotiate = { Tunnel, },
   { esp-3des }
   will negotiate = { Tunnel, },
To verify that the IPSec negotiation was successful, use the show crypto ipsec sa command. This can show you the packets that are being sent and whether they’re encrypted or not.
R1#sh crypto ipsec sa

interface: Tunnel123
    Crypto map tag: Tunnel123-head-0, local addr 10.1.1.1

   protected vrf: (none)
   local ident (addr/mask/prot/port): (10.1.1.1/255.255.255.255/47/0)
   remote ident (addr/mask/prot/port): (10.1.1.2/255.255.255.255/47/0)
   current_peer 10.1.1.2 port 500
     PERMIT, flags={origin_is_acl,}
    #pkts encaps: 55, #pkts encrypt: 55, #pkts digest: 55
    #pkts decaps: 54, #pkts decrypt: 54, #pkts verify: 54
    #pkts compressed: 0, #pkts decompressed: 0
    #pkts not compressed: 0, #pkts compr. failed: 0
    #pkts not decompressed: 0, #pkts decompress failed: 0
    #send errors 0, #recv errors 0

     local crypto endpt.: 10.1.1.1, remote crypto endpt.: 10.1.1.2
     path mtu 1500, ip mtu 1500, ip mtu idb (none)
     current outbound spi: 0x51F10868(1374750824)
     PFS (Y/N): N, DH group: none

     inbound esp sas:
      spi: 0x59A9D043(1504301123)
--- output omitted ---
For troubleshooting DMVPN issues, the best thing is to break it down to its components—basic connectivity, basic tunnel function and then security. For more DMVPN troubleshooting options, visit http://ift.tt/2lxo1gR.
Related Courses CIERS1 – Cisco Expert-Level Training for CCIE Routing and Switching v5.0 CIERS2 – Cisco Expert-Level Training for CCIE Routing and Switching Advanced Workshop 2 v5.0
Text
It’s fast enough you won’t care
Ignore Netgear’s advertising: Its Orbi Wi-Fi router is not a mesh network system. Orbi satellites don’t communicate with each other, they send and receive data to and from the Orbi router only. In networking parlance, that is a hub-and-spoke system, not a mesh. But mesh networking is what has everyone so excited this year, so that’s how Netgear is billing the Orbi. The company is doing…
View On WordPress