#go ebpf
keploy12 · 2 years ago
Text
Go eBPF Unleashed: Amplifying Your Control Over Linux Kernel
Introduction
In the dynamic landscape of software development, three key concerns reign supreme: performance optimization, in-depth system observation, and strong security. In the realm of the extended Berkeley Packet Filter (eBPF), Go is emerging as a powerhouse language, transforming how we analyze and manipulate network traffic, system calls, and other facets of application performance. Today, we take a journey into the world of Go eBPF, uncovering its potential and its many applications.
Demystifying eBPF
eBPF, short for Extended Berkeley Packet Filter, is a virtual machine residing within the Linux kernel. This ingenious creation allows you to securely run custom programs within a confined, safeguarded environment. These eBPF programs can be attached to various hooks within the kernel, opening the gateway to powerful and efficient monitoring, analysis, and manipulation of critical events such as system calls, network packet handling, and beyond.
What makes eBPF particularly captivating is its ability to extend the capabilities of the Linux kernel without the need to write and load complex kernel modules, which can be cumbersome and error-prone. eBPF programs are penned in a restricted subset of C and are executed within the kernel's own virtual machine, offering a marriage of safety and efficiency that is crucial for low-level operations.
Go and eBPF: A Match Made in Developer Heaven
Go, colloquially referred to as Golang, is a statically typed, compiled language renowned for its elegance, efficiency, and rock-solid support for concurrency. The burgeoning synergy between Go and eBPF has not gone unnoticed. Here's why Go makes a compelling choice for eBPF development:
Safety First: Go is a memory-safe language, effectively guarding against common memory-related pitfalls that can otherwise lead to security vulnerabilities. This safety matters when writing the user-space tooling that loads and manages programs attached to the kernel, where even small mistakes can have serious consequences.
Performance Par Excellence: Go's performance is competitive with compiled languages like C and C++, making it well suited for the user-space side of eBPF tooling, which must load programs, read maps, and process kernel events swiftly and efficiently.
Robust Ecosystem: The Go ecosystem is vast and vibrant, featuring an array of libraries that cater to network programming, an invaluable resource for those venturing into eBPF applications.
Developer-Friendly: Go's hallmark simplicity and readability mean that it's accessible to a broad spectrum of developers, including those who may not have extensive experience in systems programming.
Crafting Go eBPF Programs
To venture into the domain of Go eBPF, you'll need a few fundamental tools and components:
A Go Environment: Ensure that you have Go installed on your development machine.
The Power of libbpf: libbpf is the reference C library for loading and managing eBPF programs. Go projects can use it through bindings such as libbpfgo, and it provides an array of helper functions and abstractions that simplify working with eBPF. You can find libbpf on GitHub and install it to bolster your projects.
BPF Toolchain: This includes tools like Clang and LLVM, which compile the eBPF programs themselves (written in a restricted subset of C) into BPF bytecode that your Go program then loads into the kernel.
A Go eBPF Library: Libraries such as cilium/ebpf (pure Go) or libbpfgo (Go bindings for libbpf) let you load, attach, and interact with eBPF programs from Go; a minimal loader sketch follows this list.
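To make that concrete, here is a minimal sketch of a Go user-space loader built on the cilium/ebpf library. It is illustrative rather than authoritative: the object file counter.o, the program name count_execve, and the kprobe symbol are assumptions for the example, and the exact syscall symbol name varies by kernel and architecture.
package main
import (
    "log"
    "os"
    "os/signal"
    "github.com/cilium/ebpf"
    "github.com/cilium/ebpf/link"
    "github.com/cilium/ebpf/rlimit"
)
func main() {
    // Allow this process to lock memory for eBPF maps (needed on older kernels).
    if err := rlimit.RemoveMemlock(); err != nil {
        log.Fatal(err)
    }
    // Load a precompiled BPF object (hypothetical counter.o, built with clang).
    coll, err := ebpf.LoadCollection("counter.o")
    if err != nil {
        log.Fatalf("loading collection: %v", err)
    }
    defer coll.Close()
    // Attach the (hypothetical) count_execve program to a kprobe on execve.
    kp, err := link.Kprobe("sys_execve", coll.Programs["count_execve"], nil)
    if err != nil {
        log.Fatalf("attaching kprobe: %v", err)
    }
    defer kp.Close()
    // Keep running until interrupted; the kernel-side program does the work.
    sig := make(chan os.Signal, 1)
    signal.Notify(sig, os.Interrupt)
    <-sig
}
The kernel-side program itself would still be written in restricted C and compiled with clang; the Go side only loads, attaches, and observes it.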
Applications Galore
Now that you're all set up with Go and the necessary tools, let's delve into the captivating array of applications that Go eBPF opens up:
Network Wizardry: Go eBPF programs can capture and dissect network traffic like never before. This superpower is a game-changer for diagnosing network performance bottlenecks, conducting robust security monitoring, and performing deep packet analysis.
Guardian of Security: With Go eBPF, you can craft robust intrusion detection systems capable of real-time monitoring of system calls and network events, alerting you to potential threats and allowing you to take immediate action.
Profiling and Tracing Mastery: When it comes to profiling and tracing applications to pinpoint performance bottlenecks and optimize execution, Go eBPF shines like a beacon. It offers an insightful window into code execution, revealing avenues for significant performance enhancements.
System Call Firewall: By attaching eBPF programs to system call hooks, you can enforce security policies, control the behavior of specific processes, and fortify your system against malicious activities.
Conclusion
Go eBPF is more than just an innovative intersection of two powerful technologies. It is the gateway to secure, efficient, and developer-friendly expansion of Linux kernel capabilities. With its safety features, competitive performance, and extensive ecosystem, Go has rightfully earned its spot as a premier choice for crafting eBPF programs. As the eBPF ecosystem continues to evolve, Go eBPF is poised to play a pivotal role in redefining the future of system monitoring, security, and performance optimization in the dynamic world of software development. If you're passionate about maximizing performance, enhancing observability, and safeguarding systems, it's high time you embark on the mesmerizing journey into the world of Go eBPF and uncover its boundless potential.
0 notes
keployio · 2 years ago
Text
Empowering Keployment with Go eBPF: The Ultimate Guide
Introduction
In today's fast-paced world of IT and cloud computing, deploying and managing applications is a crucial task. The ability to adapt to changing conditions and ensure top-notch performance and security is vital. Enter eBPF (extended Berkeley Packet Filter), a groundbreaking technology that, when paired with the Go programming language, opens up new frontiers for your deployment needs. In this article, we'll delve into the world of Go eBPF and explore how it can help you "keploy" your applications with unmatched confidence. We'll also provide practical examples of Go eBPF code to demonstrate its capabilities.
Understanding eBPF
eBPF, originally designed for packet filtering, has grown into a versatile framework that allows you to extend and customize the Linux kernel in unprecedented ways. It enables the attachment of small programs to various hooks within the kernel, enabling real-time inspection, modification, and filtering of network packets, system calls, and more. eBPF's flexibility has resulted in a wide range of applications, including monitoring, security, networking, and performance optimization.
The Power of Go
Go, often referred to as Golang, is a statically typed, compiled language developed by Google. Renowned for its simplicity, efficiency, and comprehensive standard library, Go is a popular choice for building scalable, high-performance applications. Its support for concurrent programming, combined with a strong focus on simplicity and efficiency, makes it an excellent language for developing networking tools and applications.
Go eBPF: A Potent Alliance
The synergy between Go and eBPF is a game-changer for creating, deploying, and managing applications. Here's how Go eBPF can revolutionize your deployment process:
Enhanced Performance: Go's efficiency and concurrent programming capabilities make it ideal for managing eBPF programs that analyze, optimize, and filter data in real-time. This ensures that your applications run smoothly and efficiently.
Security and Monitoring: eBPF offers powerful tools for network and system monitoring, and Go can be used to build user-friendly interfaces for visualizing the collected data. This is crucial for maintaining a secure and compliant deployment environment.
Real-time Responsiveness: eBPF enables real-time responses to network events and system issues. Go's speed and simplicity allow developers to build and deploy solutions that react to changing conditions, guaranteeing high availability and performance.
Cross-Platform Compatibility: Go's ability to compile code for multiple platforms and eBPF's integration with the Linux kernel make it possible to create cross-platform networking solutions that can be keployed across various cloud providers.
Keployment with Confidence
As a developer or system administrator, the concept of "keployment" encapsulates the idea of continuously deploying, managing, and optimizing your applications. Here's how Go eBPF empowers you to keploy your applications with confidence:
Dynamic Load Balancing: With Go eBPF, you can implement dynamic load balancing strategies that distribute incoming traffic evenly across multiple servers. This ensures high availability and optimal performance, while the dynamic nature allows you to adapt to changing traffic patterns.
Auto-Scaling: Go eBPF helps you build auto-scaling solutions that automatically adjust the number of server instances based on real-time demand. This means your deployment can handle fluctuations in user activity without manual intervention.
Distributed Monitoring: eBPF, when paired with Go, allows you to create distributed monitoring solutions that provide real-time insights into your infrastructure's health. Detect anomalies and address issues before they impact your users.
Security and Compliance: eBPF's capabilities for inspecting and filtering network traffic and system calls, along with Go's flexibility, enable you to build custom security monitoring and compliance tools. These tools help you ensure your application's security and adherence to regulatory requirements.
Customization: The flexibility of Go and eBPF empowers you to tailor your deployment to your specific needs. You can create custom modules and extensions that address the unique challenges of your application.
Practical Examples of Go eBPF
Let's dive into some practical examples of how Go eBPF can be applied to enhance your deployment strategy:
Dynamic Load Balancing:
package main
import "fmt"
func main() {
    // Go eBPF code to implement dynamic load balancing
    fmt.Println("Dynamic Load Balancing code goes here.")
}
Auto-Scaling:
package main
import "fmt"
func main() {
    // Go eBPF code for auto-scaling
    fmt.Println("Auto-Scaling code goes here.")
}
Distributed Monitoring:
package main
import "fmt"
func main() {
    // Go eBPF code for distributed monitoring
    fmt.Println("Distributed Monitoring code goes here.")
}
Security and Compliance:
package main
import "fmt"
func main() {
    // Go eBPF code for security and compliance
    fmt.Println("Security and Compliance code goes here.")
}
Custom Modules:
package main
import "fmt"
func main() {
    // Go eBPF code for creating custom modules
    fmt.Println("Custom Modules code goes here.")
}
These code snippets serve as a starting point for implementing Go eBPF in your deployment strategy. You can tailor and expand these examples to meet the specific needs of your application.
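For a slightly more concrete starting point, here is a hedged sketch of the monitoring case: a Go program that periodically reads a packet counter maintained by an eBPF program. The object file monitor.o and the map name pkt_count are assumptions for illustration, and attaching the kernel program to a hook (XDP, kprobe, or tracepoint) is omitted for brevity.
package main
import (
    "fmt"
    "log"
    "time"
    "github.com/cilium/ebpf"
)
func main() {
    // Load a precompiled BPF object (hypothetical monitor.o).
    coll, err := ebpf.LoadCollection("monitor.o")
    if err != nil {
        log.Fatalf("loading collection: %v", err)
    }
    defer coll.Close()
    // Look up a (hypothetical) map named pkt_count that the kernel side updates.
    counters := coll.Maps["pkt_count"]
    for range time.Tick(time.Second) {
        var key uint32   // single-entry array map, index 0
        var value uint64 // packets counted so far
        if err := counters.Lookup(&key, &value); err != nil {
            log.Fatalf("reading map: %v", err)
        }
        fmt.Printf("packets seen: %d\n", value)
    }
}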
Conclusion
In the rapidly evolving world of application deployment, Go eBPF emerges as a game-changer. It empowers developers and system administrators to "keploy" applications with confidence, leveraging dynamic load balancing, auto-scaling, distributed monitoring, security, and customization. The practical examples provided here demonstrate the power and flexibility of Go eBPF, offering a glimpse into the possibilities it unlocks for your deployment needs. As you continue to evolve your application infrastructure, consider the advantages of Go eBPF for seamless, efficient, and secure keployment.
0 notes
andmaybegayer · 2 years ago
Text
I had the idea to set up a tally light for my screen recorder. I was about to do it with external hardware when I remembered that for some bizarre reason the back of my motherboard has like four RGB LEDs, and that's the side that faces me. It's supported in OpenRGB, so a moment of scripting and systemd timers later and it automatically flips to red when my screen recorder is on, and back to orange when it's off.
It's not a perfect system. I'm sure there's a better way to watch for the recorder process starting than just having a systemd timer run a script every couple of seconds. There's probably some kind of internal notification I could hook into. eBPF probably has a solution to this. I don't know how to write eBPF. This is relevant to my dayjob too so I should probably learn eBPF.
Probably the correct-correct thing would be to have a service get inotified on a pidfile for the recorder. I don't know if it has a pidfile but I could patch it in easily. Or maybe watch /proc? Never thought about this before. If I'm patching the recorder I could just make it run openrgb but then if it crashes it might not flip back. I'd have to put actual work into teardown.
The developer does actually recommend setting the recorder up as a systemd service so I mean there's that option too, you can just get systemd to handle it then.
Systemd does some fuzzy matching with timers to try and batch executions together to reduce power consumption. It's quite a long period by default so the timer was taking forever to go off. You have to override it with AccuracySec=X in the timer to force it to execute within X seconds of the intended time. This is a desktop so the extra power is probably less than changing the brightness on my display.
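For reference, a timer override along those lines might look roughly like this; the unit name and pairing with a service that runs the script are made up for the example, not taken from the post above.
# tally-light.timer (hypothetical unit, paired with a tally-light.service that runs the script)
[Unit]
Description=Poll the screen recorder and update the tally light

[Timer]
OnBootSec=10
# Re-run a few seconds after the previous run finishes.
OnUnitActiveSec=5
# Override systemd's default batching window so the timer fires within 1 second.
AccuracySec=1s

[Install]
WantedBy=timers.target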
Things like this also make me want to start using execline. sh is so much.
4 notes · View notes
newspagesonline · 5 months ago
Text
Yandex’s High-Performance Profiler Is Going Open Source
Perforator is a continuous profiling system developed by Yandex that is now open-source. Perforator fills a critical gap in open-source profiling, enabling simplified automatic program optimization. The source code is licensed under MIT (with GPL for eBPF programs) and runs on x86-64 Linux.
0 notes
aiwikiweb · 5 months ago
Text
Streamline Your API Testing with Keploy's AI-Powered Test Generation
In modern software development, ensuring robust API functionality is crucial. Keploy is an open-source, AI-driven platform that automates the generation of API test cases and data mocks, enabling developers to achieve up to 90% test coverage within minutes. By capturing real user interactions, Keploy simplifies backend testing, accelerates release cycles, and enhances application reliability.
Main Content:
Keploy integrates seamlessly into your development workflow without requiring code changes. It records API calls and database queries during normal application usage, then generates test cases and mocks/stubs based on this data. During testing, Keploy replays these interactions, allowing you to validate API behavior and identify issues early in the development process.
Key Features:
Automated Test Generation: Creates test cases and data mocks from recorded API interactions, reducing manual effort.
High Test Coverage: Achieves up to 90% test coverage rapidly, ensuring comprehensive validation of your APIs.
Code-less Integration: Implements seamlessly without requiring modifications to your existing codebase.
eBPF Instrumentation: Utilizes eBPF technology to capture network interactions efficiently, supporting various programming languages and frameworks.
CI/CD Pipeline Integration: Works with testing libraries like JUnit, PyTest, Jest, and Go-Test, facilitating combined test coverage gating in CI/CD pipelines.
Benefits:
Enhanced Productivity: Automates the creation of test cases, allowing developers to focus on feature development and optimization.
Improved Reliability: Ensures APIs function as intended by validating behavior against real user interactions.
Accelerated Release Cycles: Speeds up testing processes, enabling quicker deployments and time-to-market.
Cost Efficiency: Reduces the need for extensive manual testing, lowering development costs.
Discover how Keploy can revolutionize your API testing process.
Visit: [ aiwikiweb.com/keploy/ ]
0 notes
fahrni · 10 months ago
Text
Saturday Morning Coffee
Good morning from Charlottesville, Virginia! ☕️
I still get a bit lost in my new gig — at WillowTree — as a React Native/TypeScript dev. The syntax is making more sense and getting easier to follow, but I do have a difficult time understanding the errors produced by yarn ts:check. It's the same each time I learn a new language.
I’m also developing an interest in Rust. That’ll have to be a part time interest for a long time I suppose. I have more important business to attend to. 😃
Onward!
Filipe Espósito • 9to5Mac
Shareshot is an iOS app that transforms how you share iPhone and iPad screenshots
A friend of mine, Marc Palmer, is part of the duo who created Shareshot! It is, as always, absolutely beautiful, full featured, and stable.
If I’m not too lazy moving forward I should use it to make screenshots for Stream blog posts and the like.
Congratulations, Marc! 🥳
Andrew Carter • WillowTree Blog
Mobile app interactivity, multimodal voice technology, and AI are all converging with Apple Intelligence — Apple’s new artificial intelligence feature set announced at this year’s WWDC, coming soon with iOS 18 (maybe in October). And the secret sauce powering those awesome interactions is something called App Intents.
Andrew is pretty legendary in the halls of WillowTree. So damned smart and witty, and he plays a mean fiddle and banjo.
Anywho, go give his piece on App Intents a gander, you might learn a thing or two.
Kelly Crandall • Racer
Austin Dillon has been stripped of the NASCAR Cup Series playoff eligibility that came with his victory at Richmond Raceway.
Austin Dillon looked great all night. I don't recall how many laps he led, but it was a lot. He was two laps short of victory when a late caution came out.
On the restart he was beaten off the line by Joey Logano and fell into second place.
I wanted to see Mr. Dillon win so badly. He hasn't had a win in a couple of years and Richard Childress Racing needed one, but the way he did it was not great.
He kept the win but was stripped of his points and playoff berth. They should've disqualified him and given the win to Logano, if I'm being honest about my feelings.
Scharon Harding • Ars Technica
Sonos is laying off about 100 people, the company confirmed on Wednesday. The news comes as Sonos is expecting to spend $20 to $30 million in the short term to repair the damage from its poorly received app update.
It’s incredible how much an app redesign can make or break an application or company.
Another critically acclaimed podcasting app called Overcast was also redesigned and released recently. It too has had a very difficult time with its subscribers. Lots of one star reviews and hate.
Rewrites can kill companies. Don’t do it. Evolve your code over time. Think of it as a Ship of Theseus.
Tasha Robinson • Polygon
Ryan Reynolds had very specific tech (and humor) requirements for Wolverine’s corpse
I still haven’t seen the new Deadpool but I really want to. Deadpool’s obsession with Wolverine is funny as heck and I’m here for it. Ryan Reynolds and Hugh Jackman are hysterical.
Juan José López Jaimez and Meador Inge • Google Bug Hunters
In a throwback to the past, this blog post takes us on a journey back to a time when eBPF was a focal point in the realm of kernel security research. In this update, we recount the discovery of CVE-2023-2163, a vulnerability within the eBPF verifier, what our root-cause analysis process looked like, and what we did to fix the issue.
Fresh off the heels of the Crowdstrike fiasco we get a story of how Google engineers found vulnerabilities in a Linux technology that allows for similar extensions to the OS. Similar in desired outcome, not in implementation.
Matthias Endler
Quite a few websites are unusable by now because they got “optimized for Chrome.” Microsoft Teams, for example, and the list is long. These websites fail for no good reason.
Chrome has definitely become the new Internet Explorer in a way. Devs have become lazy and don’t code for the open web, they’re coding against a specific browser. Not good. 🤦🏻‍♂️
Stan Alcorn • Rest of World
How Spotify started — and killed — Latin America’s podcast boom
What Spotify has done is not podcasting if it doesn’t allow any podcast player to subscribe to a feed. That’s part of what makes a podcast a podcast. What they’ve done is something that needs a new name.
Lately I’ve heard some podcasts announce ad free versions available on Apple Podcasts, which is also just as bad as Spotify’s locked up audio thing.
Please, don’t do this, keep your podcast a podcast and find a better way to create subscriptions. Others have done it. You can too.
Patreon
Apple is requiring that Patreon switch to their iOS in-app purchase system starting this November, or risk being removed from the App Store. Here’s what’s coming, and what you can do about it.
My opinion on this is simple.
If they really believe in creators Patreon should abandon their iOS App in favor of a really great mobile experience on their website.
Liam Proven • The Register
Before WordPerfect, the most popular word processor was WordStar. Now, the last ever DOS version has been bundled and set free by one of its biggest fans.
It’s not surprising how many fans of WordStar exist. Many of them are novelists and columnists. The best of the best writers in the world. Of course they’re most likely of a certain vintage, if you know what I mean? 😂
I started as a BASIC programmer and used WordStar as my editor until I discovered Brief. True story.
David Edwards • Raw Story
Judge Chutkan faces call to seize Trump’s passport after threat to flee to Venezuela
Can Judge Chutkan do the opposite and encourage Trump to move to Venezuela, now? That would solve a lot of problems with the upcoming election and help preserve democracy.
It would be a great service to the country. 🇺🇸
Rex Huppke • USA TODAY
Trump rambles, slurs his way through Elon Musk interview. It was an unmitigated disaster.
I listened to it for a few minutes and the Orange Man sounded like Sylvester the cat!
Sufferin’ Suckatash! 😋
0 notes
computingpostcom · 3 years ago
Text
If you’re running a Kubernetes cluster in the AWS Cloud using Amazon EKS, the default Container Network Interface (CNI) plugin for Kubernetes is amazon-vpc-cni-k8s. With this CNI plugin, your Kubernetes pods have the same IP address inside the pod as they do on the VPC network. The problem with this CNI is the large number of VPC IP addresses required to run and manage huge clusters, which is why other CNI plugins such as Calico are an option. Calico is a free-to-use and open source networking and network security plugin that supports a broad range of platforms including Docker EE, OpenShift, Kubernetes, OpenStack, and bare metal services. Calico offers true cloud-native scalability and delivers blazing fast performance. With Calico you have the option to use either Linux eBPF or the Linux kernel’s highly optimized standard networking pipeline to deliver high-performance networking. For multi-tenant Kubernetes environments where isolation of tenants from each other is key, Calico network policy enforcement can be used to implement network segmentation and tenant isolation. You can easily create network ingress and egress rules to ensure proper network controls are applied to services.
Install Calico CNI plugin on Amazon EKS Kubernetes Cluster
These are the points to note before implementing the solution:
If using Fargate with Amazon EKS, Calico is not supported.
If you have rules outside of Calico policy, consider adding your existing iptables rules to your Calico policies to avoid having them overridden by Calico.
If you’re using security groups for pods, traffic to pods on branch network interfaces is not subject to Calico network policy enforcement and is limited to Amazon EC2 security group enforcement only.
Step 1: Setup EKS Cluster
I assume you have a newly created EKS Kubernetes cluster. Our guide can be used to deploy an EKS cluster: Easily Setup Kubernetes Cluster on AWS with EKS. Once the cluster is running, confirm it is available with eksctl:
$ eksctl get cluster -o yaml
- name: My-EKS-Cluster
  region: eu-west-1
Step 2: Delete AWS VPC networking Pods
Since in our EKS cluster we’re going to use Calico for networking, we must delete the aws-node daemon set to disable AWS VPC networking for pods.
$ kubectl delete ds aws-node -n kube-system
daemonset.apps "aws-node" deleted
Confirm all aws-node Pods have been deleted.
$ kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-6987776bbd-4hj4v   1/1     Running   0          15h
coredns-6987776bbd-qrgs8   1/1     Running   0          15h
kube-proxy-mqrrk           1/1     Running   0          14h
kube-proxy-xx28m           1/1     Running   0          14h
Step 3: Install Calico CNI on EKS Kubernetes Cluster
Download the Calico YAML manifest.
wget https://docs.projectcalico.org/manifests/calico-vxlan.yaml
Then apply the manifest to deploy the Calico CNI on the Amazon EKS cluster.
kubectl apply -f calico-vxlan.yaml
This is my deployment output showing all objects being created.
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Get the list of DaemonSets deployed in the kube-system namespace.
$ kubectl get ds calico-node --namespace kube-system
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         0       2            0           kubernetes.io/os=linux   14s
The calico-node DaemonSet should have the DESIRED number of pods in the READY state.
$ kubectl get ds calico-node --namespace kube-system
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         2       2            2           kubernetes.io/os=linux   48s
Running pods can be checked with kubectl as well.
$ kubectl get pods -n kube-system | grep calico
calico-node-bmshb                                     1/1   Running   0   4m7s
calico-node-skfpt                                     1/1   Running   0   4m7s
calico-typha-69f668897f-zfh56                         1/1   Running   0   4m11s
calico-typha-horizontal-autoscaler-869dbcdddb-6sx2h   1/1   Running   0   4m7s
Step 4: Create new nodegroup and delete old one
If you had nodes already added to your cluster, we’ll need to add another node group, then remove the old node groups and the machines in them. To create an additional nodegroup, use:
eksctl create nodegroup --cluster=<cluster-name> [--name=<nodegroup-name>]
List your clusters to get the cluster name:
$ eksctl get cluster
A node group can be created from the CLI or from a config file.
Create Node group from CLI:
eksctl create nodegroup --cluster <cluster-name> --name <nodegroup-name> --node-type <instance-type> --node-ami auto
To change the maximum number of Pods per node, add: --max-pods-per-node <number>
Example:
eksctl create nodegroup --cluster my-eks-cluster --name eks-ng-02 --node-type t3.medium --node-ami auto --max-pods-per-node 150
Create from configuration file – update the nodeGroups section. See below:
nodeGroups:
  - name: eks-ng-01
    labels:
      role: workers
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 80
    minSize: 2
    maxSize: 3
    privateNetworking: true
  - name: eks-ng-02
    labels:
      role: workers
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 80
    minSize: 2
    maxSize: 3
    privateNetworking: true
For managed node groups, replace nodeGroups with managedNodeGroups. When done, apply the configuration to create the node group.
eksctl create nodegroup --config-file=my-eks-cluster.yaml
Once the new nodegroup is created, delete the old one to cordon and migrate all pods.
eksctl delete nodegroup --cluster=<cluster-name> --name=<nodegroup-name>
Or from a config file:
eksctl delete nodegroup --config-file=my-eks-cluster.yaml --include=<nodegroup-name> --approve
If you check the nodes in your cluster, at first scheduling is disabled:
$ kubectl get nodes
NAME                                           STATUS                     ROLES   AGE     VERSION
ip-10-255-101-100.eu-west-1.compute.internal   Ready                              3m57s   v1.17.11-eks-cfdc40
ip-10-255-103-17.eu-west-1.compute.internal    Ready,SchedulingDisabled           15h     v1.17.11-eks-cfdc40
ip-10-255-96-32.eu-west-1.compute.internal     Ready                              4m5s    v1.17.11-eks-cfdc40
ip-10-255-98-25.eu-west-1.compute.internal     Ready,SchedulingDisabled           15h     v1.17.11-eks-cfdc40
After a few minutes they are deleted.
$ kubectl get nodes
NAME                                           STATUS   ROLES   AGE     VERSION
ip-10-255-101-100.eu-west-1.compute.internal   Ready            4m45s   v1.17.11-eks-cfdc40
ip-10-255-96-32.eu-west-1.compute.internal     Ready            4m53s   v1.17.11-eks-cfdc40
If you describe new Pods you should notice a change in their IP addresses:
$ kubectl describe pods coredns-6987776bbd-mvchx -n kube-system
Name:                 coredns-6987776bbd-mvchx
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 ip-10-255-101-100.eu-west-1.compute.internal/10.255.101.100
Start Time:           Mon, 26 Oct 2020 15:24:16 +0300
Labels:               eks.amazonaws.com/component=coredns
                      k8s-app=kube-dns
                      pod-template-hash=6987776bbd
Annotations:          cni.projectcalico.org/podIP: 192.168.153.129/32
                      cni.projectcalico.org/podIPs: 192.168.153.129/32
                      eks.amazonaws.com/compute-type: ec2
                      kubernetes.io/psp: eks.privileged
Status:               Running
IP:                   192.168.153.129
IPs:
  IP:  192.168.153.129
Controlled By:  ReplicaSet/coredns-6987776bbd
....
Step 5: Install calicoctl command line tool
calicoctl enables cluster users to read, create, update, and delete Calico objects from the command line interface. Run the commands below to install calicoctl.
Linux:
curl -s https://api.github.com/repos/projectcalico/calicoctl/releases/latest | grep browser_download_url | grep linux-amd64 | grep -v wait | cut -d '"' -f 4 | wget -i -
chmod +x calicoctl-linux-amd64
sudo mv calicoctl-linux-amd64 /usr/local/bin/calicoctl
macOS:
curl -s https://api.github.com/repos/projectcalico/calicoctl/releases/latest | grep browser_download_url | grep darwin-amd64 | grep -v wait | cut -d '"' -f 4 | wget -i -
chmod +x calicoctl-darwin-amd64
sudo mv calicoctl-darwin-amd64 /usr/local/bin/calicoctl
Next, read how to configure calicoctl to connect to your datastore.
0 notes
borgpsi · 3 years ago
Text
EXPLORING THE APPLICATIONS OF IOT WITH KUBERNETES
Kubernetes is a cloud-native application deployment platform. Because cloud applications are closely tied to our IoT devices and products, we need to design IoT applications with Kubernetes in mind.
Because of security, latency, autonomy, and cost, IoT analytics is shifting from the cloud to the edge. However, spreading and controlling loads to hundreds of edge nodes can be a difficult process. As a result, a lightweight production-grade solution, such as Kubernetes, is required to distribute and control the loads on edge devices.
WHAT EXACTLY IS KUBERNETES?
Kubernetes, often known as K8s, is a container-management system that helps application developers easily deploy, scale, and maintain cloud-native applications. Furthermore, containerization simplifies the lifecycle of cloud-native apps.
HOW DOES KUBERNETES WORK?
When we have a functional Kubernetes deployment, we usually refer to it as a cluster. A Kubernetes cluster may be divided into two parts: the control plane and the nodes.
Each node in Kubernetes is its own Linux environment; it might be a physical or a virtual machine. Kubernetes nodes run pods, which are made up of containers.
The control plane is primarily responsible for managing the cluster’s desired state, such as which applications are running and which container images they use. The worker machines (nodes) are in charge of actually executing the apps and workloads.
Kubernetes runs on top of an operating system, such as Linux, and communicates with pods of containers that operate on nodes.
The Kubernetes control plane receives instructions from an administrator (or DevOps team) and routes them to the worker machines.
This works together with a variety of services to determine which node is best suited for a particular task. Kubernetes then allocates the required resources and assigns the work to the pods on that node.
Kubernetes is a Greek term meaning “helmsman” or “pilot,” which inspired the container-ship analogy: the helmsman steers the ship. As a result, in the information technology domain Kubernetes is described as a captain or orchestrator of containers.
Consider Docker containers to be packing crates. Boxes that must go to the same location should be kept together and placed into the same shipping containers. Packing crates are Docker containers in this instance, while shipping containers are pods.
Kubernetes assigns IP addresses to Pods, whereas iptables allows users to regulate network traffic.
In newer Kubernetes networking stacks, iptables can be replaced by eBPF.
WHY IS LOAD BALANCE REQUIRED IN IOT APPLICATIONS?
The efficient and systematic distribution of network or application traffic across several hosts in a server farm is known as load balancing. Each load balancer sits between the backend servers and client devices. It takes incoming requests and then distributes them to any available server that can handle the request.
The most basic type of load balancing in Kubernetes is load distribution, which is simple to implement at the dispatch level. Kubernetes has two load distribution mechanisms, both of which operate through a feature called Kube-proxy, which maintains the virtual IPs used by services.
THE FORCE DRIVING THE ADOPTION OF CLOUD-NATIVE PLATFORMS LIKE KUBERNETES
Many firms are currently undergoing digital transformation. During this phase, their major goal is to transform the way they interact with their customers, suppliers, and partners. These firms are modernising their enterprise IT and OT systems by using advancements provided by technologies such as IoT platforms, IoT data analytics, and machine learning. They recognise that the difficulty of developing and deploying new digital goods necessitates the adoption of new development procedures. As a result, they turn to agile development and infrastructure solutions like Kubernetes.
Kubernetes has recently emerged as the most widely used standard container orchestration platform for cloud-native deployments. Kubernetes has emerged as the top choice for development teams looking to help their transition to new microservices architecture. It also contributes to the DevOps ethos of continuous integration (CI) and continuous deployment (CD).
Indeed, Kubernetes addresses many of the complicated issues that development teams face while developing and delivering IoT solutions. As a result, designing IoT applications with microservices has become popular.
WHY IS IT NECESSARY TO DEVELOP IOT APPLICATIONS WITH KUBERNETES?
KUBERNETES DEVOPS FOR IOT APP DEVELOPMENT
To fulfil the needs of consumers and the market, IoT systems must be able to offer new features and upgrades quickly. Kubernetes offers DevOps teams a standardised deployment process that enables them to test and deploy new services quickly and autonomously. Kubernetes enables zero-downtime deployments with rolling updates. Mission-critical IoT solutions, such as those used in essential industrial operations, may now be upgraded without interrupting processes or having a negative impact on customers and end users.
IN IOT APPLICATIONS, SCALABILITY
Scalability, defined as a system’s capacity to manage an increasing quantity of work efficiently by leveraging additional resources, remains a major challenge for many IoT systems and the developers who build them.
The capacity to manage and serve innumerable device connections, convey massive amounts of data, and deliver high-end services such as real-time analytics involves a deployment architecture that can dynamically scale up and down in response to IoT deployment demands. Kubernetes enables developers to autonomously scale up and down across various network clusters.
SYSTEM WITH HIGH AVAILABILITY
Many IoT solutions are classified as business/mission-critical systems that must be extremely dependable and available. As an example, an IoT solution crucial to a hospital’s emergency healthcare facility must be available at all times. Kubernetes provides developers with the tools they need to deploy highly available systems.
Kubernetes’ design also allows workloads to run independently of one another. They may also be restarted with little impact on end customers.
PROPER USE OF CLOUD RESOURCES
Kubernetes improves efficiency by making the most use of cloud resources. IoT cloud integration is often a set of linked services that handle, among other things, device connectivity and administration, data intake, data integration, analytics, and integration with IT and OT systems. These services will most likely be hosted on public cloud providers such as Amazon Web Services or Microsoft Azure.
As a result, when evaluating the total cost of maintaining and deploying these services, making the most use of cloud provider resources is crucial. Kubernetes puts an abstract layer on top of the underlying virtual machines. Instead of delivering a single service on a single VM, administrators may focus on spreading IoT services across the most suitable number of VMs.
DEPLOYMENT OF IOT EDGE
The deployment of IoT services to the edge network is a significant advancement in the IoT industry. To increase the responsiveness of a predictive maintenance solution, for example, it may be more efficient to place data analytics and machine learning services closer to the equipment being monitored.
When deploying distributed and federated IoT services, system administrators and developers face a new management challenge. Kubernetes, on the other hand, provides a unified platform for establishing edge IoT services. A new Kubernetes IoT Working Group is investigating how it may provide a uniform deployment architecture for IoT cloud and IoT Edge.
KUBERNETES FUTURE TRENDS FOR IOT APP DEVELOPMENT
KUBERNETES 2.0 IN PRODUCTION OPERATION
Companies in the production and manufacturing area are looking to further scale up workloads inside a Kubernetes cluster to satisfy different requirements after the success of deploying Kubernetes in production environments with impressive levels of agility and flexibility.
BOOM OF KUBERNETES-NATIVE SOFTWARE
The software that must be executed as part of the containers existed in the early days of Kubernetes, complete with functional purpose and architectural aspects. To properly exploit Kubernetes, however, we must modify and adjust it to our own needs. However, modifications are necessary to properly leverage Kubernetes’ benefits and better suit current operating models. Kubernetes has now progressed to the point where developers may create apps directly on the platform. As a result, in the next few years, Kubernetes will become increasingly important as a determinant of modern app architecture.
KUBERNETES AT THE EDGE
KubeEdge is a new project that will improve the administration and deployment capabilities of Kubernetes and Docker. It will also allow bundled apps to run seamlessly on devices or at the edge.
As a result, we are already seeing the Kubernetes community grow and advance fast. These developments have allowed for the development of cloud-native IoT systems that are scalable and dependable, as well as easily deployable in the most difficult circumstances.
PsiBorg creates Kubernetes based IoT applications for a wide range of IoT technologies and products.
This article was originally published here: BUILDING IOT APPLICATIONS WITH KUBERNETES
1 note · View note
knowasiak · 3 years ago
Text
Notes on BPF and eBPF
Today it was Papers We Love, my favorite meetup! Today Suchakra Sharma (@tuxology on twitter/github) gave a GREAT talk about the original BPF paper and recent work in Linux on eBPF. It really made me want to go write eBPF programs! The paper is The BSD Packet Filter: A New Architecture for User-level Packet Capture… Read the full story…
View On WordPress
0 notes
masaa-ma · 6 years ago
Text
KubeCon NA 2019: A Personal Roundup of Interesting Sessions
from https://qiita.com/go_vargo/items/c456cdae919a22dfbf31?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
A roundup of the KubeCon sessions I personally found interesting. Note: this is a selection, not an exhaustive list.
Multi Cluster / Huge Clusters / Scaling / Automation
A talk on Lyft's mechanism for automatically upgrading their Kubernetes cluster versions. They wrote a trigger-based Operator in Go built around two modes: Rotate (perform the upgrade) and Freeze (halt the upgrade). Upgrades normally run automatically; if a problem occurs, they switch to Freeze mode and handle it manually. Rotation works as a rolling update of Nodes: bring up new Nodes and delete the old ones.
At Airbnb, as the business grew, the Kubernetes cluster ballooned from 450 to 900 to 1,800 to 2,400 Nodes, so they split it into multiple clusters. To build multi-cluster, they define Cluster Types and provision Clusters as instances of those types. Behind the scenes they combine technologies such as Chef, Terraform, and Helm. They currently run 22 Cluster Types, 36 Clusters, and 7,000+ Nodes.
DNS becomes critical when thinking about multi-cloud and multi-cluster. You need to expose DNS and build a mechanism for service discovery across clusters. However, Kubernetes DNS has issues: it cannot be isolated per cluster resource, cannot be reconfigured dynamically, and does not support secure queries. This talk proposes solving those issues by placing Envoy as a proxy in front of the DNS layer.
Their current scale is staggering: 60+ clusters, 2,000+ Nodes, and 160,000+ Pods. They also make heavy use of PVCs and run many stateful workloads, with various tricks to handle that. For example, for their search service they apparently provide a CRD called Matrix deployment that provisions StatefulSets.
Another staggering scale: roughly 2,000 Nodes, 6 control-plane masters, and 25,000–30,000 Pods. They migrated from legacy EC2 Auto Scaling Groups to microservices on Kubernetes, and share problems specific to large scale that showed up in ARP, DNS, and load balancing.
As the title suggests, this session asks what is needed for infinite scaling, digging deep into two main themes: automatic cluster upgrades and multi-cluster deployment management.
It also included points like "Plan for failure! Failures happen. A single cluster is (by now) a single point of failure!", which made me think we may be heading into an era where multi-cluster is simply the norm.
A talk about JD.com developing a simulator (JoySim) to check the impact of deployments. They use it to estimate scaling and performance for huge Kubernetes clusters.
Service Mesh
Explains the use case of running Istio with VMs, using the example of connecting a Kubernetes cluster to a VM environment. It may fit cases where you want to bring a service mesh into a legacy environment to get the benefits of service discovery.
How Spotify migrated from their old Nginx + HAProxy architecture to Envoy, driven by pains around service discovery and operational cost in the old setup. It may be a useful reference when you want to introduce Envoy into an older environment as part of a service mesh (though they also say you should reduce the situations where you face such migrations if possible).
How they built a service mesh at a scale of 160 clusters, 6,600 Nodes, and 62,000 Pods. They evolved the mesh from a sharded control plane to multiple control planes to multiple control planes plus Admiral (their own OSS). Very encouraging content showing that a service mesh can be achieved at the cluster layer.
Stateful Application
How Lyft built a platform for stateful workloads. They also integrate an Operator that simplifies Flyte, their orchestrator for ML and data workloads.
CNCF Projects
A talk on Ingress going GA (v1) and the API concept that comes next (v2). v1: IngressClass, backend → defaultBackend, PathType, and so on. v2: splitting the API into GatewayClass, Gateway, Route, and Service resources (not finalized).
Covers everything about Helm 3's architecture changes and migration. Helm 3 only changes the architecture and infrastructure parts relative to Helm 2, so it can coexist with Helm 2, and a plugin is provided to migrate from Helm 2 to Helm 3.
Mainly about plugin configuration. Personally, the "onlyone" plugin caught my attention.
An advanced session covering DNSSEC, DNS over TLS, and other tricky techniques. I had no idea, but apparently there is an official CoreDNS repository for multi-cluster service discovery: https://github.com/coredns/multicluster-dns
Other
A summary of what you need to update a (web) service with zero downtime:
The entrypoint should handle signals
You may need STOPSIGNAL
Set different intervals for liveness and readiness probes
Add a sleep in preStop so the pod stops accepting new connections
Use apps/v1 Deployment
Warm up during the rolling update
Synchronize sidecar shutdown
An explanation of eBPF and Cilium (CNI)
0 notes
getwebhosting · 5 years ago
Text
Centos 8.1 now Available to Install.
New Post has been published on https://www.getwebhosting.co.uk/blog/2020/03/25/centos-8-1-now-available-to-install/
Centos 8.1 now Available to Install.
Centos 8.1 is now available to install on your SSD VPS Hosting and Dedicated Servers.
Highlights include kernel live patching, a new routing protocol stack called FRR which supports multiple IPv4 and IPv6 routing protocols, an extended version of the Berkeley Packet Filter (eBPF) to help sysadmins troubleshoot complex network issues, support for re-encrypting block devices in LUKS2 while the devices are in use, as well as a new tool for generating SELinux policies for containers called udica.
“With udica, you can create a tailored security policy for better control of how a container accesses host system resources, such as storage, devices, and network. This enables you to harden your container deployments against security violations and it also simplifies achieving and maintaining regulatory compliance.”
Updated components
CentOS Linux 8.1 also comes with additional FIPS-140 and Common Criteria certifications, XDP (eXpress Data Path) eBPF-based high performance data path as a Technology Preview, support for importing QCOW virtual images, and a new command-line tool in Identity Management called Healthcheck that helps users find issues, which could affect the reliability of their IdM environments.
Several packages and core components have received new version in CentOS Linux 8.1 (1911). Among these, we can mention the Tuned 2.12 system tuning tool, which brings support for CPU list negation, chrony 3.5 suite, which can now more accurately synchronize the system clock with hardware time stamping, as well as PHP 7.3, Ruby 2.6, Node.js 12, nginx 1.16, LLVM 8.0.1, Rust Toolset 1.37, and Go Toolset 1.12.8.
0 notes
stephenlibbyy · 5 years ago
Text
Linux Network Observability: Building Blocks
As developers, operators and devops people, we are all hungry for visibility and efficiency in our workflows. As Linux reigns the “Open-Distributed-Virtualized-Software-Driven-Cloud-Era”— understanding what is available within Linux in terms of observability is essential to our jobs and careers.
Linux Community and Ecosystem around Observability
More often than not, and depending on the size of the company, it’s hard to justify the cost of developing debug and tracing tools unless they’re for a product you are selling. Like any other Linux subsystem, the tracing and observability infrastructure and ecosystem continues to grow and advance due to mass innovation and the sheer advantage of distributed, accelerated development. Naturally, bringing native Linux networking to the open networking world makes these technologies readily available for networking.
There are many books and other resources available on Linux system observability today…so this may seem no different. This is a starter blog discussing some of the building blocks that Linux provides for tracing and observability with a focus on networking. This blog is not meant to be an in-depth tutorial on observability infrastructure but a summary of all the subsystems that are available today and constantly being enhanced by the Linux networking community for networking. Cumulus Networks products are built on these technologies and tools, but are also available for others to build on top of Cumulus Linux.
1. Netlink for networking events
Netlink is the protocol behind the core API for Linux networking. Apart from it being an API to configure Linux networking, Netlink’s most important feature is its asynchronous channel for networking events.
And here’s where it can be used for network observability: every networking subsystem that supports configuration via netlink (which they all do now, including the most recent ethtool in the latest kernels) also exposes to userspace a way to listen to that subsystem’s networking events. These are called netlink groups; a small example follows below.
You can use this asynchronous event bus to monitor link events, control plane, mac learn events and any and every networking event you want to chase!
The iproute2 project, maintained by the kernel developers, is the best netlink-based toolset for network config and visibility (and is packaged by all Linux distributions). There are also many open source netlink libraries for programmatically using netlink in your network applications (e.g. libnl and Python nlmanager, to name a few).
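As a hedged illustration (not from the original post), here is a minimal Go sketch using the vishvananda/netlink library to subscribe to link events, the same netlink group that `ip monitor link` listens to:
package main
import (
    "fmt"
    "log"
    "github.com/vishvananda/netlink"
)
func main() {
    updates := make(chan netlink.LinkUpdate)
    done := make(chan struct{})
    defer close(done)
    // Subscribe to RTNLGRP_LINK notifications from the kernel.
    if err := netlink.LinkSubscribe(updates, done); err != nil {
        log.Fatal(err)
    }
    // Print every link event as it arrives (interface up/down, renames, etc.).
    for u := range updates {
        fmt.Printf("link %s changed, oper state: %s\n",
            u.Link.Attrs().Name, u.Link.Attrs().OperState)
    }
}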
Though you can build your own tools using netlink for observability, there are many already available on any Linux system (and of course on Cumulus Linux): `ip monitor` and `bridge monitor`, or look for monitor options in any of the tools in the iproute2 networking package.
Apart from the monitor commands, iproute2 also provides commands to lookup a networking table (and these are very handy for tracing and implementing tracing tools):
e.g. `ip route get`, `ip route get fibmatch`, `bridge fdb get`, `ip neigh get`.
2. Linux perf subsystem
Perf is a Linux kernel based subsystem that provides a framework for all things performance analysis. Apart from hardware level performance counters it covers software tracepoints. These software tracepoints are the ones interesting for network event observability—of course you will use perf for packet performance, system performance, debugging your network protocol performance and other performance problems you will see on your networking box.
Coming back to tracepoints for a bit, tracepoints are static tracing hooks in kernel or user-space. Not only does perf allow you to trace these statically placed kernel tracepoints, it also allows you to dynamically add probes into kernel and user-space functions to trace and observe data as it flows through the code. This is my go-to tool for debugging a networking event.
Kernel networking code has many tracepoints for you to use. Use `perf list` to list all the events and grep for strings like ‘neigh’, ‘fib’, ‘skb’, ‘net’ for networking tracepoints. We have added a few to trace E-VPN control events.
For example, this is my go-to perf record line for E-VPN. It gives you visibility into the life cycle of neighbour, bridge and vxlan fdb entries:
perf record -e neigh:* -e bridge:* -e vxlan:*
And you can use this one to trace fib lookups in the kernel:
perf record -e fib:*
Perf trace is also a good tool to trace a process or command execution. It is similar to everyone’s favorite strace. Tools like strace and perf trace are useful when you have something failing and you want to know which syscall is failing.
Note that perf also provides a python interface which greatly helps in extending its capabilities and adoption.
3. Function Tracer or ftrace
Though this is called a function tracer, it is a framework for several other kinds of tracing. As the ftrace documentation suggests, one of the most common uses of ftrace is event tracing. Similar to perf, it allows you to dynamically trace kernel functions and is a very handy tool if you know your way around the networking stack (grepping for common Linux networking strings like net, bonding, bridge is good enough to get your way around using ftrace). There are many tutorials available on ftrace depending on how deep you want to go. There is also trace-cmd.
4. Linux auditing subsystem
The Linux kernel auditing subsystem exists to track critical system events. It is mostly related to security events, but it can also act as a good way to observe and monitor your Linux system. For networking systems it allows you to trace network connections, changes in network configuration, and syscalls.
Auditd, a userspace daemon, works with the kernel for policy management, filtering and streaming records. There are many blogs and documents that cover the details of how to use the audit system. It comes with very handy tools, autrace and ausearch, to trace and search for audit events. Autrace is similar to strace. The audit subsystem also has an iptables target that allows you to record network packets as audit records.
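A couple of hedged examples of what that looks like in practice (standard auditd tooling; the specific command and addresses are made up for illustration, not quoted from the original post):
# Trace a single command and record its syscalls as audit events
autrace /bin/ping -c1 192.0.2.1
# Search recent audit records for syscall events from that command
ausearch --start recent -m SYSCALL -c ping
# Record matching packets as audit events via the iptables AUDIT target
iptables -A INPUT -p tcp --dport 22 -j AUDIT --type accept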
5. Systemd
Systemd might seem odd here but I rely on systemd tools a lot to chase my networking problems. Journalctl is your friend when you want to look for critical errors from your networking protocol daemons/services.
Eg: Check whether your networking service has already given you hints about something failing: journalctl -u networking
Systemd timers can help you set up periodic monitoring of your networking state, and tools like systemd-analyze can provide visibility into control plane protocol service dependencies at boot and shutdown for convergence times.
6. eBPF
Though eBPF is the most powerful of all the above, it’s listed last here just because you can use eBPF with all of the observability tools/subsystems above. Just google the name of your favorite Linux observability technology or tool with eBPF and you will find an answer :).
Linux kernel eBPF has brought programmability to all Linux kernel subsystems, and tracing and observability is no exception. There are several blogs, books and other resources available to get started with eBPF. The Linux eBPF maintainers have done an awesome job bringing eBPF mainstream (with Documentation, libraries, tutorials, helpers, examples etc).
There are many interesting, readily available observability tools built on eBPF. Bpftrace is everyone’s favorite.
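As a small, hedged taste of the kind of one-liner bpftrace enables (my example, not from the original post), this prints every program exec’d on the system along with the process that launched it:
bpftrace -e 'tracepoint:syscalls:sys_enter_execve { printf("%s -> %s\n", comm, str(args->filename)); }'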
eBPF takes all the events described in the previous sections one step further by providing the ability to dynamically insert code at these trace points.
eBPF is also used for container network observability and policy enforcement (Cilium project is a great example).
A reminder in closing
All these tools are available at your disposal on Cumulus Linux by default or at Cumulus and Debian repos!
Linux Network Observability: Building Blocks published first on https://wdmsh.tumblr.com/
0 notes