#kubernetes network policy tutorial
Boost Kubernetes Security with Network Policies & Service Mesh Integration
Securing Kubernetes Clusters with Network Policies and Service Mesh

Introduction

Securing Kubernetes clusters is a critical aspect of ensuring the reliability, scalability, and security of containerized applications. Network policies and service mesh are two key technologies that help achieve this goal by controlling network traffic and communication between pods and services. In this tutorial,…
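As a taste of the kind of policy the full tutorial builds toward, here is a minimal sketch of a default-deny NetworkPolicy. It is not taken from the post itself, and the namespace name my-app is a hypothetical placeholder:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app        # hypothetical namespace, used only for illustration
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                 # with no rules listed, all ingress and egress is denied

Pods in the namespace can then communicate only once more specific allow policies are layered on top.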
Mastering OpenShift Clusters: A Comprehensive Guide for Streamlined Containerized Application Management
As organizations increasingly adopt containerization to enhance their application development and deployment processes, mastering tools like OpenShift becomes crucial. OpenShift, a Kubernetes-based platform, provides powerful capabilities for managing containerized applications. In this blog, we'll walk you through essential steps and best practices to effectively manage OpenShift clusters.
Introduction to OpenShift
OpenShift is a robust container application platform developed by Red Hat. It leverages Kubernetes for orchestration and adds developer-centric and enterprise-ready features. Understanding OpenShift’s architecture, including its components like the master node, worker nodes, and its integrated CI/CD pipeline, is foundational to mastering this platform.
Step-by-Step Tutorial
1. Setting Up Your OpenShift Cluster
Step 1: Prerequisites
Ensure you have a Red Hat OpenShift subscription.
Install oc, the OpenShift CLI tool.
Prepare your infrastructure (on-premise servers, cloud instances, etc.).
Step 2: Install OpenShift
Use the OpenShift Installer to deploy the cluster:

openshift-install create cluster --dir=mycluster
Step 3: Configure Access
Log in to your cluster using the oc CLI:

oc login -u kubeadmin -p $(cat mycluster/auth/kubeadmin-password) https://api.mycluster.example.com:6443
2. Deploying Applications on OpenShift
Step 1: Create a New Project
A project in OpenShift is similar to a namespace in Kubernetes:

oc new-project myproject
Step 2: Deploy an Application
Deploy a sample application, such as an Nginx server:

oc new-app nginx
Step 3: Expose the Application
Create a route to expose the application to external traffic:

oc expose svc/nginx
3. Managing Resources and Scaling
Step 1: Resource Quotas and Limits
Define resource quotas to control the resource consumption within a project:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi

Apply the quota:

oc create -f quota.yaml
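To round out the "Limits" half of this step, here is a hedged sketch of a LimitRange that applies per-container defaults in the same project. The specific CPU and memory values are illustrative assumptions, not values from the original guide:

apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: myproject
spec:
  limits:
  - type: Container
    default:            # limits applied when a container specifies none
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # requests applied when a container specifies none
      cpu: 100m
      memory: 128Mi

Apply it the same way, for example with oc create -f limits.yaml.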
Step 2: Scaling Applications
Scale your deployment to handle increased load:

oc scale deployment/nginx --replicas=3
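Manual scaling works for predictable load; as an optional complement (not part of the original steps), a HorizontalPodAutoscaler can adjust replicas automatically. The replica bounds and CPU threshold below are illustrative assumptions:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: myproject
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU crosses 70%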
Expert Best Practices
1. Security and Compliance
Role-Based Access Control (RBAC): Define roles and bind them to users or groups to enforce the principle of least privilege.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myproject
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]

oc create -f role.yaml
oc create rolebinding developer-binding --role=developer --user=[email protected] -n myproject
Network Policies: Implement network policies to control traffic flow between pods.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: myproject
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}

oc create -f networkpolicy.yaml
2. Monitoring and Logging
Prometheus and Grafana: Use Prometheus for monitoring and Grafana for visualizing metrics (see the ServiceMonitor sketch after this list).

oc new-project monitoring
oc adm policy add-cluster-role-to-user cluster-monitoring-view -z default -n monitoring
oc apply -f https://raw.githubusercontent.com/coreos/kube-prometheus/main/manifests/setup
oc apply -f https://raw.githubusercontent.com/coreos/kube-prometheus/main/manifests/
ELK Stack: Deploy Elasticsearch, Logstash, and Kibana for centralized logging.

oc new-project logging
oc new-app elasticsearch
oc new-app logstash
oc new-app kibana
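Referring back to the Prometheus and Grafana item above: once kube-prometheus is running, applications are typically scraped via a ServiceMonitor. The sketch below is a hedged example; the app label, port name, and target namespace are hypothetical and not from the original post:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-metrics
  namespace: monitoring
spec:
  namespaceSelector:
    matchNames:
    - myproject              # hypothetical namespace where the app's Service lives
  selector:
    matchLabels:
      app: nginx             # assumed label on the Service exposing metrics
  endpoints:
  - port: metrics            # assumed name of the Service port serving /metrics
    interval: 30s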
3. Automation and CI/CD
Jenkins Pipeline: Integrate Jenkins for CI/CD to automate the build, test, and deployment processes.

oc new-app jenkins-ephemeral
oc create -f jenkins-pipeline.yaml
OpenShift Pipelines: Use OpenShift Pipelines, which is based on Tekton, for advanced CI/CD capabilities.

oc apply -f https://raw.githubusercontent.com/tektoncd/pipeline/main/release.yaml
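To make the Tekton-based approach concrete, here is a minimal, hedged sketch of a Tekton Task; the task name, base image, and echoed message are assumptions for illustration and not part of the original guide:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-build
  namespace: myproject
spec:
  steps:
  - name: say-hello
    image: registry.access.redhat.com/ubi8/ubi-minimal   # assumed base image
    script: |
      #!/bin/sh
      echo "Running a pipeline step"

A Pipeline would then reference tasks like this one, and OpenShift Pipelines takes care of running them in the cluster.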
Conclusion
Mastering OpenShift clusters involves understanding the platform's architecture, deploying and managing applications, and implementing best practices for security, monitoring, and automation. By following this comprehensive guide, you'll be well on your way to efficiently managing containerized applications with OpenShift.
For more details, visit www.qcsdclabs.com
Kubernetes Network Policies Tutorial for DevOps Engineers, Beginners, and Students
Hi, a new #video on #kubernetes #networkpolicy is published on #codeonedigest #youtube channel. Learn #kubernetesnetworkpolicy #node #docker #container #cloud #aws #azure #programming #coding with #codeonedigest @java #java #awscloud @awscloud @AWSCloudI
In a Kubernetes cluster, by default, any pod can talk to any other pod without restriction, so we need a Network Policy to control the traffic flow. The Network Policy resource allows us to restrict the ingress and egress traffic to and from pods. A Network Policy is a standardized Kubernetes object for controlling network traffic between Kubernetes pods, namespaces, and the cluster. However, Kubernetes…
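To ground the idea, here is a minimal sketch of such a policy; the namespace and labels are hypothetical, chosen only for illustration. It allows ingress to backend pods solely from frontend pods in the same namespace, implicitly blocking all other ingress to the selected pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo                 # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend                # the policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend           # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080                  # assumed backend port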

A Service Mesh provides a uniform way to connect, secure, and monitor microservice applications in your OpenShift / Kubernetes container environment. A mesh can be described as a network of microservices that make up applications in a distributed microservice architecture. This tutorial will walk you through the steps for installing Istio Service Mesh on an OpenShift 4.x cluster.

Red Hat OpenShift Service Mesh is based on the open source Istio project. It makes it easy to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring.

Features of Istio Service Mesh

Traffic Management – Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions.
Service Identity and Security – Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness.
Policy Enforcement – Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code.
Telemetry – Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.

Components of Istio Service Mesh

The Istio service mesh is split into a control plane and a data plane.

Control plane components:

Pilot – It configures the Envoy sidecar proxies at runtime.
Mixer – It enforces access control and usage policies. It is also responsible for collection of telemetry data from the Envoy proxy and other services.
Citadel – For certificates management – issuing and rotation.
Galley – This ingests the service mesh configuration, then validates, processes, and distributes the configuration.

Data plane:

The data plane is composed of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices. They also collect and report telemetry on all mesh traffic. Envoy built-in features include:

Dynamic service discovery
Load balancing
TLS termination
HTTP/2 and gRPC proxies
Circuit breakers
Health checks
Staged rollouts with %-based traffic split
Fault injection
Rich metrics

Red Hat OpenShift Service Mesh also provides more complex operational functions, including:

A/B testing
Canary releases
Rate limiting
Access control
End-to-end authentication

Install Istio Service Mesh on OpenShift 4.x

Now follow the next few steps to install and configure Red Hat OpenShift Service Mesh, which is based on Istio. The istio-operator will be used to manage the installation of the Istio control plane.

Step 1: Install the Elasticsearch Operator

The Elasticsearch Operator enables you to configure and manage an Elasticsearch cluster for tracing and logging with Jaeger.

Log in to the OpenShift Container Platform web console and navigate to Operators > OperatorHub > Search "Elasticsearch Operator".
Click "Install". Select All namespaces on the cluster (default) for the installation mode and the automatic approval strategy.
Click Subscribe to initiate the installation.

Step 2: Install the Jaeger Operator

Jaeger lets you perform tracing to monitor and troubleshoot transactions in complex distributed systems.
Navigate to Operators > OperatorHub > Search "Jaeger Operator".
Click "Continue" and select the remaining settings, then Subscribe.

Step 3: Install the Kiali Operator

Kiali enables you to view configurations, monitor traffic, and view and analyze traces in a single console. To install it, search for "Kiali Operator" on OperatorHub. Select the installation mode, update channel, and approval strategy. All three operators should now be installed.
Step 4: Install the Red Hat OpenShift Service Mesh Operator

Once the Jaeger, Kiali, and Elasticsearch operators are installed, proceed to install the Istio Service Mesh Operator provided by Red Hat.

Navigate to Operators > OperatorHub > Red Hat OpenShift Service Mesh.
Select All namespaces on the cluster (default) to install the Service Mesh Operator in the openshift-operators project.
Click Install and choose the stable update channel with the Automatic approval strategy.

The operator should be visible in the openshift-operators project.

Step 5: Configure the Service Mesh control plane

We can now deploy the Service Mesh control plane, which defines the configuration for the control plane installation.

Create a new project: Home > Projects > Create Project. Name the project istio-system. Creating the project automatically switches you to the new project in OpenShift.

Navigate to Operators > Installed Operators > Istio Service Mesh Control Plane and click Create ServiceMeshControlPlane.

A default ServiceMeshControlPlane template is provided in YAML format. Modify it to fit your use case. You can refer to the Customization guide for more details. I customized my configuration to look like the one below.

NOTE: Please don't COPY PASTE this configuration – it includes tolerations for running Istio services on infra nodes with taints. It may not work for you!!

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: full-install
  namespace: istio-system
spec:
  istio:
    global:
      proxy:
        accessLogFile: "/dev/stdout"
      mtls:
        enabled: false
      disablePolicyChecks: true
      policyCheckFailOpen: false
      outboundTrafficPolicy:
        mode: "REGISTRY_ONLY"
    gateways:
      istio-ingressgateway:
        autoscaleEnabled: true
        ior_enabled: true
      istio-egressgateway:
        autoscaleEnabled: true
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: infra
        value: reserved
        effect: NoSchedule
      - key: infra
        value: reserved
        effect: NoExecute
    mixer:
      enabled: true
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: infra
        value: reserved
        effect: NoSchedule
      - key: infra
        value: reserved
        effect: NoExecute
    kiali:
      enabled: true
      dashboard:
        viewOnlyMode: false
      ingress:
        enabled: true
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: infra
        value: reserved
        effect: NoSchedule
      - key: infra
        value: reserved
        effect: NoExecute
    grafana:
      enabled: true
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: infra
        value: reserved
        effect: NoSchedule
      - key: infra
        value: reserved
        effect: NoExecute
    tracing:
      enabled: true
      jaeger:
        template: all-in-one

Click "Create" and the control plane should start installing.

You can check the status of the control plane installation from the CLI:

$ oc get smcp -n istio-system

You can watch the progress of the pods as they are created:

$ oc get pods -n istio-system -w

Step 6: Configure the Service Mesh member roll

The projects that belong to the control plane are listed in a ServiceMeshMemberRoll. You need to create a ServiceMeshMemberRoll resource named default in the istio-system project.

Switch to the istio-system project: Home > Projects > istio-system.
Then navigate to Operators > Installed Operators > Red Hat OpenShift Service Mesh > Istio Service Mesh Member Roll.
Under ServiceMeshMemberRolls, click Create ServiceMeshMemberRoll.
Add the projects you want to be part of the Istio service mesh and click "Create".

From the CLI, the ServiceMeshMemberRoll resource can be updated after creation:
$ oc edit smmr -n istio-system

Step 7: Deploy applications with automatic sidecar injection

To deploy your applications into the Service Mesh, you must opt in to injection by specifying the sidecar.istio.io/inject annotation with a value of "true". See the example below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent

For pre-existing applications in a project added as a member of the control plane, you can update the pod template in the deployment by adding or modifying an annotation:

$ oc patch deployment/<deployment-name> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}'

You can learn more by going through the Deploy Bookinfo scenario.
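As a concrete illustration of the traffic-management features listed earlier (canary releases and %-based traffic splits), here is a hedged sketch of an Istio DestinationRule and VirtualService. The reviews service and its v1/v2 subsets are assumptions for illustration and are not part of the tutorial above:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90            # 90% of traffic stays on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10            # 10% canary traffic goes to v2

Shifting the weights gradually moves traffic from v1 to v2 without changing application code.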
Performing analytics on Amazon Managed Blockchain
Data analytics is critical in making strategic decisions in your organization. You use data analytics in forecasting inventory levels, quarterly sales predictions, risk modelling, and more. With Amazon Managed Blockchain, blockchain goes further by providing trusted analytics, in which user activity is verifiable, secure, and immutable for all permissioned users on a network.

Managed Blockchain follows an event-driven architecture. We can open up a wide range of analytic approaches by streaming events to Amazon Kinesis. For instance, we could analyze events in near-real time with Kinesis Data Analytics, perform petabyte-scale data warehousing with Amazon Redshift, or use the Hadoop ecosystem with Amazon EMR. This allows us to use the right approach for every blockchain analytics use case.

In this post, we show you one approach that uses Amazon Kinesis Data Firehose to capture, monitor, and aggregate events into a dataset, and analyze it with Amazon Athena using standard SQL. After setting up the system, we show a SQL query in a simple banking use case to generate a ranking of accounts whose transactions exceed $1M, but with an exclusion policy that cross-references an allowed "off-chain" database.

Solution overview

The following diagram illustrates an event-driven architecture that streams data from a peer node through a series of services that prepare it for analytics.

Prerequisites

This post assumes that you've already built and deployed a Managed Blockchain application, and want to extend its capabilities to include analytics. If you're just getting started, our Track-and-Trace Blockchain Workshop or the tutorial Build and deploy an application for Hyperledger Fabric on Amazon Managed Blockchain can teach you how to build a Managed Blockchain application step by step. This post focuses on work that's critical in preparing a production application before you launch and go to market.

To complete this walkthrough, you need the following prerequisites:

An AWS account
An existing Managed Blockchain application configured with Hyperledger Fabric
The following node.js libraries: aws-sdk (>= 2.580), fabric-client (^ 1.4)
Basic knowledge of node.js and JavaScript, writing chaincode, setting up users with fabric-ca-client, and setting up any one of the Amazon compute services, such as the following:
  Amazon Elastic Compute Cloud (Amazon EC2)
  Amazon Elastic Container Service (Amazon ECS)
  Amazon Elastic Kubernetes Service (Amazon EKS)

Defining and sending events

Blockchain can be viewed as a new type of shared database without a central authority. Users interact via programs called smart contracts (or chaincode). You can query or invoke a chaincode on each call, or subscribe and receive streaming data under a single connection. Blockchain networks send events for external programs to consume when activity occurs on the network.

In Fabric, events are produced as new blocks are added to a channel's ledger. You can process new blocks as one of three event types: block, transaction, or chaincode. The event types compose each other. A block event is a deeply nested structure that contains a set of transactions and metadata. The transaction event composes a user-defined payload that makes up the chaincode event. Chaincode events are useful in analytics applications because they can include arbitrary information. This enables us to expose specific behaviors or values that we can use in off-chain analytics applications.
The following example code shows how a user-defined payload is included in a chaincode event and sent out in a simple banking application involving payment:

const shim = require('fabric-shim');

const LocalBank = class {
  // ...details omitted...

  async Invoke(stub) {
    const { fcn, params } = stub.getFunctionAndParameters();
    if (fcn === 'Pay') {
      const resp = await this.Pay(stub, ...params);
      return shim.success(resp);
    }
    return shim.error(`Unable to call function, ${fcn}, because not found.`);
  }

  async Pay(stub, sender, recipient, fromBank, toBank, amount) {
    // ...payment logic here...
    const eventName = 'bank-payments';
    const payload = { sender, recipient, fromBank, toBank, amount };
    const serialized = Buffer.from(JSON.stringify(payload));
    stub.setEvent(eventName, serialized);
  }
}

shim.start(new LocalBank());

After the contract is deployed, you can run any compute service (such as Amazon EC2, Amazon ECS, or Amazon EKS) to register your external programs to process events coming in. These programs act as event listeners. Unlike what the name may suggest, a listener isn't actively polling for events; it's merely a function or method that is subscribed to an event. When the event occurs, the listener method gets called. This way, there's no cost until the event actually occurs and we begin processing. In this post, we show an example of this using a node.js program.

Processing events

To process events, we can use the libraries available under the Fabric SDK. The Fabric SDK for Node.js has three libraries to interact with a peer node on a channel:

fabric-network – The recommended API for applications where only submitting transactions and querying a peer node is enough.
fabric-ca-client – Used to manage users and their certificates.
fabric-client – Used to manage peer nodes on the channel and monitor events. We use this library to process events.

To batch events into a dataset for analytics, we use the AWS SDK to access the Kinesis Data Firehose APIs for aggregation into Amazon S3. Each library provides an object interface (ChannelEventHub and Firehose, respectively) to perform the operation. In the next few sections, we show you sample code that walks through three major steps. When complete, your architecture should look like the following diagram.

In this diagram, main represents our node.js program that's receiving chaincode events from peer nodes on the channel. We use two libraries, aws-sdk and fabric-client, to import and instantiate two classes, Firehose and ChannelEventHub, to do the heavy lifting.

Initializing APIs from existing libraries

To access the ChannelEventHub, your node.js client must be able to interact with the network channel. You can do so through the Fabric client by loading a connection profile. You can either use your existing connection profile (the one used for your end-user application) or create a new one that specifies which peer nodes to listen on. The following code demonstrates how to set up your program with a logical gateway and extract the ChannelEventHub. To authorize our client, you need to register and enroll a Fabric user with the permissions to access the network channel; otherwise getUserContext returns empty.
You can do so with the fabric-ca-client CLI, or use fabric-client as in the following code:

const client = require('fabric-client');

async function getChannelEventHub() {
  client.loadFromConfig('path/to/connection-profile.yaml');
  await client.initCredentialStores();
  await client.getUserContext(username, true);
  const channel = client.getChannel();
  const eventHub = channel.getChannelEventHubsForOrg()[0];
  return eventHub;
}

In your connection profile, you must ensure that there are peers defined that can perform the eventSource role. See the following code:

channels:
  my-channel:
    peers:
      peer1.localbank.example.com:
        # [Optional]. Is this peer used as an event hub? All peers can produce
        # events. Default: true
        eventSource: true

In addition, we need access to Kinesis Data Firehose to deliver event streams to Amazon S3. We create another helper function to provide the API:

const aws = require('aws-sdk');

function getKinesisDataFirehose() {
  return new aws.Firehose({ apiVersion: '2015-08-04', region: 'us-east-1' });
}

Now, on to the main function.

Establishing connection

In the next code example, we call on the ChannelEventHub to connect with the peer nodes on our network channel and specify what data we want to ingest. In Fabric, the client must specify whether to receive full or filtered blocks, which informs the peer node whether to drop or keep the payload field. For our use case, we want to ingest full blocks to process the payment details. See the following code:

async function main() {
  const events = await getChannelEventHub();
  const kinesis = getKinesisDataFirehose();

  events.connect({ full_block: true });

  // ...step 3...
}

Registering a chaincode event listener

Next, we transform the chaincode event into our desired input format. For this use case, we store the data in JSON format in Amazon S3, which Athena accepts as input. See the following code:

async function main() {
  // ...step 2...

  const ccname = 'LocalBank';
  const eventname = 'bank-payments';

  events.registerChaincodeEvent(ccname, eventname,
    (event, blocknum, txid, status) => {
      const serialized = event['payload'];
      const payload = JSON.parse(serialized);

      const input = {
        DeliveryStreamName: eventname,
        Record: {
          Data: JSON.stringify(payload) + "\n",
        }
      };

      kinesis.putRecord(input, (err, data) => {
        if (err) {
          console.log(`Err: ${err}`);
          return;
        }
        console.log(`Ok: ${data}`);
      });
    },
    (err) => {
      console.log(`Failed to register: ${err}.`);
    });
}

Using ChannelEventHub, we register an event listener that fires a callback function when a bank-payments event is sent from the LocalBank chaincode. If your listener experiences downtime and you want to process missed events, you can replay blocks by reconfiguring the connection parameters. For more information, see fabric-client: How to use the channel-based event service.

The chaincode event structure looks like the following code:

{
  chaincode_id: 'LocalBank',
  tx_id: 'a87efa9723fb967d60b7258445873355cfd6695d2ee5240d6d6cd9ea843fcb0d',
  event_name: 'bank-payments',
  payload:  // omitted if a 'filtered block'
}

To process the payload, we need to reverse the operations it was encoded with. Then we format the data into an input for Kinesis Data Firehose to ingest. Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the destination, we use a newline (\n) as the delimiter. Finally, we send records as they come using putRecord() and provide success and failure callbacks.
Creating a Kinesis Data Firehose delivery stream

We can use Kinesis Data Firehose to deliver streaming events to an Amazon S3 destination without needing to write applications or manage resources. You can complete these steps either via the Kinesis console or the AWS Command Line Interface (AWS CLI). For this post, we show you the process on the console.

On the Kinesis console, under Data Firehose, choose Create delivery stream.
For Delivery stream name, enter bank-payments.
For Source, select Direct PUT or other sources (we use the PUT APIs from the SDK).
As an optional choice, enable server-side encryption for source records in the delivery stream.
Choose Next.
As an optional choice, enable data transformation or record format conversion, if necessary.
Choose Next.
For Destination, select Amazon S3.
For S3 bucket, enter or create a new S3 bucket to deliver our data to (for this post, we use bank-payments-analytics).
As an optional choice, demarcate your records with prefixes.
Choose Next.
Configure buffering, compression, logging, and AWS Identity and Access Management (IAM) role settings for your delivery stream.
Choose Next.
Choose Create delivery stream.

If all goes well, you should be able to invoke Pay transactions and see the traffic on the Kinesis Data Firehose console (choose the stream and look on the Monitoring tab). In addition, you should see the objects stored in your Amazon S3 destination.

Analyzing data with Athena

With data to work with, we can now analyze it using Athena by running ANSI SQL queries. SQL compatibility allows us to use a well-understood language that integrates with a wide range of tools, such as business intelligence (BI) and reporting. With Athena, we can also join the data we obtained from Managed Blockchain with off-chain data to gain insights.

To begin, we need to register Amazon S3 as our data source and specify the connection details.

On the Athena console, choose Data sources.
Choose Query data in Amazon S3 for where the data is stored.
Choose AWS Glue Data Catalog as the metadata catalog.
Choose Next.
For Connection Details, choose Add a table. Enter your schema information. For large-scale data, it's worth considering setting up a crawler instead.
For Database, choose Create a new database. Name your database LocalBank.
For Table Name, enter bank_payments.
For Location of Input Data Set, enter s3://bank-payments-analytics.
Choose Next.
For Data Format, choose JSON.
For Column Name and Column type, define the schema for your data. For this post, we create the following columns with the string column type:
  sender
  recipient
  fromBank
As an optional choice, add a partition (or virtual column) to your database.
Choose Create table.

Before you run your first query, you may need to set up a query result location in Amazon S3. For instructions, see Working with Query Results, Output Files, and Query History.

When that's complete, on the Athena Query Editor, you can write SQL queries and run them against your Amazon S3 data source. To show an example, let's imagine we have an off-chain relational database (connected to Athena) called whitelist that contains an accounts table with column id matching those found in our bank_payments table. Our goal is to generate a ranking of accounts whose transactions exceed $1M, but that also excludes those accounts listed in whitelist.
The following code is the example query:

SELECT B.sender AS account, B.amount
FROM localbank.bank_payments AS B
LEFT OUTER JOIN whitelist.accounts AS W
  ON B.sender = W.id
WHERE W.id IS null
  AND B.amount > 1000000
ORDER BY B.amount DESC;

To produce the collection of records only in localbank.bank_payments but not in whitelist.accounts, we perform a left outer join, then exclude the records we don't want from the right side with a WHERE clause. In a left outer join, a complete set of records is produced from bank_payments, with matching records (where available) in accounts. If there is no match, the right side contains null. The following screenshot shows our results.

Summary

This post demonstrated how to build analytics for your Managed Blockchain data. We can easily capture, monitor, and aggregate the event stream with Kinesis Data Firehose, and use Athena to analyze the dataset using standard SQL.

Realistically, you need the ability to merge multiple data sources into a consolidated stream or single query. Kinesis Data Firehose provides additional features to run data transformations via AWS Lambda, where additional calls can be made (and with error handling). An analyst can also use federated queries in Athena to query multiple data sources (other than Amazon S3) with a single query. For more information, see Query any data source with Amazon Athena's new federated query.

To visualize data, Amazon QuickSight also provides easy integration with Athena that you can access from any device and embed into your application, portals, and websites. The following screenshot shows an example of a QuickSight dashboard.

Please share your experiences of building on Managed Blockchain and any questions or feature requests in the comments section.

About the authors

Kel Kanhirun is a Blockchain Architect at AWS based in Seattle, Washington. He's passionate about helping creators build products that people love and has been in the software industry for over 5 years.

Dr. Jonathan Shapiro-Ward is a Principal Solutions Architect at AWS based in Toronto. Jonathan has been with AWS for 3.5 years and in that time has worked helping customers solve problems including petabyte scale analytics and the adoption of ML, serverless, and blockchain. He has spoken at events across the world where he focused on areas of emerging technology. Jonathan has previously worked in a number of industries including gaming and fintech and has a background in academic research. He holds a PhD in distributed systems from the University of St Andrews.

https://aws.amazon.com/blogs/database/performing-analytics-on-amazon-managed-blockchain/
Tutorial: Calico Network Policies with Azure Kubernetes Service
http://bit.ly/2VNe9mM
Enabling Multi-Tenancy in Kubernetes Using Namespaces and Network Policies
Introduction

Enabling Multi-Tenancy in Kubernetes Using Namespaces and Network Policies is a crucial aspect of deploying scalable and secure applications in a cloud-native environment. Multi-tenancy allows multiple independent applications or tenants to share the same Kubernetes cluster, each with their own set of resources and networking policies. In this tutorial, we will guide you through…
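As a rough sketch of what the full tutorial builds toward (the tenant name is hypothetical), each tenant gets its own namespace plus a policy that keeps traffic inside that namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a              # hypothetical tenant namespace
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-tenant-a
  namespace: tenant-a
spec:
  podSelector: {}             # applies to every pod of the tenant
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}         # allow traffic only from pods in the same namespace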
Simplifying Network Policies with Kubernetes Calico and OpenShift
Introduction

Simplifying Network Policies with Kubernetes Calico and OpenShift is a crucial topic for developers and administrators working with containerized applications. As containerization becomes increasingly popular, ensuring network security, isolation, and compliance has become a significant challenge. This tutorial will guide you through the process of setting up and using Kubernetes…
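As a small, hedged illustration of what a Calico-flavoured policy looks like (the namespace, labels, and port are assumptions, not from the post), Calico's own NetworkPolicy CRD extends the Kubernetes model with richer selectors and explicit actions:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-web-ingress
  namespace: web               # hypothetical namespace
spec:
  selector: app == 'web'       # Calico label-selector syntax
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: role == 'frontend'
    destination:
      ports:
      - 8080                   # assumed application port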