#install jenkins on openshift
Enterprise Kubernetes Storage with Red Hat OpenShift Data Foundation (DO370)
In the era of cloud-native transformation, data is the fuel powering everything from mission-critical enterprise apps to real-time analytics platforms. However, as Kubernetes adoption grows, many organizations face a new set of challenges: how to manage persistent storage efficiently, reliably, and securely across distributed environments.
To solve this, Red Hat OpenShift Data Foundation (ODF) emerges as a powerful solution — and the DO370 training course is designed to equip professionals with the skills to deploy and manage this enterprise-grade storage platform.
🔍 What is Red Hat OpenShift Data Foundation?
OpenShift Data Foundation is an integrated, software-defined storage solution that delivers scalable, resilient, and cloud-native storage for Kubernetes workloads. Built on Ceph and Rook, ODF supports block, file, and object storage within OpenShift, making it an ideal choice for stateful applications like databases, CI/CD systems, AI/ML pipelines, and analytics engines.
🎯 Why Learn DO370?
The DO370: Red Hat OpenShift Data Foundation course is specifically designed for storage administrators, infrastructure architects, and OpenShift professionals who want to:
✅ Deploy ODF on OpenShift clusters using best practices.
✅ Understand the architecture and internal components of Ceph-based storage.
✅ Manage persistent volumes (PVs), storage classes, and dynamic provisioning.
✅ Monitor, scale, and secure Kubernetes storage environments.
✅ Troubleshoot common storage-related issues in production.
🛠️ Key Features of ODF for Enterprise Workloads
1. Unified Storage (Block, File, Object)
Eliminate silos with a single platform that supports diverse workloads.
2. High Availability & Resilience
ODF is designed for fault tolerance and self-healing, ensuring business continuity.
3. Integrated with OpenShift
Full integration with the OpenShift Console, Operators, and CLI for seamless Day 1 and Day 2 operations.
4. Dynamic Provisioning
Simplifies persistent storage allocation, reducing manual intervention.
5. Multi-Cloud & Hybrid Cloud Ready
Store and manage data across on-prem, public cloud, and edge environments.
📘 What You Will Learn in DO370
Installing and configuring ODF in an OpenShift environment.
Creating and managing storage resources using the OpenShift Console and CLI (a sample claim is sketched after this list).
Implementing security and encryption for data at rest.
Monitoring ODF health with Prometheus and Grafana.
Scaling the storage cluster to meet growing demands.
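As a small taste of the hands-on side of the course, here is a minimal PersistentVolumeClaim against an ODF block storage class; the class name and namespace below are assumptions, so check oc get storageclass on your own cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: my-app                 # hypothetical namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ocs-storagecluster-ceph-rbd   # common ODF block class; verify on your cluster
  resources:
    requests:
      storage: 20Gi
Applying it with oc apply -f pvc.yaml lets ODF dynamically provision a Ceph-backed volume and bind it to the claim.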
🧠 Real-World Use Cases
Databases: PostgreSQL, MySQL, MongoDB with persistent volumes.
CI/CD: Jenkins with persistent pipelines and storage for artifacts.
AI/ML: Store and manage large datasets for training models.
Kafka & Logging: High-throughput storage for real-time data ingestion.
👨🏫 Who Should Enroll?
This course is ideal for:
Storage Administrators
Kubernetes Engineers
DevOps & SRE teams
Enterprise Architects
OpenShift Administrators aiming to become RHCA in Infrastructure or OpenShift
🚀 Takeaway
If you’re serious about building resilient, performant, and scalable storage for your Kubernetes applications, DO370 is the must-have training. With ODF becoming a core component of modern OpenShift deployments, understanding it deeply positions you as a valuable asset in any hybrid cloud team.
🧭 Ready to transform your Kubernetes storage strategy? Enroll in DO370 and master Red Hat OpenShift Data Foundation today with HawkStack Technologies – your trusted Red Hat Certified Training Partner. For more details, visit www.hawkstack.com
Enterprise Kubernetes Storage With Red Hat OpenShift Data Foundation (DO370)
Introduction
As enterprises embrace Kubernetes to power their digital transformation, one challenge stands out — persistent storage for dynamic, containerized workloads. While Kubernetes excels at orchestration, it lacks built-in storage capabilities for stateful applications. That’s where Red Hat OpenShift Data Foundation (ODF) comes in.
In this blog, we’ll explore how OpenShift Data Foundation provides enterprise-grade, Kubernetes-native storage that scales seamlessly across hybrid and multi-cloud environments.
🔍 What is OpenShift Data Foundation (ODF)?
OpenShift Data Foundation (formerly known as OpenShift Container Storage) is Red Hat’s software-defined storage solution built for Kubernetes. It’s deeply integrated into the OpenShift Container Platform and enables block, file, and object storage for stateful container workloads.
Powered by Ceph and NooBaa, ODF offers a unified data platform that handles everything from databases and CI/CD pipelines to AI/ML workloads — all with cloud-native agility.
🚀 How OpenShift Data Foundation Empowers Enterprise Workloads
ODF isn’t just a storage solution — it's a strategic enabler for enterprise innovation.
1. 🔄 Persistent Storage for Stateful Applications
Containerized workloads like PostgreSQL, Jenkins, MongoDB, and Elasticsearch require storage that persists across restarts and deployments. ODF offers dynamic provisioning of persistent volumes using standard Kubernetes APIs — no manual intervention required.
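As an illustration of what that looks like in practice, the sketch below requests ODF-backed storage for a single-replica PostgreSQL instance through a volumeClaimTemplate; the image, credentials, and storage class name are assumptions rather than values from this post:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: registry.redhat.io/rhel9/postgresql-15   # assumed image
          env:
            - name: POSTGRESQL_USER
              value: demo
            - name: POSTGRESQL_PASSWORD
              value: demo-password        # use a Secret in real deployments
            - name: POSTGRESQL_DATABASE
              value: demo
          volumeMounts:
            - name: data
              mountPath: /var/lib/pgsql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ocs-storagecluster-ceph-rbd   # assumed ODF block class
        resources:
          requests:
            storage: 10Gi
Each replica gets its own dynamically provisioned PersistentVolume, which survives pod restarts and rescheduling.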
2. 🔐 Enterprise-Grade Security and Compliance
ODF ensures your data is always protected:
Encryption at rest and in transit
Role-based access control (RBAC)
Integration with Kubernetes secrets
These features help meet compliance requirements such as HIPAA, GDPR, and SOC 2.
3. ⚙️ Automation and Scalability at Core
OpenShift Data Foundation supports automated storage scaling, self-healing, and distributed storage pools. This makes it easy for DevOps teams to scale storage with workload demands without reconfiguring the infrastructure.
4. 🌐 True Hybrid and Multi-Cloud Experience
ODF provides a consistent storage layer whether you're on-premises, in the cloud, or at the edge. You can deploy it across AWS, Azure, GCP, or bare metal environments — ensuring portability and resilience across any architecture.
5. Developer and DevOps Friendly
ODF integrates natively with Kubernetes and OpenShift:
Developers can request storage via PersistentVolumeClaims (PVCs)
DevOps teams get centralized visibility through Prometheus metrics and OpenShift Console
Built-in support for CSI drivers enhances compatibility with modern workloads
Real-World Use Cases
Databases: MySQL, MongoDB, Cassandra
CI/CD Pipelines: Jenkins, GitLab Runners
Monitoring & Logging: Prometheus, Grafana, Elasticsearch
AI/ML Pipelines: Model training and artifact storage
Hybrid Cloud DR: Backup and replicate data across regions or clouds
How to Get Started with ODF in OpenShift
1. Prepare your OpenShift cluster: ensure a compatible OpenShift 4.x cluster is up and running.
2. Install the ODF Operator: use the OperatorHub inside the OpenShift Console (a CLI sketch follows these steps).
3. Create a storage cluster: configure your StorageClass, backing stores, and nodes.
4. Deploy stateful apps: define PersistentVolumeClaims (PVCs) in your Kubernetes manifests.
5. Monitor performance and usage: use the OpenShift Console and Prometheus for real-time visibility.
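The console flow in step 2 can also be driven from the CLI. A minimal sketch, assuming the usual openshift-storage namespace and the odf-operator package from the Red Hat catalog (verify the package name and channel against your OperatorHub):
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
    - openshift-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  name: odf-operator            # assumed package name; confirm with 'oc get packagemanifests -n openshift-marketplace'
  channel: stable-4.14          # assumed channel; match your OpenShift version
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Apply it with oc apply -f odf-operator.yaml, then create the StorageCluster from the console or via its custom resource.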
📌 Final Thoughts
In today’s enterprise IT landscape, storage must evolve with applications — and OpenShift Data Foundation makes that possible. It bridges the gap between traditional storage needs and modern, container-native environments. Whether you’re running AI/ML pipelines, databases, or CI/CD workflows, ODF ensures high availability, scalability, and security for your data.
For DevOps engineers, architects, and platform teams, mastering ODF means unlocking reliable Kubernetes-native storage that supports your journey to hybrid cloud excellence.
🔗 Ready to Build Enterprise-Ready Kubernetes Storage?
👉 Explore more on OpenShift Data Foundation:
Hawkstack Technologies
Mastering OpenShift Clusters: A Comprehensive Guide for Streamlined Containerized Application Management
As organizations increasingly adopt containerization to enhance their application development and deployment processes, mastering tools like OpenShift becomes crucial. OpenShift, a Kubernetes-based platform, provides powerful capabilities for managing containerized applications. In this blog, we'll walk you through essential steps and best practices to effectively manage OpenShift clusters.
Introduction to OpenShift
OpenShift is a robust container application platform developed by Red Hat. It leverages Kubernetes for orchestration and adds developer-centric and enterprise-ready features. Understanding OpenShift’s architecture, including its components like the master node, worker nodes, and its integrated CI/CD pipeline, is foundational to mastering this platform.
Step-by-Step Tutorial
1. Setting Up Your OpenShift Cluster
Step 1: Prerequisites
Ensure you have a Red Hat OpenShift subscription.
Install oc, the OpenShift CLI tool.
Prepare your infrastructure (on-premise servers, cloud instances, etc.).
Step 2: Install OpenShift
Use the OpenShift Installer to deploy the cluster:
openshift-install create cluster --dir=mycluster
Step 3: Configure Access
Log in to your cluster using the oc CLI:
oc login -u kubeadmin -p $(cat mycluster/auth/kubeadmin-password) https://api.mycluster.example.com:6443
2. Deploying Applications on OpenShift
Step 1: Create a New Project
A project in OpenShift is similar to a namespace in Kubernetes:
oc new-project myproject
Step 2: Deploy an Application
Deploy a sample application, such as an Nginx server:
oc new-app nginx
Step 3: Expose the Application
Create a route to expose the application to external traffic:
oc expose svc/nginx
3. Managing Resources and Scaling
Step 1: Resource Quotas and Limits
Define resource quotas to control the resource consumption within a project:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
Apply the quota:
oc create -f quota.yaml
Step 2: Scaling Applications
Scale your deployment to handle increased load:
oc scale deployment/nginx --replicas=3
Expert Best Practices
1. Security and Compliance
Role-Based Access Control (RBAC): Define roles and bind them to users or groups to enforce the principle of least privilege.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myproject
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
Create the role and bind it to a user:
oc create -f role.yaml
oc create rolebinding developer-binding --role=developer --user=[email protected] -n myproject
Network Policies: Implement network policies to control traffic flow between pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: myproject
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}
Apply it:
oc create -f networkpolicy.yaml
2. Monitoring and Logging
Prometheus and Grafana: Use Prometheus for monitoring and Grafana for visualizing metrics.
oc new-project monitoring
oc adm policy add-cluster-role-to-user cluster-monitoring-view -z default -n monitoring
oc apply -f https://raw.githubusercontent.com/coreos/kube-prometheus/main/manifests/setup
oc apply -f https://raw.githubusercontent.com/coreos/kube-prometheus/main/manifests/
ELK Stack: Deploy Elasticsearch, Logstash, and Kibana for centralized logging.
oc new-project logging
oc new-app elasticsearch
oc new-app logstash
oc new-app kibana
3. Automation and CI/CD
Jenkins Pipeline: Integrate Jenkins for CI/CD to automate the build, test, and deployment processes (a sketch of a possible jenkins-pipeline.yaml follows this list).
oc new-app jenkins-ephemeral
oc create -f jenkins-pipeline.yaml
OpenShift Pipelines: Use OpenShift Pipelines, which is based on Tekton, for advanced CI/CD capabilities.
oc apply -f https://raw.githubusercontent.com/tektoncd/pipeline/main/release.yaml
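The jenkins-pipeline.yaml file referenced above is not shown in the post. One plausible shape for it, sketched here as an assumption, is a BuildConfig using the JenkinsPipeline build strategy (deprecated in newer OpenShift releases in favor of OpenShift Pipelines) with an inline Jenkinsfile:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-pipeline
  namespace: myproject
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps {
                sh 'echo building the application'
              }
            }
            stage('Deploy') {
              steps {
                sh 'echo deploying with oc rollout or oc new-app'
              }
            }
          }
        }
A run can then be started with oc start-build sample-pipeline; the OpenShift Sync plugin in the jenkins-ephemeral instance picks the BuildConfig up as a Jenkins pipeline job.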
Conclusion
Mastering OpenShift clusters involves understanding the platform's architecture, deploying and managing applications, and implementing best practices for security, monitoring, and automation. By following this comprehensive guide, you'll be well on your way to efficiently managing containerized applications with OpenShift.
For more details, visit www.qcsdclabs.com
Deploy Jenkins on OpenShift cluster | Install Jenkins on OpenShift (video)
Channel: https://www.youtube.com/channel/UCnIp4tLcBJ0XbtKbE2ITrwA?sub_confirmation=1&app=desktop
About:
00:00 Deploy Jenkins on an OpenShift cluster – Install Jenkins on OpenShift. In this video we learn how to deploy Jenkins on an OpenShift cluster and how to access the Jenkins instance once it is installed. Red Hat is the world's leading provider of enterprise open source solutions, including high-performing Linux, cloud, container, and Kubernetes technologies. Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. OpenShift 4 is a cloud-based container platform used to build, deploy, and test applications in the cloud; later videos explore OpenShift 4 in detail.
https://www.facebook.com/codecraftshop/
https://t.me/codecraftshop/
Please like and subscribe to the CODECRAFTSHOP YouTube channel. Follow us on Facebook, Instagram, and Twitter at @CODECRAFTSHOP.
In the previous article, we discussed how to set up a self-hosted Gitea private repository on a Kubernetes cluster. This article discusses how to install a Jenkins server on a Kubernetes/OpenShift cluster. Kubernetes/OpenShift adds an additional automation layer on top of Jenkins, which makes sure that the resources allocated to the Jenkins deployment are used efficiently. With an orchestration layer such as Kubernetes/OpenShift, resources can be scaled up or down depending on consumption. This article covers the available methods of setting up a Jenkins server on a Kubernetes/OpenShift cluster.

Install Jenkins on Kubernetes using Helm 3
In this guide we assume that you have a fully functional Kubernetes/OpenShift cluster and access to the control plane, either natively or remotely. Helm is a package manager for Kubernetes/OpenShift that packages deployments in a format called a chart. The installation of Helm 3 is covered in the article Install and Use Helm 3 on Kubernetes Cluster.

Step 1. Create the Jenkins namespace
Create the jenkins namespace that will be used for this deployment:
kubectl create ns jenkins
Once you have Helm 3 installed, add the Jenkins repo to your environment:
$ helm repo add jenkinsci https://charts.jenkins.io
$ helm repo update

Step 2. Create a persistent volume
Once the Jenkins repo is added, we need to configure a persistent volume, since Jenkins is a stateful application and needs to store persistent data on a volume.

Creating a persistent volume from a host path
Create a PersistentVolume on your Kubernetes cluster:
$ vim jenkins-localpv.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: jenkins-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  labels:
    type: local
spec:
  storageClassName: jenkins-sc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  storageClassName: jenkins-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Apply the configuration:
kubectl apply -f jenkins-localpv.yaml
The above creates a PersistentVolume and a PersistentVolumeClaim using hostPath. The volume data will be stored under the /mnt path of the node.

Dynamic persistent volume creation using a StorageClass
If you have a StorageClass provided by a custom storage solution, create a new file called jenkins-pvc.yaml:
vim jenkins-pvc.yaml
Modify the configuration below to provide your StorageClass name:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: storage-class-name
  resources:
    requests:
      storage: 10Gi
Then apply the configuration after modification:
kubectl apply -f jenkins-pvc.yaml

Using OpenEBS storage
You can use dynamic storage provisioning with tools such as OpenEBS to provision StorageClasses. For dynamic storage, create a StorageClass config:
$ vim jenkins-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jenkins-sc
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "2"
      - name: StoragePool
        value: gpdpool
provisioner: openebs.io/provisioner-iscsi
Apply the configuration:
kubectl apply -f jenkins-sc.yaml
Create a PersistentVolume and PersistentVolumeClaim based on the above StorageClass:
$ vim dynamic-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  labels:
    type: local
spec:
  storageClassName: jenkins-sc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/jenkins-volume"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  storageClassName: jenkins-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Apply the configuration:
kubectl apply -f dynamic-pv.yaml
More about persistent volumes on Kubernetes/OpenShift is covered in the article Deploy and Use OpenEBS Container Storage on Kubernetes.

Step 3. Create a service account
Create a service account for the pods to communicate with the API server. We will also create the ClusterRole and the permissions.
kubectl apply -f -
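The post breaks off before the Jenkins release itself is installed. For completeness, with the jenkinsci repo and the jenkins-pvc claim created above, the remaining Helm step typically looks something like the sketch below; the value names follow the upstream jenkinsci/jenkins chart and may differ between chart versions:
# install the chart into the jenkins namespace, reusing the PVC created earlier
helm install jenkins jenkinsci/jenkins \
  --namespace jenkins \
  --set persistence.existingClaim=jenkins-pvc \
  --set controller.serviceType=ClusterIP

# print the generated admin password (path used by recent chart versions)
kubectl exec -n jenkins -it svc/jenkins -c jenkins -- \
  /bin/cat /run/secrets/additional/chart-admin-password
On OpenShift you would then expose the controller with a Route (oc expose svc/jenkins) instead of a LoadBalancer service.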
A brief overview of Jenkins X
What is Jenkins X?
Jenkins X is an open-source solution that provides automated, seamless continuous integration and continuous delivery (CI/CD) and automated testing tools for cloud-native applications on Kubernetes. It supports all major cloud platforms, such as AWS, Google Cloud, IBM Cloud, Microsoft Azure, Red Hat OpenShift, and Pivotal. Jenkins X is a Jenkins sub-project (more on this later) and employs automation, DevOps best practices, and tooling to accelerate development and improve overall CI/CD.
Features of Jenkins X
Automated CI/CD:
Jenkins X offers a sleek jx command-line tool, which allows Jenkins X to be installed inside an existing or new Kubernetes cluster, import projects, and bootstrap new applications. Additionally, Jenkins X creates pipelines for the project automatically.
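As a rough illustration, the classic jx workflow looks something like the following; exact subcommands and flags vary between Jenkins X versions, so treat this as a sketch rather than a reference:
# install Jenkins X into an existing Kubernetes/OpenShift cluster
jx install

# import an existing project; jx detects the language and generates a pipeline for it
jx import

# or bootstrap a brand-new application from a quickstart template
jx create quickstart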
Environment Promotion via GitOps:
Jenkins X allows for the creation of different virtual environments for development, staging, and production, etc. using the Kubernetes Namespaces. Every environment gets its specific configuration, list of versioned applications and configurations stored in the Git repository. You can automatically promote new versions of applications between these environments if you follow GitOps practices. Moreover, you can also promote code from one environment to another manually and change or configure new environments as needed.
Extensions:
It is quite possible to create extensions to Jenkins X. An extension is nothing but a code that runs at specific times in the CI/CD process. You can also provide code through an extension that runs when the extension is installed, uninstalled, as well as before and after each pipeline.
Serverless Jenkins:
Instead of running the Jenkins web application, which continually consumes a lot of CPU and memory resources, you can run Jenkins only when you need it. During the past year, the Jenkins community has created a version of Jenkins that can run classic Jenkins pipelines via the command line with the configuration defined by code instead of the usual HTML forms.
Preview Environments:
Though the preview environment can be created manually, Jenkins X automatically creates Preview Environments for each pull request. This provides a chance to see the effect of changes before merging them. Also, Jenkins X adds a comment to the Pull Request with a link for the preview for team members.
How does Jenkins X work?
The developer commits and pushes the change to the project’s Git repository.
JX is notified and runs the project’s pipeline in a Docker image. This includes the project’s language and supporting frameworks.
The project pipeline builds, tests, and pushes the project’s Helm chart to Chart Museum and its Docker image to the registry.
The project pipeline creates a PR with changes needed to add the project to the staging environment.
Jenkins X automatically merges the PR to Master.
Jenkins X is notified and runs the staging pipeline.
The staging pipeline runs Helm, which deploys the environment, pulling Helm charts from Chart Museum and Docker images from the Docker registry. Kubernetes creates the project’s resources, typically a pod, service, and ingress.
About the Bank
The Customer is an international banking group, with around 86,000 employees and a 150-year history in some of the world’s most dynamic markets. Although they are based in London, the vast majority of their customers and employees are in Asia, Africa and the Middle East. The company is a leader in the personal, consumer, corporate, institutional and treasury segments.
Challenge: To Provide an Uninterrupted Customer Experience
The Bank wanted to stay ahead of the competition. The only way to succeed in today’s digital world is to deliver services faster to customers, so they needed to modernize their IT infrastructure. As part of a business expansion, entering eight additional markets in Africa and providing virtual banking services in Hong Kong, they needed to roll out new retail banking services. The new services would enhance customer experience, improve efficiency, and build a “future proof” retail bank.
Deploying these new services created challenges that needed to be overcome quickly, or risk delaying the entry into the new markets.
Sluggish Deployments for Monolithic Applications
The bank was running monolithic applications on aging Oracle servers, located in Hong Kong and the UK, that served Africa, the Middle East, and South Asia. Each upgrade forced significant downtime across all regions that prevented customers from accessing their accounts. This was not true for the bank’s competitors, and it threatened to become a major source of customer churn.
Need for Secured Continuous Delivery Platform
As part of the bank’s digital transformation, they decided to move many services to a container-based infrastructure. They chose Kubernetes and Red Hat OpenShift as their container environment. To take advantage of the ability to update containers quickly, they also decided to move to a continuous delivery (CD) model, enabling updates without downtime. Their existing deployment tool was unsuitable for the new environment.
Of course, strict security of the platform and the CD process was an absolute requirement. Additionally, the bank required easy integration to support a broad range of development and CI tools and a high performance solution capable of scaling to the bank’s long term needs.
Lack of Continuous Delivery Expertise
The bank’s IT team, operating on multiple continents, was stretched thin with the migration to OpenShift and containers. Further, their background in software deployment simply did not include experience with continuous delivery. The bank needed a trusted partner who could provide a complete solution – software and services – to reduce the risk of delays or problems that could hobble the planned business expansion.
Solution: A Secured CD Platform to Deploy Containerised Applications
After a thorough evaluation, the bank chose OpsMx Enterprise for Spinnaker (OES) as their CD solution. They chose OES for its ability to scale, high security, and integration with other tools. They chose OpsMx because of their expertise with Spinnaker and continuous delivery and their deep expertise in delivering a secure environment.
Correcting Security Vulnerabilities
There are four main security requirements not available in the default OSS Spinnaker which are satisfied by OpsMx.
Validated releases: Spinnaker is updated frequently due to the active participation of the open source community. However, the bank required that each release be scanned for vulnerabilities and hardened before installation in the bank’s environment. OpsMx delivers this as part of the base system, so OpsMx customers know that the base platform has not been compromised.
Air gapped environment: The bank, like many security-conscious organizations, isolates key environments from the public internet to increase security. OES fully supports air gapped environments.
Encryption: Another key requirement was the ability to encrypt all data communication between the Spinnaker services and between Spinnaker and integrated tools, offered by OES.
Authorization and authentication: OpsMx Enterprise for Spinnaker supports LDAP and Active Directory (AD), fully integrating with the bank’s standards for authorization and authentication.
Simplifying the Software Delivery Process
The bank quickly completed the secure implementation and deployed pipelines for services. The bank is now able to deploy updates on-demand rather than grouping them together in a “big-bang” release that forces application downtime. The new CD process enabled by OpsMx made the process of requesting downtime unnecessary. Deployments are made into OpenShift with the help of templates available for developers.
OpsMx Enterprise for Spinnaker now controls the overall software delivery pipeline. The application team at the bank uses Bitbucket to commit the new piece of code, then OES triggers Jenkins to initiate the build.
After a successful build, the package is pushed into an external repository – either JFrog Artifactory or Bitbucket. OES fetches these images and deploys them into the target environment. This provides an end-to-end continuous delivery system without the use of scripts.
Self Service Onboarding
Development teams, such as the team responsible for the Retail Banking applications, are able to create and manage their own pipelines using OES. This reduces demand on the central team and speeds the creation and enhancements of new services.
Results: Software Delivery Automated with Zero- downtime
Code to Production in Hours
Since the deployment of OES, the retail application development team has seen significant improvements in software delivery velocity. The code flow time has been reduced from days to few hours. OES seamlessly integrated with their existing Build and cloud environment avoid rework cost and time.
Automated Software Delivery for Global Operations
From a Traditional Software delivery process the bank was able to move towards a modern Continuous Delivery framework. OpsMx enabled a total of 120 different pipelines to serve twenty two different countries. In addition a standard template for each country was also set up that allowed the developers to quickly set up further pipelines with ease. These templates ensured that the initialization errors were reduced to nil.
DevOps architect with .NET
Sr DevOps Engineer: Proven ability to thrive in a fast-paced development operations role; 5+ years of relevant experience ideal.
· Ability to communicate and collaborate cross-functionally, and work well in a team-oriented environment
· Ability to build and implement continuous integration (CI) and continuous deployment (CD) environments using tools such as Jenkins, Artifactory, SonarQube
· Experience creating CI/CD for applications in NodeJS and Java microservices, along with deployment to AEM (Adobe Experience Manager) and container platforms like Docker, Cloud Foundry, and IBM WebSphere Application Server
· Experience with OpenShift, Docker and Kubernetes
· Experience creating and supporting pipelines with Azure and Azure DevOps
· Well versed in Canary and Blue/Green deployment patterns
· Experience building installation, configuration, administration, and reporting of Atlassian tools: Jira, Confluence, BitBucket/Stash/GitHub, Crowd
· Ability to document and train on CI/CD best practices for our IT & Engineering organizations
· Extensive experience using scripting languages such as Shell scripting and Groovy DSL
· Working experience of Agile/SCRUM/Kanban techniques
· Experience with some of the following preferred: Vagrant, MySQL, NoSQL, SonarQube, Splunk or Graylog, Nagios or New Relic, Docker, Chef, Ansible
· Ability to troubleshoot network services and protocols such as TCP/IP, DNS, AD, LDAP, SSH, SMTP, SSL, HTTP, IIS and Apache
· Bachelor’s Degree in Computer Science, Information Technology or related field (or equivalent experience)
Reference: DevOps architect with .NET jobs from Latest listings added - JobsAggregation http://jobsaggregation.com/jobs/technology/devops-architect-with-net_i10429
🔧 Migrating from Jenkins to OpenShift Pipelines: 8 Steps to Success
As organizations modernize their CI/CD workflows, many are moving away from Jenkins towards Kubernetes-native solutions like OpenShift Pipelines (based on Tekton). This transition offers better scalability, security, and integration with GitOps practices. Here's a streamlined 8-step guide to help you succeed in this migration:
✅ Step 1: Audit Your Current Jenkins Pipelines
Begin by reviewing your existing Jenkins jobs. Understand the structure, stages, integrations, and any custom scripts in use. This audit helps identify reusable components and areas that need rework in the new pipeline architecture.
✅ Step 2: Deploy the OpenShift Pipelines Operator
Install the OpenShift Pipelines Operator from the OperatorHub. This provides Tekton capabilities within your OpenShift cluster, enabling you to create pipelines natively using Kubernetes CRDs.
✅ Step 3: Convert Jenkins Stages to Tekton Tasks
Each stage in Jenkins (e.g., build, test, deploy) should be mapped to individual Tekton Tasks. These tasks are containerized and isolated, aligning with Kubernetes-native principles.
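For example, a Jenkins build stage that runs Maven could translate into a Tekton Task roughly like this (the builder image and workspace name are assumptions, not values from the post):
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: maven-build
spec:
  workspaces:
    - name: source              # the cloned repository is shared through this workspace
  steps:
    - name: package
      image: maven:3.9-eclipse-temurin-17   # assumed builder image
      workingDir: $(workspaces.source.path)
      script: |
        #!/bin/sh
        set -e
        mvn -B package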
✅ Step 4: Define Tekton Pipelines
Group your tasks logically using Tekton Pipelines. These act as orchestrators, defining the execution flow and data transfer between tasks, ensuring modularity and reusability.
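Continuing the sketch from step 3, a Pipeline might chain a clone task and the build task, passing a shared workspace between them; git-clone here assumes the community task from Tekton Hub is installed, and the other names are illustrative:
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: shared-data
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone          # assumes the Tekton Hub git-clone task is installed
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared-data
    - name: build
      runAfter: ["fetch-source"]
      taskRef:
        name: maven-build        # the Task sketched in step 3
      workspaces:
        - name: source
          workspace: shared-data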
✅ Step 5: Store Pipelines in Git (GitOps Approach)
Adopt GitOps by storing all pipeline definitions in Git repositories. This ensures version control, traceability, and easy rollback of CI/CD configurations.
✅ Step 6: Configure Triggers for Automation
Use Tekton Triggers or EventListeners to automate pipeline runs. These can respond to Git push events, pull requests, or custom webhooks to maintain a continuous delivery workflow.
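A minimal sketch of the wiring is shown below; the TriggerBinding and TriggerTemplate are referenced but not shown, and the pipeline service account name is an assumption based on what the OpenShift Pipelines operator normally creates in each namespace:
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener
spec:
  serviceAccountName: pipeline           # assumed; verify the account in your namespace
  triggers:
    - name: on-push
      bindings:
        - ref: github-push-binding       # TriggerBinding mapping webhook fields to params (not shown)
      template:
        ref: build-and-deploy-template   # TriggerTemplate that creates a PipelineRun (not shown)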
✅ Step 7: Integrate with Secrets and ServiceAccounts
Securely manage credentials using Secrets, access control with ServiceAccounts, and runtime configs with ConfigMaps. These integrations bring Kubernetes-native security and flexibility to your pipelines.
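A common pattern, sketched below with placeholder names, is a basic-auth Secret annotated for the Git host and linked to the ServiceAccount that PipelineRuns execute under:
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  annotations:
    tekton.dev/git-0: https://github.com   # tells Tekton which host these credentials apply to
type: kubernetes.io/basic-auth
stringData:
  username: my-bot-user                    # placeholder
  password: my-personal-access-token       # placeholder
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline
secrets:
  - name: git-credentials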
✅ Step 8: Validate the CI/CD Flow and Sunset Jenkins
Thoroughly test your OpenShift Pipelines. Validate all build, test, and deploy stages across environments. Once stable, gradually decommission legacy Jenkins jobs to complete the migration.
🚀 Ready for Cloud-Native CI/CD
Migrating from Jenkins to OpenShift Pipelines is a strategic move toward a scalable and cloud-native DevOps ecosystem. With Tekton’s modular design and OpenShift’s robust platform, you’re set for faster, more reliable software delivery.
Need help with migration or pipeline design? HawkStack Technologies specializes in Red Hat and OpenShift consulting. Reach out for expert guidance! For more details, visit www.hawkstack.com
Fwd: Urgent requirements of below positions
New Post has been published on https://www.hireindian.in/fwd-urgent-requirements-of-below-positions-56/
Please find the Job description below, if you are available and interested, please send us your word copy of your resume with following detail to [email protected] or please call me on 703-349-3271 to discuss more about this position.
Job Title - Location
Sharepoint Developer - Seattle, WA
Java Full Stack Developer - San Jose, CA
Java Full Stack Developer with DevOps - San Jose, CA
Sr. Workday Developer - Tempe, AZ
DLP-CASB Analyst/Engineer - Frisco, TX
Job Description
Job Title: Sharepoint Developer
Location: Seattle, WA
Duration: 6 Months
Job description:
Minimum 5+ years implementing SharePoint applications with knowledge of new SharePoint 2013 features
Skills
In-depth knowledge of SharePoint development
In-depth knowledge of SharePoint Object model, Search and SharePoint workflows
In depth knowledge of C#, ASP.NET, JQuery, HTML and CSS
In-depth Knowledge of SharePoint Designer
In-depth knowledge of customizing SharePoint UI
In-depth knowledge of Microsoft technology and software including Windows, IIS, SQL, ASP/ASP .NET, SharePoint 2007 / 2010
Good knowledge on UI/UX standards & processes
Knowledge of software lifecycle methodology
Roles & Responsibilities
Person will be responsible for developing a reporting application on SharePoint 2013.
The responsibilities include customizing the look and feel of SharePoint site, building web parts etc.
Position: Java Full Stack Developer
Location: San Jose, CA
Duration: 6 months
Experience: 5-7 years
JD
J2EE full stack developer
Mandatory Skills: Strong in Java/J2ee
Position: Java Full Stack Developer with DevOPS
Location: San Jose, CA
Duration: 6 months
Experience: 7-10 years
Job Description:
J2EE full stack
DevOps with Jenkins, Docker and OpenShift
Mandatory Skills: DevOps with Jenkins, Docker and OpenShift
Position: Sr. Workday Developer
Location: Tempe, AZ
Duration: 5-6 months
Job Description:
Should have been involved in at least 1 full Workday Implementation project as an implementer developing integrations.
Experience designing and developing both outbound and inbound integrations using all of the Workday Integration types including EIB, Core Connectors, Cloud Connect and Workday Studio.
Experience with document transformation, XSLT, XPath and MVEL.
Experience creating Workday custom reports and calculated fields.
General knowledge of 2-3 Workday functional areas.
Good understanding of Workday security.
Demonstrated ability to work and communicate effectively with all levels of Business and IT management
Excellent organizational skills with the ability to manage multiple projects effectively
Experience working with Agile and Waterfall methodologies
Position: DLP- CASB Analyst/Engineer
Location: Frisco, TX
Duration: Contract
Job Description:
Responsibilities:
Demonstrate working knowledge of Data Loss Prevention (DLP) tools (e.g., Symantec, McAfee) and CASB (e.g., Netskope, McAfee)
Provide guidance configuring, implementing and upgrading DLP and CASB tool policy
Demonstrate knowledge on endpoint DLP, Network DLP, email DLP, and CASB installation, configuration, and maintenance.
Provide guidance, recommendations, best practices, for DLP/CASB operations. Stabilize and optimize system performance, including rules and reports. Recommend, plan and implement tool upgrades and patch updates.
Policy fine tuning
Perform Data discovery using DLP discovery modules
Apply Data Classification policy including user access levels on least privileged, need-to-know basis and associated encryption needs and integrity controls
Perform data labelling for data classification and verifying access controls, data encryption
Develop and manage a comprehensive data classification scheme, adhering to procedures for data protection, back-up, and recovery
Prepare technical standard operating procedures for DLP/CASB.
Develop policies and procedure around DLP/CASB.
Maintain ongoing project management and relationship development to ensure the highest level of customer service
Responsible for day-to-day operations, ensuring that the implementation is in compliance with the agreed objective.
Conduct workshops highlighting project status and gathering updated customer expectations.
Perform data security domain security assessments, identify areas of continuous improvement, and present recommendation to the client.
Ensure SLAs/SLOs/OLAs related to incident, change and service request are met.
Experience:
Candidate should have overall experience of 5+ years with DLP, 1+ years with CASB
Certification in DLP methods, or DLP vendor product certification
Familiar with regulatory requirements
Project Management
Good English speaking and writing skills
Technical Skills
Expert-level knowledge of Data Loss Prevention tools such as Symantec and McAfee
Hands on experience with CASB
Expert level knowledge on SQL and scripting.
Experience working in a mid to large scale environments
Working level knowledge of mainframe, Unix, RHEL and Windows operating environments
Good understanding of DLP/CASB policy creation
Excellent team skills in a professional environment
Thanks, Steve Hunt Talent Acquisition Team – North America Vinsys Information Technology Inc SBA 8(a) Certified, MBE/DBE/EDGE Certified Virginia Department of Minority Business Enterprise(SWAM) 703-349-3271 www.vinsysinfo.com