# Setting up kubeconfig files
Kubectl get context: List Kubernetes cluster connections
kubectl, a command line tool, facilitates direct interaction with the Kubernetes API server. Its versatility spans various operations, from listing cluster connections with kubectl config get-contexts to manipulating resources with an assortment of kubectl commands. Table of contents: Comprehending Fundamental Kubectl Commands; Working with More Than One Kubernetes Cluster; Navigating Contexts with kubectl…
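For quick reference, these are the stock kubectl commands behind that workflow (the context name is a placeholder):
kubectl config get-contexts
kubectl config current-context
kubectl config use-context <context-name>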

Automating OpenShift Deployments with Ansible
Introduction
Managing and deploying applications on OpenShift can be complex, especially in enterprise environments. Automating these deployments with Ansible simplifies configuration management, ensures consistency, and reduces manual intervention. This blog will explore how Ansible can be used to automate OpenShift deployments efficiently.
Why Use Ansible for OpenShift?
Ansible is a powerful automation tool that integrates seamlessly with OpenShift. Here’s why it is beneficial:
Agentless: No need to install agents on managed nodes.
Idempotency: Ensures tasks produce the same result regardless of how many times they run.
Scalability: Easily manage multiple OpenShift clusters.
Declarative Approach: Define the desired state of your infrastructure and applications.
Prerequisites
Before automating OpenShift deployments with Ansible, ensure you have:
A running OpenShift cluster
Ansible installed on your control node
The openshift Ansible collection (ansible-galaxy collection install community.okd)
Proper OpenShift API credentials (OAuth token or kubeconfig)
Automating OpenShift Deployments with Ansible
1. Setting Up Ansible Playbooks
Ansible playbooks are YAML files that define a set of tasks to automate deployments. Below is a simple example:
- name: Deploy an application to OpenShift
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Login to OpenShift
      kubernetes.core.k8s_auth:
        host: https://api.openshift-cluster.example.com:6443
        username: admin
        password: password
        validate_certs: no
      register: k8s_auth_result

    - name: Deploy an application
      kubernetes.core.k8s:
        state: present
        namespace: my-app
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: my-app
          spec:
            replicas: 2
            selector:
              matchLabels:
                app: my-app
            template:
              metadata:
                labels:
                  app: my-app
              spec:
                containers:
                  - name: my-app
                    image: quay.io/my-app:latest
                    ports:
                      - containerPort: 8080
2. Running the Ansible Playbook
Once the playbook is ready, execute it using:
ansible-playbook -i inventory playbook.yml
Ensure your inventory file contains the correct OpenShift cluster details.
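As a minimal sketch, and since the playbook above targets localhost and talks to the OpenShift API directly, the inventory can be as small as the following (the filename is only an example):
cat > inventory <<'EOF'
[local]
localhost ansible_connection=local
EOF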
3. Automating Rollouts and Updates
To automate updates, modify the deployment definition or use rolling updates:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 1
You can then run the playbook to apply the changes without downtime.
Advanced Automation Techniques
Using OpenShift Operators: Deploy and manage applications via OpenShift Operators.
Integrating with CI/CD: Automate deployments using GitOps tools like ArgoCD or Jenkins.
Dynamic Configuration Management: Use Ansible Vault to securely manage sensitive credentials.
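For the last point, a minimal Ansible Vault workflow might look like this; the vars file path is illustrative and assumes your credentials live under group_vars:
ansible-vault encrypt group_vars/all/vault.yml
ansible-playbook -i inventory playbook.yml --ask-vault-pass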
Conclusion
Automating OpenShift deployments with Ansible streamlines operations, improves reliability, and reduces human errors. Whether deploying simple applications or managing complex clusters, Ansible provides a robust automation framework. Start automating today and unlock the full potential of OpenShift with Ansible!
For more details, visit www.hawkstack.com.
1. Assess Application and Cluster Readiness
Review Application Dependencies:
Analyze libraries, JDK versions, frameworks (e.g., Spring Boot, Jakarta EE), and external services (databases, APIs).
Identify any Kubernetes-specific configurations (e.g., ConfigMaps, Secrets, environment variables).
Containerization:
Ensure your Java app is containerized using tools like Docker.
Update the Dockerfile if necessary (e.g., for JDK/JRE updates or changes in base image).
2. Prepare Kubernetes Environment
Target Cluster Validation:
Ensure the target Kubernetes cluster is set up and ready.
Check for compatibility with the current cluster’s API version and resources.
Namespace Setup:
Create equivalent namespaces, RBAC roles, and policies in the target cluster.
Storage:
Verify Persistent Volume Claims (PVCs) and ensure similar storage classes are available.
3. Backup and Export Resources
Kubernetes Resources:
Export all application-related resources using kubectl:
kubectl get all -n <namespace> -o yaml > resources.yaml
Persistent Data:
Back up database data or persistent volumes using tools like Velero or rsync.
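As a rough sketch of the Velero route (backup name and namespace are placeholders, and the Velero CLI is assumed to be installed and configured against your cluster):
velero backup create my-app-backup --include-namespaces <namespace>
velero restore create --from-backup my-app-backup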
4. Migrate Application Configurations
ConfigMaps and Secrets:
Export existing configurations:
kubectl get configmap -n <namespace> -o yaml > configmap.yaml
kubectl get secret -n <namespace> -o yaml > secret.yaml
Reapply them in the new cluster:
kubectl apply -f configmap.yaml
kubectl apply -f secret.yaml
5. Deploy to Target Cluster
Deploy Application:
Use Helm charts, Kubernetes manifests, or CI/CD pipelines to deploy the Java application to the new cluster.
Update the kubeconfig for the target cluster:
kubectl config use-context <target-cluster-context>
kubectl apply -f resources.yaml
Persistent Volumes:
Ensure data is restored to the persistent volumes if needed.
6. Test the Migration
Validation:
Ensure the application is running as expected (pods, services, and networking).
Run integration and regression tests.
Connectivity:
Validate external service integrations (e.g., databases, APIs).
Check ingress configurations for public access.
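A few quick spot checks along these lines, with the namespace and resource names as placeholders:
kubectl get pods,svc,ingress -n <namespace>
kubectl describe ingress <ingress-name> -n <namespace>
kubectl logs deploy/<deployment-name> -n <namespace> --tail=50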
7. Cutover and Cleanup
Switch Traffic:
Update DNS or external load balancer to direct traffic to the new cluster.
Monitor:
Use tools like Prometheus, Grafana, or ELK Stack to monitor application health.
Clean Up Old Environment:
After verifying stability, clean up resources in the old cluster.
Tools to Assist in Migration
Velero: For backup and restore of Kubernetes resources and persistent volumes.
Helm: To manage application deployments and upgrades.
Kubernetes Dashboard: For real-time cluster monitoring.
Kubectl Plugins: To enhance CLI functionality.
If you provide more details about the migration (e.g., specific cloud providers or Kubernetes environments), I can offer more tailored advice!
You said:
Migration of 5000 files per hr fix
ChatGPT said:
If you're migrating a large number of files (e.g., 5000 files per hour) and encountering performance bottlenecks, you can improve the speed and reliability of your file migration by addressing several key areas. Here's a step-by-step guide to optimize file transfer:
1. Identify and Analyze Bottlenecks
File System Bottlenecks:
Check the source and destination disk I/O performance.
Ensure the files are not locked or being used during migration.
Network Bandwidth:
Measure your available network bandwidth using tools like iperf and ensure it can handle the expected load.
File Metadata Overhead:
Small files can add significant overhead due to metadata operations (e.g., file creation, permissions).
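For the bandwidth check mentioned above, a quick iperf3 round trip between the source and destination hosts is usually enough (the hostname is a placeholder; -P runs parallel streams):
# on the destination host
iperf3 -s
# on the source host
iperf3 -c <destination-host> -P 4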
2. Optimize Transfer Mechanisms
Use Parallel Processing:
Transfer multiple files concurrently using multithreaded or parallel tools like:
rsync with filters such as --max-size (rsync itself has no parallel mode, so run multiple instances side by side, for example via GNU Parallel).
GNU Parallel for concurrent file transfers.
find /source/directory -type f | parallel -j8 rsync -a {} /destination/directory
Use a custom script or a library like Python's concurrent.futures for more control.
Batch File Transfers:
Group files into batches (e.g., tar archives) to reduce the overhead of transferring many small files individually:
tar -cvf files.tar /source/files && rsync files.tar /destination
Extract the files on the destination:
tar -xvf files.tar
3. Ensure Network Optimization
Compression:
Compress files during transfer to reduce data volume:
rsync -az /source/directory /destination/directory
Increase TCP Window Size:
Tweak TCP settings at the OS level for large transfers, or use rsync options that avoid unnecessary work on fast links:
rsync --bwlimit=0 --whole-file /source /destination
Use Faster Protocols:
Use optimized file transfer protocols like Rclone, Aspera, or SFTP for high-speed transfer.
4. Improve Source and Destination Performance
Disk Performance:
Optimize disk reads and writes using SSDs for temporary storage or caches.
Ensure the file system supports efficient file handling (e.g., ext4, XFS).
File System Tuning:
For very large numbers of files, tune parameters like inode_cache to handle metadata efficiently.
5. Error Handling and Retry Logic
Retry Failed Transfers:
Use rsync options that keep partially transferred files so an interrupted run can pick up where it left off:
rsync -avz --partial --progress /source /destination
Logging and Monitoring:
Log all transferred files and any failures so that unsuccessful transfers can be identified and retried.
You said:
File transfer to S3 improve
ChatGPT said:
Improving file transfer to Amazon S3 requires addressing both performance and reliability. Below are strategies and tools to optimize large-scale or high-volume file transfers:
1. Use AWS-Specific Tools
AWS CLI with Parallelism:
The AWS CLI already performs multipart, parallel uploads; tune the chunk size and concurrency through its s3 configuration settings (shown below) and copy recursively:
aws s3 cp /local/path s3://your-bucket-name --recursive --storage-class STANDARD_IA --region <region>
Enable parallel transfers: update your AWS CLI configuration file (~/.aws/config) to increase concurrency:
[default]
s3 =
  max_concurrent_requests = 10
  max_queue_size = 1000
  multipart_threshold = 64MB
  multipart_chunksize = 64MB
AWS DataSync:
Use AWS DataSync for automated, high-performance, large-scale transfers with minimal manual intervention. DataSync can also manage retry and error handling.
2. Optimize S3 Uploads
Multipart Upload:
Use multipart uploads for files larger than 100 MB to improve transfer speed and reliability.
Example using the AWS CLI:
aws s3api create-multipart-upload --bucket your-bucket-name --key file_key
Automate multipart uploads with the AWS SDK or tools like boto3 for Python.
Use the Correct Storage Class:
Choose the appropriate storage class to save costs and improve upload speeds:
STANDARD (default): High performance for frequently accessed data.
INTELLIGENT_TIERING: Automatically optimizes storage costs for infrequently accessed files.
3. Improve Network Performance
Enable Transfer Acceleration:
Use S3 Transfer Acceleration for faster uploads over long distances by leveraging AWS edge locations:
aws s3 cp /local/path s3://your-bucket-name --region <region> --endpoint-url https://<bucket-name>.s3-accelerate.amazonaws.com
Compression:
Compress files before transferring to reduce data size:
tar -czvf archive.tar.gz /local/path && aws s3 cp archive.tar.gz s3://your-bucket-name
Optimize Network:
Use AWS Direct Connect or a high-bandwidth internet connection for large-scale transfers.
Monitor latency and bandwidth with tools like iperf.
4. Automate and Parallelize Transfers
Rclone:
Use rclone for high-speed S3-compatible transfers with parallelism and retry capabilities:
rclone copy /local/path s3:your-bucket-name --transfers 16 --s3-upload-concurrency 16
GNU Parallel:
Transfer files concurrently by splitting file lists:
find /local/path -type f | parallel -j10 aws s3 cp {} s3://your-bucket-name
5. Error Handling and Retry
Retry Mechanisms:
Use tools or scripts that automatically retry failed uploads.
The AWS CLI has built-in retry logic; increase timeouts if necessary:
aws s3 cp --cli-read-timeout 600 --cli-connect-timeout 600 /local/path s3://your-bucket-name
Chunked Retry:
Break large transfers into smaller, manageable chunks and retry failed chunks only.
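One simple way to get retry-only-what-failed behaviour is to re-run aws s3 sync, which skips objects that already exist in the bucket; paths and bucket name are placeholders:
aws s3 sync /local/path s3://your-bucket-name --only-show-errors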
6. Monitor and Optimize Transfers
AWS CloudWatch Metrics:
Monitor transfer performance and error rates in real-time.
S3 Inventory:
Use S3 inventory reports to validate uploaded files.
AWS SDKs or Custom Scripts:
Build custom monitoring and retry logic with SDKs like boto3 (Python).
Example Optimized Upload Command
aws s3 cp /local/path s3://your-bucket-name --recursive --storage-class STANDARD --region us-east-1 --cli-read-timeout 600 --cli-connect-timeout 600
When to Use Snowball or Snowmobile
For extremely large datasets (terabytes or petabytes), use AWS Snowball or AWS Snowmobile for offline data transfer to S3.
These steps can dramatically improve the efficiency and reliability of your file transfer process to S3. Let me know if you need help implementing any of these!
You said:
Improve file transfers
ChatGPT said:
Improving file transfer performance and reliability, whether between servers, local systems, or to cloud storage like Amazon S3, involves a combination of strategies. Here are some best practices for improving file transfers:
1. Optimize the File Transfer Process
Parallelize Transfers
Transfer multiple files concurrently to improve throughput:
Use tools like rsync, scp, or cloud-specific SDKs with multi-threading capabilities.
Example with rsync:
rsync -azP --info=progress2 --max-size=100M --exclude="*.tmp" /source/ /destination/
Batch Small Files
Small files cause metadata overhead during transfer. Bundle them into archives:
tar -czvf archive.tar.gz /source/
Transfer the archive, then extract it on the destination:
tar -xzvf archive.tar.gz
2. Use High-Performance Transfer Tools
Rclone
Ideal for cloud storage and large-scale transfers with parallelism:
rclone copy /source/ remote:bucket-name --transfers=16 --checkers=8
GNU Parallel
Parallelize file uploads/downloads:
find /source/ -type f | parallel -j8 scp {} user@destination:/target/
Multithreaded Tools
Use tools with built-in multithreading, such as bbcp, Aspera, or scp (with parallel scripts).
3. Network Optimization
Compression
Compress files before transfer to reduce the amount of data transmitted:
gzip largefile.txt
Or combine compression with the transfer:
tar czf - /source/ | ssh user@destination "tar xzf - -C /target/"
Bandwidth Management
Limit bandwidth usage to prevent throttling:
With rsync:
rsync --bwlimit=1000 -az /source/ /destination/
Use TCP tuning options to optimize network transfer rates:
sysctl -w net.ipv4.tcp_window_scaling=1
Use Faster Protocols
Use optimized transfer protocols:
SFTP: Secure but slower for high-volume transfers.
Rsync/SSH: Efficient for incremental and large transfers.
Aspera: High-speed UDP-based protocol.
Enable Transfer Acceleration (Cloud)
For Amazon S3:
aws s3 cp /source/ s3://your-bucket/ --region <region> --endpoint-url https://<bucket-name>.s3-accelerate.amazonaws.com
4. Error Handling and Resilience
Resume Transfers
Use tools that support resuming failed transfers:
Rsync:
rsync -az --partial /source/ /destination/
AWS CLI: aws s3 cp has no resume flag, but re-running aws s3 sync only uploads what is missing, which effectively resumes an interrupted transfer:
aws s3 sync /source/ s3://bucket-name/
Retry Logic
Implement retry logic for failed transfers using scripts or tools.
Example using bash:
for i in {1..5}; do
  scp file user@destination:/target/ && break || sleep 5
done
5. Monitor and Analyze Performance
Real-Time Monitoring
Monitor progress with tools:
Rsync: --progress flag.
SCP: wrap the stream in pv for progress visualization:
tar cf - /source/ | pv | ssh user@destination "tar xf - -C /target/"
Analyze Bottlenecks
Use tools like iperf or nload to check network performance.
Use iotop or dstat to monitor disk I/O on both source and destination.
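For example, the following commands (the interface name is a placeholder) give a live view of network and disk load during a transfer:
nload <interface>
sudo iotop -o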
6. Scale with Advanced Solutions
Distributed File Transfer
Use tools like Globus for efficient distributed data transfers.
Data Transfer Appliances
For extremely large datasets, consider appliances like AWS Snowball or offline methods to ship data physically.
Content Delivery Networks (CDNs)
Use CDNs for faster file distribution to end users.
By combining these strategies—parallelism, compression, protocol optimization, and monitoring—you can significantly enhance file transfer speed and reliability. Let me know if you'd like further customization for a specific use case!
You said:
Threadpool
ChatGPT said:
A thread pool is a collection of pre-initialized worker threads that stand ready to perform tasks. Instead of creating and destroying threads for each task, a thread pool manages a set of reusable threads, improving performance and resource utilization in multi-threaded applications.
Here’s how to effectively use and optimize thread pools in your application.
Why Use a Thread Pool?
Performance: Reduces the overhead of creating and destroying threads repeatedly.
Resource Management: Controls the number of concurrent threads, preventing system overload.
Scalability: Allows a controlled environment for task execution, making it easier to scale applications.
Key Concepts of Thread Pools
Core Threads: Minimum number of threads that are always kept alive.
Max Threads: Maximum number of threads the pool is allowed to create when demand spikes.
You said:
Improve java spring file transfer
ChatGPT said:
Improving file transfer in a Java Spring application involves optimizing both the code and the underlying infrastructure. Below are strategies to enhance file transfer performance and reliability.
1. Optimize File Transfer Code
Use Streaming for Large Files
Avoid loading the entire file into memory; instead, use streaming:
@PostMapping("/upload")
public ResponseEntity<String> uploadFile(@RequestParam("file") MultipartFile file) throws IOException {
    try (InputStream inputStream = file.getInputStream()) {
        // Process the input stream directly
        // Example: Save the file
        Files.copy(inputStream, Paths.get("target-directory/" + file.getOriginalFilename()), StandardCopyOption.REPLACE_EXISTING);
    }
    return ResponseEntity.ok("File uploaded successfully!");
}
Enable Asynchronous File Transfers
Use Spring's @Async annotation for non-blocking file uploads/downloads:
@Async
public CompletableFuture<String> uploadFileAsync(MultipartFile file) {
    // File processing logic
    return CompletableFuture.completedFuture("Upload Complete");
}
Use Chunked Uploads for Large Files
Break large files into smaller chunks for efficient transfer and to handle interruptions gracefully:
@PostMapping("/uploadChunk")
public ResponseEntity<String> uploadChunk(@RequestParam("chunk") MultipartFile chunk, @RequestParam("fileId") String fileId) {
    // Append chunk data to the file
    return ResponseEntity.ok("Chunk uploaded successfully!");
}
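From the client side, a chunk could be pushed to the endpoint above with something like the following; the host, chunk file, and fileId values are purely illustrative:
curl -X POST "http://localhost:8080/uploadChunk" \
  -F "chunk=@part_000" \
  -F "fileId=abc123"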
2. Optimize for Cloud Storage (e.g., AWS S3, Azure Blob)
Use SDKs for Direct Uploads
Use cloud-specific SDKs (e.g., AWS SDK for S3) to upload files directly to cloud storage instead of saving locally first:
@PostMapping("/uploadToS3")
public ResponseEntity<String> uploadToS3(@RequestParam("file") MultipartFile file) throws IOException {
    AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
    String bucketName = "your-bucket-name";
    s3Client.putObject(bucketName, file.getOriginalFilename(), file.getInputStream(), new ObjectMetadata());
    return ResponseEntity.ok("File uploaded to S3!");
}
Enable S3 Transfer Acceleration
Use Transfer Acceleration for faster uploads to Amazon S3:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .withAccelerateModeEnabled(true)
    .build();
3. Optimize File Transfer in REST APIs
Enable Compression
Compress file responses to reduce data size (multipart limits live under spring.servlet.multipart, while response compression is configured under server.compression):
spring:
  servlet:
    multipart:
      enabled: true
      max-file-size: 50MB
      max-request-size: 50MB
server:
  compression:
    enabled: true
    min-response-size: 1024
Use HTTP/2
Switch to HTTP/2 for better concurrency and reduced latency:
server:
  http2:
    enabled: true
Resume Interrupted Uploads
Use ETag or file metadata to allow resumable file uploads:
@PostMapping("/resumeUpload")
public ResponseEntity<String> resumeUpload(@RequestParam("file") MultipartFile file, @RequestParam("offset") Long offset) {
    // Resume logic based on offset
    return ResponseEntity.ok("Upload resumed!");
}
4. Parallelize File Transfers
Multithreaded File Upload
Use a ThreadPoolExecutor or Spring's TaskExecutor.
In addition to using @Async, there are other strategies and tools to improve file transfer performance and reliability in a Java Spring application. Here are some approaches:
1. Reactive Programming with Spring WebFlux
For truly non-blocking and asynchronous file transfers, use Spring WebFlux:
Streaming File Upload:
@PostMapping(value = "/upload", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public Mono<Void> uploadFile(Flux<FilePart> filePartFlux) {
    return filePartFlux.flatMap(filePart -> {
        Path path = Paths.get("target-directory/" + filePart.filename());
        return filePart.transferTo(path);
    }).then();
}
Streaming File Download:
@GetMapping(value = "/download", produces = MediaType.APPLICATION_OCTET_STREAM_VALUE)
public Mono<ResponseEntity<Resource>> downloadFile() {
    Path path = Paths.get("target-directory/large-file.txt");
    return Mono.just(ResponseEntity.ok()
        .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=large-file.txt")
        .body(new FileSystemResource(path)));
}
2. Use CompletableFuture and ExecutorService
Combine CompletableFuture with a custom ExecutorService for better control:
@PostMapping("/upload")
public CompletableFuture<String> uploadFile(@RequestParam("file") MultipartFile file) {
    ExecutorService executor = Executors.newFixedThreadPool(4);
    return CompletableFuture.supplyAsync(() -> {
        try {
            // File processing logic
            Files.copy(file.getInputStream(), Paths.get("target/" + file.getOriginalFilename()), StandardCopyOption.REPLACE_EXISTING);
            return "File uploaded successfully!";
        } catch (IOException e) {
            throw new RuntimeException("File upload failed!", e);
        }
    }, executor);
}
3. Apache Kafka for Asynchronous Processing
Use Apache Kafka to decouple file upload from processing:
Producer sends file metadata to Kafka after upload:
@PostMapping("/upload")
public ResponseEntity<String> uploadFile(@RequestParam("file") MultipartFile file) throws IOException {
    // Save the file
    Files.copy(file.getInputStream(), Paths.get("target/" + file.getOriginalFilename()));
    // Send metadata to Kafka
    kafkaTemplate.send("file-topic", file.getOriginalFilename());
    return ResponseEntity.ok("File uploaded and queued for processing!");
}
Consumer processes files asynchronously:
@KafkaListener(topics = "file-topic", groupId = "file-group")
public void processFile(String fileName) {
    // Process the file
    Path path = Paths.get("target/" + fileName);
    System.out.println("Processing file: " + path);
}
4. Chunked File Transfers
Handle large files by uploading them in chunks:
Frontend sends file chunks with an identifier.
Backend reconstructs the file:
@PostMapping("/uploadChunk")
public ResponseEntity<String> uploadChunk(@RequestParam("chunk") MultipartFile chunk, @RequestParam("fileId") String fileId, @RequestParam("chunkNumber") int chunkNumber) {
    Path path = Paths.get("target/" + fileId + "_" + chunkNumber);
    try {
        Files.copy(chunk.getInputStream(), path, StandardCopyOption.REPLACE_EXISTING);
    } catch (IOException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Failed to upload chunk");
    }
    return ResponseEntity.ok("Chunk uploaded successfully!");
}
Reassemble chunks when all are uploaded.
5. WebSockets for Real-Time Progress
Use WebSockets to provide real-time feedback on file transfer progress:
@Controller
public class WebSocketController {
    @MessageMapping("/uploadStatus")
    @SendTo("/topic/progress")
    public String sendProgress(String status) {
        return status;
    }
}
Send progress updates from the server while uploading:
simpMessagingTemplate.convertAndSend("/topic/progress", "50% completed");
6. Redis or Database Queues
Use a Redis queue or a database to handle file metadata asynchronously.
Enqueue metadata after upload:
redisTemplate.opsForList().leftPush("fileQueue", fileMetadata);
Process the queue in the background:
String metadata = redisTemplate.opsForList().rightPop("fileQueue");
7. Use MultipartConfig for Large Files
Configure multipart file handling for large files in application.properties:
spring.servlet.multipart.max-file-size=1GB
spring.servlet.multipart.max-request-size=2GB
Custom bean for fine-tuning:
@Bean
public MultipartConfigElement multipartConfigElement() {
    MultipartConfigFactory factory = new MultipartConfigFactory();
    factory.setMaxFileSize("1GB");
    factory.setMaxRequestSize("2GB");
    return factory.createMultipartConfig();
}
8. Use AWS S3 for Direct Uploads
Use pre-signed URLs to offload upload work to the client:
@GetMapping("/generatePresignedUrl")
public ResponseEntity<String> generatePresignedUrl(@RequestParam String fileName) {
    AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
    GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest("bucket-name", fileName)
        .withMethod(HttpMethod.PUT)
        .withExpiration(new Date(System.currentTimeMillis() + 3600 * 1000));
    URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
    return ResponseEntity.ok(url.toString());
}
The client uploads directly to S3 using the URL.
9. Optimize File Transfer for High Performance
Compression
Compress files before upload to reduce transfer size:
byte[] compressedData = compress(file.getBytes());
Files.write(Paths.get("compressed-file.gz"), compressedData);
Load Balancing
Use a load balancer (e.g., AWS ALB or NGINX) for scalable upload endpoints.
Retry Logic
Implement retries for failed transfers to improve reliability:
int maxRetries = 3;
int retries = 0;
while (retries < maxRetries) {
    try {
        // Attempt file upload
        break;
    } catch (IOException e) {
        retries++;
        if (retries == maxRetries) {
            throw e;
        }
    }
}
These strategies can enhance file transfer performance, scalability, and reliability in Java Spring applications. Let me know if you need code examples for specific use cases!
You said:
S3 reliability
ChatGPT said:
Ensuring reliability when transferring files to Amazon S3 involves minimizing failures, handling errors effectively, and ensuring data integrity. Below are best practices and techniques to achieve reliable file transfers to S3:
1. Use Multipart Upload for Large Files
Multipart uploads split large files into smaller parts, upload them in parallel, and reassemble them in S3. This improves reliability and speeds up transfers.
Example Using AWS SDK (Java)
AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
String bucketName = "your-bucket-name";
String fileName = "large-file.zip";
String keyName = "uploads/" + fileName;
File file = new File(fileName);
TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build();
Upload upload = tm.upload(bucketName, keyName, file);
try {
    upload.waitForCompletion();
    System.out.println("File uploaded successfully!");
} catch (AmazonClientException e) {
    System.err.println("Upload failed: " + e.getMessage());
}
2. Use Retry Logic
Add retry mechanisms to handle intermittent network issues or service interruptions.
AWS SDK Retry Settings
The AWS SDK retries failed requests automatically, but you can customize it:
ClientConfiguration config = new ClientConfiguration();
config.setRetryPolicy(PredefinedRetryPolicies.getDefaultRetryPolicyWithCustomMaxRetries(5));
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .withClientConfiguration(config)
    .build();
3. Enable Transfer Acceleration
Transfer Acceleration uses AWS edge locations to improve upload speeds over long distances.
Enable Transfer Acceleration in S3
Enable it in the S3 bucket settings.
Update your client code to use the accelerated endpoint:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard() .withAccelerateModeEnabled(true) .build();
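For completeness, the bucket-side switch can also be flipped from the CLI (the bucket name is a placeholder):
aws s3api put-bucket-accelerate-configuration \
  --bucket your-bucket-name \
  --accelerate-configuration Status=Enabled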
4. Validate Data Integrity
Use MD5 checksums to ensure the uploaded file matches the original file.
AWS CLI Example
aws s3 cp file.txt s3://your-bucket/ --content-md5 <calculated-md5>
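One common way to produce the base64-encoded MD5 value that --content-md5 expects, assuming OpenSSL is available:
openssl md5 -binary file.txt | base64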
Java Example
String md5 = Base64.getEncoder().encodeToString(DigestUtils.md5(new FileInputStream(file)));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentMD5(md5);
PutObjectRequest request = new PutObjectRequest(bucketName, keyName, new FileInputStream(file), metadata);
s3Client.putObject(request);
5. Monitor File Transfers
Use CloudWatch Metrics to monitor S3 request success, failures, and latencies.
Monitor transfer progress with TransferManager in the AWS SDK:
TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build();
Upload upload = tm.upload(bucketName, keyName, file);
upload.addProgressListener(progressEvent -> {
    System.out.println("Transferred bytes: " + progressEvent.getBytesTransferred());
});
upload.waitForCompletion();
6. Handle Failures Gracefully
Use Resumable Transfers for interrupted uploads with TransferManager:
TransferManager tm = TransferManagerBuilder.standard().build();
Upload upload = tm.upload(bucketName, keyName, file);
// Add a progress listener
upload.addProgressListener((ProgressListener) progressEvent -> {
    System.out.println("Transferred: " + progressEvent.getBytesTransferred());
});
// Graceful failure handling
try {
    upload.waitForCompletion();
} catch (AmazonClientException e) {
    System.err.println("Error occurred: " + e.getMessage());
}
7. Use Pre-Signed URLs
Generate pre-signed URLs for reliable, time-limited uploads directly from the client, reducing the load on your server.
Generate Pre-Signed URL
GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, keyName)
    .withMethod(HttpMethod.PUT)
    .withExpiration(new Date(System.currentTimeMillis() + 3600 * 1000));
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
System.out.println("Pre-Signed URL: " + url);
8. Use S3 Event Notifications
Configure S3 Event Notifications to trigger AWS Lambda or SNS for post-upload validation, processing, or alerting on failures.
Example: Notification for Upload Completion
Configure your S3 bucket to send an event to an SQS queue or Lambda function upon successful upload.
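A sketch of wiring that up from the CLI; the JSON file holding the Lambda or SQS targets is assumed to exist and is not shown here:
aws s3api put-bucket-notification-configuration \
  --bucket your-bucket-name \
  --notification-configuration file://notification.json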
9. Leverage Data Replication
Use S3 Cross-Region Replication (CRR) or Same-Region Replication (SRR) for redundancy.
Automatically replicate files to another bucket for disaster recovery.
10. Automate with AWS DataSync
Use AWS DataSync for automated and highly reliable file transfers between on-premises storage and S3. It includes built-in retry and integrity checks.
11. Optimize Network Performance
Compress files before uploading:
Files.write(Paths.get("compressed-file.gz"), gzipCompress(fileContent));
Use efficient transfer protocols (e.g., HTTP/2 if supported by your client).
12. Secure Transfers
Always use HTTPS for secure communication.
Use S3 Bucket Policies and IAM Roles to restrict access to authorized users.
Example: Complete Reliable Upload Function
public void reliableUpload(File file, String bucketName, String keyName) {
    AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
    TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build();
    try {
        // Multipart upload
        Upload upload = tm.upload(bucketName, keyName, file);
        upload.addProgressListener((ProgressListener) progressEvent -> {
            System.out.println("Transferred bytes: " + progressEvent.getBytesTransferred());
        });
        upload.waitForCompletion();
        System.out.println("File uploaded successfully!");
    } catch (AmazonClientException e) {
        System.err.println("Upload failed: " + e.getMessage());
    } finally {
        tm.shutdownNow();
    }
}
Hello!
Not sure how successful this will be but I’m sitting at work learning about something...thinking...”It would be nice to document this for the future and while I’m at it maybe someone else might find this useful.” Now, what platform would be best to do this? Not sure so trying this out...
This will be something I learn on the fly ;-) Most of this will focus on techy stuff but maybe expand.
So here it goes...
I’m in the process of learning more about Kubernetes and getting this going on my Windows 10 laptop is relatively easy. Thing is I am running the Windows Subsystem for Linux (WSL) and like to run everything through there. Found this nice little blog. https://devkimchi.com/2018/06/05/running-kubernetes-on-wsl/ and following the steps gets me where I need to be.
However, in working through the steps I come across the following...
And let kubectl know to use the Docker for Windows context.
kubectl config use-context docker-for-desktop
and I’m like
“How do I know to use docker-for-desktop?”
So knowing that kubectl is already set up in my Windows env, I run kubectl config --help
C:\Users\jacoj1> kubectl config --help
which provides me with a list of commands
Excerpt of output...
Available Commands:
  current-context  Displays the current-context
  delete-cluster   Delete the specified cluster from the kubeconfig
  delete-context   Delete the specified context from the kubeconfig
  get-clusters     Display clusters defined in the kubeconfig
  get-contexts     Describe one or many contexts
  rename-context   Renames a context from the kubeconfig file.
  set              Sets an individual value in a kubeconfig file
  set-cluster      Sets a cluster entry in kubeconfig
  set-context      Sets a context entry in kubeconfig
  set-credentials  Sets a user entry in kubeconfig
  unset            Unsets an individual value in a kubeconfig file
  use-context      Sets the current-context in a kubeconfig file
  view             Display merged kubeconfig settings or a specified kubeconfig file
So then I can issue current-context and voila!
C:\Users\jacoj1> kubectl config current-context
docker-for-desktop
Docker Desktop is great as it does all the setup of Kubernetes which allows you to start using it right away. As such you might miss out on some setup/config steps that reinforce some of the concepts. You can learn more about context here. https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
Troubleshooting Kubernetes containers is a come-and-go topic that you may face from time to time. In Kubernetes, you can get insights about containers using kubectl commands such as kubectl logs, kubectl describe, and kubectl exec. However, there are cases when these tools do not provide enough information, and that is where monitoring and logging tools come in. They help you understand better what is happening in your cluster or containers.
Ksniff, shipped as a kubectl plugin, allows you to capture traffic on specified pods in the cluster using tcpdump and Wireshark. It works by using kubectl to upload a packet sniffer (a compiled tcpdump binary) to the targeted container and redirects the output to a local Wireshark instance. Wireshark is a graphical network packet analyzer based on pcap (an API for network packet capture). Using this tool, you can easily perform analysis by filtering on protocol, port, and many other packet attributes.
This guide provides the knowledge of how to capture Kubernetes pod packets using tcpdump and Wireshark.
Getting Started
Begin by installing the required packages:
##On RHEL/CentOS/Rocky Linux/Alma Linux
sudo yum -y install git
##On Ubuntu/Debian
sudo apt update
sudo apt -y install git
I assume you already have a Kubernetes cluster set up. This site provides several ways to set up a Kubernetes cluster:
Deploy Kubernetes Cluster on Linux With k0s
Run Kubernetes on Debian with Minikube
Install Kubernetes Cluster on Ubuntu using K3s
Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
After the cluster has been set up, you need kubectl installed:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin
On k0s, you need to export admin.conf to be able to access the cluster:
export KUBECONFIG=/var/lib/k0s/pki/admin.conf
Step 1 – Install Krew on Your System
Ksniff can be easily installed using Krew. We will begin by installing Krew using the command below.
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
Export the Krew PATH environment variable by adding the line below to your .bashrc or .zshrc file:
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
Source the profile:
source ~/.bashrc
##OR
source ~/.zshrc
For Fish, execute the command below in your terminal:
begin
  set -x; set temp_dir (mktemp -d); cd "$temp_dir" &&
  set OS (uname | tr '[:upper:]' '[:lower:]') &&
  set ARCH (uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/') &&
  set KREW krew-$OS"_"$ARCH &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/$KREW.tar.gz" &&
  tar zxvf $KREW.tar.gz &&
  ./$KREW install krew &&
  set -e KREW; set -e temp_dir
end
Export your PATH environment variable:
set -gx PATH $PATH $HOME/.krew/bin
Restart the shell. Once complete, verify the Krew installation using the command:
$ kubectl krew
krew is the kubectl plugin manager.
You can invoke krew through kubectl: "kubectl krew [command]..."
Usage:
  kubectl krew [command]
Available Commands:
  completion  generate the autocompletion script for the specified shell
  help        Help about any command
  index       Manage custom plugin indexes
  info        Show information about an available plugin
  install     Install kubectl plugins
  list        List installed kubectl plugins
  search      Discover kubectl plugins
  uninstall   Uninstall plugins
  update      Update the local copy of the plugin index
  upgrade     Upgrade installed plugins to newer versions
  version     Show krew version and diagnostics
Flags:
  -h, --help   help for krew
  -v, --v      Level number for the log level verbosity
Use "kubectl krew [command] --help" for more information about a command.
Step 2 – Install the Ksniff plugin using Krew
Once Krew has been installed, use it to install Ksniff with the command:
kubectl krew install sniff
Sample execution output:
Installing plugin: sniff
Installed plugin: sniff
 | Use this plugin:
 |  kubectl sniff
 | Documentation:
 |  https://github.com/eldadru/ksniff
 | Caveats:
 |  | This plugin needs the following programs:
 |  | * wireshark (optional, used for live capture)
Step 3 – Install the Wireshark Application
The Wireshark application is installed on your local system. There are several dedicated guides to help you achieve this:
How To Install Wireshark on Debian
Install Wireshark on Ubuntu Desktop
On CentOS/Rocky Linux/Alma Linux, you can use the command below to install Wireshark for GNOME:
sudo yum install wireshark wireshark-gnome
You also need to configure Wireshark so that it can capture packets as a normal user. First, add the user to the wireshark group:
sudo usermod -a -G wireshark $USER
Set the right permissions on the dumpcap binary:
sudo chgrp wireshark /usr/bin/dumpcap
sudo chmod 750 /usr/bin/dumpcap
sudo setcap cap_net_raw,cap_net_admin=eip /usr/bin/dumpcap
Verify the changes made:
$ sudo getcap /usr/bin/dumpcap
/usr/bin/dumpcap cap_net_admin,cap_net_raw=eip
Step 4 – Capture Kubernetes pod packets
To capture Kubernetes pod packets using tcpdump and Wireshark, a command with the syntax below is used:
kubectl plugin sniff <pod-name> [-n <namespace>] [-c <container-name>] --image <registry>/docker --tcpdump-image <registry>/tcpdump
Non-Privileged and Scratch Pods
To reduce the attack surface, many containers in a production environment run as non-privileged users or scratch containers. Ksniff supports those containers by shipping the -p (privileged) flag. When the -p flag is used, Ksniff creates a new pod on the remote cluster that has access to the node Docker daemon. That pod is then used to run a container attached to the target container's network namespace and perform the capture.
This guide demonstrates packet capture by setting up a demo Apache application container using Helm. Install Helm with the aid of the guide below:
Install and Use Helm 3 on Kubernetes Cluster
Now use Helm to deploy the application:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install web-server bitnami/apache
Ensure the pod is running:
$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
web-server-apache-6cfbd8d7cb-55p2c   0/1     Running   0          10s
Expose the service:
$ kubectl get svc
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes          ClusterIP      10.96.0.1       <none>        443/TCP                      3m36s
web-server-apache   LoadBalancer   10.97.149.202   <pending>     80:30338/TCP,443:30040/TCP   36s
Verify that the server is accessible on HTTP port 30338 or HTTPS port 30040. Now perform a packet capture with the command:
kubectl sniff <pod-name> -p -n <namespace>
Since the pod is in the default namespace, the command will be as below:
kubectl sniff -p web-server-apache-6cfbd8d7cb-55p2c
On k0s, k3s, etc., you may get the error "MountVolume.SetUp failed for volume "container-socket" : hostPath type check failed".
You need to specify the socket as below:
##On K0s
kubectl sniff -p web-server-apache-6cfbd8d7cb-55p2c --socket /run/k0s/containerd.sock
##On K3s
kubectl sniff -p web-server-apache-6cfbd8d7cb-55p2c --socket /run/k3s/containerd/containerd.sock
Sample output:
INFO[0000] no container specified, taking first container we found in pod.
INFO[0000] selected container: 'apache'
INFO[0000] sniffing method: privileged pod
INFO[0000] sniffing on pod: 'web-server-apache-6cfbd8d7cb-55p2c' [namespace: 'default', container: 'apache', filter: '', interface: 'any']
INFO[0000] creating privileged pod on node: 'ubuntu'
INFO[0000] pod: 'ksniff-flrv9' created successfully in namespace: 'default'
INFO[0000] waiting for pod successful startup
INFO[0006] pod: 'ksniff-flrv9' created successfully on node: 'ubuntu'
INFO[0006] spawning wireshark!
INFO[0006] starting remote sniffing using privileged pod
INFO[0006] executing command: '[/bin/sh -c set -ex export CONTAINERD_SOCKET="/run/k0s/containerd.sock" export CONTAINERD_NAMESPACE="k8s.io" export CONTAINER_RUNTIME_ENDPOINT="unix:///host$CONTAINERD_SOCKET" export IMAGE_SERVICE_ENDPOINT=$CONTAINER_RUNTIME_ENDPOINT crictl pull docker.io/maintained/tcpdump:latest >/dev/null netns=$(crictl inspect 39c5bab4a072138d1d2fb9f7442b048c745fe4d7586d051758ea6ef340f9038f | jq '.info.runtimeSpec.linux.namespaces[] | select(.type == "network") | .path' | tr -d '"') exec chroot /host ctr -a $CONTAINERD_SOCKET run --rm --with-ns "network:$netns" docker.io/maintained/tcpdump:latest ksniff-container-gxGgkomy tcpdump -i any -U -w - ]' on container: 'ksniff-privileged', pod: 'ksniff-flrv9', namespace: 'default'
INFO[0006] starting sniffer cleanup
INFO[0006] removing privileged container: 'ksniff-privileged'
INFO[0006] executing command: '[/bin/sh -c ......
INFO[0006] privileged container: 'ksniff-privileged' removed successfully
INFO[0006] removing pod: 'ksniff-flrv9'
INFO[0006] removing privileged pod: 'ksniff-flrv9'
INFO[0006] privileged pod: 'ksniff-flrv9' removed
INFO[0006] pod: 'ksniff-flrv9' removed successfully
INFO[0006] sniffer cleanup completed successfully
On Wireshark, you should be able to see the capture as below. You can also capture only the traffic directed to a specific port, say 8080, in your container with the command:
kubectl sniff <pod-name> -f "port 8080" -p
Piping output to stdout
Normally, Ksniff will start a Wireshark GUI instance. However, you can integrate it with other tools using the -o flag to pipe the data to stdout. For example, to save the output to a pcap file rather than capturing in real time, the command will be:
kubectl sniff <pod-name> -p -o dns.pcap
You should have a dns.pcap file generated after the command. You can also integrate it with tshark:
kubectl sniff pod-name -f "port 80" -o - | tshark -r -
Voila! That is it. We have successfully walked through how to capture Kubernetes pod packets using tcpdump and Wireshark. I hope this was useful to you.
Kusama node deployment: tips and tricks for getting it up and running
Kusama node deployment can be a daunting task, but with the right tips and tricks, it can be a breeze. Here are some tips to get Kusama running on your machine:
1. First, you'll need to install Git and Node.js. Both of these can be obtained from the official repositories or via package managers like apt-get.
2. Next, you'll need to clone the Kusama repository onto your machine. This can be done using the following command:
git clone https://github.com/kushal-bharadwaj/kusama.git
The Kusama repository contains the Kusama node.js package as well as some other dependencies.
3. The Kusama repository also contains a sample application that you can use to test your installation and to get started with playing around with the Kusama node.
4. Finally, you'll need to update the package.json file with information concerning your application and its dependencies.
5. Install Node-RED using the following command:
npm install -g node-red --production
To test your installation, run the following command (version 0.6.4.0 at the time of writing):
node-red
Node-RED is a visual programming tool that makes it easy to create and share visual programs. In this tutorial, you'll learn how to use the Kusama Node to quickly create your own version of the application.
In today's article, we are going to be discussing how to deploy Kusama node. We will be providing tips and tricks on how to get it up and running as well as some troubleshooting tips if encountered along the way.
Kusama is a popular open source photo management platform that has been used by many organizations in the past. It is a mature platform that can handle a large number of photos, videos, and documents with ease.
Deploying Kusama nodes can be a daunting task for those unfamiliar with the platform. Thankfully, we are going to be providing you with all the information that you need to get started. If you are unfamiliar with Kusama, then I would recommend reading our article on How to Use Kusama Node before moving forward with this one.
Kusama Node is a new decentralized application that enables users to sell and buy items using cryptocurrencies. The Kusama Node was created by the team at Decentraland, a virtual reality platform that allows users to create, own, and manage digital assets. In order to deploy the Kusama Node, you will first need to download the Decentraland client. Once you have downloaded the Decentraland client, open it and click on the “Nodes” tab.
Kusama node deployment can be a daunting task, but with the right tips and tricks, it can be easily achieved. Here are some helpful tips to get Kusama Node up and running:
1. Decide on your deployment strategy. There are two main options for deploying Kusama nodes: using Kubernetes or using Docker Swarm. Kubernetes is more involved to set up, but offers greater scalability and flexibility. If you're new to Kubernetes, we recommend reading our guide first. For Docker Swarm, follow our tutorial.
2. Install Kubernetes and Docker Swarm on your machine. To install Kubernetes and Docker, you will need to follow the instructions at setup-k8s-on-macosx , but be sure to follow the instructions for your operating system.
3. Give your Kubernetes cluster a name. You can call it anything you like, but we recommend naming it something that reflects what it is.
4. Set up a kubeconfig file to connect with the Kubernetes API server on your cluster.
Grab the bull by the horns: Rancher 2.3a
Rancher has long been a peripheral player in the container space for me. There has not been any crossroads where we'd meet and share war stories or joint opportunities. Recently I've become an indirect fan boi by admiring their two recent projects, K3s and Submariner. I've known for a while that their homegrown orchestrator (Cattle) was getting a makeover in favor of Kubernetes. So, what is this Rancher thing about, and what can it do besides moo, I've started asking myself?
Rancher
Based on what I've read and seen prior to installing (I've attended a few webinars), Rancher 2.x itself is an application that manages not only the cluster it runs in but also provisions and manages other clusters, including managed Kubernetes services such as GKE, AKS and EKS. It's also light on the requirements side of things; all it needs is a supported version of Docker, all the way back to Docker 1.13.1 for vendor-supported Docker from Red Hat, which I have installed in my lab.
Given the minuscule host requirements, Rancher also has its own purpose-built Linux distribution called RancherOS (not to be confused with huevos rancheros). I have not had a chance to take RancherOS for a spin yet as my POC is based on Red Hat. Rancher recently launched another Kubernetes project adjacent to RancherOS, called k3OS, which promises a k3s cluster up and running in under 10 seconds (mind blown...). This I definitely want to take for a spin!
Gunning for RKE
Now, the task at hand here is to deploy Rancher in an HA configuration and have Rancher deploy another cluster for my applications. I didn't have a compatible IaaS provider in my lab, so I just provisioned a handful of KVM VMs and installed a supported version of Docker on them (from the Red Hat extras repo). Installing with the HA option cobbles together a Kubernetes cluster based on the Rancher Kubernetes Engine (RKE). RKE has a simple CLI utility where you simply declare your cluster in a YAML file and run rke up. That's. It. For reference:
---
nodes:
  - address: tme-lnx7-taco
    user: mmattsson
    role: [controlplane,worker,etcd]
  - address: tme-lnx8-taco
    user: mmattsson
    role: [controlplane,worker,etcd]
  - address: tme-lnx9-taco
    user: mmattsson
    role: [controlplane,worker,etcd]
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
What you might squint at here is the fact that I’m squishing control plane, workers and the etcd cluster on the same set of nodes. That’s perfectly fine in this case as I won’t run any workloads on this cluster besides Rancher. Very modestly configured VMs with 8 vCPU, 120GB of disk and 8GB of RAM.
The RKE installer will spit out a kubeconfig in your current directory and you can immediately start using the cluster.
$ kubectl get nodes
NAME            STATUS   ROLES                      AGE   VERSION
tme-lnx7-taco   Ready    controlplane,etcd,worker   15m   v1.14.3
tme-lnx8-taco   Ready    controlplane,etcd,worker   15m   v1.14.3
tme-lnx9-taco   Ready    controlplane,etcd,worker   15m   v1.14.3
The next steps include installing Helm and Tiller. I'd refer to the minutiae in the official docs, but one thing I wanted to point out here is that if you want to run the 2.3 alpha, you need to specify the specific version when deploying the chart.
$ helm install rancher-alpha/rancher \
    --name rancher \
    --namespace cattle-system \
    --set hostname=rancher.tacos.datamattsson.io \
    --version=2.3.0-alpha5
My hostname points to a L4 load-balancer that in turn points to my nodes (port 80 and 443) in the initial Rancher cluster. This can also be achieved with round-robin DNS for lab setups.
Once the chart is deployed, you'll be able to log in to the Rancher UI via the hostname provided (your view will differ, as the "first login" dance requires you to set a password):
Day 2 cluster
Quickly riffing through the tabs, I immediately found the "Add Cluster" button. Since I'm installing BYOD-style, I simply selected "custom", and Rancher generates a docker command line to paste into your nodes; by selecting the role, it adds the appropriate flags to the command-line text blob you conveniently copy out of the UI.
As nodes start to come online they immediately become available in the UI. So you get immediate feedback what you're doing is working. Once completed, the cluster will show up as a drop-down upper left tab. This is the landing page a few minutes later:
Conveniently enough, you immediately have access to a kube-cuddle terminal in the browser window. I'm always sceptical of these sorts of things, but I could inspect my nodes and deploy workloads from external resources right off the bat. And, drum roll: scrolling, selecting text, copying and pasting WORK!
The Grand Tour
Baffled by how simple it was getting to where I am, I wasn't very surprised to find very elaborate and visually pleasing configuration for apps, monitoring, logging, notifications & alerts, user management & RBAC, and persistent storage. Each of these topics warrants its own blog post, so I will save them for a rainy day.
Watch this space. Moo!
setting up minikube on mac osx
References: https://github.com/kubernetes/minikube https://kubecloud.io/minikube-workflows-d7166e1da290 https://kubernetes.io/docs/getting-started-guides/minikube/
Requirements (one of):
virtualbox
vmwarefusion
kvm (driver installation)
xhyve (driver installation)
Install kubectl:
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
-- OR --
$ brew install kubectl
Install docker for mac.
Then install minikube:
$ brew cask install minikube
==> Tapping caskroom/cask
Cloning into '/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask'...
.
.
.
🍺 minikube was successfully installed!
Start minikube:
$ minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Downloading Minikube ISO
 140.01 MB / 140.01 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 148.56 MB / 148.56 MB [============================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Load the kubernetes dashboard:
$ minikube dashboard
This will launch your browser into it at http://192.168.99.100:30000/#!/overview?namespace=default
You can also use the kubectl binary to interact with the minikube cluster:
$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    <none>   3m    v1.8.0
Create a deployment
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
Create a service
$ kubectl expose deployment nginx --type=NodePort
service "nginx" exposed
Open a service
$ minikube service nginx
Waiting, endpoint for service is not ready yet...
Opening kubernetes service default/nginx in default browser...
The above 3 steps deployed a boring nginx container.
To point the docker client towards minikube’s docker environment, run the command below.
$ eval $(minikube docker-env)
Now, when you build a docker image, the Kubernetes cluster is able to use it directly.
$ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 30b77dd46fc9 nginx "nginx -g 'daemon ..." 14 minutes ago Up 14 minutes k8s_nginx_nginx-7cbc4b4d9c-z8jjr_default_35d23654-d512-11e7-ad9e-08002772ac06_0 7c237662af09 gcr.io/google_containers/pause-amd64:3.0 "/pause" 14 minutes ago Up 14 minutes k8s_POD_nginx-7cbc4b4d9c-z8jjr_default_35d23654-d512-11e7-ad9e-08002772ac06_0 84bd1b443c62 gcr.io/google_containers/kubernetes-dashboard-amd64 "/dashboard --inse..." 18 minutes ago Up 18 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-pxl2k_kube-system_b5bcd675-d511-11e7-ad9e-08002772ac06_0 693095819a5b fed89e8b4248 "/sidecar --v=2 --..." 18 minutes ago Up 18 minutes k8s_sidecar_kube-dns-6fc954457d-nn2sz_kube-system_b5e6fe42-d511-11e7-ad9e-08002772ac06_0 2cba69aaa531 459944ce8cc4 "/dnsmasq-nanny -v..." 18 minutes ago Up 18 minutes k8s_dnsmasq_kube-dns-6fc954457d-nn2sz_kube-system_b5e6fe42-d511-11e7-ad9e-08002772ac06_0 f51eba8a5bda 512cd7425a73 "/kube-dns --domai..." 18 minutes ago Up 18 minutes k8s_kubedns_kube-dns-6fc954457d-nn2sz_kube-system_b5e6fe42-d511-11e7-ad9e-08002772ac06_0 dd4a8dbdfb16 gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 minutes ago Up 18 minutes k8s_POD_kube-dns-6fc954457d-nn2sz_kube-system_b5e6fe42-d511-11e7-ad9e-08002772ac06_0 377cb96112ef gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 minutes ago Up 18 minutes k8s_POD_kubernetes-dashboard-pxl2k_kube-system_b5bcd675-d511-11e7-ad9e-08002772ac06_0 70294947fd35 0a951668696f "/opt/kube-addons.sh" 18 minutes ago Up 18 minutes k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_9831e93c3188555873fdb49f43198eef_0 c646c86e3feb gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 minutes ago Up 18 minutes k8s_POD_kube-addon-manager-minikube_kube-system_9831e93c3188555873fdb49f43198eef_0
Some examples
This example builds an image (flow-1) from a Dockerfile for a web server that serves an index.html. After the build it shows up in the image list:
$ docker image ls
REPOSITORY                                             TAG             IMAGE ID       CREATED         SIZE
flow-1                                                 1               0b7d74b5b00e   1 second ago    15.5MB
nginx                                                  latest          9e7424e5dbae   6 days ago      108MB
nginx                                                  stable-alpine   72a20a008f56   3 weeks ago     15.5MB
gcr.io/google_containers/k8s-dns-sidecar-amd64         1.14.5          fed89e8b4248   2 months ago    41.8MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64        1.14.5          512cd7425a73   2 months ago    49.4MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64   1.14.5          459944ce8cc4   2 months ago    41.4MB
gcr.io/google_containers/kubernetes-dashboard-amd64    v1.7.0          284ec2f8ed6c   2 months ago    128MB
gcr.io/google_containers/kubernetes-dashboard-amd64    v1.6.3          691a82db1ecd   4 months ago    139MB
gcr.io/google-containers/kube-addon-manager            v6.4-beta.2     0a951668696f   5 months ago    79.2MB
gcr.io/google_containers/pause-amd64                   3.0             99e59f495ffa   19 months ago   747kB
Deploy and run the new image with kubernetes:
$ kubectl run flow-1 --image=flow-1:1 --port=80
deployment "flow-1" created

$ kubectl expose deployment flow-1 --type=NodePort
service "flow-1" exposed

$ minikube service flow-1
Opening kubernetes service default/flow-1 in default browser...
The browser will open a new page, but it will be blank, because the blog's example content has since moved to medium.com and the image references in this git repo are broken. You can also see the container deployed:
$ kubectl get pod
NAME                      READY     STATUS    RESTARTS   AGE
flow-1-568c7d77c4-nwbc9   1/1       Running   0          25s
nginx-7cbc4b4d9c-z8jjr    1/1       Running   0          51m
Make some changes: change the images in the index.html to something else and iterate
$ make local
Sending build context to Docker daemon  9.216kB
Step 1/4 : FROM nginx:stable-alpine
 ---> 72a20a008f56
Step 2/4 : COPY config/nginx.conf /etc/nginx/nginx.conf
 ---> Using cache
 ---> 586ef15f6d97
Step 3/4 : COPY config/default.conf /etc/nginx/conf.d/default.conf
 ---> Using cache
 ---> 700d8eaa280a
Step 4/4 : COPY website /usr/share/nginx/html/
 ---> cb37a5d53789
Removing intermediate container 9da09af64538
Successfully built cb37a5d53789
Successfully tagged flow-1:tmp-1511967704
kubectl set image deployment flow-1 *=flow-1:tmp-1511967704
deployment "flow-1" image updated
Reload the browser and the page is updated with your new images, nice!
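For reference, the Makefile itself isn't shown in this post, but judging by the output above, make local does roughly the following: build the image with a fresh tag (any unique tag works; a timestamp is just a convenient one) and point the deployment at it. A minimal manual equivalent:

$ TAG=flow-1:tmp-$(date +%s)
$ docker build -t $TAG .
$ kubectl set image deployment flow-1 "*=$TAG"
$ kubectl rollout status deployment flow-1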
Let’s clean up and remove the deployment and service we just created.
$ kubectl delete deploy,svc flow-1
deployment "flow-1" deleted
service "flow-1" deleted
To pack up and go home:
$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.
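If you also want to throw away the VM and all cluster state rather than just powering it down, minikube can delete the whole thing (only do this when you are really finished, as it is not reversible):

$ minikube delete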
Text
Kubernetes, abbreviated as k8s or Kube, is an effective container orchestration tool initially developed by Google, but currently a community project. This tool is used to automate and scale container deployment. In recent days, the popularity of Kubernetes and its ecosystem has grown immensely with its ability to design patterns, workload types, and behavior. One of the jewels in the crown is the init containers feature. This feature provides many advantages in the initialization of workloads.

What is an init Container?

An init container can be defined as a container with modified operational rules and behavior. They normally contain utilities and setup scripts that are not present in the app image. The most prominent feature of init containers is that they start and terminate before the application containers. In a single pod, there might be multiple containers running, with one or more init containers as well. Init containers have similar features and fields to app containers. These features include volumes, security settings, and resource limits, though resource limits in init containers are handled differently.

The main differences between init containers and regular containers are:

Init containers always run to completion.
Each init container must complete successfully before the next one starts.

In addition to the above, init containers do not support livenessProbe, readinessProbe, startupProbe or lifecycle, since they must run to completion before the pod can be ready.

Normally, when multiple init containers are specified in a pod, the kubelet runs them sequentially. Each container has to succeed before the next runs. When an init container fails, the kubelet will restart it repeatedly until it succeeds. However, if the pod has a restartPolicy of Never and an init container fails to run successfully, then the overall pod is treated as a failed pod.

Init containers are used in the following areas:

Fetch encrypted secrets from a vault and write them to a file system
A blockchain app that needs to register itself amongst its peers
Database migrations/applying seeds – dynamic data fetched from a database and cached for the app to run after startup
An app that must fetch an access token from an identity provider
Fetching assets from a git repository or cloud storage that requires credentials
Generating configuration files using runtime properties, for example, pod IP, assigned external IP, hostname, external ingress IP, etc.

This guide provides an overview of how to create and use init containers in Kubernetes.

Before you Begin

I assume that you already have a Kubernetes cluster set up. If not, you can use the below guides to set up a Kubernetes cluster:

Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
Deploy Kubernetes Cluster on Linux With k0s
Install Kubernetes Cluster on Ubuntu using K3s
Run Kubernetes on Debian with Minikube

Once the cluster is configured, you can use the kubectl utility to manage it. Install it as below:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin

Export the admin conf to be able to manage the cluster:

# For k0s
export KUBECONFIG=/var/lib/k0s/pki/admin.conf

Creating and Using Init Containers in Kubernetes

There are several ways in which init containers can be used. Some of them can be:

Waiting for a service to start, using a shell command:

for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1

Waiting for some time before the app container is started:

sleep 120

Registering this Pod with a remote server from the downward API with a command:

curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'

This guide provides a more practical example by creating a simple pod with two init containers.
Normally, init containers are defined in the pod.spec.initContainers array while the app containers are defined in the pod.spec.containers array. Here, we will have a first init container that waits for a service and a second one that waits for a database.

Create the configuration file:

vim application.yaml

The file will contain the below lines:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-service
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup service.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for service; sleep 2; done"]
  - name: init-db
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup database.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for database; sleep 2; done"]

Apply the manifest:

$ kubectl apply -f application.yaml
pod/myapp-pod created

Check the status of the pod:

$ kubectl get -f application.yaml
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/2   0          23s

Alternatively use:

$ kubectl get pod myapp-pod
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/2   0          65s

From the output, we have two init containers. You can view more details by describing the pod:

$ kubectl describe -f application.yaml
Name:         myapp-pod
Namespace:    default
Priority:     0
Node:         debian/192.168.205.4
Start Time:   Sun, 24 Apr 2022 10:29:34 +0000
Labels:       app=myapp
Annotations:  kubernetes.io/psp: 00-k0s-privileged
Status:       Pending
IP:           10.244.1.3
IPs:
  IP:  10.244.1.3
Init Containers:
  init-service:
    Container ID:  containerd://7ca9c41dbf829d3be660a68eb5dce4ebbe8902b6ae0562bbbd723453898ef9ce
    Image:         busybox:1.28
    Image ID:      docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
    Port:
    Host Port:
    Command:
      sh
      -c
      until nslookup service.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for service; sleep 2; done
    State:          Running
      Started:      Sun, 24 Apr 2022 10:29:38 +0000
    Ready:          False
    Restart Count:  0
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l29rv (ro)
  init-db:
    Container ID:
    Image:          busybox:1.28
    Image ID:
    Port:
    Host Port:
    Command:
      sh
      -c
      until nslookup database.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for database; sleep 2; done
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
.......

Inspect logs for the init containers in the pod (myapp-pod) with the commands:

# Inspect the first init container
kubectl logs myapp-pod -c init-service

# Inspect the second init container
kubectl logs myapp-pod -c init-db

The output suggests that the init containers are waiting for the service and database Services to exist, so we will create the two services as below:

vim service.yaml

Add the lines below to it:

---
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377

Create the services with the command:

$ kubectl apply -f service.yaml
service/service created
service/database created

Now check the status of the pod:

$ kubectl get -f application.yaml
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          4m52s

The two init containers have started and terminated successfully, which has allowed the app container in myapp-pod to start as well. That brings this guide on how to create and use init containers in Kubernetes to a close. You can now use the gathered knowledge to create your own init containers in Kubernetes.
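As a further exercise, here is a minimal sketch of one of the use cases listed earlier — fetching assets from a git repository before the app starts. The repository URL and the names are placeholders, and alpine/git is just one convenient image that ships a git client; the init container clones into an emptyDir volume that the web container then serves:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-with-content
spec:
  volumes:
  - name: site
    emptyDir: {}          # shared scratch space between the two containers
  initContainers:
  - name: fetch-content
    image: alpine/git     # small image with a git client
    command: ['git', 'clone', 'https://github.com/example/site.git', '/work']
    volumeMounts:
    - name: site
      mountPath: /work
  containers:
  - name: web
    image: nginx:stable-alpine
    volumeMounts:
    - name: site
      mountPath: /usr/share/nginx/html
EOF

Because the init container must finish before nginx starts, the cloned content is guaranteed to be in place by the time the web server comes up.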
Text
Mattermost is an open-source messaging platform designed to serve a large number of concurrent users across the world. It is the biggest competitor of MS Teams and Slack. The connection between individuals can be through video calls, normal voice calls, or chats. Mattermost gains popularity over other messaging platforms due to the following key features:

Supports third-party integrations
Drag & drop features
IT service desk
Offers file sharing
Alerts/notifications
Incident resolution – resolves incidents quickly, saving time
Data import and export
Supports document storage
Application and network performance monitoring
Workflow management and orchestration

To install and configure Mattermost on a Kubernetes cluster, follow the steps below.

Setup Requirements

Ensure that you have a Kubernetes cluster set up. If not, you can achieve this using the dedicated guides below:

Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
Install Kubernetes Cluster on Ubuntu using K3s
Deploy Kubernetes Cluster on Linux With k0s
Run Kubernetes on Debian with Minikube

You also need to have kubectl installed:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin

To be able to use kubectl, you should be able to access the cluster using the generated certificate:

# For k0s
export KUBECONFIG=/var/lib/k0s/pki/admin.conf

With the requirements met, proceed as below.

1 – Create the Mattermost Namespace

We will first create the Mattermost namespace where our installation will happen. Namespaces let you divide a single cluster into several virtual clusters.

Create the namespace:

kubectl create namespace mattermost

Verify if the namespace has been created:

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   97s
kube-node-lease   Active   97s
kube-public       Active   97s
kube-system       Active   97s
mattermost        Active   45s

2 – Create the Secrets File

The secrets file contains database details for the Mattermost database. Begin by generating base64 credentials as below:

# MySQL root user
$ echo -n 'root' | base64
cm9vdA==

# MySQL root user password
$ echo -n 'StrongRootPassword' | base64
U3Ryb25nUm9vdFBhc3N3b3Jk

# Mattermost MySQL database name
$ echo -n 'mattermost' | base64
bWF0dGVybW9zdA==

# Mattermost MySQL user
$ echo -n 'mattermost' | base64
bWF0dGVybW9zdA==

# Mattermost MySQL user password
$ echo -n 'StrongUserPassword' | base64
U3Ryb25nVXNlclBhc3N3b3Jk

Now create the secrets file:

vim secrets.yaml

Add the lines below, replacing appropriately:

apiVersion: v1
kind: Secret
metadata:
  name: mattermost.env
  namespace: mattermost
type: Opaque
data:
  ROOT: cm9vdA==
  ROOT_PASSWORD: U3Ryb25nUm9vdFBhc3N3b3Jk
  DATABASE: bWF0dGVybW9zdA==
  USER: bWF0dGVybW9zdA==
  PASSWORD: U3Ryb25nVXNlclBhc3N3b3Jk

Apply the config file:

kubectl apply -f secrets.yaml

Verify if the config has been applied:

$ kubectl get secret -n mattermost
NAME                  TYPE                                  DATA   AGE
default-token-xlwqr   kubernetes.io/service-account-token   3      84s
mattermost.env        Opaque                                5      5s

3 – Create the Mattermost Database Pod

For this guide, we will use the MariaDB database. Create a volume directory for MariaDB:

sudo mkdir /var/mattermost

Create a YAML file for the database:

vim database.yaml

Add the below content to the file:
---
apiVersion: v1
kind: Service
metadata:
  name: mariadb
  namespace: mattermost
spec:
  selector:
    app: mariadb
  ports:
  - name: mariadb
    protocol: TCP
    port: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
  namespace: mattermost
  labels:
    app: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mattermost.env
              key: ROOT_PASSWORD
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mattermost.env
              key: DATABASE
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mattermost.env
              key: USER
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mattermost.env
              key: PASSWORD
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - name: mariadb-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mariadb-storage
        hostPath:
          path: /var/mattermost
          type: Directory

Apply the configuration:

kubectl apply -f database.yaml

Verify if the pod has been created:

$ kubectl get pod -n mattermost
NAME                       READY   STATUS    RESTARTS   AGE
mariadb-5cdf7f54f4-9d7xb   1/1     Running   0          6m48s

4 – Deploy the Mattermost Service

The Mattermost service will be used to expose our application running on port 8065. There are several ways to deploy a service, such as NodePort, ClusterIP, or LoadBalancer.

Create the Mattermost service file:

vim service.yaml

Here, we will deploy a LoadBalancer service:

apiVersion: v1
kind: Service
metadata:
  name: "mattermost-service"
  namespace: mattermost
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 8065
    targetPort: 8065
    protocol: TCP
  selector:
    app: mattermost-app

Apply the config:

kubectl apply -f service.yaml

Verify if the service has been created:

$ kubectl get svc -n mattermost
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
mariadb              ClusterIP      10.100.76.56   <none>        3306/TCP         7m36s
mattermost-service   LoadBalancer   10.99.90.154   <pending>     8065:30095/TCP   5s

5 – Deploy the Mattermost Application

Now deploy the application. Create the deployment YAML as below:

vim mattermost_app.yaml

To the file, add the lines below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mattermost-app
  labels:
    app: mattermost-app
    tier: app
  namespace: mattermost
spec:
  selector:
    matchLabels:
      app: mattermost-app
  template:
    metadata:
      labels:
        app: mattermost-app
    spec:
      containers:
      - name: mattermost-app
        image: "mattermost/mattermost-team-edition:6.3.2"
        env:
        - name: DB_TYPE
          value: "mariadb"
        - name: DB_HOST
          value: "mariadb"
        - name: DB_PORT_NUMBER
          value: "3306"
        - name: MM_USERNAME
          valueFrom:
            secretKeyRef:
              name: mattermost.env
              key: USER
        - name: MM_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mattermost.env
              key: PASSWORD
        - name: MM_DBNAME
          valueFrom:
            secretKeyRef:
              name: mattermost.env
              key: DATABASE
        - name: MM_SQLSETTINGS_DATASOURCE
          value: "mattermost:StrongUserPassword@tcp(mariadb:3306)/mattermost?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s"
        - name: MM_SQLSETTINGS_DRIVERNAME
          value: "mysql"
        ports:
        - containerPort: 8065
          name: http
        volumeMounts:
        # optional persistent storage
        #- name: appdata
        #  mountPath: /mattermost/data
        - name: etclocaltime
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      # optional persistent storage
      #- name: appdata
      #  persistentVolumeClaim:
      #    claimName: mattermost-app
      - name: etclocaltime
        hostPath:
          path: /etc/localtime

Deploy the application:

kubectl apply -f mattermost_app.yaml

View if the deployment is successful:

$ kubectl get deploy -n mattermost
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mariadb          1/1     1            1           12m
mattermost-app   1/1     1            1           27s

Get pods in the Mattermost namespace:

$ kubectl get pods -n mattermost
NAME                              READY   STATUS    RESTARTS   AGE
mariadb-5cdf7f54f4-9d7xb          1/1     Running   0          13m
mattermost-app-795578f4bc-5862k   1/1     Running   0          83s

Obtain the port on which the Mattermost application has been exposed:

$ kubectl get svc -n mattermost
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
mariadb              ClusterIP      10.100.76.56   <none>        3306/TCP         14m
mattermost-service   LoadBalancer   10.99.90.154   <pending>     8065:30095/TCP   6m46s

6 – Access the Mattermost Web Interface

Now the web UI should be accessible on port 30095. If you have a firewall enabled, allow the port through it.

## For firewalld
sudo firewall-cmd --add-port=30095/tcp --permanent
sudo firewall-cmd --reload

## For UFW
sudo ufw allow 30095

Now access the Mattermost web interface in your browser using the URL http://IP_Address:30095 or http://domain-name:30095.

Create the Mattermost account and proceed to log in. Now you are set to manage your Mattermost installation. There are two options: create a team, or proceed to the System Console where you make admin changes to your server. To create a team and begin communication, set the team URL, then finish the team creation by providing the required details. Once created, you will have the team ready for communication.

Conclusion

That was enough learning! We have successfully walked through how to install and configure Mattermost on a Kubernetes cluster. I hope this was impactful.
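Before reaching for the browser, it can be worth confirming that the application actually started and connected to MariaDB. These are generic checks against the names created above (the exact log lines vary between Mattermost versions):

# Tail the application logs and look for the server start-up messages
kubectl logs deploy/mattermost-app -n mattermost --tail=50

# Test the service locally without touching the firewall
kubectl port-forward svc/mattermost-service 8065:8065 -n mattermost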
Text
Kubernetes is undoubtedly the most advanced and widely adopted container orchestration platform, powering millions of applications in production environments. One big challenge for most new Linux and Kubernetes users is setting up the cluster. Though we have a number of guides on installation and configuration of Kubernetes clusters, this is our first guide on setting up a Kubernetes cluster in the AWS cloud environment with Amazon EKS.

For users new to Amazon EKS, it is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. It runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Since Amazon EKS is fully compatible with the community version of Kubernetes, you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification required.

Amazon EKS eliminates headaches around high availability by automatically detecting and replacing unhealthy control plane instances. It also makes it easy to perform upgrades in an automated fashion. Amazon EKS is integrated with many AWS services to provide scalability and security for your applications, including the following:

Amazon ECR for container images
Elastic Load Balancing for load distribution
IAM for authentication
Amazon VPC for isolation

How To Deploy EKS Kubernetes Cluster on AWS

The next sections go deeper into the installation of a Kubernetes cluster on AWS with the Amazon EKS managed service.

Step 1: Install and Configure AWS CLI Tool

We need to set up the AWS CLI tooling since our installation will be command line based. This is done on your local workstation machine. The instructions cover both Linux and macOS.

--- Install AWS CLI on macOS ---
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /

--- Install AWS CLI on Linux ---
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

You can then determine the version of AWS CLI that you have installed with the command below:

$ aws --version
aws-cli/2.1.38 Python/3.8.8 Darwin/20.3.0 exe/x86_64 prompt/off

Configure AWS CLI credentials

After installation we need to configure our AWS CLI credentials. We'll use the aws configure command to set up the AWS CLI installation for general use:

$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]: json

Your AWS CLI details will be saved in the ~/.aws directory:

$ ls ~/.aws
config  credentials

Step 2: Install eksctl on Linux | macOS

eksctl is a simple CLI tool used to create EKS clusters on AWS. It is written in Go and uses CloudFormation. With this tool you can have a running cluster in minutes. It has the following features as of this writing:

Create, get, list and delete clusters
Create, drain and delete nodegroups
Scale a nodegroup
Update a cluster
Use custom AMIs
Configure VPC Networking
Configure access to API endpoints
Support for GPU nodegroups
Spot instances and mixed instances
IAM Management and Add-on Policies
List cluster Cloudformation stacks
Install coredns
Write kubeconfig file for a cluster

Install the eksctl tool on a Linux or macOS machine with the commands below.
--- Linux ---
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

--- macOS ---
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
brew upgrade eksctl && brew link --overwrite eksctl # When upgrading

Test that your installation was successful with the following command.
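A quick way to verify the install is to ask eksctl for its version. From there, a basic cluster can be created with a single command — the cluster name, region, and node settings below are placeholders to adjust to your environment:

$ eksctl version

$ eksctl create cluster \
    --name demo-eks \
    --region us-west-2 \
    --nodes 2 \
    --node-type t3.medium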
Text
Red Hat OpenShift 4.1 ghetto setup
I recently reviewed the AWS preview of Red Hat OpenShift 4.0 in this blog post. Now, the time has come to install OpenShift 4.1 on what is called User Provided Infrastructure (UPI). Unlike Installer Provided Infrastructure (IPI), you have to jump through a few hoops to get your environment pristine enough to eventually install OpenShift. This blog post captures some of the undocumented “features” and how you easily can get it rolling in a lab environment to start kicking the tires. By no means should you use these steps to build a production environment, although some hints here might actually help you along the way for a production setup.
Just to be clear, these are the two Red Hat docs I’m following to get a running OpenShift cluster:
Installing OpenShift 4.1 on bare-metal
Adding RHEL compute nodes
Note: If Google landed you here. Please note that this post was written 6/28/19 for OpenShift 4.1. Any references to the OpenShift documentation mentioned below may be completely inaccurate by the time you read this.
Bare or not, that is the question
While I'm installing on KVM virtual machines, I've followed the "bare-metal" procedures found in the docs. They differ somewhat in how you boot the Red Hat CoreOS installer, which can be done in multiple ways: by injecting the 'append' line into the machine XML declaration, via PXE, or simply by manually booting the ISO. I leave this detail to the discretion of the reader, as it's fairly out of scope and unique to everyone's setup.
The upfront DNS and LB requirements
I’m sitting on a lab network where I have zero control over DNS or DHCP. Whatever hostname gets put into the DHCP request gets registered in DNS. Given that OpenShift uses wildcard based DNS for all frontend traffic and the new paradigm of installing require a bunch of SRV and A records to the etcd servers in a dedicated sub-domain, I was faced with a challenge.
Since the network admins (and I'm too lazy) can't have "marketing" hosting DNS zones in the lab, I have to outsmart them with my laziness. I'm a customer of Route 53 on AWS; said and done, I set up the necessary records in an 'openshift' subdomain of datamattsson.io. This way, the lab DNS servers will simply forward the queries to the external domain. Lo and behold, it worked just fine!
The next hurdle is that you need to have a load-balancer installed (Update: DNS RR works for test and lab setups too, according to this red hatter) upfront, before you even start installing the cluster. A self-hosted LB is not an option. I started my quest to find an LB that is as simple and dumb as I need it to be. Single binary, single config-file dumb. I found this excellent blog post that lists some popular projects in this space.
I went with gobetween.io (number 10 on the list) as they had me at Hello: "gobetween is free, open-source, modern & minimalistic L4 load balancer and reverse proxy for the Cloud era". The config file is written in TOML; this is the tail section of the example config file the gobetween binary ships with:
# Local config
[servers]

# ---------- tcp example -----------
[servers.api]
protocol = "tcp"
bind = "0.0.0.0:6443"

  [servers.api.discovery]
  kind = "static"
  static_list = [
      #"10.21.199.167:6443",
      "10.21.199.140:6443",
      "10.21.199.139:6443",
      "10.21.199.138:6443"
  ]

[servers.mcs]
protocol = "tcp"
bind = "0.0.0.0:22623"

  [servers.mcs.discovery]
  kind = "static"
  static_list = [
      #"10.21.199.167:22623",
      "10.21.199.140:22623",
      "10.21.199.139:22623",
      "10.21.199.138:22623"
  ]

[servers.http]
protocol = "tcp"
bind = "0.0.0.0:80"

  [servers.http.discovery]
  kind = "static"
  static_list = [
      "10.21.199.60:80",
      "10.21.198.158:80"
  ]

[servers.https]
protocol = "tcp"
bind = "0.0.0.0:443"

  [servers.https.discovery]
  kind = "static"
  static_list = [
      "10.21.199.60:443",
      "10.21.198.158:443"
  ]
The first line item under "api" and "mcs" is commented out because it is the bootstrap node required to bring up the control-plane nodes; once that step is done, it should be removed from rotation.
Running the LB in the foreground:
gobetween -c config.toml
Do note that gobetween supports a number of different healthchecks. I opted not to experiment with those, but I would assume that in a live scenario you want to make sure health checks work.
3.. 2.. 1.. Ignition!
OpenShift is no longer installed using Ansible. It has its own openshift-install tool to help generate ignition configs. I'm not an expert on this topic whatsoever (a word of caution). I've dabbled with matchbox/ignition in the pre-Red Hat era, and it's safe to say that Red Hat CoreOS is NOT CoreOS. The two have started to diverge, and documentation you read on coreos.com doesn't map 1:1. My only observation on this topic is that Red Hat CoreOS is just a means to run OpenShift, that's it. Just as an FYI, there is a Fedora CoreOS project set up for the inclined to dive deeper.
Initially you need to set up an install-config.yaml, and here's your first pro-tip: the openshift-install program will literally consume it. It validates it, produces the outputs, and later deletes it. My advice is to copy this file outside of the install-dir directory structure so you can easily restart the ordeal from scratch.
This is my example install-config.yaml with the pull secret and ssh key redacted:
apiVersion: v1
baseDomain: datamattsson.io
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: openshift
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: 'foobar'
sshKey: 'foobar'
Hint: Your pull secret is hidden here.
Consume the install-config.yaml file:
$ openshift-install create ignition-configs --dir=.
This is will result in a directory structure like this:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
The .ign files are JSON files, somewhat obscured without line breaks and indentation. Now, the fundamental problem I had when I booted up the bootstrap node and masters on my first lap: all of them came up with localhost.localdomain as the hostname. If anyone has attempted installing a Kubernetes cluster with identical hostnames, you know it's going to turn into a salad.
Setting the hostname is quite trivial and a perfectly working example is laid out here. Simply add a "files" entry under the top-level .storage section of the ignition file:
"storage": { "files": [ { "filesystem": "root", "path": "/etc/hostname", "mode": 420, "contents": { "source": "data:,tme-lnx2-ocp-e1" } } ] }
Do note that the section where this stanza is added differs slightly between the bootstrap.ign and master.ign files.
Note: I use the nodejs json command (npm install -g json) to humanize JSON files, jq is quite capable too: jq . pull-secret.txt
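If you'd rather script the hostname injection than hand-edit each file, jq can splice the stanza in. This is only a sketch, assuming the entry belongs at the top-level .storage.files as in the snippet above; the node names are the ones used later in this post, and one output file is produced per master:

for node in tme-lnx2-ocp-e1 tme-lnx3-ocp-e2 tme-lnx4-ocp-e3; do
  # Append an /etc/hostname file entry to the generated master ignition config
  jq --arg hostname "$node" \
    '.storage.files += [{"filesystem": "root", "path": "/etc/hostname", "mode": 420, "contents": {"source": ("data:," + $hostname)}}]' \
    master.ign > "master-${node}.ign"
done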
Potential Issue: I did have an intermittent issue over a few iterations where the CNI wouldn't initialize on the compute nodes. Sometimes a reboot resolved it, and sometimes it sat for an hour or so until it eventually started and the node became Ready. I filed a support case with Red Hat on this matter and will update this blog post if I get a resolution back. This is the error message on a node stuck in NotReady:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni config uninitialized
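A few generic places to look when a node is stuck like this are the SDN pods and the kubelet journal on the affected node. These are standard commands, not anything specific to whatever fix Red Hat eventually suggests; the node name below is just one of the workers from this cluster:

$ oc get pods -n openshift-sdn -o wide
$ oc describe node tme-lnx5-ocp-c1
$ ssh core@tme-lnx5-ocp-c1 sudo journalctl -u kubelet --no-pager | tail -n 50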
Hello World!
At the end of the day, you should have a cluster that resemble this:
$ oc get nodes
NAME              STATUS   ROLES    AGE     VERSION
tme-lnx2-ocp-e1   Ready    master   6h19m   v1.13.4+9252851b0
tme-lnx3-ocp-e2   Ready    master   6h18m   v1.13.4+9252851b0
tme-lnx4-ocp-e3   Ready    master   6h17m   v1.13.4+9252851b0
tme-lnx5-ocp-c1   Ready    worker   98m     v1.13.4+9b19d73a0
tme-lnx6-ocp-c2   Ready    worker   5h4m    v1.13.4+9b19d73a0
As Red Hat is switching to a Operator model, all cluster services may now be listed as such:
$ oc get clusteroperators
NAME                                 VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.1.2     True        False         False      145m
cloud-credential                     4.1.2     True        False         False      6h19m
cluster-autoscaler                   4.1.2     True        False         False      6h19m
console                              4.1.2     True        False         False      147m
dns                                  4.1.2     True        False         False      4h42m
image-registry                       4.1.2     True        False         False      172m
ingress                              4.1.2     True        False         False      149m
kube-apiserver                       4.1.2     True        False         False      6h17m
kube-controller-manager              4.1.2     True        False         False      6h16m
kube-scheduler                       4.1.2     True        False         False      6h16m
machine-api                          4.1.2     True        False         False      6h19m
machine-config                       4.1.2     False       False         True       3h18m
marketplace                          4.1.2     True        False         False      6h14m
monitoring                           4.1.2     True        False         False      111m
network                              4.1.2     True        True          False      6h19m
node-tuning                          4.1.2     True        False         False      6h15m
openshift-apiserver                  4.1.2     True        False         False      4h42m
openshift-controller-manager         4.1.2     True        False         False      4h42m
openshift-samples                    4.1.2     True        False         False      6h
operator-lifecycle-manager           4.1.2     True        False         False      6h18m
operator-lifecycle-manager-catalog   4.1.2     True        False         False      6h18m
service-ca                           4.1.2     True        False         False      6h18m
service-catalog-apiserver            4.1.2     True        False         False      6h15m
service-catalog-controller-manager   4.1.2     True        False         False      6h15m
storage                              4.1.2     True        False         False      6h15m
The password for the user that got created during install can be found in the auth subdirectory of the install-dir. It lets you log in via oc login and also gives you access to the web console. The most obvious URL for the console is, in my case, https://console-openshift-console.apps.openshift.datamattsson.io
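From a terminal, the same credentials can be consumed directly. A minimal sketch, assuming the API endpoint follows the usual api.<cluster name>.<baseDomain> pattern, which for this cluster would be api.openshift.datamattsson.io:

# Option 1: use the generated kubeconfig directly
$ export KUBECONFIG=$PWD/auth/kubeconfig

# Option 2: log in with the generated kubeadmin password
$ oc login -u kubeadmin -p "$(cat auth/kubeadmin-password)" https://api.openshift.datamattsson.io:6443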
Now, let’s deploy some workloads on this Red Hat OpenShift 4.1 Ghetto Setup! Watch this space.