#kubernetes kubectl config
virtualizationhowto · 2 years ago
Text
Kubectl get context: List Kubernetes cluster connections
kubectl, a command line tool, facilitates direct interaction with the Kubernetes API server. Its versatility spans various operations, from listing cluster connections with kubectl get context to manipulating resources using an assortment of kubectl commands. Table of contents: Comprehending Fundamental Kubectl Commands · Working with More Than One Kubernetes Cluster · Navigating Contexts with kubectl…
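For quick reference, context handling lives under kubectl config (the "kubectl get context" phrasing above maps onto these subcommands); my-cluster below is a placeholder context name:
kubectl config get-contexts              # list every cluster connection defined in your kubeconfig
kubectl config current-context           # show the context kubectl is currently talking to
kubectl config use-context my-cluster    # switch to another cluster/context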
0 notes
generativeinai · 1 month ago
Text
Top 10 Ways Generative AI in IT Workspace Is Redefining DevOps, Infrastructure Management, and IT Operations
Generative AI is no longer just a buzzword in enterprise IT — it’s a force multiplier. As businesses strive for faster delivery, resilient infrastructure, and autonomous IT operations, generative AI is becoming the secret weapon behind the scenes. From automating code to predicting outages before they happen, generative AI is transforming how DevOps teams, system admins, and IT managers operate daily.
In this blog, we’ll explore the top 10 real-world ways generative AI is redefining the IT workspace—specifically in the areas of DevOps, infrastructure management, and IT operations.
1. AI-Generated Infrastructure as Code (IaC)
Generative AI can automatically create, test, and optimize infrastructure-as-code templates based on user input or workload requirements.
Instead of manually writing Terraform or CloudFormation scripts, engineers can describe their desired setup in plain English.
AI tools like GitHub Copilot or bespoke enterprise copilots generate IaC snippets on demand, reducing human error and speeding up cloud provisioning.
Impact: Saves hours of setup time, increases reproducibility, and enforces security-compliant defaults.
2. Predictive Incident Management and Self-Healing Systems
Generative AI models trained on historical incident logs can predict recurring issues and suggest preventive measures in real-time.
Integrated into observability platforms, AI can flag anomalies before they impact end users.
When tied into automation workflows (e.g., via ServiceNow or PagerDuty), it can trigger remediation scripts, effectively enabling self-healing infrastructure.
Impact: Reduces MTTR (Mean Time to Resolve), enhances uptime, and frees up SRE teams from firefighting.
3. Automated Code Review and Deployment Optimization
Generative AI assists in reviewing code commits with suggestions for performance, security, and best practices.
AI bots can flag problematic code patterns, auto-suggest fixes, and even optimize CI/CD pipelines.
In DevOps, AI tools can recommend the best deployment strategy (blue-green, canary, etc.) based on application type and past deployment metrics.
Impact: Speeds up release cycles while reducing bugs and deployment risks.
4. Natural Language Interfaces for DevOps Tools
Generative AI turns complex CLI and scripting tasks into simple prompts.
Instead of memorizing kubectl commands or writing bash scripts, developers can just ask: “Scale my pod to 5 instances and restart the deployment.”
AI interprets the intent and executes the backend commands accordingly.
Impact: Democratizes access to DevOps tools for non-experts and accelerates operations.
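To make that concrete, the plain-English request above would typically be translated by the AI layer into ordinary kubectl calls such as the following sketch (my-app is a placeholder deployment name):
kubectl scale deployment my-app --replicas=5    # "scale my pod to 5 instances"
kubectl rollout restart deployment my-app       # "restart the deployment"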
5. Dynamic Knowledge Management and Documentation
Keeping IT documentation up to date is painful — generative AI changes that.
It auto-generates technical documentation based on system changes, deployment logs, and config files.
Integrated with enterprise wikis or GitHub repositories, AI ensures every process is captured in real time.
Impact: Saves time, ensures compliance, and keeps institutional knowledge fresh.
6. Smart Capacity Planning and Resource Optimization
AI-powered models predict workload trends and auto-scale infrastructure accordingly.
Generative AI can simulate future demand scenarios, suggesting cost-saving measures like right-sizing or moving workloads to spot instances.
In Kubernetes environments, AI can recommend pod-level resource adjustments.
Impact: Cuts infrastructure costs and ensures optimal performance during traffic spikes.
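As a rough sketch of what such a recommendation can boil down to in practice, the commands below right-size a deployment by hand (names and values are illustrative, and kubectl top requires the metrics-server add-on):
kubectl top pods -n my-namespace    # observe actual CPU/memory consumption
kubectl set resources deployment my-app -n my-namespace \
  --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi    # apply right-sized values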
7. Personalized IT Assistant for Developers and Admins
Think of this as a ChatGPT specifically trained on your IT stack.
Developers can ask, “Why did the build fail yesterday at 3 PM?” or “How do I restart the staging DB?”
The AI assistant fetches logs, searches through config files, and provides contextual answers.
Impact: Reduces dependency on IT support, accelerates troubleshooting, and enhances developer autonomy.
8. AI-Augmented Threat Detection and Security Auditing
Generative AI scans code, configs, and network activity to detect vulnerabilities.
It can generate risk reports, simulate attack vectors, and recommend patching sequences.
Integrated into DevSecOps workflows, it ensures security is not bolted on, but baked in.
Impact: Proactively secures the IT environment without slowing down innovation.
9. Cross-Platform Automation of Repetitive IT Tasks
Routine tasks like server patching, log rotation, or service restarts can be automated through generative scripts.
AI can orchestrate cross-platform operations involving AWS, Azure, GCP, and on-prem servers from a single interface.
It also ensures proper logging and alerting are in place for all automated actions.
Impact: Enhances operational efficiency and reduces human toil.
10. Continuous Learning from Logs and Feedback Loops
Generative AI models improve over time by learning from logs, performance metrics, and operator feedback.
Each remediation or change adds to the AI’s knowledge base, making it smarter with every iteration.
This creates a virtuous cycle of continuous improvement across the IT workspace.
Impact: Builds an adaptive IT environment that evolves with business needs.
Final Thoughts: The AI-Augmented Future of IT Is Here
Generative AI isn’t replacing IT teams — it’s amplifying their capabilities. Whether you're a DevOps engineer deploying daily, an SRE managing thousands of endpoints, or an IT manager overseeing compliance and uptime, generative AI offers tools to automate, accelerate, and augment your workflows.
As we move toward hyper-automation, the organizations that succeed will be those that integrate Generative AI in the IT workspace strategically and securely.
0 notes
nullset2 · 3 months ago
Text
Busybox for Kubernetes debugging
I ran into a problem with my Kubernetes homelab where, in a PVC that I use to store Plex Media Server configuration, the Plex Media Server lost write access for some reason (probably a botched reboot).
PMS: failure detected. Read/write access is required for path: /config/Library/Application Support/Plex Media Server
In cases like these where one needs to "browse inside a PVC" independently of the pod that consumes it (since the plex pod would crash, I had no way to shell into it), a solution is to create a barebones pod, attach the PVC to it, then shell into it. This is easy to achieve:
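A minimal manifest along these lines does the job; the pod name and claimName here are assumptions, so substitute the actual name of your PVC:
cat <<'EOF' > busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-debug
spec:
  containers:
  - name: busybox
    image: busybox:1.36
    command: ["sleep", "3600"]       # keep the pod alive so we can shell into it
    volumeMounts:
    - name: plex-config
      mountPath: /config             # mount the PVC where Plex expects its config
  volumes:
  - name: plex-config
    persistentVolumeClaim:
      claimName: plex-config         # replace with your PVC's name
EOF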
This manifest can simply be applied with kubectl apply -f busybox.yaml, and then a shell can be started with kubectl exec -it pvc-debug -- sh (using the pod name from the example above). From there, the permissions issue was easy to fix.
0 notes
lacyc3 · 4 months ago
Text
It's quite astonishing that it's already March
Where did the last two months go?
I suspect this will be another long-short year, where the weeks feel very long but still rush by.
Still, the yearly review can't be skipped this year either, even if it's a bit... well, awkward, I know.
Ubuntu 20.04 LTS support ends in May, but I don't know when I'll find time for the upgrade. Ubuntu Pro is free for individuals, so I think I'll sign up for it. Surely I'll manage the upgrade sometime in the next five years.. :D
The problem is that if I'm doing an OS upgrade anyway, the old site underneath should also get a Drupal 7 -> ?? migration. It might end up being Backdrop, but just reviewing the tasks involved killed my enthusiasm. And of course Drupal 7 no longer has security support, so.. yes, that's awkward too.
But step zero is checking whether the Ansible scripts written for 20.04 still run. In theory there shouldn't be any problem with them, but since the system config has matured, I haven't touched them in quite a while. A few things may need updating.
It also wouldn't hurt if the new server already ran Pingvin 1.0, but that would require writing code, because right now... things are a bit (very) torn apart. A few understandable feature requests came in from W, which implied other things, which implied further changes, which in turn...
The search should also be made smarter, so that if I search for something and a tag with exactly that name exists, the first result is the tag itself. But then tags also need an image and a description, which we don't really handle at the moment. :D
I also still owe some news posts, and there are two cheat sheets that should be published. One is kubectl + Kubernetes with a self-signed cert (the commands are ready, only the write-up around them is missing), the other is how to set a maximum battery charge level on a laptop. The latter still needs to be worked out -> and tried out.
I should also make a screenshot montage of the latest Linux Mint for an acquaintance. That could even go on the site as a screenshot tour.
So the cobbler's shoes are always full of holes, as they say.
1 note · View note
qcs01 · 6 months ago
Text
Getting Started with Kubernetes: A Hands-on Guide
Kubernetes: A Brief Overview
Kubernetes, often abbreviated as K8s, is a powerful open-source platform designed to automate the deployment, scaling, and management of containerized applications. It simplifies the complexities of container orchestration, allowing developers to focus on building and deploying applications without worrying about the underlying infrastructure.
Key Kubernetes Concepts
Cluster: A group of machines (nodes) working together to run containerized applications.
Node: A physical or virtual machine that runs containerized applications.
Pod: The smallest deployable unit of computing, consisting of one or more containers.
Container: A standardized unit of software that packages code and its dependencies.
Setting Up a Kubernetes Environment
To start your Kubernetes journey, you can set up a local development environment using minikube. Minikube creates a single-node Kubernetes cluster on your local machine.
Install minikube: Follow the instructions for your operating system on the minikube website.
Start the minikube cluster: minikube start
Configure kubectl: minikube start sets the kubectl context automatically; to switch back to it later, run kubectl config use-context minikube
Interacting with Kubernetes: Using kubectl
kubectl is the command-line tool used to interact with Kubernetes clusters. Here are some basic commands:
Get information about nodes: kubectl get nodes
Get information about pods: kubectl get pods
Create a deployment: kubectl create deployment my-deployment --image=nginx
Expose a service: kubectl expose deployment my-deployment --type=NodePort
Your First Kubernetes Application
Create a simple Dockerfile:
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
Build the Docker image: docker build -t my-nginx .
Push the image to a registry (e.g., Docker Hub): docker push your-username/my-nginx
Create a Kubernetes Deployment: kubectl create deployment my-nginx --image=your-username/my-nginx
Expose the deployment as a service: kubectl expose deployment my-nginx --type=NodePort
Access the application: Use the NodePort exposed by the service to access the application in your browser.
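A quick way to find that NodePort, or to get a ready-made URL on minikube (the service name matches the example above):
kubectl get service my-nginx      # the PORT(S) column shows 80:3XXXX/TCP; 3XXXX is the NodePort
minikube service my-nginx --url   # prints a URL you can open directly in the browser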
Conclusion
Kubernetes offers a powerful and flexible platform for managing containerized applications. By understanding the core concepts and mastering the kubectl tool, you can efficiently deploy, scale, and manage your applications.
Keywords: Kubernetes, container orchestration, minikube, kubectl, deployment, scaling, pods, services, Docker, Dockerfile
1 note · View note
nel-world · 7 months ago
Text
1. Assess Application and Cluster Readiness
Review Application Dependencies:
Analyze libraries, JDK versions, frameworks (e.g., Spring Boot, Jakarta EE), and external services (databases, APIs).
Identify any Kubernetes-specific configurations (e.g., ConfigMaps, Secrets, environment variables).
Containerization:
Ensure your Java app is containerized using tools like Docker.
Update the Dockerfile if necessary (e.g., for JDK/JRE updates or changes in base image).
2. Prepare Kubernetes Environment
Target Cluster Validation:
Ensure the target Kubernetes cluster is set up and ready.
Check for compatibility with the current cluster’s API version and resources.
Namespace Setup:
Create equivalent namespaces, RBAC roles, and policies in the target cluster.
Storage:
Verify Persistent Volume Claims (PVCs) and ensure similar storage classes are available.
3. Backup and Export Resources
Kubernetes Resources:
Export all application-related resources using kubectl: kubectl get all -n <namespace> -o yaml > resources.yaml
Persistent Data:
Back up database data or persistent volumes using tools like Velero or rsync.
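For example, with Velero a namespace-level backup and restore looks roughly like this (the backup name is a placeholder):
velero backup create my-app-backup --include-namespaces <namespace>
velero restore create --from-backup my-app-backup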
4. Migrate Application Configurations
ConfigMaps and Secrets:
Export existing configurations:
kubectl get configmap -n <namespace> -o yaml > configmap.yaml
kubectl get secret -n <namespace> -o yaml > secret.yaml
Reapply them in the new cluster:
kubectl apply -f configmap.yaml
kubectl apply -f secret.yaml
5. Deploy to Target Cluster
Deploy Application:
Use Helm charts, Kubernetes manifests, or CI/CD pipelines to deploy the Java application to the new cluster.
Update the kubeconfig for the target cluster, then apply the exported resources:
kubectl config use-context <target-cluster-context>
kubectl apply -f resources.yaml
Persistent Volumes:
Ensure data is restored to the persistent volumes if needed.
6. Test the Migration
Validation:
Ensure the application is running as expected (pods, services, and networking).
Run integration and regression tests.
Connectivity:
Validate external service integrations (e.g., databases, APIs).
Check ingress configurations for public access.
7. Cutover and Cleanup
Switch Traffic:
Update DNS or external load balancer to direct traffic to the new cluster.
Monitor:
Use tools like Prometheus, Grafana, or ELK Stack to monitor application health.
Clean Up Old Environment:
After verifying stability, clean up resources in the old cluster.
Tools to Assist in Migration
Velero: For backup and restore of Kubernetes resources and persistent volumes.
Helm: To manage application deployments and upgrades.
Kubernetes Dashboard: For real-time cluster monitoring.
Kubectl Plugins: To enhance CLI functionality.
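If plugins are new to you, one common starting point is the krew plugin manager (this assumes krew itself is already installed); the ctx and ns plugins, for instance, speed up the context and namespace switching a migration involves:
kubectl krew install ctx ns    # install the kubectx/kubens plugins
kubectl ctx                    # list and switch contexts
kubectl ns <namespace>         # switch the default namespace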
If you provide more details about the migration (e.g., specific cloud providers or Kubernetes environments), I can offer more tailored advice!
You said:
Migration of 5000 files per hr fix
ChatGPT said:
If you're migrating a large number of files (e.g., 5000 files per hour) and encountering performance bottlenecks, you can improve the speed and reliability of your file migration by addressing several key areas. Here's a step-by-step guide to optimize file transfer:
1. Identify and Analyze Bottlenecks
File System Bottlenecks:
Check the source and destination disk I/O performance.
Ensure the files are not locked or being used during migration.
Network Bandwidth:
Measure your available network bandwidth using tools like iperf and ensure it can handle the expected load.
File Metadata Overhead:
Small files can add significant overhead due to metadata operations (e.g., file creation, permissions).
2. Optimize Transfer Mechanisms
Use Parallel Processing:
Transfer multiple files concurrently using multithreaded or parallel tools like:
rsync (note that rsync itself is single-threaded; run several instances, e.g. via GNU Parallel or xargs -P, for concurrency).
GNU Parallel for concurrent file transfers.
find /source/directory -type f | parallel -j8 rsync -a {} /destination/directory
Use a custom script or a library like Python's concurrent.futures for more control.
Batch File Transfers:
Group files into batches (e.g., tar archives) to reduce the overhead of transferring many small files individually:
tar -cvf files.tar /source/files && rsync files.tar /destination
Extract the files on the destination:
tar -xvf files.tar
3. Ensure Network Optimization
Compression:
Compress files during transfer to reduce data volume: rsync -az /source/directory /destination/directory
Increase TCP Window Size:
Tune TCP settings for large transfers, or use rsync options that favour raw throughput: rsync --bwlimit=0 --whole-file /source /destination
Use Faster Protocols:
Use optimized file transfer protocols like Rclone, Aspera, or SFTP for high-speed transfer.
4. Improve Source and Destination Performance
Disk Performance:
Optimize disk reads and writes using SSDs for temporary storage or caches.
Ensure the file system supports efficient file handling (e.g., ext4, XFS).
File System Tuning:
For very large numbers of files, tune parameters like inode_cache to handle metadata efficiently.
5. Error Handling and Retry Logic
Retry Failed Transfers:
Use tools like rsync to retry failed files automatically: rsync -avz --partial --progress /source /destination
Logging and Monitoring:
Log all transferred files and any failures so that incomplete batches can be identified and retried.
You said:
Improve file transfer to S3
ChatGPT said:
Improving file transfer to Amazon S3 requires addressing both performance and reliability. Below are strategies and tools to optimize large-scale or high-volume file transfers:
1. Use AWS-Specific Tools
AWS CLI with Parallelism:
AWS CLI with Parallelism: The AWS CLI chunks large objects and parallelizes uploads automatically; tune this behaviour via the s3 settings shown below. A recursive copy looks like: aws s3 cp /local/path s3://your-bucket-name --recursive --storage-class STANDARD_IA --region <region>
Enable parallel transfers: update your AWS CLI configuration file (~/.aws/config) to increase concurrency:
[default]
s3 =
  max_concurrent_requests = 10
  max_queue_size = 1000
  multipart_threshold = 64MB
  multipart_chunksize = 64MB
AWS DataSync:
Use AWS DataSync for automated, high-performance, large-scale transfers with minimal manual intervention. DataSync can also manage retry and error handling.
2. Optimize S3 Uploads
Multipart Upload:
Use multipart uploads for files larger than 100 MB to improve transfer speed and reliability.
Example using AWS CLI: aws s3api create-multipart-upload --bucket your-bucket-name --key file_key
Automate multipart uploads with the AWS SDK or tools like boto3 for Python.
Use the Correct Storage Class:
Choose the appropriate storage class to save costs and improve upload speeds:
STANDARD (default): High performance for frequently accessed data.
INTELLIGENT_TIERING: Automatically optimizes storage costs for infrequently accessed files.
3. Improve Network Performance
Enable Transfer Acceleration:
Use S3 Transfer Acceleration for faster uploads over long distances by leveraging AWS edge locations: aws s3 cp /local/path s3://your-bucket-name --region <region> --endpoint-url https://<bucket-name>.s3-accelerate.amazonaws.com
Compression:
Compress files before transferring to reduce data size: tar -czvf archive.tar.gz /local/path && aws s3 cp archive.tar.gz s3://your-bucket-name
Optimize Network:
Use AWS Direct Connect or a high-bandwidth internet connection for large-scale transfers.
Monitor latency and bandwidth with tools like iperf.
4. Automate and Parallelize Transfers
Rclone:
Use rclone for high-speed S3-compatible transfers with parallelism and retry capabilities: rclone copy /local/path s3:your-bucket-name --transfers 16 --s3-upload-concurrency 16
GNU Parallel:
Transfer files concurrently by splitting file lists: find /local/path -type f | parallel -j10 aws s3 cp {} s3://your-bucket-name
5. Error Handling and Retry
Retry Mechanisms:
Use tools or scripts that automatically retry failed uploads.
AWS CLI has built-in retry logic; increase its retries or timeouts if necessary: aws s3 cp --cli-read-timeout 600 --cli-connect-timeout 600 /local/path s3://your-bucket-name
Chunked Retry:
Break large transfers into smaller, manageable chunks and retry failed chunks only.
6. Monitor and Optimize Transfers
AWS CloudWatch Metrics:
Monitor transfer performance and error rates in real-time.
S3 Inventory:
Use S3 inventory reports to validate uploaded files.
AWS SDKs or Custom Scripts:
Build custom monitoring and retry logic with SDKs like boto3 (Python).
Example Optimized Upload Command
aws s3 cp /local/path s3://your-bucket-name --recursive --storage-class STANDARD --region us-east-1 --cli-read-timeout 600 --cli-connect-timeout 600
When to Use Snowball or Snowmobile
For extremely large datasets (terabytes or petabytes), use AWS Snowball or AWS Snowmobile for offline data transfer to S3.
These steps can dramatically improve the efficiency and reliability of your file transfer process to S3. Let me know if you need help implementing any of these!
You said:
Improve file transfers
ChatGPT said:
Improving file transfer performance and reliability, whether between servers, local systems, or to cloud storage like Amazon S3, involves a combination of strategies. Here are some best practices for improving file transfers:
1. Optimize the File Transfer Process
Parallelize Transfers
Transfer multiple files concurrently to improve throughput:
Use tools like rsync, scp, or cloud-specific SDKs with multi-threading capabilities.
Example with rsync: rsync -azP --info=progress2 --max-size=100M --exclude="*.tmp" /source/ /destination/
Batch Small Files
Small files cause metadata overhead during transfer. Bundle them into archives:
tar -czvf archive.tar.gz /source/
Transfer the archive, then extract it on the destination:
tar -xzvf archive.tar.gz
2. Use High-Performance Transfer Tools
Rclone
Ideal for cloud storage and large-scale transfers with parallelism: rclone copy /source/ remote:bucket-name --transfers=16 --checkers=8
GNU Parallel
Parallelize file uploads/downloads: find /source/ -type f | parallel -j8 scp {} user@destination:/target/
Multithreaded Tools
Use tools with built-in multithreading, such as bbcp, Aspera, or scp (with parallel scripts).
3. Network Optimization
Compression
Compress files before transfer to reduce the amount of data transmitted:
gzip largefile.txt
Or combine compression with the transfer:
tar czf - /source/ | ssh user@destination "tar xzf - -C /target/"
Bandwidth Management
Limit bandwidth usage to prevent throttling:
With rsync: rsync --bwlimit=1000 -az /source/ /destination/
Use TCP tuning options to optimize network transfer rates: sysctl -w net.ipv4.tcp_window_scaling=1
Use Faster Protocols
Use optimized transfer protocols:
SFTP: Secure but slower for high-volume transfers.
Rsync/SSH: Efficient for incremental and large transfers.
Aspera: High-speed UDP-based protocol.
Enable Transfer Acceleration (Cloud)
For Amazon S3: aws s3 cp /source/ s3://your-bucket/ --region <region> --endpoint-url https://<bucket-name>.s3-accelerate.amazonaws.com
4. Error Handling and Resilience
Resume Transfers
Use tools that support resuming failed transfers:
Rsync: rsync -az --partial /source/ /destination/
AWS CLI: aws s3 sync /source/ s3://bucket-name/ (sync skips objects that already exist, so re-running it effectively resumes an interrupted transfer)
Retry Logic
Implement retry logic for failed transfers using scripts or tools.
Example using bash: for i in {1..5}; do scp file user@destination:/target/ && break || sleep 5; done
5. Monitor and Analyze Performance
Real-Time Monitoring
Monitor progress with tools:
Rsync: --progress flag.
SCP: Wrap in pv for progress visualization: tar cf - /source/ | pv | ssh user@destination "tar xf - -C /target/"
Analyze Bottlenecks
Use tools like iperf or nload to check network performance.
Use iotop or dstat to monitor disk I/O on both source and destination.
6. Scale with Advanced Solutions
Distributed File Transfer
Use tools like Globus for efficient distributed data transfers.
Data Transfer Appliances
For extremely large datasets, consider appliances like AWS Snowball or offline methods to ship data physically.
Content Delivery Networks (CDNs)
Use CDNs for faster file distribution to end users.
By combining these strategies—parallelism, compression, protocol optimization, and monitoring—you can significantly enhance file transfer speed and reliability. Let me know if you'd like further customization for a specific use case!
You said:
Threadpool
ChatGPT said:
A thread pool is a collection of pre-initialized worker threads that stand ready to perform tasks. Instead of creating and destroying threads for each task, a thread pool manages a set of reusable threads, improving performance and resource utilization in multi-threaded applications.
Here’s how to effectively use and optimize thread pools in your application.
Why Use a Thread Pool?
Performance: Reduces the overhead of creating and destroying threads repeatedly.
Resource Management: Controls the number of concurrent threads, preventing system overload.
Scalability: Allows a controlled environment for task execution, making it easier to scale applications.
Key Concepts of Thread Pools
Core Threads: Minimum number of threads that are always kept alive.
Max Threads: The upper limit on the number of threads the pool will create when demand spikes.
You said:
Improve java spring file transfer
ChatGPT said:
Improving file transfer in a Java Spring application involves optimizing both the code and the underlying infrastructure. Below are strategies to enhance file transfer performance and reliability.
1. Optimize File Transfer Code
Use Streaming for Large Files
Avoid loading the entire file into memory; instead, use streaming:javaCopy code@PostMapping("/upload") public ResponseEntity<String> uploadFile(@RequestParam("file") MultipartFile file) throws IOException { try (InputStream inputStream = file.getInputStream()) { // Process the input stream directly // Example: Save the file Files.copy(inputStream, Paths.get("target-directory/" + file.getOriginalFilename()), StandardCopyOption.REPLACE_EXISTING); } return ResponseEntity.ok("File uploaded successfully!"); }
Enable Asynchronous File Transfers
Use Spring's @Async annotation for non-blocking file uploads/downloads: @Async public CompletableFuture<String> uploadFileAsync(MultipartFile file) { // File processing logic return CompletableFuture.completedFuture("Upload Complete"); }
Use Chunked Uploads for Large Files
Break large files into smaller chunks for efficient transfer and to handle interruptions gracefully:javaCopy code@PostMapping("/uploadChunk") public ResponseEntity<String> uploadChunk(@RequestParam("chunk") MultipartFile chunk, @RequestParam("fileId") String fileId) { // Append chunk data to the file return ResponseEntity.ok("Chunk uploaded successfully!"); }
2. Optimize for Cloud Storage (e.g., AWS S3, Azure Blob)
Use SDKs for Direct Uploads
Use cloud-specific SDKs (e.g., AWS SDK for S3) to upload files directly to cloud storage instead of saving locally first:javaCopy code@PostMapping("/uploadToS3") public ResponseEntity<String> uploadToS3(@RequestParam("file") MultipartFile file) throws IOException { AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient(); String bucketName = "your-bucket-name"; s3Client.putObject(bucketName, file.getOriginalFilename(), file.getInputStream(), new ObjectMetadata()); return ResponseEntity.ok("File uploaded to S3!"); }
Enable S3 Transfer Acceleration
Use Transfer Acceleration for faster uploads to Amazon S3: AmazonS3 s3Client = AmazonS3ClientBuilder.standard() .withAccelerateModeEnabled(true) .build();
3. Optimize File Transfer in REST APIs
Enable Compression
Compress file responses to reduce data size: spring: servlet: multipart: enabled: true max-file-size: 50MB max-request-size: 50MB gzip: enabled: true min-size: 1024
Use HTTP/2
Switch to HTTP/2 for better concurrency and reduced latency:
server:
  http2:
    enabled: true
Resume Interrupted Uploads
Use ETag or file metadata to allow resumable file uploads: @PostMapping("/resumeUpload") public ResponseEntity<String> resumeUpload(@RequestParam("file") MultipartFile file, @RequestParam("offset") Long offset) { // Resume logic based on offset return ResponseEntity.ok("Upload resumed!"); }
4. Parallelize File Transfers
Multithreaded File Upload
Use a ThreadPoolExecutor or Spring's TaskExecutor, as expanded on below.
In addition to using @Async, there are other strategies and tools to improve file transfer performance and reliability in a Java Spring application. Here are some approaches:
1. Reactive Programming with Spring WebFlux
For truly non-blocking and asynchronous file transfers, use Spring WebFlux:
Streaming File Upload:javaCopy code@PostMapping(value = "/upload", consumes = MediaType.MULTIPART_FORM_DATA_VALUE) public Mono<Void> uploadFile(Flux<FilePart> filePartFlux) { return filePartFlux.flatMap(filePart -> { Path path = Paths.get("target-directory/" + filePart.filename()); return filePart.transferTo(path); }).then(); }
Streaming File Download:javaCopy code@GetMapping(value = "/download", produces = MediaType.APPLICATION_OCTET_STREAM_VALUE) public Mono<ResponseEntity<Resource>> downloadFile() { Path path = Paths.get("target-directory/large-file.txt"); return Mono.just(ResponseEntity.ok() .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=large-file.txt") .body(new FileSystemResource(path))); }
2. Use CompletableFuture and ExecutorService
Combine CompletableFuture with a custom ExecutorService for better control:javaCopy code@PostMapping("/upload") public CompletableFuture<String> uploadFile(@RequestParam("file") MultipartFile file) { ExecutorService executor = Executors.newFixedThreadPool(4); return CompletableFuture.supplyAsync(() -> { try { // File processing logic Files.copy(file.getInputStream(), Paths.get("target/" + file.getOriginalFilename()), StandardCopyOption.REPLACE_EXISTING); return "File uploaded successfully!"; } catch (IOException e) { throw new RuntimeException("File upload failed!", e); } }, executor); }
3. Apache Kafka for Asynchronous Processing
Use Apache Kafka to decouple file upload from processing:
Producer sends file metadata to Kafka after upload:javaCopy code@PostMapping("/upload") public ResponseEntity<String> uploadFile(@RequestParam("file") MultipartFile file) throws IOException { // Save the file Files.copy(file.getInputStream(), Paths.get("target/" + file.getOriginalFilename())); // Send metadata to Kafka kafkaTemplate.send("file-topic", file.getOriginalFilename()); return ResponseEntity.ok("File uploaded and queued for processing!"); }
Consumer processes files asynchronously:javaCopy code@KafkaListener(topics = "file-topic", groupId = "file-group") public void processFile(String fileName) { // Process the file Path path = Paths.get("target/" + fileName); System.out.println("Processing file: " + path); }
4. Chunked File Transfers
Handle large files by uploading them in chunks:
Frontend sends file chunks with an identifier.
Backend reconstructs the file:javaCopy code@PostMapping("/uploadChunk") public ResponseEntity<String> uploadChunk(@RequestParam("chunk") MultipartFile chunk, @RequestParam("fileId") String fileId, @RequestParam("chunkNumber") int chunkNumber) { Path path = Paths.get("target/" + fileId + "_" + chunkNumber); try { Files.copy(chunk.getInputStream(), path, StandardCopyOption.REPLACE_EXISTING); } catch (IOException e) { return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Failed to upload chunk"); } return ResponseEntity.ok("Chunk uploaded successfully!"); }
Reassemble chunks when all are uploaded.
5. WebSockets for Real-Time Progress
Use WebSockets to provide real-time feedback on file transfer progress:javaCopy code@Controller public class WebSocketController { @MessageMapping("/uploadStatus") @SendTo("/topic/progress") public String sendProgress(String status) { return status; } }
Send progress updates from the server while uploading: simpMessagingTemplate.convertAndSend("/topic/progress", "50% completed");
6. Redis or Database Queues
Use a Redis queue or a database to handle file metadata asynchronously.
Enqueue metadata after upload: redisTemplate.opsForList().leftPush("fileQueue", fileMetadata);
Process the queue in the background: String metadata = redisTemplate.opsForList().rightPop("fileQueue");
7. Use MultipartConfig for Large Files
Configure multipart file handling for large files in application.properties:
spring.servlet.multipart.max-file-size=1GB
spring.servlet.multipart.max-request-size=2GB
Custom bean for fine-tuning:javaCopy code@Bean public MultipartConfigElement multipartConfigElement() { MultipartConfigFactory factory = new MultipartConfigFactory(); factory.setMaxFileSize("1GB"); factory.setMaxRequestSize("2GB"); return factory.createMultipartConfig(); }
8. Use AWS S3 for Direct Uploads
Use pre-signed URLs to offload upload work to the client:javaCopy code@GetMapping("/generatePresignedUrl") public ResponseEntity<String> generatePresignedUrl(@RequestParam String fileName) { AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient(); GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest("bucket-name", fileName) .withMethod(HttpMethod.PUT) .withExpiration(new Date(System.currentTimeMillis() + 3600 * 1000)); URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest); return ResponseEntity.ok(url.toString()); }
The client uploads directly to S3 using the URL.
9. Optimize File Transfer for High Performance
Compression
Compress files before upload to reduce transfer size: byte[] compressedData = compress(file.getBytes()); Files.write(Paths.get("compressed-file.gz"), compressedData);
Load Balancing
Use a load balancer (e.g., AWS ALB or NGINX) for scalable upload endpoints.
Retry Logic
Implement retries for failed transfers to improve reliability: int maxRetries = 3; int retries = 0; while (retries < maxRetries) { try { // Attempt file upload break; } catch (IOException e) { retries++; if (retries == maxRetries) { throw e; } } }
These strategies can enhance file transfer performance, scalability, and reliability in Java Spring applications. Let me know if you need code examples for specific use cases!
You said:
S3 reliability
ChatGPT said:
Ensuring reliability when transferring files to Amazon S3 involves minimizing failures, handling errors effectively, and ensuring data integrity. Below are best practices and techniques to achieve reliable file transfers to S3:
1. Use Multipart Upload for Large Files
Multipart uploads split large files into smaller parts, upload them in parallel, and reassemble them in S3. This improves reliability and speeds up transfers.
Example Using AWS SDK (Java)
AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient(); String bucketName = "your-bucket-name"; String fileName = "large-file.zip"; String keyName = "uploads/" + fileName; File file = new File(fileName); TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build(); Upload upload = tm.upload(bucketName, keyName, file); try { upload.waitForCompletion(); System.out.println("File uploaded successfully!"); } catch (AmazonClientException e) { System.err.println("Upload failed: " + e.getMessage()); }
2. Use Retry Logic
Add retry mechanisms to handle intermittent network issues or service interruptions.
AWS SDK Retry Settings
The AWS SDK retries failed requests automatically, but you can customize it:
ClientConfiguration config = new ClientConfiguration(); config.setRetryPolicy(PredefinedRetryPolicies.getDefaultRetryPolicyWithCustomMaxRetries(5)); AmazonS3 s3Client = AmazonS3ClientBuilder.standard() .withClientConfiguration(config) .build();
3. Enable Transfer Acceleration
Transfer Acceleration uses AWS edge locations to improve upload speeds over long distances.
Enable Transfer Acceleration in S3
Enable it in the S3 bucket settings.
Update your client code to use the accelerated endpoint:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard() .withAccelerateModeEnabled(true) .build();
4. Validate Data Integrity
Use MD5 checksums to ensure the uploaded file matches the original file.
AWS CLI Example
aws s3 cp file.txt s3://your-bucket/ --content-md5 <calculated-md5>
Java Example
String md5 = Base64.getEncoder().encodeToString(DigestUtils.md5(new FileInputStream(file))); ObjectMetadata metadata = new ObjectMetadata(); metadata.setContentMD5(md5); PutObjectRequest request = new PutObjectRequest(bucketName, keyName, new FileInputStream(file), metadata); s3Client.putObject(request);
5. Monitor File Transfers
Use CloudWatch Metrics to monitor S3 request success, failures, and latencies.
Monitor transfer progress with TransferManager in the AWS SDK:
TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build(); Upload upload = tm.upload(bucketName, keyName, file); upload.addProgressListener(progressEvent -> { System.out.println("Transferred bytes: " + progressEvent.getBytesTransferred()); }); upload.waitForCompletion();
6. Handle Failures Gracefully
Use Resumable Transfers for interrupted uploads with TransferManager:
TransferManager tm = TransferManagerBuilder.standard().build(); Upload upload = tm.upload(bucketName, keyName, file); // Add a progress listener upload.addProgressListener((ProgressListener) progressEvent -> { System.out.println("Transferred: " + progressEvent.getBytesTransferred()); }); // Graceful failure handling try { upload.waitForCompletion(); } catch (AmazonClientException e) { System.err.println("Error occurred: " + e.getMessage()); }
7. Use Pre-Signed URLs
Generate pre-signed URLs for reliable, time-limited uploads directly from the client, reducing the load on your server.
Generate Pre-Signed URL
GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, keyName) .withMethod(HttpMethod.PUT) .withExpiration(new Date(System.currentTimeMillis() + 3600 * 1000)); URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest); System.out.println("Pre-Signed URL: " + url);
8. Use S3 Event Notifications
Configure S3 Event Notifications to trigger AWS Lambda or SNS for post-upload validation, processing, or alerting on failures.
Example: Notification for Upload Completion
Configure your S3 bucket to send an event to an SQS queue or Lambda function upon successful upload.
9. Leverage Data Replication
Use S3 Cross-Region Replication (CRR) or Same-Region Replication (SRR) for redundancy.
Automatically replicate files to another bucket for disaster recovery.
10. Automate with AWS DataSync
Use AWS DataSync for automated and highly reliable file transfers between on-premises storage and S3. It includes built-in retry and integrity checks.
11. Optimize Network Performance
Compress files before uploading: Files.write(Paths.get("compressed-file.gz"), gzipCompress(fileContent));
Use efficient transfer protocols (e.g., HTTP/2 if supported by your client).
12. Secure Transfers
Always use HTTPS for secure communication.
Use S3 Bucket Policies and IAM Roles to restrict access to authorized users.
Example: Complete Reliable Upload Function
public void reliableUpload(File file, String bucketName, String keyName) { AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient(); TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build(); try { // Multipart upload Upload upload = tm.upload(bucketName, keyName, file); upload.addProgressListener((ProgressListener) progressEvent -> { System.out.println("Transferred bytes: " + progressEvent.getBytesTransferred()); }); upload.waitForCompletion(); System.out.println("File uploaded successfully!"); } catch (AmazonClientException e) { System.err.println("Upload failed: " + e.getMessage()); } finally { tm.shutdownNow(); } }
0 notes
qbokubernetesengine · 1 year ago
Text
What is Kubeflow and How to Deploy it on Kubernetes
Kubeflow, an open-source platform, simplifies and streamlines machine learning (ML) workflows on Kubernetes, the leading container orchestration technology. From data preprocessing to model deployment, it is like having a specialised toolbox for managing all your ML and AI operations within the Kubernetes ecosystem. Keep reading this article to learn about Kubeflow deployment in Kubernetes.
Why Kubeflow? 
Integrated Approach 
Complex ML processes can be managed more easily with Kubeflow because it brings several tools and components together into a unified ecosystem.
Efficiency in scaling 
Thanks to its foundation in Kubernetes, Kubeflow can easily grow to manage massive datasets and ML tasks that require a lot of computing power. 
Consistent results 
Kubeflow emphasises reproducibility by defining ML workflows as code, allowing experiments to be replicated and tracked.
Maximising the use of available resources 
Separating ML workloads inside Kubernetes eliminates resource conflicts and makes sure everything runs well. 
Easy Implementation 
Kubeflow deployment in Kubernetes makes deploying machine learning models as web services easier, which opens the door to real-time applications. 
Integration of Kubeflow with Kubernetes on GCP 
For this example, we will utilise Google Cloud Platform (GCP) and their managed K8s GKE. However, there may be subtle variations depending on the provider you choose. The majority of this tutorial is still applicable to you. 
Set up the GCP project
Just follow these instructions for Kubeflow deployment in Kubernetes. 
You can start a new project or choose one from the GCP Console. 
Establish that you are the designated "owner" of the project. The implementation process involves creating various service accounts with adequate permissions to integrate with GCP services without any hitches. 
Verify that your project meets all billing requirements. To make changes to a project, refer to the Billing Settings Guide. 
Verify that the necessary APIs are allowed on the following GCP Console pages: 
o Compute Engine API
o Kubernetes Engine API
o Identity and Access Management (IAM) API
o Deployment Manager API
o Cloud Resource Manager API
o Cloud Filestore API
o AI Platform Training & Prediction API
Remember that the default GCP version of Kubeflow cannot be run on the GCP Free Tier due to space constraints, regardless of whether you are using the $300-credit 12-month trial; you need a billable (paid) account.
Deploy kubeFlow using the CLI 
Before running the command line installer for Kubeflow: 
Make sure you've got the necessary tools installed: 
kubectl 
Gcloud
Check the GCP documentation for the bare minimum requirements and ensure your project satisfies them. 
Prepare your environment 
So far, we've assumed you can connect to and operate a GKE cluster. If not, create one as a starting point:
gcloud container clusters create <cluster-name> --zone <compute-zone>
More details regarding the same command can be found in the official documentation. 
To get the Kubeflow CLI binary file, follow these instructions: 
Go to the kfctl releases page and download the v1.0.2 version. 
Unpack the tarball:
tar -xvf kfctl_v1.0.2_<platform>.tar.gz
• Sign in. Executing this command is mandatory just once: 
gcloud auth login
• Establish login credentials. Executing this command is mandatory just once:
gcloud auth application-default login
• Set the zone and project default values in Gcloud. 
To begin setting up the Kubeflow deployment, enter your GCP project ID and choose the zone: 
export PROJECT=<your GCP project ID>
export ZONE=<your GCP zone>
gcloud config set project ${PROJECT}
gcloud config set compute/zone ${ZONE}
Select the KFDef spec to use for your deployment
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.0-branch/kfdef/kfctl_gcp_iap.v1.0.2.yaml"
Ensure you include the OAuth client ID and secret you generated earlier in your established environment variables. 
export CLIENT_ID=<CLIENT_ID from OAuth page>
export CLIENT_SECRET=<CLIENT_SECRET from OAuth page>
You can access the CLIENT_ID and CLIENT_SECRET in the Cloud Console by going to APIs & Services -> Credentials.
Assign a directory for your configuration and give your Kubeflow deployment the name KF_NAME. 
export KF_NAME=<your choice of name for the Kubeflow deployment>
export BASE_DIR=<path to a base directory>
export KF_DIR=${BASE_DIR}/${KF_NAME}
When you perform the kfctl apply command, Kubeflow will be deployed with the default settings:
mkdir -p ${KF_DIR}
cd ${KF_DIR}
kfctl apply -V -f ${CONFIG_URI}
By default, kfctl will attempt to fill the KFDef specification with a number of values.
Conclusion
Although you are now familiar with the basics of Kubeflow deployment in Kubernetes, more advanced customisations can make the process more challenging. However, many of the issues raised by the computational demands of machine learning can be resolved with a containerised, Kubernetes-managed, cloud-based machine learning workflow such as Kubeflow. It allows for scalable access to central processing and graphics processing units, which can be scaled up automatically to handle spikes in computing demand.
1 note · View note
alexus · 1 year ago
Text
HowTo Run Kubernetes locally on Apple silicon via Homebrew & Minikube
requirement(s): Terminal (doh), Homebrew — The Missing Package Manager for macOS (or Linux)
% brew install qemu
...
% brew tap homebrew/services
...
% brew install socket_vmnet
..
% sudo brew services start socket_vmnet
...
% brew install minikube
...
% minikube start --driver qemu --network socket_vmnet
...
% #test
% kubectl config current-context
minikube
% kubectl get nodes
NAME STATUS ROLES…
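To sanity-check the cluster beyond the commands above, a throwaway deployment is enough (nginx is just an arbitrary test image), and minikube stop / minikube delete tear everything down when you're done:
% kubectl create deployment hello --image=nginx
% kubectl get pods
% minikube stop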
1 note · View note
gianthunter20 · 4 years ago
Text
Kubernetes Airflow
Wouldn't it be convenient to be able to run Apache Airflow locally with the Kubernetes Executor on a multi-node Kubernetes cluster? That could be a great way to test your DAGs and understand how Airflow works in a Kubernetes environment, couldn't it? Well, that's exactly what we are going to do here. I will show you, step by step, how to quickly set up your own development environment and start running Airflow locally on Kubernetes. If you want to learn more about Airflow, don't forget to check my course: Apache Airflow: The Complete Hands-On Introduction. Let's get started!
Apache Airflow is an open-source workflow management tool used to author, schedule, and monitor ETL pipelines and machine learning workflows, among other uses. To make it easy to deploy a scalable Apache Airflow in production environments, Bitnami provides an Apache Airflow Helm chart comprised, by default, of three synchronized nodes: web server, scheduler, and worker.
The Kubernetes Executor was introduced in Apache Airflow 1.10.0. It creates a new pod for every task instance, using the pod template that you can find in templates/config/configmap.yaml; alternatively, you can override this template via worker.podTemplate. Enabling the KubernetesExecutor with a handful of configuration parameters lets us scale Airflow workers and executors on demand, though some problems remain. Related guides also show how to use the Airflow Kubernetes operator to isolate business rules from Airflow pipelines, create a YAML DAG with schema validation to simplify Airflow for some users, and define a pipeline pattern. Deploying Bitnami applications as containers is the best way to get the most from your infrastructure: the application containers are designed to work well together, are extensively documented, and are continuously updated when new versions are made available. A Kubernetes cluster of 3 nodes can also be set up with Rancher to run Airflow and the Kubernetes Executor locally for your data pipelines, with advanced concepts shown through practical examples such as templating your DAGs, making one DAG depend on another, and understanding SubDAGs and deadlocks.
As we are going to create a multi-node Kubernetes cluster and interact with it, there are some tools to install first. Let's take a look at them.
The first one is KinD. KinD means Kubernetes IN Docker and allows you to run local multi-node Kubernetes clusters using Docker container "nodes". Unlike MiniKube, KinD has a significantly faster startup speed since it doesn't rely on virtual machines. Take a look at the quick start guide to install it. Obviously, Docker should be installed as well on your machine.
The second tool to install is Kubectl. If you are familiar with Kubernetes, you should already know Kubectl. Kubectl is the official Kubernetes command-line tool and allows you to run commands against Kubernetes clusters. Whenever you want to deploy applications, manage cluster resources, or view logs, you will use Kubectl. Check the documentation to install it.
The last tool we need is Helm. Helm is the package manager for Kubernetes, used to install and manage Kubernetes applications in a very easy way. Helm relies on helm charts. A chart is a collection of files describing a set of Kubernetes resources. For example, the chart of Airflow will deploy a web server, the scheduler, the metastore, a service to access the UI, and so on. Take a look at the Airflow chart here to have a better idea of what a chart is. Installing Helm is pretty straightforward, as you can see here.
Now tools are installed, let’s create the Kubernetes cluster to run Apache Airflow locally with the Kubernetes Executor.
To give you a better hands-on experience, I made the following video where I show you how to set up everything you need to get Airflow running locally on a multi-node Kubernetes cluster. In this video, you will learn:
Configuring and creating a multi-node Kubernetes cluster with KinD
Installing and upgrading the Helm chart of Airflow.
Building your Docker image of Airflow packaged with your DAGs
Creating a local registry to push your Docker image of Airflow
Configuring Airflow to execute your tasks with the Kubernetes Executor.
That’s a lot of amazing stuff to learn! At the end, you will have Airflow running with the Kubernetes Executor in a local multi-node Kubernetes cluster. That way, you will be able to test and execute your DAGs in a Kubernetes environment without having to use expensive cloud providers. Enjoy!
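If you want the rough shape of the commands before watching, a minimal sequence looks like this; the cluster name, config file, and chart values are illustrative and may differ from what the video uses:
kind create cluster --name airflow-cluster --config kind-cluster.yaml   # kind-cluster.yaml declares the extra worker nodes
helm repo add apache-airflow https://airflow.apache.org
helm repo update
helm install airflow apache-airflow/airflow --namespace airflow --create-namespace \
  --set executor=KubernetesExecutor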
Interested by learning more? Stay tuned and get special promotions!
1 note · View note
engineering · 6 years ago
Text
Open Sourcing our Kubernetes Tools
At Tumblr, we are avid fans of Kubernetes. We have been using Kubernetes for all manner of workloads, like critical-path web requests handling for tumblr.com, background task executions like sending queued posts and push notifications, and scheduled jobs for spam detection and content moderation. Throughout our journey to move our 11 year old (almost 12! 🎂) platform to a container-native architecture, we have made innumerable changes to how our applications are designed and run. Inspired by a lot of existing Kubernetes APIs and best practices, we're excited to share with the community some of the tools we've developed at Tumblr as our infrastructure has evolved to work with Kubernetes.
To help us integrate Kubernetes into our workflows, we have built a handful of tools of which we are open-sourcing three today! Each tool is a small, focused utility, designed to solve specific integration needs Tumblr had while migrating our workflows to Kubernetes. The tools were built to handle our needs internally, but we believe they are useful to the wider Kubernetes community.
github.com/tumblr/k8s-sidecar-injector
github.com/tumblr/k8s-config-projector
github.com/tumblr/k8s-secret-projector
k8s-sidecar-injector
Any company that has containerized an application as large and complex as Tumblr knows that it requires a tremendous amount of effort. Applications don't become container-native overnight, and sidecars can be useful to help emulate older deployments with colocated services on physical hosts or VMs. To reduce the amount of fragile copy-paste code by developers adding in sidecars to their Deployments and CronJobs, we created a service to dynamically inject sidecars, volumes, and environment data into pods as they are launched.
The k8s-sidecar-injector listens to the Kubernetes API for Pod launches that contain annotations requesting a specific sidecar to be injected. For example, the annotation injector.tumblr.com/request=sidecar-prod-v1 will add any environment variables, volumes, and containers defined in the sidecar-prod-v1 configuration. We use this to add sidecars like logging and metrics daemons, cluster-wide environment variables like DATACENTER and HTTP_PROXY settings, and volumes for shared configuration data. By centralizing configuration of sidecars, we were able to reduce complexity in CronJobs and Deployments by hundreds of lines, eliminated copy-paste errors, and made rolling out updates to shared components in our sidecars effortless.
An example sidecar ConfigMap is below, which adds a logging container, a volume from a logger-config ConfigMap, and some environment variables into the Pod.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-sidecars
  namespace: kube-system
  labels:
    app: k8s-sidecar-injector
data:
  logger-v1: |
    name: logger-v1
    containers:
    - name: logger
      image: some/logger:2.2.3
      imagePullPolicy: IfNotPresent
      ports:
      - containerPort: 8888
      volumeMounts:
      - name: logger-conf
        mountPath: /etc/logger
    volumes:
    - name: logger-conf
      configMap:
        name: logger-config
    env:
    - name: DATACENTER
      value: dc01
    - name: HTTP_PROXY
      value: http://my-proxy.org:8080/
    - name: HTTPS_PROXY
      value: http://my-proxy.org:8080/
This configuration will add the logger container into each pod with the annotation injector.tumblr.com/request: logger-v1, with a ConfigMap projected as a volume in /etc/logger. Additionally, every container in the Pod will get the DATACENTER=dc01 and HTTP_PROXY environment variables added, if they were not already set. This has allowed us to drastically reduce our boilerplate configuration when containerizing legacy applications that require a complex sidecar configuration.
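For context, a workload opts in simply by carrying that annotation on its pod template; a minimal sketch is below (the deployment name and image are placeholders, while the annotation value matches the logger-v1 configuration above):
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        injector.tumblr.com/request: logger-v1   # ask the injector for the logger sidecar
    spec:
      containers:
      - name: my-app
        image: example/my-app:latest
EOF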
k8s-config-projector
Internally, we have many types of configuration data that is needed by a variety of applications. We store canonical settings data like feature flags, lists of hosts/IPs+ports, and application settings in git. This allows automated generation/manipulation of these settings by bots, cron jobs, Collins, and humans alike. Applications want to know about some subset of this configuration data, and they want to be informed when this data changes as quickly as possible. Kubernetes provides the ConfigMap resource, which enables users to provide their service with configuration data and update the data in running pods without requiring a redeployment. We wanted to use this to configure our services and jobs in a Kubernetes-native manner, but needed a way to bridge the gap between our canonical configuration store (git repo of config files) to ConfigMaps. Thus, was k8s-config-projector born.
The Config Projector (github.com/tumblr/k8s-config-projector) is a command line tool, meant to be run by CI processes. It combines a git repo hosting configuration data (feature flags, lists of hostnames+ports, application settings) with a set of "projection manifest" files that describe how to group/extract settings from the config repo and transmute them into ConfigMaps. The config projector allows developers to encode a set of configuration data the application needs to run into a projection manifest. As the configuration data changes in the git repository, CI will run the projector, projecting and deploying new ConfigMaps containing this updated data, without needing the application to be redeployed. Projection datasources can handle both structured and unstructured configuration files (YAML, JSON, and raw text/binary).
An example projection manifest is below, describing how a fictitious notification application could request some configuration data that may dynamically change (memcached hosts, log level, launch flags, etc):
---
name: notifications-us-east-1-production
namespace: notification-production
data:
  # extract some fields from JSON
  - source: generated/us-east-1/production/config.json
    output_file: config.json
    field_extraction:
    - memcached_hosts: $.memcached.notifications.production.hosts
    - settings: $.applications.notification.production.settings
    - datacenter: $.datacenter
    - environment: $.environment
  # extract a scalar value from a YAML
  - source: apps/us-east-1/production/notification.yaml
    output_file: launch_flags
    extract: $.launch_flags
After processing by the config projector, the following ConfigMap is generated, which can then be posted to a Kubernetes cluster with kubectl create -f <generatedfile>.
kind: ConfigMap
apiVersion: v1
metadata:
  name: notifications-us-east-1-production
  namespace: notification-production
  labels:
    tumblr.com/config-version: "1539778254"
    tumblr.com/managed-configmap: "true"
data:
  config.json: |
    {
      "memcached_hosts": ["2.3.4.5:11211","4.5.6.7:11211","6.7.8.9:11211"],
      "settings": {
        "debug": false,
        "buffer": "2000",
        "flavor": "out of control",
        "log_level": "INFO"
      },
      "datacenter": "us-east-1",
      "environment": "production"
    }
  launch_flags: "-Xmx5g -Dsun.net.inetaddr.ttl=10"
With this tool, we have enabled our applications running in Kubernetes to receive dynamic configuration updates without requiring container rebuilds or redeployments. More examples can be found here.
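As an illustrative sketch (the ConfigMap name matches the generated example above; the Deployment name, image, and mount path are assumptions), a service might consume the projected data as a volume like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: notifications                                  # hypothetical
  namespace: notification-production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: notifications
  template:
    metadata:
      labels:
        app: notifications
    spec:
      containers:
      - name: app
        image: some/notifications:1.0                  # hypothetical image
        volumeMounts:
        - name: app-config
          mountPath: /etc/notifications                # assumed mount path
      volumes:
      - name: app-config
        configMap:
          name: notifications-us-east-1-production     # ConfigMap produced by the projector

Because the pod reads files from the projected volume, a re-projection by CI updates /etc/notifications in the running containers (after the kubelet sync interval) without a redeploy.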
k8s-secret-projector
Similar to our configuration repository, we store secure credentials in access controlled vaults, divided by production levels. We wanted to enable developers to request access to subsets of credentials for a given application without needing to grant the user access to the secrets themselves. Additionally, we wanted to make certificate and password rotation transparent to all applications, enabling us to rotate credentials in an application-agnostic manner, without needing to redeploy applications. Lastly, we wanted to introduce a mechanism where application developers would explicitly describe which credentials their services need, and enable a framework to audit and grant permissions for a service to consume a secret.
The k8s-secret-projector operates similarly to the k8s-config-projector, albeit with a few differences. The secret projector combines a repository of projection manifests with a set of credential repositories. A Continuous Integration (CI) tool like Jenkins will run the k8s-secret-projector against any changes in the projection manifests repository to generate new Kubernetes Secret YAML files. Then, Continuous Deployment can deploy the generated and validated Secret files to any number of Kubernetes clusters.
Take this file in the production credentials repository, named aws/credentials.json:
{ "us-east-1": { "region": "us-east-1", "aws": { "key": "somethignSekri7T!", }, "s3": { "key": "passW0rD!", }, "redshift": { "key": "ello0liv3r!", "database": "mydatabase" } }, "us-west-2": { "region": "us-west-2", "aws": { "key": "anotherPasswr09d!", }, "s3": { "key": "sueprSekur#", } } }
We need to create an amazon.yaml configuration file containing the s3.key and aws.key for us-east-1, as well as a text file containing our region. The projection manifest below will extract only the fields we need, and output them in the format desired.
name: aws-credentials
namespace: myteam
repo: production
data:
  # create an amazon.yaml config with the secrets we care about
  - name: amazon.yaml
    source:
      format: yaml
      json: aws/credentials.json
      jsonpaths:
        s3: $.us-east-1.s3.key
        aws: $.us-east-1.aws.key
        region: $.us-east-1.region
  # create an item containing just the name of the region we are in
  - name: region
    source:
      json: aws/credentials.json
      jsonpath: $.us-east-1.region
Projecting this manifest with the above credentials results in the following Kubernetes Secret YAML file:
apiVersion: v1
kind: Secret
metadata:
  labels:
    tumblr.com/managed-secret: "true"
    tumblr.com/secret-version: master-741-7459d1abcc120
  name: aws-credentials
  namespace: myteam
data:
  region: dXMtZWFzdC0x
  # region decoded for clarity: us-east-1
  amazon.yaml: LS0tCnMzOiAicGFzc1cwckQhIgphd3M6ICJzb21ldGhpZ25TZWtyaTdUISIKcmVnaW9uOiB1cy1lYXN0LTEK
  # amazon.yaml decoded for clarity:
  # ---
  # s3: "passW0rD!"
  # aws: "somethignSekri7T!"
  # region: us-east-1
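For completeness, here is a minimal sketch of how a Pod might consume this generated Secret; the Secret name and keys come from the example above, while the Pod name, image, and mount path are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: aws-consumer                     # hypothetical
  namespace: myteam
spec:
  containers:
  - name: app
    image: some/app:1.0                  # hypothetical image
    env:
    - name: AWS_REGION                   # expose the plain region value to the process
      valueFrom:
        secretKeyRef:
          name: aws-credentials
          key: region
    volumeMounts:
    - name: aws-creds
      mountPath: /etc/aws                # assumed path; amazon.yaml lands here as a file
  volumes:
  - name: aws-creds
    secret:
      secretName: aws-credentials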
In addition to being able to extract fields from structured YAML and JSON sources, we gave it the ability to encrypt generated Secrets before they touch disk. This allows Secrets to be deployed in shared Kubernetes environments, where users are colocated with other users, and do not feel comfortable with their Secret resources being unencrypted in etcd. Please note, this requires decryption by your applications before use. More details on how the encryption modules work can be found here.
For more examples of how to use this, check out examples here!
What's Next
We are excited to share these tools with the Kubernetes open source community, and we hope they can help your organization adopt container-native thinking when managing application lifecycles, like they helped Tumblr. Feature enhancements and bug fixes are welcome! And, shameless plug: are you interested in Kubernetes, containerization technology, open source, and scaling a massive website with industry-leading technologies and practices? Come join us!
- @pipefail
96 notes · View notes
Text
Hello!
Not sure how successful this will be but I’m sitting at work learning about something...thinking...”It would be nice to document this for the future and while I’m at it maybe someone else might find this useful.” Now, what platform would be best to do this? Not sure so trying this out...
This will be something I learn on the fly ;-) Most of this will focus on techy stuff but maybe expand.
So here it goes...
I’m in the process of learning more about Kubernetes and getting this going on my Windows 10 laptop is relatively easy. Thing is I am running the Windows Subsystem for Linux (WSL) and like to run everything through there. Found this nice little blog. https://devkimchi.com/2018/06/05/running-kubernetes-on-wsl/ and following the steps gets me where I need to be.
However, in working through the steps I come across the following...
And let kubectl know to use the Docker for Windows context.
kubectl config use-context docker-for-desktop
 and I’m like
“How do I know to use docker-for-desktop?”
So knowing that kubectl is already set up in my Windows env, I run kubectl config --help
C:\Users\jacoj1> kubectl config --help
which provides me with a list of commands
Excerpt of output...
Available Commands:
  current-context Displays the current-context
  delete-cluster  Delete the specified cluster from the kubeconfig
  delete-context  Delete the specified context from the kubeconfig
  get-clusters    Display clusters defined in the kubeconfig
  get-contexts    Describe one or many contexts
  rename-context  Renames a context from the kubeconfig file.
  set             Sets an individual value in a kubeconfig file
  set-cluster     Sets a cluster entry in kubeconfig
  set-context     Sets a context entry in kubeconfig
  set-credentials Sets a user entry in kubeconfig
  unset           Unsets an individual value in a kubeconfig file
  use-context     Sets the current-context in a kubeconfig file
  view            Display merged kubeconfig settings or a specified kubeconfig file
So then I can issue current-context and voila!
C:\Users\jacoj1> kubectl config current-context
docker-for-desktop
Docker Desktop is great as it does all the setup of Kubernetes, which allows you to start using it right away. The trade-off is that you might miss out on some setup/config steps that reinforce some of the concepts. You can learn more about contexts here: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
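If you end up juggling more than one cluster, the same subcommands let you list and switch contexts. The second context below is a made-up placeholder, and the exact output format varies by kubectl version:

C:\Users\jacoj1> kubectl config get-contexts
CURRENT   NAME                 CLUSTER              AUTHINFO             NAMESPACE
*         docker-for-desktop   docker-for-desktop   docker-for-desktop
          my-remote-cluster    my-remote-cluster    my-remote-user

C:\Users\jacoj1> kubectl config use-context my-remote-cluster
Switched to context "my-remote-cluster".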
1 note · View note
foxutech · 2 years ago
Text
Kubectl Config Set-Context - How to use
Now or in the future, we will likely be managing a growing number of Kubernetes clusters, so it is important to have a good way to manage them. Kubernetes offers a number of tools for this; in this article, let's look at kubectl config set-context. We will also try to cover some examples and best practices for better understanding. Introduction: The first time you configure kubectl, you need to place…
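As a quick preview of what set-context does, here is a hedged example of creating a context by hand and switching to it (the cluster, user, and namespace names are placeholders):

kubectl config set-context dev-context --cluster=dev-cluster --user=dev-user --namespace=dev
kubectl config use-context dev-context
kubectl config current-context

The first command only writes an entry to your kubeconfig; nothing talks to a cluster until you actually use the context.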
View On WordPress
0 notes
computingpostcom · 3 years ago
Text
Welcome to this exhilarating tutorial on how to deploy and use Quarkus in Kubernetes. Kubernetes is one of the open-source tools currently preferred when automating system deployments. It makes it easy to scale and manage containerized applications. Kubernetes works by distributing workloads across the cluster and automating the container networking needs. Storage and persistent volumes are also allocated; by doing so, the desired state of container applications is continuously maintained.

Quarkus provides an easy way to automatically generate the Kubernetes resources based on some defaults and the user-provided configuration. This Kubernetes-native Java framework also provides an extension used to build and push container images to a registry before the application is deployed to the target. Another feature of Quarkus is that it enables one to use a Kubernetes ConfigMap as a configuration source without mounting it on the pod.

The cool features associated with Quarkus are:
- Community and Standards: It provides a cohesive and fun-to-use full-stack framework by leveraging a growing list of over fifty best-of-breed libraries that you love and use.
- Container First: It offers amazingly fast boot time and incredibly low RSS memory (not just heap size!), offering near-instant scale-up and high-density memory utilization in container orchestration platforms like Kubernetes.
- Unifies imperative and reactive: It allows developers to combine both the familiar imperative code and the reactive style when developing applications.
- Kube-Native: The combination of Quarkus and Kubernetes provides an ideal environment for creating scalable, fast, and lightweight applications. It highly increases developer productivity with tooling, pre-built integrations, application services, etc.

By following this guide to the end, you will learn how to:
- Use the Quarkus Dekorate extension to automatically generate Kubernetes manifests based on the source code and configuration
- Build and push images to a Docker registry with the Jib extension
- Deploy an application on Kubernetes without any manually created YAML in one click
- Use Quarkus Kubernetes Config to inject configuration properties from a ConfigMap

Let's dive in!

Setup Pre-requisites

For this guide, you will require:
- Quarkus CLI
- Apache Maven 3.8.1+
- (Optional) Access to a Kubernetes cluster

A Kubernetes cluster can be deployed with the aid of the guides below:
- Run Kubernetes on Debian with Minikube
- Deploy Kubernetes Cluster on Linux With k0s
- Install Kubernetes Cluster on Ubuntu using K3s
- Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O

Once the cluster is running, install kubectl:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin

Ensure that you can access the cluster.

# For k0s
export KUBECONFIG=/var/lib/k0s/pki/admin.conf
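A quick sanity check that kubectl can actually reach the cluster may help before continuing; the exact output depends on your setup:

kubectl cluster-info
kubectl get nodes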
1. Install Quarkus CLI

The Quarkus CLI can be installed on Linux, macOS, and Windows (using WSL or a bash-compatible shell like Cygwin or MinGW) by running the commands below:

curl -Ls https://sh.jbang.dev | bash -s - trust add https://repo1.maven.org/maven2/io/quarkus/quarkus-cli/
curl -Ls https://sh.jbang.dev | bash -s - app install --fresh --force quarkus@quarkusio

You can install it on Windows systems using PowerShell:

iex "& $(iwr https://ps.jbang.dev) trust add https://repo1.maven.org/maven2/io/quarkus/quarkus-cli/"
iex "& $(iwr https://ps.jbang.dev) app install --fresh --force quarkus@quarkusio"

Once installed, restart your shell. The Quarkus CLI can also be installed using SDKMAN as below:

sdk install quarkus

2. Create a Project

Use the Quarkus command-line interface (CLI) to create a new project. The command below adds the resteasy-reactive, kubernetes, and jib extensions as dependencies.

quarkus create app quarkus-example --extension=resteasy-reactive,kubernetes,jib
cd quarkus-example

Sample Output:

After this, you will have several files generated; among these files is the pom.xml file bearing the dependencies added to the build file:

.........
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-resteasy-reactive</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-kubernetes</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-jib</artifactId>
</dependency>
......

The good thing with Quarkus is that it generates Deployment/StatefulSet resources that use your registry_username/test-quarkus-app:tag as the container image of the Pod. The image here is controlled by the Jib extension and can be customized using application.properties as shown. Open the file for editing:

vim src/main/resources/application.properties

Add the following lines, replacing values where required:

quarkus.container-image.group=registry_username
quarkus.container-image.name=tutorial-app
quarkus.container-image.tag=latest
quarkus.container-image.username=registry_username
quarkus.container-image.password=Your_registry_Password

If no registry has been specified, the default docker.io registry will be used. A detailed demonstration on specifying a registry has been captured elsewhere in this guide.

3. Build and Deploy your Application

Jib is used to build optimized images for Java applications without a Docker daemon and with no need for mastery of deep Docker practices. Dekorate is a Java library that makes it simple to generate and decorate Kubernetes manifests. It generates manifests based on the annotations, source code, and configuration variables.

Now build and deploy your application using the Quarkus CLI:

quarkus build -Dquarkus.container-image.push=true

Sample Output:

After the build process, you will have two files named kubernetes.json and kubernetes.yml under the target/kubernetes/ directory.

# ls target/kubernetes
kubernetes.json  kubernetes.yml

Both files contain both the Kubernetes Deployment and Service. For example, the kubernetes.yml file looks like this:

# cat target/kubernetes/kubernetes.yml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    app.quarkus.io/build-timestamp: 2022-07-09 - 10:55:08 +0000
  labels:
    app.kubernetes.io/name: tutorial-app
    app.kubernetes.io/version: latest
  name: tutorial-app
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: tutorial-app
    app.kubernetes.io/version: latest
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/build-timestamp: 2022-07-09 - 10:55:08 +0000
  labels:
    app.kubernetes.io/version: latest
    app.kubernetes.io/name: tutorial-app
  name: tutorial-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/version: latest
      app.kubernetes.io/name: tutorial-app
  template:
    metadata:
      annotations:
        app.quarkus.io/build-timestamp: 2022-07-09 - 10:55:08 +0000
      labels:
        app.kubernetes.io/version: latest
        app.kubernetes.io/name: tutorial-app
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: registry_username/tutorial-app:latest
          imagePullPolicy: Always
          name: tutorial-app
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP

You will also have the image pushed to your registry, DockerHub for this example.

It is possible to generate a StatefulSet resource instead of the default Deployment resource via application.properties:

quarkus.kubernetes.deployment-kind=StatefulSet

Now deploy the application to your Kubernetes cluster using either of the two manifests.
For example:

kubectl apply -f target/kubernetes/kubernetes.yml

Verify if the deployment is up:

# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
tutorial-app   1/1     1            1           13s
# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
tutorial-app-bc774dc8d-k494g   1/1     Running   0          19s

Check if the service is running:

# kubectl get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP        31m
tutorial-app   LoadBalancer   10.102.87.114   <pending>     80:30400/TCP   4m53s

Access the deployment using the provided port 30400. This can be done using a web browser with the URL http://IP_Address:30400/hello or from the terminal as shown:

$ curl 192.168.205.4:30400/hello
Hello from RESTEasy Reactive

This is the output of the file at src/main/java/org/acme/GreetingResource.java.

4. Tuning the generated resources using application.properties

Quarkus allows one to tune the generated manifest using the application.properties file. Through this file, several configurations can be made. These include:

A. Namespace

Quarkus allows one to run the application in a chosen namespace. It omits the namespace in the generated manifest rather than enforcing it in the default namespace. Therefore, you can run the application in a desired namespace, say test, using the command:

kubectl apply -f target/kubernetes/kubernetes.yml -n=test

Aside from specifying the namespace when running the kubectl command, you can still capture the namespace in application.properties as shown:

quarkus.kubernetes.namespace=mynamespace

Replace mynamespace with the desired namespace for the application.

B. Defining a Docker registry

There are several other registries that can be defined. If left undefined, docker.io is used. If you want to use another registry such as quay.io, then you need to specify it:

quarkus.container-image.registry=my.docker-registry.net

my.docker-registry.net is the registry you want to use.

C. Environment variables

There are several ways of defining variables on Kubernetes. These include:
- key/value pairs
- import all values from a Secret or ConfigMap
- interpolate a single value identified by a given field in a Secret or ConfigMap
- interpolate a value from a field within the same resource

Environment variables from key/value pairs

To add environment variables from key/value pairs, use the syntax below:

quarkus.kubernetes.env.vars.my-env-var=foobar

This adds MY_ENV_VAR=foobar as an environment variable. my-env-var is converted to uppercase and the dashes are replaced with underscores to result in MY_ENV_VAR.

Environment variables from Secret

To add key/value pairs of a Secret as environment variables, add the lines below to application.properties:

quarkus.kubernetes.env.secrets=my-secret,my-other-secret

This will result in the following in the container environment:

envFrom:
- secretRef:
    name: my-secret
    optional: false
- secretRef:
    name: my-other-secret
    optional: false

You can set a variable by extracting a value defined by keyName from the my-secret:

quarkus.kubernetes.env.mapping.foo.from-secret=my-secret
quarkus.kubernetes.env.mapping.foo.with-key=keyName

Resulting in:

- env:
  - name: FOO
    valueFrom:
      secretKeyRef:
        key: keyName
        name: my-secret
        optional: false

Environment variables from ConfigMap

Quarkus can be used to add key/value pairs from a ConfigMap as environment variables. To achieve this, you need to add the lines below, separating the ConfigMaps to be used as a source by a comma.
For example:

quarkus.kubernetes.env.configmaps=my-config-map,another-config-map

This will result in:

envFrom:
- configMapRef:
    name: my-config-map
    optional: false
- configMapRef:
    name: another-config-map
    optional: false

It is also possible to extract the keyName field from the my-config-map by using:

quarkus.kubernetes.env.mapping.foo.from-configmap=my-configmap
quarkus.kubernetes.env.mapping.foo.with-key=keyName

This will generate a manifest with the lines below:
- env:
  - name: FOO
    valueFrom:
      configMapKeyRef:
        key: keyName
        name: my-configmap
        optional: false

That is it!

Closing Thoughts

That summarizes this guide on how to deploy and use Quarkus in Kubernetes. I am sure that you are now in a position to generate Kubernetes resources based on some defaults and the user-provided configuration using Quarkus. I hope this was valuable.
0 notes
pinerhuman · 3 years ago
Text
Activation code for easy translator 12.5
Now, we can dive into a more elaborate answer.
Does the Linux Kernel need software engineering? – Maíra Canal
For those looking for a short answer: yes, it does.
**kicontheme**, **kid3**, **kidentitymanagement**, **kidletime**, **kig**, **kigo** from the Slackware **kde** package set.
GNU World Order (Audio Show) ☛ GNU World Order 474.
One of the most common complaints that you will hear about Linux is that there are too many distros that all do the same thing.
Video ☛ Manjaro and EndeavourOS are like Ding Dongs and Zingers – Invidious.
On this episode of This Week in Linux: Happy 25th Birthday to GNOME, KDE Gear 22.08 Released, LibreOffice 7.4 Released, Android 13 Released, GLIBC Update Breaks Easy Anti-Cheat, John Deere Tractor Hacked To Run Doom, Krita 5.1 Released, Introducing the Horizon Linux project, Neptune 7.5, Humble Bundle Summer Sale, Spider-Man Remastered On Linux Via Proton, all that and much more on Your Weekly Source for Linux GNews!
Tux Digital ☛ 210: GNOME, KDE, LibreOffice, Android 13, DEF CON, Krita, Spider-Man and more Linux news!.
The longer version of this is in How we handle Ubuntu LTS versions.
The every-two-year machines upgrade to every LTS version; the every-four-year machines upgrade to every other LTS version, as their old LTS version threatens to fall out of support. We have long had a habit of upgrading machines between Ubuntu versions either every two years (for most machines that users log in to or directly use) or every four years (although the past two years are an exception).
uni Toronto ☛ Our slow turnover of servers and server generations.
Kubectl is the de facto and most popular Kubernetes command line tool used for accessing Kubernetes cluster metrics. However, one needs to know many commands to fetch metrics and operate a Kubernetes cluster using kubectl. Though the CLI is robust, commands can quickly become cumbersome to run. For example, here is a command for editing a deployment: kubectl edit deployment/mydeployment -o yaml --save-config. Not the longest command, but typing things like this over and over can get cumbersome. Fortunately, there is a terminal UI called K9s that makes interacting with your cluster faster and easier. It abstracts many common kubectl commands and maps them to just a few shortcut keys or clicks of the mouse.
Earthly ☛ A guide to swiftly managing your Kubernetes resources using K9s – Earthly Blog.
It dual-boots Windows and Linux and is my only x86-64 computer. And, yes, I’ve installed Ubuntu MATE (a semi-official distro that has dedicated builds for this hardware) and Emacs on it, and I program on it. Granted, I’ve only had it for a month, so it may be a novelty effect that’ll wane over time, but still: I’m impressed. Guess which of these two I find myself using more? That’s right, the GPD Micro.

This article forms part of our Linux Around The World series. We cover events and user groups that are running in Kosovo.
Linux Links ☛ Linux Around The World: Kosovo – LinuxLinks.
Freedom of Information / Freedom of the Press.
1 note · View note
buysgreys · 3 years ago
Text
Zipsplit windows
Java: build OCI compatible image for Spring Boot web app using jib.
Prometheus: sending a test alert through AlertManager.
Prometheus: adding a Grafana dashboard using a ConfigMap.
Prometheus: monitoring a custom Service using ServiceMonitor and PrometheusRule.
Prometheus: monitoring services using additional scrape config for Prometheus Operator.
Prometheus: installing kube-prometheus-stack on a kubeadm cluster.
Kubernetes: independent nf for CoreDNS with kubeadm.
Kubernetes: independent nf for CoreDNS with K3s.
Kubernetes: custom upstream for domain with CoreDNS.
Buildah: Installing buildah and podman on Ubuntu 20.04.
Java: creating OCI-compatible image for Spring Boot web using buildah.
Kubernetes: accessing the Kubernetes Dashboard with least privilege.
KVM: configuring a base Window2019 instance with Sysprep.
KVM: creating a Windows2019 Domain Controller using Powershell.
KVM: Creating a Windows2019 ADFS server using Powershell.
Gradle: fixing the gradle wrapper for a Java project.
GoLang: Installing the Go Programming language on Ubuntu 20.04.
Microsoft: configuring an Application Group for OAuth2/OIDC on ADFS 2019.
Java: Spring Security OAuth2/OIDC protecting Client App and Resource Server.
Gradle: interactive JDWP debugging of bootRun gradle task in Eclipse IDE.
Python: Flask-OIDC protecting Client App and Resource Server using Windows 2019 ADFS.
Kubernetes: Keycloak IAM deployed into Kubernetes cluster for OAuth2/OIDC.
Kubernetes: copying files into and out of containers without ‘kubectl cp’.
OAuth2: Configuring okta for OAuth2/OIDC.
OAuth2: Configuring Google for OAuth2/OIDC.
KVM: Creating a bridged network with NetPlan on Ubuntu 22.04.
Linux: Mounting a loopback ext4/xfs filesystem to isolate or enforce storage limits.
Linux: Excluding directories when using zip.
Linux: Use stat to verify permissions and ownership.
Ubuntu: Creating a Samba/CIFS share to quickly share files with Windows.
Linux: Using zip/unzip to add, update, and remove files from a Java jar/war.
Valid suffixes for size units are (b|k|m|g).
# OR mend original archive, but do not extract content yet
Create split archive on Linux, extract on Windows
Conversely, if you want to create a split archive from Linux with 100k per file: 7z -v100k a fromlinux.7z
From Windows you would right-click on the “fromlinux.7z.001” file, and select “Extract here” to unzip the content.
While there are many ways to do the same thing using utilities like zip, split, etc., I find that using 7zip provides easy interoperability across Linux and Windows.
Creating a split archive on Windows is detailed in posts like this (note that it works using either 7zip or zip format). Then you can transfer this to a Linux host and use the following to extract it.
Taking this a step further and splitting this archive into manageable chunks allows you to take advantage of parallel transfer.
If you are transferring large files between systems, it can be advantageous to create a compressed archive using a tool like 7zip.
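The extraction command referred to above ("use the following to extract it") appears to have been lost in formatting; assuming the split archive copied over from Windows is named fromwindows.zip.001, something like this should work:

# point 7z at the first volume; it picks up the remaining .002, .003, ... parts automatically
7z x fromwindows.zip.001

# or just list the contents first without extracting
7z l fromwindows.zip.001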
First, locate the ZIP file that you’d like to extract in File Explorer.
RELATED: Everything You Need to Know About Zip Files How to Extract a Zip File in Windows 11Įxtracting (unzipping) ZIP files in Windows 11 is just as easy as zipping them. Now that you have a ZIP file, you can copy it or share it like any other kind of file. When you’re ready, press Enter.Īnd that’s all there is to it! Pretty simple. Type a new file name or leave the default name in place. The ZIP file name will highlight, allowing you to type in whatever name you’d like. After a moment, your ZIP file will appear in the same folder as the source files that you just compressed.
0 notes
swarnalata31techiio · 3 years ago
Text
Minikube (Single Cluster of Kubernetes)
Minikube
Minikube is an open source tool that was developed to enable developers and system administrators to run a single cluster of Kubernetes on their local machine.
Step 1: Update system:
sudo apt-get update
sudo apt-get install apt-transport-https
sudo apt-get upgrade
Step 2: Install Docker
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-cache policy docker-ce
sudo apt-get install -y docker-ce
sudo systemctl status docker
docker run -d -p 5000:5000 --restart=always --name registry registry:2 #for local image
Step 3: Install Minikube:
sudo wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo chmod +x minikube-linux-amd64
sudo mv minikube-linux-amd64 /usr/local/bin/minikube
minikube version
Step 4: Install Kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version -o json
Step 5: Start Minikube:
echo 'export CHANGE_MINIKUBE_NONE_USER=true' >> ~/.bashrc
minikube start
eval $(minikube docker-env)
Step 6: Check Status Minikube and Kubectl:
minikube service list
minikube docker-env
kubectl cluster-info
kubectl config view
kubectl get nodes
Step 7: Deploy Pod:
kubectl create deployment test-nginx --image=nginx
kubectl get pods
kubectl exec pod_name -- env
kubectl exec -it pod_name -- bash
kubectl scale deployment test-nginx --replicas=3
kubectl expose deployment test-nginx --type="NodePort" --port 80
kubectl get services test-nginx
minikube service test-nginx --url
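To confirm the exposed nginx service actually responds, you can curl the URL printed by the previous command; the address it prints will differ per machine:

curl $(minikube service test-nginx --url)
# expect the default "Welcome to nginx!" HTML page in the response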
https://techiio.com/blog/rakeshrestha/minikube-single-cluster-of-kubernetes
0 notes