#cloud kubernetes service
cloud kubernetes service
Cyfuture Cloud's Kubernetes service provides a powerful and flexible platform for deploying and managing applications in the cloud.
At DVS IT Services, we specialize in Linux Server Management, Cloud Migration, Data Center Migration, Disaster Recovery, and RedHat Satellite Server Solutions. We also offer expert support for AWS Cloud, GCP Cloud, Multi-Cloud Operations, Kubernetes Services, and Linux Patch Management. Our dedicated team of Linux Administrators helps businesses ensure smooth server operations with effective root cause analysis (RCA) and troubleshooting. Learn more about our services at https://dvsitservices.com/.
#At DVS IT Services#Linux Server Management#Cloud Migration#Data Center Migration#Disaster Recovery#RedHat Satellite Server Solutions.#GCP Cloud#Multi-Cloud Operations#Kubernetes Services
Tip: Focus on the value you gain once a cloud solution is implemented for your next phase of business growth or innovation.
Techjour's cloud solution reduces business operating costs, secures your data, and gives you the flexibility to focus more on your business. It meets immediate, on-demand business needs in a fast-moving digital world.

#devopsengineer#devopstools#devops#cloud solutions#cloud service provider#cloud services#google cloud#cloudcomputing#cloudmigration#cloudconsulting#kubernetes#ansible#jenkins github#startup#automation#technology#trendingnow#successful business#small business#digital business#docker#sme#entrepreneur#digital strategy
Developing a Scalable IT Strategy: Best Practices
In today's fast-moving business world, a flexible and scalable IT strategy is essential for companies to remain competitive. The need for such a strategy is especially relevant as companies increasingly rely on digital transformation and cloud solutions. A well-thought-out IT strategy enables companies to respond efficiently to market changes and…
#Automatisierte Prozesse#Best Practice#Best Practices#Cloud-Services#Containerisierung#DevOps#Digitale Transformation#FĂĽhrung#IT-Infrastruktur#IT-Ressourcen#IT-Strategie#IT-Strategien#Kubernetes#Microservices#Virtualisierung
Top 5 Container Management Software Of 2024
Container Management Software is essential for businesses aiming to efficiently manage their applications across various environments. As the market for this technology is projected to grow significantly, here’s a look at the top five Container Management Software solutions for 2024:
Portainer: Established in 2017, Portainer is known for its easy-to-use interface supporting Docker, Kubernetes, and Swarm. It offers features like real-time monitoring and role-based access control, making it suitable for both cloud and on-premises environments.
Amazon Elastic Container Service (ECS): This AWS service simplifies deploying and managing containerized applications, integrating seamlessly with other AWS tools. It supports features like automatic load balancing and serverless management through AWS Fargate.
Docker: Since its release in 2013, Docker has been a pioneer in containerization. It provides tools for building, shipping, and running applications within containers, including Docker Engine and Docker Compose. Docker Swarm enables cluster management and scaling.
DigitalOcean Kubernetes: Known for its user-friendly approach, DigitalOcean’s Kubernetes offering helps manage containerized applications with automated updates and monitoring. It integrates well with other DigitalOcean services.
Kubernetes: Developed by Google and now managed by CNCF, Kubernetes is a leading tool for managing containerized applications with features like automatic scaling and load balancing. It supports customizations and various networking plugins.
Conclusion: Selecting the right Container Management Software is crucial for optimizing your deployment processes and scaling applications efficiently. Choose a solution that meets your business’s specific needs and enhances your digital capabilities.
Skyrocket Your Efficiency: Dive into Azure Cloud-Native Solutions
Join our blog series on Azure Container Apps and unlock unstoppable innovation! Discover foundational concepts, advanced deployment strategies, microservices, serverless computing, best practices, and real-world examples. Transform your operations!
#Azure App Service#Azure cloud#Azure Container Apps#Azure Functions#CI/CD#cloud infrastructure#cloud-native applications#containerization#deployment strategies#DevOps#Kubernetes#microservices architecture#serverless computing
Shaping Kubernetes Network Traffic With Topology-Aware Routing
In cloud-based deployments, Kubernetes clusters are often spread across multiple availability zones for redundancy and scalability. However, by default, Kubernetes services distribute traffic randomly between pods, which can lead to inefficiencies. Traffic might travel between zones unnecessarily, increasing latency and potentially incurring extra costs. In this post, let's explore…
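A hedged sketch of what opting in can look like (not from the original post; the service name is hypothetical, and the exact annotation key depends on your Kubernetes version: recent releases use service.kubernetes.io/topology-mode, while older ones use service.kubernetes.io/topology-aware-hints):

apiVersion: v1
kind: Service
metadata:
  name: my-backend   # hypothetical service name
  annotations:
    # Ask Kubernetes to prefer routing traffic to endpoints in the caller's zone
    service.kubernetes.io/topology-mode: Auto
spec:
  selector:
    app: my-backend
  ports:
    - port: 80
      targetPort: 8080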
Deploying Large Language Models on Kubernetes: A Comprehensive Guide
New Post has been published on https://thedigitalinsider.com/deploying-large-language-models-on-kubernetes-a-comprehensive-guide/
Large Language Models (LLMs) are capable of understanding and generating human-like text, making them invaluable for a wide range of applications, such as chatbots, content generation, and language translation.
However, deploying LLMs can be a challenging task due to their immense size and computational requirements. Kubernetes, an open-source container orchestration system, provides a powerful solution for deploying and managing LLMs at scale. In this technical blog, we’ll explore the process of deploying LLMs on Kubernetes, covering various aspects such as containerization, resource allocation, and scalability.
Understanding Large Language Models
Before diving into the deployment process, let’s briefly understand what Large Language Models are and why they are gaining so much attention.
Large Language Models (LLMs) are a type of neural network model trained on vast amounts of text data. These models learn to understand and generate human-like language by analyzing patterns and relationships within the training data. Some popular examples of LLMs include GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), and XLNet.
LLMs have achieved remarkable performance in various NLP tasks, such as text generation, language translation, and question answering. However, their massive size and computational requirements pose significant challenges for deployment and inference.
Why Kubernetes for LLM Deployment?
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides several benefits for deploying LLMs, including:
Scalability: Kubernetes allows you to scale your LLM deployment horizontally by adding or removing compute resources as needed, ensuring optimal resource utilization and performance.
Resource Management: Kubernetes enables efficient resource allocation and isolation, ensuring that your LLM deployment has access to the required compute, memory, and GPU resources.
High Availability: Kubernetes provides built-in mechanisms for self-healing, automatic rollouts, and rollbacks, ensuring that your LLM deployment remains highly available and resilient to failures.
Portability: Containerized LLM deployments can be easily moved between different environments, such as on-premises data centers or cloud platforms, without the need for extensive reconfiguration.
Ecosystem and Community Support: Kubernetes has a large and active community, providing a wealth of tools, libraries, and resources for deploying and managing complex applications like LLMs.
Preparing for LLM Deployment on Kubernetes
Before deploying an LLM on Kubernetes, there are several prerequisites to consider:
Kubernetes Cluster: You’ll need a Kubernetes cluster set up and running, either on-premises or on a cloud platform like Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), or Azure Kubernetes Service (AKS).
GPU Support: LLMs are computationally intensive and often require GPU acceleration for efficient inference. Ensure that your Kubernetes cluster has access to GPU resources, either through physical GPUs or cloud-based GPU instances.
Container Registry: You’ll need a container registry to store your LLM Docker images. Popular options include Docker Hub, Amazon Elastic Container Registry (ECR), Google Container Registry (GCR), or Azure Container Registry (ACR).
LLM Model Files: Obtain the pre-trained LLM model files (weights, configuration, and tokenizer) from the respective source or train your own model.
Containerization: Containerize your LLM application using Docker or a similar container runtime. This involves creating a Dockerfile that packages your LLM code, dependencies, and model files into a Docker image.
Deploying an LLM on Kubernetes
Once you have the prerequisites in place, you can proceed with deploying your LLM on Kubernetes. The deployment process typically involves the following steps:
Building the Docker Image
Build the Docker image for your LLM application using the provided Dockerfile and push it to your container registry.
Creating Kubernetes Resources
Define the Kubernetes resources required for your LLM deployment, such as Deployments, Services, ConfigMaps, and Secrets. These resources are typically defined using YAML or JSON manifests.
Configuring Resource Requirements
Specify the resource requirements for your LLM deployment, including CPU, memory, and GPU resources. This ensures that your deployment has access to the necessary compute resources for efficient inference.
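For example, the container section of a Deployment manifest might declare requests and limits like this (the numbers are illustrative; real values depend on your model and hardware):

resources:
  requests:
    cpu: "4"
    memory: 16Gi
  limits:
    cpu: "8"
    memory: 32Gi
    nvidia.com/gpu: 1   # GPU limits require the corresponding device plugin on the node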
Deploying to Kubernetes
Use the kubectl command-line tool or a Kubernetes management tool (e.g., Kubernetes Dashboard, Rancher, or Lens) to apply the Kubernetes manifests and deploy your LLM application.
Monitoring and Scaling
Monitor the performance and resource utilization of your LLM deployment using Kubernetes monitoring tools like Prometheus and Grafana. Adjust the resource allocation or scale your deployment as needed to meet the demand.
Example Deployment
Let’s consider an example of deploying a GPT-style language model on Kubernetes using a pre-built Docker image from Hugging Face. The manifest below loads the openly available GPT-2 model as a stand-in, since GPT-3's weights are not publicly distributed. We’ll assume that you have a Kubernetes cluster set up and configured with GPU support.
Pull the Docker Image:
docker pull huggingface/text-generation-inference:1.1.0
Create a Kubernetes Deployment:
Create a file named gpt3-deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpt3-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpt3
  template:
    metadata:
      labels:
        app: gpt3
    spec:
      containers:
        - name: gpt3
          image: huggingface/text-generation-inference:1.1.0
          resources:
            limits:
              nvidia.com/gpu: 1
          env:
            - name: MODEL_ID
              value: gpt2
            - name: NUM_SHARD
              value: "1"
            - name: PORT
              value: "8080"
            - name: QUANTIZE
              value: bitsandbytes-nf4
This deployment specifies that we want to run one replica of the gpt3 container using the huggingface/text-generation-inference:1.1.0 Docker image. The deployment also sets the environment variables required for the container to load the GPT-2 model (MODEL_ID: gpt2) and configure the inference server.
Create a Kubernetes Service:
Create a file named gpt3-service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: gpt3-service
spec:
  selector:
    app: gpt3
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
This service exposes the gpt3 deployment on port 80 and creates a LoadBalancer type service to make the inference server accessible from outside the Kubernetes cluster.
Deploy to Kubernetes:
Apply the Kubernetes manifests using the kubectl command:
kubectl apply -f gpt3-deployment.yaml
kubectl apply -f gpt3-service.yaml
Monitor the Deployment:
Monitor the deployment progress using the following commands:
kubectl get pods
kubectl logs <pod_name>
Once the pod is running and the logs indicate that the model is loaded and ready, you can obtain the external IP address of the LoadBalancer service:
kubectl get service gpt3-service
Test the Deployment:
You can now send requests to the inference server using the external IP address and port obtained from the previous step. For example, using curl:
curl -X POST http://<external_ip>:80/generate -H 'Content-Type: application/json' -d '{"inputs": "The quick brown fox", "parameters": {"max_new_tokens": 50}}'
This command sends a text generation request to the inference server, asking it to continue the prompt “The quick brown fox” for up to 50 additional tokens.
Advanced Topics You Should Be Aware Of
While the example above demonstrates a basic deployment of an LLM on Kubernetes, there are several advanced topics and considerations to explore:
1. Autoscaling
Kubernetes supports horizontal and vertical autoscaling, which can be beneficial for LLM deployments due to their variable computational demands. Horizontal autoscaling allows you to automatically scale the number of replicas (pods) based on metrics like CPU or memory utilization. Vertical autoscaling, on the other hand, allows you to dynamically adjust the resource requests and limits for your containers.
To enable autoscaling, you can use the Kubernetes Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA). These components monitor your deployment and automatically scale resources based on predefined rules and thresholds.
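A minimal HPA sketch, assuming the Kubernetes metrics server is installed and targeting the gpt3-deployment from the example above (the thresholds are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: gpt3-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gpt3-deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%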
2. GPU Scheduling and Sharing
In scenarios where multiple LLM deployments or other GPU-intensive workloads are running on the same Kubernetes cluster, efficient GPU scheduling and sharing become crucial. Kubernetes provides several mechanisms to ensure fair and efficient GPU utilization, such as GPU device plugins, node selectors, and resource limits.
You can also leverage advanced GPU partitioning technologies such as NVIDIA Multi-Instance GPU (MIG), or comparable vendor-specific GPU virtualization features, to split GPUs and share them among multiple workloads.
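A hedged pod-spec sketch for steering a workload onto GPU nodes (the node label is hypothetical and depends on how your cluster labels its GPU nodes; the GPU limit requires the corresponding device plugin):

apiVersion: v1
kind: Pod
metadata:
  name: llm-inference          # hypothetical pod name
spec:
  nodeSelector:
    gpu-type: a100             # hypothetical label applied to your GPU nodes
  containers:
    - name: inference
      image: huggingface/text-generation-inference:1.1.0
      resources:
        limits:
          nvidia.com/gpu: 1    # one GPU per pod, scheduled via the device plugin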
3. Model Parallelism and Sharding
Some LLMs, particularly those with billions or trillions of parameters, may not fit entirely into the memory of a single GPU or even a single node. In such cases, you can employ model parallelism and sharding techniques to distribute the model across multiple GPUs or nodes.
Model parallelism involves splitting the model architecture into different components (e.g., encoder, decoder) and distributing them across multiple devices. Sharding, on the other hand, involves partitioning the model parameters and distributing them across multiple devices or nodes.
Kubernetes provides mechanisms like StatefulSets and Custom Resource Definitions (CRDs) to manage and orchestrate distributed LLM deployments with model parallelism and sharding.
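A purely structural sketch of a sharded deployment (the image, headless Service, and sharding environment variable are hypothetical; actual sharded serving requires an inference server that supports it):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: llm-shards             # hypothetical name
spec:
  serviceName: llm-shards      # headless Service assumed to exist
  replicas: 2                  # one pod per model shard, each with a stable identity
  selector:
    matchLabels:
      app: llm-shards
  template:
    metadata:
      labels:
        app: llm-shards
    spec:
      containers:
        - name: shard
          image: example/llm-shard-server:latest   # hypothetical image
          env:
            - name: NUM_SHARD                      # hypothetical sharding setting
              value: "2"
          resources:
            limits:
              nvidia.com/gpu: 1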
4. Fine-tuning and Continuous Learning
In many cases, pre-trained LLMs may need to be fine-tuned or continuously trained on domain-specific data to improve their performance for specific tasks or domains. Kubernetes can facilitate this process by providing a scalable and resilient platform for running fine-tuning or continuous learning workloads.
You can leverage Kubernetes batch processing frameworks like Apache Spark or Kubeflow to run distributed fine-tuning or training jobs on your LLM models. Additionally, you can integrate your fine-tuned or continuously trained models with your inference deployments using Kubernetes mechanisms like rolling updates or blue/green deployments.
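A hedged sketch of a one-off fine-tuning run expressed as a Kubernetes Job (the training image, arguments, and data path are hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: llm-finetune           # hypothetical name
spec:
  backoffLimit: 2              # retry the job at most twice on failure
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: finetune
          image: example/llm-finetune:latest                              # hypothetical training image
          args: ["--base-model", "gpt2", "--data", "/data/train.jsonl"]   # hypothetical flags and path
          resources:
            limits:
              nvidia.com/gpu: 1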
5. Monitoring and Observability
Monitoring and observability are crucial aspects of any production deployment, including LLM deployments on Kubernetes. The Kubernetes ecosystem offers widely adopted monitoring solutions such as Prometheus, along with integrations with popular observability platforms like Grafana, Elasticsearch, and Jaeger.
You can monitor various metrics related to your LLM deployments, such as CPU and memory utilization, GPU usage, inference latency, and throughput. Additionally, you can collect and analyze application-level logs and traces to gain insights into the behavior and performance of your LLM models.
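A sketch of scraping these metrics with the Prometheus Operator, assuming it is installed, the Service carries the app: gpt3 label, and the inference server exposes a named metrics port:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: gpt3-metrics           # hypothetical name
spec:
  selector:
    matchLabels:
      app: gpt3                # assumes the Service is labeled app: gpt3
  endpoints:
    - port: http               # assumes the Service names its port "http"
      path: /metrics           # assumes the server exposes Prometheus metrics here
      interval: 30s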
6. Security and Compliance
Depending on your use case and the sensitivity of the data involved, you may need to consider security and compliance aspects when deploying LLMs on Kubernetes. Kubernetes provides several features and integrations to enhance security, such as network policies, role-based access control (RBAC), secrets management, and integration with external security solutions like HashiCorp Vault or AWS Secrets Manager.
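A minimal NetworkPolicy sketch restricting which pods may reach the inference server (the frontend label is an assumption):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-gpt3-ingress
spec:
  podSelector:
    matchLabels:
      app: gpt3                # applies to the inference pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # hypothetical label for permitted client pods
      ports:
        - protocol: TCP
          port: 8080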
Additionally, if you’re deploying LLMs in regulated industries or handling sensitive data, you may need to ensure compliance with relevant standards and regulations, such as GDPR, HIPAA, or PCI-DSS.
7. Multi-Cloud and Hybrid Deployments
While this blog post focuses on deploying LLMs on a single Kubernetes cluster, you may need to consider multi-cloud or hybrid deployments in some scenarios. Kubernetes provides a consistent platform for deploying and managing applications across different cloud providers and on-premises data centers.
You can leverage Kubernetes federation or multi-cluster management tools like KubeFed or GKE Hub to manage and orchestrate LLM deployments across multiple Kubernetes clusters spanning different cloud providers or hybrid environments.
These advanced topics highlight the flexibility and scalability of Kubernetes for deploying and managing LLMs.
Conclusion
Deploying Large Language Models (LLMs) on Kubernetes offers numerous benefits, including scalability, resource management, high availability, and portability. By following the steps outlined in this technical blog, you can containerize your LLM application, define the necessary Kubernetes resources, and deploy it to a Kubernetes cluster.
However, deploying LLMs on Kubernetes is just the first step. As your application grows and your requirements evolve, you may need to explore advanced topics such as autoscaling, GPU scheduling, model parallelism, fine-tuning, monitoring, security, and multi-cloud deployments.
Kubernetes provides a robust and extensible platform for deploying and managing LLMs, enabling you to build reliable, scalable, and secure applications.
#access control#Amazon#Amazon Elastic Kubernetes Service#amd#Apache#Apache Spark#app#applications#apps#architecture#Artificial Intelligence#attention#AWS#azure#Behavior#BERT#Blog#Blue#Building#chatbots#Cloud#cloud platform#cloud providers#cluster#clusters#code#command#Community#compliance#comprehensive
Dive into the debate: Terraform vs Kubernetes – unlocking the future of infrastructure management. Read our comprehensive analysis and discover who holds the key to the future.
What are the key advantages of DevOps consulting services?
Agile Transformation:Â DevOps consulting facilitates the adoption of agile methodologies, enabling organizations to respond quickly to changing market demands and customer needs.
Continuous Monitoring:Â With DevOps, continuous monitoring and feedback loops are established, allowing for proactive identification and resolution of issues before they impact users.
Cloud-Native Architecture: DevOps consulting helps organizations transition to cloud-native architectures, leveraging cloud services for scalability, elasticity, and cost-efficiency.
Infrastructure as Code (IaC): DevOps promotes the use of infrastructure as code, allowing for automated provisioning and configuration of infrastructure resources, leading to greater consistency and reproducibility.
DevSecOps Integration:Â DevOps consulting services integrate security into the development process from the outset, ensuring that security considerations are addressed throughout the software lifecycle.
Containerization and Orchestration:Â DevOps consulting facilitates the adoption of containerization and orchestration technologies such as Docker and Kubernetes, enabling organizations to build, deploy, and manage applications more efficiently.
Microservices Architecture:Â DevOps encourages the adoption of microservices architecture, breaking down monolithic applications into smaller, independently deployable services for improved agility and scalability.
Culture of Innovation:Â DevOps consulting fosters a culture of innovation and experimentation, empowering teams to take risks, learn from failures, and continuously improve.
Together, these points showcase the comprehensive benefits of DevOps consulting services for businesses seeking to optimize their software delivery pipelines and drive digital transformation initiatives.
#devops#devops consulting#aws devops#devopsservices#cloud services#cybersecurity#azure devops#ci/cd#kubernetes#cloud#devops course#software#cloud computing
Azure Kubernetes Service (AKS): Mastering Container Orchestration
As cloud computing continues to revolutionize the way applications are developed and deployed, container orchestration has emerged as a critical component for managing and scaling containerized applications. In this blog post, we will delve into the concept of container orchestration and explore how Azure Kubernetes Service (AKS) plays a crucial role in this domain. We will discuss the importance of container orchestration in modern cloud computing and provide a comprehensive guide to understanding and utilizing AKS for container management.
Understanding Container Orchestration
Before diving into the specifics of AKS, it is essential to grasp the concept of container orchestration and its role in managing containers. Container orchestration involves automating containers’ deployment, scaling, and management within a cluster. Manual management of containers poses several challenges, such as resource allocation, load balancing, and fault tolerance. Automated container orchestration solutions like AKS provide a robust and efficient way to address these challenges, enabling seamless application deployment and scaling.
Getting Started with AKS
To begin our journey with AKS, let’s first understand what it is. Azure Kubernetes Service (AKS) is a managed container orchestration service offered by Microsoft Azure. It simplifies the deployment and management of Kubernetes clusters, allowing developers to focus on building and running their applications. Setting up an AKS cluster involves several steps, including creating a resource group, configuring the cluster, and setting up networking. While AKS streamlines the process, it is essential to be aware of potential prerequisites and challenges during the initial setup.
Deploying Applications with AKS
Once the AKS cluster is up and running, the next step is to deploy containerized applications to the cluster. AKS provides several options for deploying applications, including using YAML manifests, Azure DevOps Pipelines, and Azure Container Registry. Deploying applications with AKS offers numerous benefits, such as easy scaling, rolling updates, and application versioning. Real-world examples and use cases of applications deployed with AKS illustrate the practical applications and advantages of utilizing AKS for application deployment.
Scaling and Load Balancing
One of the significant advantages of AKS is its automatic scaling capabilities. AKS monitors the resource utilization of containers and scales the cluster accordingly to handle increased demand. Load balancing is another critical aspect of container orchestration, ensuring that traffic is distributed evenly across the containers in the cluster. Exploring AKS’s automatic scaling and load-balancing features provides insights into how these capabilities simplify application management and ensure optimal performance.
Monitoring and Maintenance
Monitoring and maintaining AKS clusters are essential for ensuring the stability and performance of applications. AKS offers built-in monitoring and logging features that enable developers to gain visibility into the cluster’s health and troubleshoot issues effectively. Best practices for maintaining AKS clusters, such as regular updates, backup strategies, and resource optimization, contribute to the overall stability and efficiency of the cluster. Sharing insights and lessons learned from managing AKS in a production environment helps developers better understand the intricacies of AKS cluster maintenance.
Security and Compliance
Container security is a crucial consideration when using AKS for container orchestration. AKS provides various security features, including Azure Active Directory integration, role-based access control, and network policies. These features help secure the cluster and protect against unauthorized access and potential threats. Additionally, AKS assists in meeting compliance requirements by providing features like Azure Policy and Azure Security Center integration. Addressing the challenges faced and solutions implemented in ensuring container security with AKS provides valuable insights for developers.
Advanced AKS Features
In addition to its core features, AKS offers several advanced capabilities that enhance container orchestration. Integration with Azure Monitor enables developers to gain deeper insights into the performance and health of their applications running on AKS. Helm charts and Azure DevOps integration streamline the deployment and management of applications, making the development process more efficient. Azure Policy allows developers to enforce governance and compliance policies within the AKS cluster, ensuring adherence to organizational standards.
Real-world Use Cases and Case Studies
To truly understand the impact of AKS on container orchestration, it is essential to explore real-world use cases and case studies. Many organizations across various industries have successfully implemented AKS for their container management needs. These use cases highlight the versatility and applicability of AKS in scenarios ranging from microservices architectures to AI-driven applications. By examining these examples, readers can gain insights into how AKS can be leveraged in their projects.
Future Trends and Considerations
The container orchestration landscape is continuously evolving, and staying updated on emerging trends and considerations is crucial. Kubernetes, the underlying technology of AKS, is evolving rapidly, with new features and enhancements being introduced regularly. Understanding the future trends in container orchestration and Kubernetes helps developers make informed decisions and stay ahead of the curve. Additionally, considering the role of AKS in the future of cloud-native applications provides insights into the long-term benefits and possibilities of utilizing AKS.
Benefits and Takeaways
Summarizing the key benefits of using Azure Kubernetes Service, we find that AKS simplifies container orchestration and management, reduces operational overhead, and enhances scalability and fault tolerance. By leveraging AKS, developers can focus on building and running their applications without worrying about the underlying infrastructure. Recommendations for starting or advancing the AKS journey include exploring AKS documentation, participating in the AKS community, and experimenting with sample applications.
In conclusion, mastering container orchestration is crucial in the world of modern cloud computing. Azure Kubernetes Service (AKS) provides a powerful and efficient solution for managing and scaling containerized applications. Explore online platforms like the ACTE Institute, which provides detailed Microsoft Azure courses, practice examinations, and study materials for certification exams, to get started on your Microsoft Azure certification journey. By understanding the concepts and features of AKS, developers can streamline their container management processes, enhance application deployment and scalability, and improve overall operational efficiency. We encourage readers to explore AKS for their container management needs and engage in the AKS community to continue learning and sharing experiences.
#microsoft azure#kubernetes#cloud services#education#technology#information technology#tech#information security#information
#kubernetes#devops service provider#devops solutions and services#devops practices#devops development company#devops#cloud#cloud solutions#devops service company in india#cloud services#cloud migration
code
#codeonedigest#cloud#aws#docker container#java#nodejs#javascript#docker image#dockerfile#docker file#ec2#ecs#elastic container service#elastic cloud computing#amazon ec2#amazon ecs#microservice#solid principle#python#kubernetes#salesforce#shopify#microservice design pattern#solid principles#java design pattern
Day three of tech convention. The last time I was at this exhibition hall, it was comicon. Now it is full of cloud computing geeks, and I’m having to physically dodge out of the way of people helping run AI Workflows (Which, annoyingly, I do care about, because my job involves services that do image analysis using LLM-based systems, but the fact that your booth has enough AI art to drain a small lake does not make me interested in your company).
On an empty table south of the main hall, the roar of five thousand nerds becomes the white noise of the ocean, with only the sounds of a nearby tennis game (I’ve no idea why the booth for a kubernetes container security service has built a full sized tennis court in the hall, and I’m afraid they might tell me if I ask) disrupting its swell. I am adrift in a sea of humanity, and now I go to a talk on “A twelve factor approach to workload identification”. I hope one of the factors is coffee.
It’s not. But hope keeps us moving.
One day more.
using git and a 'software forge' (wikipedia says that's the generic term for what github and gitlab are) for document editing is pretty great
you can use your favorite text editor! you can track a multi-file project! the default workflow encourages you to keep copies both on your computer and in the cloud! you can log what changes you're making!
if you accidentally edit with the wrong account you can go edit the history, it'll be a pain but you can.
the other options for "edit a document and show it to your friends as you edit" are, like:
google docs. it's a weird proprietary format and if you export to html it will be a horrible mess of html that needs cleanup. you don't by default end up with your file constantly up to date on your computer. it's super easy to end up viewing a doc with the wrong account.
edit in some platform on your computer and upload files to share them with your friends. you will have to upload the files a lot to lots of people if you want to keep them all up to date.
use some weird other web text service. it might also randomly go down and delete everything and there's way less of a robust advice ecosystem
unfortunately if you use a software forge they might ask you if you want to add a kubernetes cluster. and also they often won't enable word wrap on plaintext.
(im currently writing stuff on gitgud.io which has a fairly lenient ToS)