#VPCNetworking
govindhtech · 1 year ago
DZ Bank's Secrets to Developer Efficiency with Cloud Workstations
Developer experience is highly valued at DZ BANK, but it must not come at the expense of the development environment's security profile. As part of DZ Bank's cooperation with Google Cloud, the two companies set out to significantly improve both the bank's security posture and its developer experience. Here's how Cloud Workstations was used to accomplish DZ Bank's objectives.
Lack of emphasis on developer experience
In the past, there was no common method for automating project setup and developer environments. The onboarding process for new developers might take days or weeks, depending on the complexity of the project. They had to manually set up their projects, which required them to comb through numerous internal documentation sources, provision infrastructure, and speak with colleagues when they encountered problems. This was a considerable amount of labour that ought to be automated.
Moreover, the developers didn't have a prescribed method for obtaining specific container tools, such as the Docker runtime and the tooling around it. Consequently, many teams were operating independently and not exchanging best practices for production. Standardising development environments is essential to better understand security posture and to give transparency about the tools and frameworks that development teams are using. The bank wanted the ability to regularly check tools for potential vulnerabilities and to have control over which tools developers use.
Cloud Workstations to the rescue
Cloud Workstations offered a simple way to standardise DZ Bank's development environments because it is a fully managed service. Without additional work, the bank can use predefined base images and let the service handle infrastructure, OS patches, and security patches. CMEK support lets it encrypt resources with a customer-managed encryption key. Additionally, users can access workstation tools directly via SSH (or any other TCP protocol) and forward traffic between ports on a local machine and ports on the workstation, without exposing the workstation to the internet.
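As a minimal sketch of that port forwarding with the gcloud CLI — the workstation, cluster, and config names here are illustrative assumptions — a TCP tunnel might be opened like this:

# Open a tunnel from local port 2222 to the workstation's SSH port (22).
gcloud workstations start-tcp-tunnel my-workstation 22 \
  --project=my-project --region=europe-west3 \
  --cluster=my-cluster --config=my-config \
  --local-host-port=localhost:2222

# In another shell, connect through the tunnel.
ssh -p 2222 user@localhost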
Furthermore, Cloud Workstations supports persistent disks, which let developers store data between sessions, and offers multiple base images with preset Integrated Development Environments (IDEs) that are frequently used by developers. These base images provide support for Docker-in-Docker and can be further customised. For workstation configurations, the bank can install JetBrains IDEs or other standardised IDE extensions and plugins.
DZ Bank also had many options with Cloud Workstations to help expedite the developer experience. To provision resources and permissions for Cloud Workstations, for example, DZ Bank can use the infrastructure-as-code tool Terraform, which allows it to automate the configuration of the entire development environment. To speed up startup times and let engineers get started more quickly, DZ Bank also set up a pool of pre-warmed workstations. Inactivity time limits can also be set, causing workstations to shut down automatically after a predetermined period of inactivity. Because Cloud Workstations bills only for workstation uptime, this has also kept the bank's expenditures under control.
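A hedged sketch of such a configuration, expressed with the gcloud CLI rather than the Terraform the bank actually uses; every name and value below is an illustrative assumption:

# Create a workstation configuration with a custom image and an idle timeout,
# so inactive workstations shut down automatically.
gcloud workstations configs create dev-config \
  --project=my-project --region=europe-west3 \
  --cluster=my-cluster \
  --machine-type=e2-standard-8 \
  --container-custom-image=europe-docker.pkg.dev/my-project/images/dev:latest \
  --idle-timeout=1800s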
DZ BANK architecture for deployment
Image credit to Google Cloud
DZ Bank's workstations run within the bank's secure Google Cloud landing zone (its deployed cloud environment), in a private workstation cluster with private IP addresses inside a shared VPC network. Two Private Service Connect (PSC) endpoints are needed to access the workstation cluster in the private network:
A workstation cluster with a private gateway constructs a PSC endpoint by default to connect the control plane to workstations in DZ Bank's private network.
An extra PSC endpoint facilitates connections between developers and workstations within DZ Bank's VPC. In DZ Bank's private DNS zone, a DNS record for the workstation domain is additionally established using the IP address of this PSC endpoint.
Ongoing input from developers
After gaining access to Cloud Workstations within the Google Cloud landing zone, the teams proceeded to modify them to suit their needs. Using the predefined base images that the Cloud Workstations team provides, DZ Bank produced its own custom Docker images, which featured the following:
A centrally located proxy configuration
An artefact server for package and tool downloads that is routinely inspected by the cyber-security team
Language-specific package manager configurations (e.g., mvn, pip, npm, etc.)
Additional standardised tools according to project requirements and programming language
Automatic pre-installation and upgrades of IDE plugins and extensions
OS package manager repositories that are accessible via the artefact server (i.e., without internet connectivity)
An automated environment setup that includes the Java Keytool, Git certificates, IDE configurations, and other standard environment variables
X11 forwarding over SSH, so that developers can also access GUI apps, such as tools for UI testing
Various bash scripts
Additionally, DZ Bank carried out several proofs of concept with different development teams, each of which represented a distinct set of issues and tooling environments. It further enhanced and tailored its Cloud Workstations environment based on their input.
Project-specific customisations are one example. Even though the standardised images cover the majority of developer needs, some requirements, such as project-specific tools and environment variables, cannot be included in the standardised images. The bank uses bash scripts to customise images on startup in order to automate these tasks.
DZ Bank generates a workstation.yaml file for every project, containing all the necessary automation commands, and checks it into the project's Git repository. A bash script searches for this file when a Cloud Workstation starts up and executes the commands found within. This fully automates project setup, allowing a fresh developer to contribute from the outset; a hedged sketch of such a startup hook follows.
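This sketch assumes workstation.yaml is a simple YAML list of shell commands; the file schema, paths, and naive parsing are illustrative assumptions, not DZ Bank's actual implementation:

#!/usr/bin/env bash
# Hypothetical startup hook: run the commands listed in the project's workstation.yaml.
# The file format and paths below are assumptions for illustration.
set -eu

PROJECT_DIR="${HOME}/project"
CONFIG="${PROJECT_DIR}/workstation.yaml"

if [[ -f "${CONFIG}" ]]; then
  # Naive parse: treat every "- " list item in the file as a shell command.
  grep -E '^[[:space:]]*-[[:space:]]+' "${CONFIG}" | sed -E 's/^[[:space:]]*-[[:space:]]+//' |
  while IFS= read -r cmd; do
    echo "workstation-setup: running: ${cmd}"
    (cd "${PROJECT_DIR}" && bash -c "${cmd}")
  done
fi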
DZ Bank also developed a Cloud Workstations order CI pipeline. Its Git repository houses the custom image code, which, when committed, starts a continuous integration process. This pipeline produces all the necessary container images according to the image hierarchy defined in DZ Bank's Dockerfiles.
Docker images are inspected for vulnerabilities according to cyber-security requirements and pushed into an Artifact Registry in a Google Cloud project allocated for testing by DZ Bank's testers and developers. Following a successful scan and testing process, the images are merged and promoted to production.
Developers can place internal orders for Cloud Workstations using an automated procedure that starts the order CI pipeline and provisions all the appropriate permissions and infrastructure. No more digging through documentation is necessary! To further empower and speed up its developers, the bank is excited to investigate AI-enabled code development now that Gemini Code Assist powers Cloud Workstations.
Learnings
DZ Bank's collaboration with Google Cloud and its use of Cloud Workstations have made it possible to greatly increase the productivity of the bank's development teams.
“Before Cloud Workstations, onboarding new devs took a week, but now it only takes one day. The cloud-native development environment is fully automated, safe, and standardised, allowing developers to begin working on the code base right away. The qualities of automation and standardisation make development easier.” – Gregor Otto Milenkovic, Product Owner, DZ BANK AG
Along the way, DZ Bank picked up a lot of knowledge that may be helpful on your own journey:
In order to arrive at a solution that pleases all parties involved, the ongoing developer feedback cycle is essential.
It’s essential to strike the right balance between the freedom provided to developers and the environment’s security requirements.
Customer Engineers and the Product Engineering team are instrumental in seeing projects through to the end.
The bank stayed in regular touch with them to get questions answered, report bugs, and file feature requests.
Automate everything and eliminate toil!
Read more on govindhtech.com
awsexchage · 6 years ago
How to easily build an ECS (Fargate) environment with the AWS CDK https://ift.tt/2SscTCi
This is Tana from streampack.
You can set up an ECS (Fargate) environment manually to some extent by clicking through the console, but creating an ALB and configuring the Service, Task Definition, and so on is complicated. It also takes time to understand the concepts. If you use Dynamic Port Mapping, you have to register inbound rules in the Security Group, and it becomes a repeated cycle of trial and error. You get worn out just building the thing.
Instead, with the AWS Cloud Development Kit (CDK), about 20 lines of code and a few command executions will build a best-practice ECS (Fargate) environment along with the following resources:
Network
VPC
Subnets
Internet Gateway
NAT gateway(EIP)
Route Tables
Security Group
ECS(Fargate)
ALB (including Target Group and Security Group)
Cluster
Service
Task Definition(Container)
The CDK installation and initial setup steps are described here, so I'll skip them: https://github.com/awslabs/aws-cdk#getting-started
Below is the code I actually wrote, based on the template generated by cdk init. There are other examples out there that use Dockerhub, so this time I'll use ECR.
lib/demo-stack.ts
import cdk = require('@aws-cdk/cdk');
import ecs = require("@aws-cdk/aws-ecs");
import ec2 = require("@aws-cdk/aws-ec2");
import ecr = require("@aws-cdk/aws-ecr");

export class DemoStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // ------------- Write this part --------------
    // Build a best-practice network environment
    const vpc = new ec2.VpcNetwork(this, 'MyCdkVpc', {maxAZs: 2});
    const cluster = new ecs.Cluster(this, 'Cluster', {vpc});
    // ECS cluster capacity (one t2.medium instance)
    cluster.addDefaultAutoScalingGroupCapacity({instanceType: new ec2.InstanceType("t2.medium"), instanceCount: 1});
    // Use ECR
    const repository = new ecr.Repository(this, 'myRepoName');
    // Build the ECS resources (ALB/Service/Task Definition/Container) together
    new ecs.LoadBalancedEc2Service(this, 'Service', {
      cluster,
      memoryLimitMiB: 512,
      image: ecs.ContainerImage.fromEcrRepository(repository),
      containerPort: 8080, // container port number
      environment: {
        ENV: 'production'
      }
    });
    // ------------- Up to here --------------
  }
}
With synth, you can preview the CloudFormation template (YAML) generated from the TypeScript code before deploying. This saves you from writing roughly 400 lines of CloudFormation.
$ cdk synth
$ cdk deploy
deploy shows a preview of the IAM policies and Security Groups, and then the build starts in your AWS account.
When using ECR, the repository is created for you, but you still need to push your application's Docker image. So, look up the created repository and push the image to it.
# Get the repository name
$ aws ecr describe-repositories | jq -r '.repositories[].repositoryName'
----
demos-demoa-xxxxxx
Run a shell script like the following to build & push.
build_push.sh
ECRID=xxxxxxxxx
ECRNAME=demos-demoa-xxxxxx

# build your docker app
docker build -t ${ECRNAME} .
docker tag ${ECRNAME}:latest ${ECRID}.dkr.ecr.ap-northeast-1.amazonaws.com/${ECRNAME}:latest

# push
$(aws ecr get-login --no-include-email --region ap-northeast-1)
docker push ${ECRID}.dkr.ecr.ap-northeast-1.amazonaws.com/${ECRNAME}:latest
On success, an ALB URL is output; access it, and if you get a response back, you're done!
If the result isn't displayed correctly, check the cluster's event logs and tasks again from the AWS console or with ecs-cli.
ecs-cli ps --cluster DemoStack-Clusterxxxx-xxxxx
Usually you check things like whether the container port is correct, the environment variables, and whether the image is actually registered in the ECR repository.
If you want to tear everything down, run destroy. (You need to delete the images registered in ECR beforehand.)
$ cdk destroy
A downside of the CDK is that if you want fine-grained settings or adjustments, you'll likely have to write extra code for each of them. For example:
You want to drop the NAT Gateway (EIP) to cut costs
You want to configure the container's log driver or ulimits
The following adds a log driver and ulimits to the container.
demo-stack.ts
const demoContainer = demoTaskDefinition.addContainer('demo-container', {
  image: ecs.ContainerImage.fromEcrRepository(repository),
  logging: new ecs.AwsLogDriver(this, 'demo-logging', {
    streamPrefix: 'demo-app'
  })
});
demoContainer.addUlimits({
  name: ecs.UlimitName.Fsize,
  hardLimit: 10240000,
  softLimit: 10240000
});
For detailed arguments and props, the CDK docs open in your browser, where you can search and check them.
$ cdk docs
It's still in preview and should keep improving, but it's ideal for cases where you repeatedly create an ECS (Fargate) environment, test it, and delete it.
There's also a CDK workshop; working through it once should give you a good feel for the CDK. https://cdkworkshop.com/
The original article is here:
“How to easily build an ECS (Fargate) environment with the AWS CDK”
February 26, 2019 at 04:00PM
govindhtech · 7 months ago
How DNS-Based Endpoints Enhance Security in GKE Clusters
DNS-Based Endpoints
If you use Google Kubernetes Engine (GKE), you know it is crucial to restrict access to the cluster control plane, which processes Kubernetes API calls, to prevent unwanted access while still keeping the cluster manageable.
Authorized networks and turning off public endpoints were the two main ways GKE used to secure the control plane. However, these techniques can make accessing the cluster difficult: to reach it through the cluster's private network, you need workarounds such as bastion hosts, and the list of permitted networks must be updated for every cluster.
Today, Google Cloud is introducing a new DNS-based endpoint for GKE clusters, which offers more security controls and flexibility in access methods. The DNS-based endpoint is available today on all clusters, irrespective of cluster configuration or version. It resolves several present issues with Kubernetes control plane access, including:
Complex allowlist and firewall setups based on IP: ACLs and approved network configurations based on IP addresses are vulnerable to human setup error.
IP-based static configurations: You must adjust the approved network IP firewall configuration in accordance with changes in network configuration and IP ranges.
Proxy/bastion hosts: You must set up a proxy or bastion host if you are accessing the GKE control plane from a different cloud location, a distant network, or a VPC that is not the same as the VPC where the cluster is located.
Due to these difficulties, GKE clients now have to deal with a complicated configuration and a perplexing user experience.
Introducing a new DNS-based endpoint
With the new DNS-based endpoint for GKE, each cluster control plane gets its own DNS name, a fully qualified domain name (FQDN). The frontend that this DNS name resolves to can be reached from any network that can connect to Google Cloud APIs, such as VPC networks, on-premises networks, or other cloud networks. The frontend applies security policies to block unwanted traffic, then routes the remaining traffic to your cluster.
Image credit to Google Cloud
This strategy has several advantages:
Simple flexible access from anywhere
Proxy nodes and bastion hosts are not required when using the DNS-based endpoint. Without using proxies, authorized users can access your control plane from various clouds, on-premises deployments, or from their homes. Transiting various VPCs is unrestricted with DNS-based endpoints because all that is needed is access to Google APIs. You can still use VPC Service Controls to restrict access to particular networks if you’d like.
Dynamic Security
The same IAM controls that safeguard all GCP API access are also utilized to protect access to your control plane over the DNS-based endpoint. You can make sure that only authorized users, regardless of the IP address or network they use, may access the control plane by implementing identity and access management (IAM) policies. You can easily remove access to a specific identity if necessary, without having to bother about network IP address bounds and configuration. IAM roles can be tailored to the requirements of your company.
See Customize your network isolation for additional information on the precise permissions needed to set up IAM roles, rules, and authentication tokens.
Two layers of security
You may set up network-based controls with VPC Service Controls in addition to IAM policies, giving your cluster control plane a multi-layer security architecture. VPC Service Controls adds context-aware access controls based on network origin and other attributes, which can equal the security of a private cluster that is only accessible from within a VPC network.
All Google Cloud APIs use VPC Service Controls, which ensures that your clusters’ security setup matches that of the services and data hosted by all other Google Cloud APIs. For all Google Cloud resources used in a project, you may provide solid assurances for the prevention of illegal access to data and services. Cloud Audit Logs and VPC Service Controls work together to track control plane access.
How to configure DNS-based access
The procedure for setting up DNS-based access to the GKE cluster control plane is simple. Check the next steps.
Enable the DNS-based endpoint
Use the following command to enable DNS-based access for a new cluster:
$ gcloud container clusters create $cluster_name --enable-dns-access
As an alternative, use the following command to allow DNS-based access for an existing cluster:
$ gcloud container clusters update $cluster_name --enable-dns-access
Configure IAM
To access the control plane, requests must be authenticated with a principal that holds one of the roles carrying the new IAM permission; a sample binding follows the list.
roles/container.developer
roles/container.viewer
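For example, one of these roles could be granted with gcloud as follows; the project and user below are placeholders:

# Grant a developer access to GKE clusters in the project (illustrative values).
gcloud projects add-iam-policy-binding my-project \
  --member=user:dev@example.com \
  --role=roles/container.developer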
Ensure your client can access Google APIs
If your client connects from a Google VPC, you must confirm that it has access to Google APIs. One approach is to activate Private Google Access, which enables clients to reach Google APIs without using the public internet. Private Google Access is configured per subnet, as sketched below.
Tip: Private Google Access is already enabled for node subnetworks.
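A minimal sketch of enabling it on a client subnet (the subnet and region names are placeholders):

gcloud compute networks subnets update my-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access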
[Optional] Setting up access to Google APIs via Private Service Connect
The Private Service Connect for Google APIs endpoint, which is used to access the other Google APIs, can be used to access the DNS endpoint of the cluster. To configure Private Service Connect for Google APIs endpoints, follow the instructions on the Access Google APIs through endpoints page.
Using a custom endpoint to access the cluster's DNS name is not supported, as detailed in the "use an endpoint" section. To get it to work, you must instead create a CNAME for “gke.goog” and an A record that maps “gke.goog” names to the private IP allocated to the Private Service Connect for Google APIs endpoint; a hedged Cloud DNS sketch follows.
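In this sketch, the zone name, VPC name, endpoint IP 10.0.0.5, and the use of a wildcard record are all illustrative assumptions, not a verified recipe:

# Private zone for gke.goog, visible only to the VPC.
gcloud dns managed-zones create gke-goog \
  --dns-name="gke.goog." \
  --visibility=private \
  --networks=my-vpc \
  --description="Map GKE DNS endpoints to the PSC endpoint"

# Wildcard A record pointing cluster DNS names at the PSC endpoint IP.
gcloud dns record-sets create "*.gke.goog." \
  --zone=gke-goog \
  --type=A --ttl=300 \
  --rrdatas=10.0.0.5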
Try DNS access
You can now try DNS-based access. The following command generates a kubeconfig file using the cluster’s DNS address:
gcloud container clusters get-credentials $cluster_name --dns-endpoint
Use kubectl to access your cluster. This also allows Cloud Shell to reach clusters without a public IP endpoint, which previously required a proxy.
Extra security using VPC Service Controls
Additional control plane access security can be added with VPC Service Controls.
What about the IP-based endpoint?
You can test DNS-based control plane access without affecting your clients by using the IP-based endpoint. After you’re satisfied with DNS-based access, disable IP-based access for added security and easier cluster management:
gcloud container clusters update $cluster_name --enable-ip-access=false
Read more on Govindhtech.com
govindhtech · 9 months ago
Class E IP Address Space Helps GKE Manage IPv4 Depletion
Using the Class E IPv4 address space can help GKE users address IPv4 depletion problems. The need for private IPv4 addresses (RFC 1918) is growing along with the number of services and apps hosted on Google Kubernetes Engine (GKE). The RFC 1918 address space is becoming hard to come by for many big businesses, which makes IP address depletion a problem that affects their application scalability.
This address depletion problem is ultimately resolved by IPv6, which offers a vast number of addresses, but not every business or application is prepared for IPv6 just yet. You can continue to expand by using the Class E IPv4 address space (240.0.0.0/4), which can handle these problems.
Class E addresses (240.0.0.0/4) are reserved for future use, as indicated in RFC 5735 and RFC 1112 and noted in Google VPC network valid IPv4 ranges; nevertheless, this does not preclude you from using them in certain situations today. Google also provides tips below for planning and using GKE clusters with Class E.
Recognizing Class E addresses
IPv4 addresses
Some typical criticisms or misunderstandings about the use of Class E addresses are as follows:
Other Google services do not function with Class E addresses. This is untrue. Class E addresses are included in the valid IPv4 address ranges that Google Cloud VPC offers. Furthermore, private connectivity methods using Class E addresses provide access to a large number of Google-managed services.
Communicating with services outside of Google (internet/on-premises/other clouds) is limited when using Class E addresses. False. You may use NAT or IP masquerading to convert Class E addresses to public or private IPv4 addresses in order to access destinations outside of Google Cloud, since Class E addresses are not routable and are not published over the internet or outside of Google Cloud. Furthermore,
a. Nowadays, a large number of operating systems support Class E addresses, with Microsoft Windows being the prominent exception.
b. Routing the addresses for usage in private DCs is supported by several on-premises suppliers (Cisco, Juniper, Arista).
There are scale and performance restrictions on Class E addresses. This is untrue. Regarding performance, there is no difference between the addresses and other address ranges used by Google Cloud. Agents can grow to accommodate a high number of connections without sacrificing speed, even with NAT/IP Masquerade.
Therefore, you may utilize Class E addresses for private usage inside Google Cloud VPCs, for both Compute Engine instances and Kubernetes pods/services in GKE, even though they are reserved for future use, not routable over the internet, and shouldn’t be publicized over the public internet.
Advantages
Class E IP Addresses
Despite these limitations, Class E addresses provide some benefits:
Large address space: Class E offers a much bigger pool of IP addresses than the standard RFC 1918 private ranges (about 268.4 million Class E addresses vs. about 17.9 million RFC 1918 addresses). This abundance benefits organizations experiencing IP address depletion, enabling them to expand their services and applications without being constrained by a finite address space.
Growth and scalability: It addressing’s wide reach facilitates the simple scalability of services and apps on Google Cloud and GKE. IP address restrictions do not prevent you from deploying and growing your infrastructure, which promotes innovation and development even during times of high consumption.
Effective resource utilization: By using Class E addresses to enhance your IP address allocation procedures, you may reduce the possibility of address conflicts and contribute to the efficient use of IP resources. This results in reduced expenses and more efficient operations.
Future-proofing: Although it is not supported by all operating systems, its use is anticipated to rise in response to the growing need for IP addresses. You can future-proof your infrastructure scalability to enable company development for many years to come by adopting Class E early on.
Class E IP addresses
Things to be mindful of
Even though Class E IP addresses provide many advantages, there are a few crucial things to remember:
Compatibility with operating systems: At the moment, not all operating systems enable Class E addressing. Make sure your selected operating system and tools are compatible before putting Class E into practice.
Software and hardware for networking: Check to see whether your firewalls and routers (or any other third-party virtual appliance solutions running on Google Compute Engine) are capable of handling the addresses. Make sure any programs or software that use IP addresses are updated to support it as well.
Migration and transition: To ensure there are no interruptions while switching from RFC 1918 private addresses to it, meticulous preparation and execution are needed.
How Snap implemented Class E
Network IP management is becoming more difficult due to the growing use of microservices and containerization systems such as GKE, particularly by major clients like Snap. Snap’s finite supply of RFC1918 private IPv4 addresses was rapidly depleted with hundreds of thousands of pods deployed, impeding cluster scalability and necessitating a large amount of human work to release addresses.
Originally contemplating an IPv6 migration, Snap ultimately opted to deploy dual-stack GKE nodes and GKE pods (IPv6 + Class E IPv4) due to concerns over application readiness and compatibility. In addition to preventing IP fatigue, this approach gave Snap the scale of IP addresses it required for many years to accommodate future expansion and cut down on overhead. Furthermore, this technique was in line with Snap’s long-term plan to switch to IPv6.
Fresh clusters
Requirement
Clusters must be VPC-native.
Steps
Create a subnetwork with secondary ranges for services and pods. CIDRs from the Class E range (240.0.0.0/4) may be used in these secondary ranges.
When creating the cluster, use the previously created secondary ranges for the pod and services CIDR ranges. This is the user-managed secondary range assignment method.
Set up IP masquerading to source network address translation (SNAT) pod traffic to the IP address of the underlying node. A gcloud sketch of the subnet and cluster steps follows.
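A minimal sketch with gcloud; all names and CIDRs are illustrative assumptions:

# Subnet with Class E secondary ranges for pods and services.
gcloud compute networks subnets create gke-subnet \
  --network=my-vpc --region=us-central1 \
  --range=10.10.0.0/24 \
  --secondary-range=pods=240.10.0.0/16,services=240.20.0.0/20

# VPC-native cluster that consumes those secondary ranges.
gcloud container clusters create class-e-cluster \
  --region=us-central1 \
  --network=my-vpc --subnetwork=gke-subnet \
  --enable-ip-alias \
  --cluster-secondary-range-name=pods \
  --services-secondary-range-name=services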
Migrating clusters
Requirement
The clusters must be VPC-native.
Steps
The cluster's default pod IPv4 range can't be modified, but you can add Class E pod ranges for newer node pools.
Workloads from earlier node pools may potentially be moved to newer node pools.
IPv4 Vs IPv6
Making the transition from IPv4 to IPv6 with Class E
For enterprises experiencing IP depletion, switching to dual-stack clusters with Class E IPv4 and IPv6 addresses now is a wise strategic step. It offers instant relief by increasing the pool of available IP addresses, permits expansion and scalability within Google Cloud and GKE, and is an essential first step toward a smoother IPv6-only transition.
Read more on Govindhtech.com
govindhtech · 11 months ago
Google VPC Service Controls: Private IPs for Data Security
Introducing VPC Service Controls with Private IPs to increase the protection against data exfiltration
GCP VPC service controls
Organisations may reduce the risk of data exfiltration from their Google Cloud managed services by utilising Google Cloud's VPC Service Controls (VPC-SC), which builds isolation perimeters around networks and cloud services to help you restrict access to your sensitive data.
Google Cloud VPC service controls
Google cloud is thrilled to announce support for private IP addresses in VPC Service Controls today. This new feature allows protected resources to be accessed by traffic from particular internal networks.
This expands the use of VPC-SC to safeguard resources within private IP address space. With defined perimeters accessible only by authorised users and resources, VPC-SC helps prevent data exfiltration to unauthorised Cloud organisations, folders, projects, and resources. Clients using VPC-SC can enforce least-privilege access to Google Cloud managed services by utilising its extensive access rule features, and with this new feature they can now grant access from specific on-premises environments to resources within a service perimeter.
Enterprise security teams can specify fine-grained perimeter restrictions and enforce that security posture across many Google Cloud services and projects using VPC Service Controls. To readily scale their security controls, users can create, update, and remove resources inside service boundaries.
Crucially, clients can designate private IP address ranges for a VPC network using basic access levels. By attaching these access levels to ingress and egress rules, which impose granular access controls for Google services, customers can extend perimeters into private address space; a hedged sketch follows.
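What such a configuration might look like with the Access Context Manager CLI; the YAML schema shown in the comments, and every name and value here, are assumptions for illustration rather than a verified spec:

# access-level.yaml (illustrative schema; check the current Access Context Manager reference):
# - vpcNetworkSources:
#   - vpcSubnetwork:
#       network: "//compute.googleapis.com/projects/my-project/global/networks/my-vpc"
#       vpcIpSubnetworks:
#       - "10.10.0.0/24"
gcloud access-context-manager levels create onprem_private_ips \
  --title="On-prem private IPs" \
  --basic-level-spec=access-level.yaml \
  --policy=123456789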
The usage of a macro, or “mega,” perimeter is advised by Google Cloud as best practice since it is simple to scale and administer. Private IP now gives you more options for clients with particular use cases that call for finer-grained segmentation.
Here are a few use cases where the private IP functionality offered by VPC Service Controls might help you create a more secure infrastructure.
Use case: extending your on-premises setup to a secure cloud perimeter
For access purposes, VPC Service Controls views a customer's on-premises environment as a single network, so network-based access controls are enforced for the entire on-premises environment. Because only certain on-premises clients need access to the VPC-SC perimeter, some customers worry about overprovisioning access. Private-address-based ingress and egress rules can be applied to on-premises systems to enable more granular access control from on-premises workloads to perimeter resources.
Use case: segmenting your cloud projects in a Shared VPC
VPC-SC verifies whether the source network is a part of a project within the trusted perimeter as part of the evaluation process for requests. The network in shared virtual private cloud settings is owned by the host project and shared with the service project. Customers were thus unable to divide the host and service projects into distinct perimeters. The host and service projects can be situated in distinct perimeters, with access being enabled by the rules, thanks to support for private address-based entry and egress rules. This also restricts the amount of unapproved services that can access resources.
Case study: increasing security at MSCI with VPC Service Controls
MSCI, a well-known provider of vital services and tools for the international investment industry, leverages  cloud computing for more than simply infrastructure; it is their fundamental underpinning for fostering innovation.
In their pursuit of secure, scalable, and agile computing, MSCI and Google Cloud have been working together since 2022. Their Google Cloud environment, a well-planned combination of services that includes Compute Engine, BigQuery, and Kubernetes Engine, is built on their commitment to cutting-edge technology.
MSCI looked to VPC-SC to protect sensitive data while taking advantage of the scalability offered by the cloud. This choice was driven by the sensitivity of the data and the need for a defence-in-depth strategy that could secure data at several layers. On top of Google's cloud-first controls, such as IAM and firewalls, VPC Service Controls gave MSCI an extra line of protection with its strict egress and ingress restrictions. MSCI also had precise requirements for granular, private IP-based subnetwork access.
“Access to protected resources for particular private IP ranges within the VPC network is made possible by the newly added VPC private address support feature, which gives MSCI the ability to establish precise constraints. Better detail in MSCI's security configurations is the outcome of this advance. The bespoke solution has emerged as a key addition to the organization's security repository, particularly for its support of private IP management, which demonstrates the immense potential of cloud technology when combined with planning and collaborative solution-building,” according to Sandesh D'Souza, executive director of Cloud Engineering at MSCI.
Next actions
For the majority of Google Cloud users, VPC Service Controls are a fundamental security measure. They can provide clients with more precise controls to better suit their needs by supporting private IPs. Before going live in production, you can verify your configurations using their newly released VPC Service Controls Dry Run mode.
Read more on govindhtech.com
govindhtech · 11 months ago
Google Cross Cloud Network: Build Worldwide Distributed Apps
GCP cross cloud network
Are you curious about how to connect, secure, and deliver apps between on-premises, Google Cloud, and other cloud environments while streamlining your distributed application design with Cross-Cloud Network? An extensive guide on designing and implementing a strong cross-cloud environment can be found in the recently released Cloud Architecture Centre section on Cross-Cloud Networking for Distributed Applications. In this blog, we examine a few of the Google Cross Cloud Network's advantages and take a quick look at the architecture documentation and a use-case summary.
Use service-centric, any-to-any connectivity built on Google's global network infrastructure to speed up application rollout and performance. Google Cloud has a large global presence, with 187+ Points of Presence (PoPs) across more than 200 countries and territories, backed by an SLA. For distributed apps hosted anywhere, get private access to cloud-native services and seamless cross-cloud connectivity.
You can safeguard your apps with Cloud NGFW’s industry-leading threat efficacy. Simplify the integration of partner security solutions with Google Cross Cloud Network and enhance network security posture control.
Cross-Cloud Interconnect is a high-performance, natively secured network connection that makes hybrid and multicloud networking simple. Utilise an open, safe, and well-optimized network platform to cut down on operational expense while boosting corporate growth. Using Private Service Connect, you can connect managed  SaaS and Google services everywhere. You can quickly link and secure services on-premises and across clouds using Service Centric Google Cross  Cloud Network.
Google Cross-cloud network
Cross-Cloud Network for distributed applications
An architecture for the building of distributed applications is made possible by the Google Cross  Cloud Network. You can distribute workloads and services throughout various on-premises and cloud networks with the help of a Google Cross Cloud Network. Application developers and operators may now enjoy the benefits of a single cloud experience across various clouds with this solution. This system makes advantage of multicloud and hybrid networking, while also expanding on its proven uses.
Network architects and engineers who wish to plan and develop distributed applications over a Google Cross  Cloud Network are the target audience for this book. You will gain a thorough understanding of Google Cross Cloud Network design considerations by following this guide.
Cross cloud network
Network connectivity and segmentation
The design’s cornerstones are connection and segmentation structure. A unified or segmented infrastructure can be used to implement the VPC segmentation structure shown in the accompanying figure. The relationships between the networks are not depicted in this diagram.
The size of the application VPCs that you need, whether you want to deploy perimeter firewalls internally or outside, and whether you want to publish services centrally or distributedly will all influence the segmentation structure that you choose for the application VPCs.
Both regional and global application stack deployments are supported by the Cross Cloud Network. With the inter-VPC communication pattern, the proposed segmentation structure supports both of these application resiliency patterns.
By utilising HA-VPN hub-and-spoke patterns in conjunction with VPC Network Peering, you can establish inter-VPC communication between segments. Alternatively, all VPCs can be included as spokes in a Network Connectivity Centre hub by using Network Connectivity Centre.
Regardless of the connectivity pattern, the segmentation structure also defines the design of the DNS infrastructure.
Networking services
Cross cloud network Google
Distinct service networking patterns result from distinct application deployment archetypes. The Multi-regional deployment paradigm, in which an application stack operates independently in different zones across two or more Google Cloud regions, should be the main emphasis of Google Cross  Cloud Network design.
The following characteristics of a multi-regional deployment archetype are helpful for designing Google Cross  Cloud Network:
To direct inbound traffic to the regional load balancers, utilise DNS routing policies.
The traffic can then be distributed to the application stack via the regional load balancers.
Regional failover can be achieved by re-anchoring the application stack's DNS mappings with a DNS failover routing policy, as sketched below.
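A hedged sketch of such a failover policy with Cloud DNS; the record name, zone, and forwarding-rule names are illustrative assumptions, and the exact flags should be checked against the current gcloud reference:

gcloud dns record-sets create app.example.com. \
  --zone=my-zone --type=A --ttl=30 \
  --routing-policy-type=FAILOVER \
  --routing-policy-primary-data=primary-fwd-rule \
  --routing-policy-backup-data-type=GEO \
  --routing-policy-backup-data="us-central1=backup-fwd-rule" \
  --enable-health-checking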
In the blog post Google Cross Cloud Network: Private, Adaptable, and Flexible Networking, they briefly discussed three typical applications for this system. These were the following:
Building distributed applications
Internet-facing application and content delivery
Hybrid workforce
Cross cloud networking
Architecture guides
The “Cross-Cloud Networking for Distributed Applications” design guide offers comprehensive expertise to assist you on your journey. This guide is divided into four documents and was prepared by multiple Google specialists. Based on diverse use cases, each of these delves into distinct patterns and designs. The following are the documents:
Overview of Cross-Cloud Networking for Distributed Applications
Cross-Cloud Network segmentation and connectivity for distributed applications
Cross-Cloud Network service networking for distributed applications
Cross-Cloud Network security for distributed applications
The design guide is intended to be your primary source of information, helping you to assess all relevant factors and directing you to reference structures that outline the use of suggested patterns. These suggestions can serve as a roadmap, models, or foundational elements whether you’re creating, investigating, or organising your network. As with everything architectural, there is a range of flexibility in terms of how the final design turns out.
A crucial feature demonstrated here is reachability between on-premises and other cloud environments through a transit VPC, where all connections to other clouds and on-premises terminate. Other VPCs can reach the centralised transit VPC via VPC Network Peering, Network Connectivity Centre, or Cloud VPN. Cloud Routers, positioned in various regions, facilitate route exchange between the connected sources.
Read more on Govindhtech.com
govindhtech · 1 year ago
Cloud SQL Auth Proxy: Securing Your Cloud SQL Instances
Cloud SQL Auth Proxy
This blog explains how to use the Cloud SQL Auth Proxy to create secure, encrypted, and authorised connections to your instances. Note that to connect to Cloud SQL from the App Engine standard or flexible environment, you do not need to configure SSL or use the Cloud SQL Auth Proxy.
The Cloud SQL Auth Proxy’s advantages
The Cloud SQL Auth Proxy is a Cloud SQL connector that offers secure access to your instances without requiring authorised networks or SSL configuration.
The following are some advantages of the Cloud SQL Auth Proxy and other Cloud SQL Connectors:
Secure connections:
The proxy automatically encrypts traffic to and from the database using TLS 1.3 with a 256-bit AES cipher. SSL certificates are used to validate the identities of clients and servers, independent of database protocols, and you won't need to manage them yourself.
Simpler authorization of connections:
IAM permissions are used by the Cloud SQL Auth Proxy to restrict who and what can connect to your Cloud SQL instances. Therefore, there is no need to supply static IP addresses because it manages authentication with Cloud SQL.
The proxy relies on existing IP connectivity; it does not provide a new connectivity path. To connect to a Cloud SQL instance via private IP, the Cloud SQL Auth Proxy must run on a resource with access to the same VPC network as the instance.
The operation of the Cloud SQL Auth Proxy
The Cloud SQL Auth Proxy works as a local client running in the local environment. Your application communicates with the Cloud SQL Auth Proxy using the standard database protocol of your database.
It communicates with its server-side partner process through a secure channel. One connection to the Cloud SQL instance is made for each connection made via the Cloud SQL Auth Proxy.
When an application connects, the Cloud SQL Auth Proxy first checks whether it already has a connection to the target Cloud SQL instance. If not, it calls the Cloud SQL Admin APIs to obtain an ephemeral SSL certificate and uses it to connect to Cloud SQL. Ephemeral SSL certificates expire after roughly one hour, and the Cloud SQL Auth Proxy refreshes them before they expire.
Port 3307 is the only port on which the Cloud SQL Auth Proxy establishes outgoing (egress) connections to your Cloud SQL instance. Because the proxy calls APIs via the domain name sqladmin.googleapis.com, which does not have a stable IP address, all egress TCP connections on port 443 must also be permitted. Make sure your client machine's outbound firewall policy permits outgoing connections to port 3307 on your Cloud SQL instance's IP address; a sketch of such rules follows.
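For clients behind VPC firewall rules, the policy could look like this hedged sketch; the network name and destination address are placeholders:

# Egress to the Cloud SQL instance's proxy port.
gcloud compute firewall-rules create allow-cloudsql-egress \
  --network=my-vpc --direction=EGRESS --action=ALLOW \
  --rules=tcp:3307 --destination-ranges=203.0.113.10/32

# Egress to Google APIs; sqladmin.googleapis.com has no stable IP, so allow 443 broadly.
gcloud compute firewall-rules create allow-googleapis-egress \
  --network=my-vpc --direction=EGRESS --action=ALLOW \
  --rules=tcp:443 --destination-ranges=0.0.0.0/0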
Although it doesn’t offer connection pooling, it can be used in conjunction with other connection pools to boost productivity.
The connection between Cloud SQL Auth Proxy and Cloud SQL is depicted in the following diagram:
Image credit to Google Cloud
Requirements for using the Cloud SQL Auth Proxy
The following conditions must be fulfilled in order for you to use the Cloud SQL Auth Proxy:
Enabling the Cloud SQL Admin API is necessary.
You must supply Google Cloud authentication credentials to the proxy.
You must supply a valid database user account and password to the proxy.
The instance needs to be set up to use private IP or have a public IPv4 address.
It is not necessary for the public IP address to be added as an approved network address, nor does it need to be reachable from any external address.
Options for starting Cloud SQL Auth Proxy
You give the following details when you launch it:
Which Cloud SQL instances it should establish connections to, so it can wait for data sent from your application to Cloud SQL
Where it will find the credentials it needs to authenticate your application to Cloud SQL
Which type of IP address to use, if required
Whether the proxy listens on a TCP port or a Unix socket depends on the startup parameters you supply. If it listens on a Unix socket, it creates the socket at the specified location, usually the /cloudsql/ directory. For TCP, the Cloud SQL Auth Proxy listens on localhost by default. A hedged sketch of both modes follows.
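Assuming the v2 cloud-sql-proxy binary and a placeholder instance connection name, startup might look like this:

# TCP mode: listen on localhost:5432 and forward to the instance.
./cloud-sql-proxy --port 5432 my-project:us-central1:my-instance

# Unix socket mode: create sockets under /cloudsql.
./cloud-sql-proxy --unix-socket /cloudsql my-project:us-central1:my-instance

# Authenticate with a service account key file (path is illustrative).
./cloud-sql-proxy --credentials-file key.json my-project:us-central1:my-instance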
For authentication, use a service account
To authorise your connections to a Cloud SQL instance, you must authenticate as a Cloud SQL IAM identity using this Proxy.
The benefit of using a service account for this purpose is that you can create a credential file specifically for the Cloud SQL Auth Proxy. Using a service account is the recommended approach for production instances that aren't running on a Compute Engine instance.
If you need to invoke the proxy from several machines, you can replicate the credential file in a system image.
To use this method, you have to create and maintain the credential file. Only users with the resourcemanager.projects.setIamPolicy permission, which includes project owners, can create the service account. If your Google Cloud user does not have this permission, you will need to get someone else to create the service account or find another way to authenticate the proxy.
Read more on govindhtech.com
govindhtech · 1 year ago
IaC Sights into IBM Cloud Edge VPC Deployable Architecture
VPC Management
This is an examination of the IaC features of the edge VPC deployable architecture on IBM Cloud. Given the constantly changing nature of cloud infrastructure, many organizations now find themselves needing to create a secure and customizable virtual private cloud (VPC) environment within a single region. This need is met by the VPC landing zone deployable architectures, which provide a collection of starter templates that can be easily customized to meet your unique needs.
Utilizing Infrastructure as Code (IaC) concepts, the VPC Landing Zone deployable architecture enables you to describe your infrastructure in code and automate its deployment. This method facilitates updating and managing your edge VPC setup while also encouraging uniformity across deployments.
The adaptability of the VPC Landing Zone is one of its main advantages. The starting templates are simply customizable to meet the unique requirements of your organisation. This can entail making changes to security and network setups as well as adding more resources like load balancers or block volumes. You may immediately get started with the following patterns, which are starter templates.
Edge VPC setup
VPC landing zone pattern: Deploys a basic IBM Cloud VPC architecture without any compute resources, such as Red Hat OpenShift clusters or VSIs.
QuickStart virtual server instances (VSI) pattern: Deploys an edge VPC with one VSI, alongside a jump server VSI in the management VPC.
QuickStart ROKS pattern: Deploys one ROKS cluster with two worker nodes in a workload VPC.
Virtual server (VSI) pattern: Deploys identical virtual servers across the VSI subnet tier in each VPC.
Red Hat OpenShift pattern: Deploys an identical Red Hat OpenShift Kubernetes Service (ROKS) cluster across the VSI subnet tier of each VPC.
VPC Patterns that adhere to recommended standards
To arrange and oversee cloud services and VPCs, establish a resource group.
Configure Cloud Object Storage instances to hold Activity Tracker data and flow logs.
This makes it possible to store flow logs and Activity Tracker data for a long time and analyze them.
Keep your encryption keys in situations of Key Protect or Hyper Protect Crypto Services. This gives the management of encryption keys a safe, convenient location.
Establish a workload VPC for executing programmes and services, and a management VPC for monitoring and controlling network traffic.
Using a transit gateway, link the workload and management VPCs.
Install flow log collectors in every VPC to gather and examine information about network traffic. This offers visibility and insights on the performance and trends of network traffic.
Put in place the appropriate networking rules to enable VPC, instance, and service connectivity.
Route tables, network ACLs, and security groups are examples of these.
Configure each VPC’s VPEs for Cloud Object Storage.
This allows each VPC to have private and secure access to Cloud Object Storage.
Activate the management VPC VPN gateway.
This allows the management VPC and on-premises networks to communicate securely and encrypted.
Landing zone patterns
To acquire a thorough grasp of the fundamental ideas and uses of the Landing Zone patterns, let’s investigate them.
The VPC pattern
The VPC pattern architecture stands out as a modular solution that provides a strong base upon which to deploy compute resources as needed. This design lets you add compute resources, such as Red Hat OpenShift clusters or virtual server instances (VSIs), to your cloud environment. The approach not only simplifies the deployment process but also ensures that your cloud infrastructure is secure and flexible enough to meet the changing demands of your projects.
The QuickStart VSI pattern with edge VPC
The QuickStart VSI pattern deploys an edge VPC with a load balancer and one VSI in each of the three subnets, plus a jump server VSI in the management VPC that exposes a public floating IP address. It's vital to remember that this design, while helpful for getting started quickly, does not ensure high availability and is not validated within the IBM Cloud for Financial Services framework.
The QuickStart ROKS pattern
The QuickStart ROKS pattern consists of a management VPC with a single subnet, a security group, and an allow-all ACL. The workload VPC features a security group, an allow-all ACL, and two subnets in two distinct availability zones. A Transit Gateway connects the workload and management VPCs.
In the workload VPC, a single ROKS cluster with two worker nodes and an enabled public endpoint is also present. The cluster keys are encrypted using Key Protect for further protection, and a Cloud Object Storage instance is configured as a prerequisite for the ROKS cluster.
The virtual server pattern
The VSI pattern architecture facilitates the establishment of VSIs on a VPC landing zone within the IBM Cloud environment. The VPC landing zone, an essential part of IBM Cloud's secure infrastructure services, is designed to offer a safe platform for workload deployment and management. The VSI-on-VPC landing zone architecture was created expressly to build a secure infrastructure with virtual servers that run workloads on a VPC network.
The Red Hat OpenShift pattern
The architecture of the ROKS pattern facilitates the establishment and implementation of a Red Hat OpenShift Container Platform in a single-region configuration inside a VPC landing zone on IBM Cloud.
This makes it possible to administer and run container apps in a safe, isolated environment that offers the tools and services required to maintain their functionality.
Because all components are located inside the same geographic region, a single-region architecture lowers latency and boosts performance for applications deployed within this environment.
It also makes the OpenShift platform easier to set up and operate.
Organizations can rapidly and effectively deploy and manage their container apps in a safe and scalable environment by utilizing IBM Cloud’s VPC landing zone to set up and manage their container infrastructure.
Read more on govindhtech.com