gcpjourney
GCP Journey
3 posts
gcpjourney · 5 years ago
May 14, 2020
There are four ways you can interact with GCP, and we'll talk about each in turn: the Google Cloud Platform Console (GCP Console), Cloud Shell and the Cloud SDK, the APIs, and the Cloud Mobile App.
May 13, 2020 - Core Infrastructure
Google has seven services with more than a billion users each.
Google's infrastructure provides cryptographic privacy and integrity for remote procedure call (RPC) data on the network, which is how Google services communicate with each other.
How can I make sure I don't accidentally run up a big GCP bill? GCP provides four tools to help: budgets and alerts, billing export, reports, and quotas.
When you run your workloads in GCP, you use projects to organize them. You use Cloud Identity and Access Management, also called IAM, to control who can do what. And you use your choice of several interfaces to connect.
You may find it easiest to understand the GCP resource hierarchy from the bottom up. All the resources you use, whether they're virtual machines, Cloud Storage buckets, tables in BigQuery, or anything else in GCP, are organized into projects. Optionally, these projects may be organized into folders, and folders can contain other folders. All the folders and projects used by your organization can be brought together under an organization node. Projects, folders, and organization nodes are all places where policies can be defined. Some GCP resources let you put policies on individual resources too, like those Cloud Storage buckets I mentioned.
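The key property of the hierarchy is that a policy set on a node applies to everything underneath it. Here's a minimal Python sketch of that inheritance; the class, the policy names, and the org/folder/project names are all hypothetical illustrations, not a real API:

```python
# Sketch of the GCP resource hierarchy: organization -> folders -> projects.
# Policies attach at any level and are inherited downward.

class Node:
    def __init__(self, name, parent=None, policies=None):
        self.name = name
        self.parent = parent
        self.policies = set(policies or [])

    def effective_policies(self):
        """A node's effective policies are its own plus everything inherited."""
        inherited = self.parent.effective_policies() if self.parent else set()
        return self.policies | inherited

org = Node("example-org", policies={"org-wide-audit-logging"})
folder = Node("team-a", parent=org, policies={"restrict-external-ips"})
project = Node("team-a-prod", parent=folder)

# The project picks up both its folder's policy and the org-wide one.
print(sorted(project.effective_policies()))
```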
IAM lets administrators authorize who can take action on specific resources. An IAM policy has a "who" part, a "can do what" part, and an "on which resource" part. The "who" part names the user or users you're talking about; it can be defined by a Google account, a Google group, a service account, or an entire G Suite or Cloud Identity domain. The "can do what" part is defined by an IAM role. An IAM role is a collection of permissions, because most of the time, to do any meaningful operation, you need more than one permission.
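The three parts map directly onto the shape of an IAM policy: a list of bindings, each tying members (the "who") to a role (the "can do what"), with the policy attached to some resource in the hierarchy (the "on which"). A small sketch of that shape; the member names are made up, and the lookup function is just an illustration, not how Cloud IAM evaluates policies internally:

```python
# An IAM policy is a list of bindings: members ("who") -> role ("can do what").
# Member strings carry a prefix saying what kind of identity they are.
policy = {
    "bindings": [
        {
            "role": "roles/compute.instanceAdmin",      # "can do what"
            "members": [                                 # "who"
                "user:alice@example.com",
                "group:ops-team@example.com",
                "serviceAccount:builder@example-project.iam.gserviceaccount.com",
            ],
        }
    ]
}

def roles_for(policy, member):
    """Collect every role the given member is bound to in this policy."""
    return [b["role"] for b in policy["bindings"] if member in b["members"]]

print(roles_for(policy, "user:alice@example.com"))
```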
Compute Engine's InstanceAdmin role lets whoever has that role perform a certain set of actions on virtual machines: listing them, reading and changing their configurations, and starting and stopping them. And which virtual machines? Well, that depends on where the role applies.
There are four ways you can interact with Google Cloud Platform, and we'll talk about each in turn: the Console, the SDK and Cloud Shell, the Mobile App and the APIs.
Say you want a quick way to get started with GCP with minimal effort. That's what Google Cloud Launcher provides. It's a tool for quickly deploying functional software packages on Google Cloud Platform. There's no need to manually configure the software, virtual machine instances, storage, or network settings, although you can modify many of them before you launch if you like. Most software packages in Cloud Launcher are available at no additional charge beyond the normal usage fees for GCP resources.
Of all the ways you can run workloads in the cloud, Virtual Machines may be the most familiar. Compute Engine lets you run virtual machines on Google's global infrastructure. 
Compute Engine lets you create and run virtual machines on Google infrastructure. There are no upfront investments, and you can run thousands of virtual CPUs on a system that is designed to be fast and to offer consistent performance. You can create a virtual machine instance by using the Google Cloud Platform Console or the gcloud command-line tool. Your VM can run Linux and Windows Server images provided by Google or customized versions of these images, and you can even import images from many of your physical servers.
Much like physical networks, VPCs have routing tables. These are used to forward traffic from one instance to another within the same network, even across subnetworks and even between GCP zones, without requiring an external IP address. VPC routing tables are built in; you don't have to provision or manage a router.
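The core idea behind any routing table is that, among the routes matching a destination address, the most specific (longest) prefix wins. A toy Python illustration using only the standard library; the routes and addresses here are made up, and real VPC routes are managed for you:

```python
import ipaddress

# Destination prefix -> next hop. The /24 is a more specific route
# nested inside the broader /8.
routes = {
    "10.0.0.0/8": "default-internal",
    "10.0.1.0/24": "subnet-us-east",
}

def next_hop(addr):
    matches = [
        (net, hop)
        for net, hop in routes.items()
        if ipaddress.ip_address(addr) in ipaddress.ip_network(net)
    ]
    # Longest prefix (largest prefixlen) is the most specific match.
    net, hop = max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)
    return hop

print(next_hop("10.0.1.25"))   # matches both routes; the /24 wins
print(next_hop("10.9.9.9"))    # only the /8 matches
```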
Every application needs to store data, maybe media to be streamed, or sensor data from devices, or customer account balances, or maybe the fact that my Dragonite has more than 2600 CP. Different applications and workloads require different storage and database solutions.
Let's start with Google Cloud Storage. What's object storage? It's not the same as file storage, in which you manage your data as a hierarchy of folders. It's not the same as block storage, in which your operating system manages your data as chunks of disk. Instead, object storage means you say to your storage, "Here, keep this arbitrary bunch of bytes I give you," and the storage lets you address it with a unique key. That's it. Often these unique keys are in the form of URLs, which means object storage interacts nicely with Web technologies.

Cloud Storage works just like that, except better. It's a fully managed, scalable service, which means that you don't need to provision capacity ahead of time. Just make objects, and the service stores them with high durability and high availability. You can use Cloud Storage for lots of things: serving website content, storing data for archival and disaster recovery, or distributing large data objects to your end users via direct download.

Cloud Storage is not a file system, although each of your objects has a URL. Each feels like a file in a lot of ways, and it's okay to use the word "file" informally to describe your objects, but still, it's not a file system. You would not use Cloud Storage as the root file system of your Linux box. Instead, Cloud Storage is comprised of buckets that you create, configure, and use to hold your storage objects. The storage objects are immutable, which means that you do not edit them in place but instead create new versions. Cloud Storage always encrypts your data on the server side before it is written to disk, and you don't pay extra for that. Also by default, data in transit is encrypted using HTTPS. Speaking of transferring data, there are services you can use to get large amounts of data into Cloud Storage conveniently.
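The two ideas above, a bucket mapping unique keys to opaque bytes, and immutable objects that get new versions rather than in-place edits, can be sketched in a few lines of Python. This is purely an illustration of the model; the real service is accessed through the Cloud Storage API, and the class here is hypothetical:

```python
# Object storage in miniature: a bucket maps a unique key to a blob of
# bytes. Objects are immutable, so writing to an existing key creates a
# new version instead of editing the old one in place.

class Bucket:
    def __init__(self):
        self._versions = {}   # key -> list of byte blobs, newest last

    def put(self, key, data: bytes):
        self._versions.setdefault(key, []).append(data)

    def get(self, key, generation=-1):
        # By default, return the latest version of the object.
        return self._versions[key][generation]

b = Bucket()
b.put("logs/2020-05-13.txt", b"first upload")
b.put("logs/2020-05-13.txt", b"second upload")   # new version; old one kept
print(b.get("logs/2020-05-13.txt"))
print(b.get("logs/2020-05-13.txt", generation=0))
```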
Cloud Storage lets you choose among four different types of storage classes: Regional, Multi-regional, Nearline, and Coldline. 
Here's how to think about them. Multi-regional and Regional are high-performance object storage, whereas Nearline and Coldline are backup and archival storage; imagine a heavy dividing line between these two groups. All of the storage classes are accessed in comparable ways using the Cloud Storage API, and they all offer millisecond access times. Now, let's talk about how they differ. Regional storage lets you store your data in a specific GCP region, such as us-central1, europe-west1, or asia-east1. It's cheaper than Multi-regional storage, but it offers less redundancy. Multi-regional storage, on the other hand, costs a bit more, but it's geo-redundant. That means you pick a broad geographical location, like the United States, the European Union, or Asia, and Cloud Storage stores your data in at least two geographic locations separated by at least 160 kilometers. Multi-regional storage is appropriate for storing frequently accessed data: for example, website content, interactive workloads, or data that's part of mobile and gaming applications. People use Regional storage, on the other hand, to store data close to their Compute Engine virtual machines or their Kubernetes Engine clusters. That gives better performance for data-intensive computations. Nearline storage is a low-cost, highly durable service for storing infrequently accessed data.
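The descriptions above boil down to two questions: is the data accessed frequently, and does it need to be geo-redundant? A rough rule of thumb in Python; the access-interval threshold is illustrative, not official guidance:

```python
# Rough storage-class chooser, distilled from the descriptions above.
# Nearline suits data touched roughly monthly; Coldline suits archival
# data touched about once a year or less (illustrative thresholds).

def pick_storage_class(accessed_often, multi_region=False,
                       access_interval_days=30):
    if accessed_often:
        return "Multi-regional" if multi_region else "Regional"
    return "Nearline" if access_interval_days < 365 else "Coldline"

print(pick_storage_class(accessed_often=True, multi_region=True))
print(pick_storage_class(accessed_often=False, access_interval_days=400))
```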
We already discussed one GCP NoSQL database service: Cloud Bigtable. Another highly scalable NoSQL database choice for your applications is Cloud Datastore. One of its main use cases is to store structured data from App Engine apps. 
We've already discussed Compute Engine, which is GCP's Infrastructure as a Service offering that lets you run virtual machines in the cloud and gives you persistent storage and networking for them, and App Engine, which is one of GCP's Platform as a Service offerings. Now I'm going to introduce you to a service called Kubernetes Engine. It's like an Infrastructure as a Service offering in that it saves you infrastructure chores. It's also like a Platform as a Service offering in that it was built with the needs of developers in mind.
So we've discussed two GCP products that provide the compute infrastructure for applications: Compute Engine and Kubernetes Engine. What these have in common is that you choose the infrastructure in which your application runs: virtual machines for Compute Engine, and containers for Kubernetes Engine. But what if you don't want to focus on the infrastructure at all? You just want to focus on your code. That's what App Engine is for.
GCP provides Deployment Manager to let you set up your environment declaratively and repeatably. It's an infrastructure management service that automates the creation and management of your Google Cloud Platform resources for you. To use it, you create a template file, written in either YAML or Python, that describes what you want the components of your environment to look like. Then you give the template to Deployment Manager, which figures out and performs the actions needed to create the environment your template describes. If you need to change your environment, edit your template and then tell Deployment Manager to update the environment to match the change. Here's a tip: you can store and version-control your Deployment Manager templates in Cloud Source Repositories.
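To make this concrete, here is a minimal sketch of what a YAML template for a single Compute Engine instance can look like. The names and values are illustrative only, and a real template needs additional required properties (such as disks and network interfaces) per the resource type's schema:

```yaml
# Illustrative Deployment Manager template (YAML), not a complete one.
resources:
- name: example-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
```

The declarative point is the whole file: you state what should exist, and Deployment Manager works out the create or update actions needed to make reality match it.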
Google believes that in the future, every company will be a data company, because making the fastest and best use of data is a critical source of competitive advantage. Google Cloud provides a way for everybody to take advantage of Google's investments in infrastructure and data processing innovation.
Cloud Dataproc is great when you have a data set of known size or when you want to manage your cluster size yourself. But what if your data shows up in real time, or it's of unpredictable size or rate? That's where Cloud Dataflow is a particularly good choice. It's both a unified programming model and a managed service, and it lets you develop and execute a big range of data processing patterns: extract-transform-load (ETL), batch computation, and continuous computation.
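"Unified programming model" means the same transform logic serves both a bounded batch and an unbounded stream of arriving events. A pure-Python sketch of that idea; real Dataflow pipelines are written against the Apache Beam SDK, which this deliberately does not use:

```python
# The same transform applied to a bounded batch and to a stream-like
# source of arriving records, echoing Dataflow's unified model.

def word_count(records):
    """A generic transform: count words across whatever records arrive."""
    counts = {}
    for line in records:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

# Batch: a data set of known size.
batch = ["to be or not to be"]
print(word_count(batch))

# "Streaming": the same transform consuming a generator of events
# as they show up, rather than a materialized list.
def event_stream():
    yield "to be"
    yield "or not to be"

print(word_count(event_stream()))
```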
May 10, 2020
Overview
Data management
App modernization
Infrastructure modernization
AI/ML
Security