#GoogleComputeEngine
Explore tagged Tumblr posts
govindhtech · 10 months ago
Text
Improved Malware Analysis with Google Gemini 1.5 Flash
Gemini 1.5 Flash
In an earlier post, Google Cloud looked at how Gemini 1.5 Pro can be used to automate the code analysis and reverse engineering of malware binaries. Google Cloud is now concentrating on Gemini 1.5 Flash, Google's new lightweight and affordable model, to move that analysis out of the lab and into a production-ready system that can analyse malware at massive scale. Gemini 1.5 Flash can handle heavy workloads and offers remarkable speed, with a context window of up to one million tokens.
Google Cloud developed an architecture on Google Compute Engine to accommodate this, including a multi-stage workflow with stages for scalable unpacking and decompilation. Although encouraging, this is only the beginning of a long road to address accuracy issues and realise AI's full potential in malware analysis.
Every day, VirusTotal analyses an average of 1.2 million unique new files that have never been seen on the platform before. Roughly half of these are binary files that could benefit from reverse engineering and code analysis. This volume of new threats is simply too much for traditional, manual methods to handle. Building a system that can automatically unpack, decompile, and analyse this amount of code quickly and effectively is a formidable task, and Gemini 1.5 Flash is intended to help overcome it.
Expanding on the broad feature set of Gemini 1.5 Pro, the Gemini 1.5 Flash model was designed to maximise speed and efficiency without sacrificing performance. While both models can handle a context window of more than a million tokens and have strong multimodal capabilities, Gemini 1.5 Flash is specifically built for fast inference and low deployment cost. This is accomplished through online distillation techniques combined with parallel computation of the feedforward and attention components.
Through online distillation, Flash can pick up knowledge during training directly from the bigger and more intricate Pro model. With the help of these architectural improvements, Google Cloud can handle up to 1,000 requests and 4 million tokens per minute on Gemini 1.5 Flash.
First, we'll provide some examples of Gemini 1.5 Flash analyzing decompiled binaries to demonstrate how this pipeline functions. After that, we'll quickly go over the earlier phases of unpacking and large-scale decompilation.
What is Gemini 1.5 Flash
Google AI built Gemini 1.5 Flash, a large language model. It is Google's fastest and most cost-effective model for high-volume jobs at scale.
Key Gemini 1.5 Flash features:
Speed: Processing 1,000 queries and 4 million tokens per minute, it is extremely fast. This makes it perfect for real-time applications.
Cost-efficiency: Gemini 1.5 Flash is more cost-effective than other models, making it a budget-friendly choice for large projects.
Long context window: Despite its lightweight nature, 1.5 Flash can handle a context window of one million tokens. This lets it consider a large amount of data when answering prompts and requests, producing more detailed answers.
Gemini 1.5 Flash balances speed, price, and performance, making it useful for large-scale language processing jobs.
Gemini 1.5 Flash Model
Analysis Speed and Illustrative Cases
To assess the real-world efficacy of Google's malware analysis pipeline, Google Cloud examined 1,000 Windows executables and DLLs chosen at random from VirusTotal's incoming stream. This selection process guaranteed a wide variety of samples, including both malware and legitimate applications. The first thing that caught Google's attention was the speed of Gemini 1.5 Flash.
This is consistent with the performance benchmarks reported by the Google Gemini team, where Gemini 1.5 Flash repeatedly surpassed other large language models in text generation speed across a variety of languages.
Google Cloud's observations showed that the fastest processing time was 1.51 seconds and the slowest was 59.60 seconds, with Gemini 1.5 Flash handling each file in an average of 12.72 seconds. It's important to remember that these times do not include the unpacking and decompilation phases, which Google Cloud will discuss in more detail in a subsequent blog article.
These processing times are affected by variables such as the amount and complexity of the input code and the length of the resulting analysis. Significantly, these measurements cover the whole process from beginning to end: sending the decompiled code to the Vertex AI Gemini 1.5 Flash API, having the model analyse it, and receiving the complete answer back on the Google Compute Engine instance. This end-to-end view demonstrates the speed and low latency that Gemini 1.5 Flash can achieve in actual production settings.
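The post does not include its client code, but as a rough illustration of this end-to-end step, here is a minimal sketch of sending decompiled code to Gemini 1.5 Flash through the Vertex AI Python SDK and timing the round trip. The project ID, region, model version string, and prompt wording are assumptions, not details from the original pipeline.

```python
# Minimal sketch: send decompiled code to Gemini 1.5 Flash via the Vertex AI API
# and time the end-to-end call. Project, region, model string, and prompt are
# illustrative assumptions.
import time

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # assumed project/region
model = GenerativeModel("gemini-1.5-flash-001")  # assumed model version string

def analyze_decompiled_code(decompiled_c: str) -> str:
    """Send decompiled pseudo-C to Gemini 1.5 Flash and return the analysis text."""
    prompt = (
        "You are a malware analyst. Analyse the following decompiled code, "
        "summarise its functionality, state whether it appears malicious, "
        "and list any indicators of compromise (IOCs):\n\n" + decompiled_c
    )
    start = time.monotonic()
    response = model.generate_content(prompt)
    print(f"Analysis completed in {time.monotonic() - start:.2f} seconds")
    return response.text

# Example usage (hypothetical file name):
# report = analyze_decompiled_code(open("goopdate_decompiled.c").read())
```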
Example 1: It Takes 1.51 Seconds to Dispel a False Positive
This binary was processed the quickest of the 1,000 binaries Google Cloud examined, demonstrating the exceptional performance of Gemini 1.5 Flash. The file goopdate.dll (103.52 KB) triggered one anti-virus detection on VirusTotal, a frequent occurrence that often necessitates a laborious human review.
Imagine that your SIEM system sent out an alert due to this file, and you urgently need a response. In just 1.51 seconds, Gemini 1.5 Flash analyses the decompiled code and gives a clear explanation: the file is a straightforward executable launcher for the "BraveUpdate.exe" application, which is probably a web browser component. This quick, code-level understanding lets analysts safely reject the alert as a false positive, avoiding needless escalation and saving crucial time and resources.
Example 2: Taking Care of Another False Positive
Another file that warranted more investigation is BootstrapPackagedGame-Win64-Shipping.exe (302.50 KB), which was flagged as suspicious by two anti-virus engines on VirusTotal. In just 4.01 seconds, Gemini 1.5 Flash analyses the decompiled code and discovers that the file is a game launcher.
Gemini describes the functionality of the sample, which includes locating and running redistributable installations, verifying for dependencies such as DirectX and Microsoft Visual C++ Runtime, and finally starting the main game executable. This degree of comprehension enables analysts to classify the file as legitimate with confidence, saving time and effort by preventing the needless investigation of a possible false positive.
Example 3: Obfuscated Code and the Longest Processing Time
During Google Cloud's investigation, the file svrwsc.exe (5.91 MB) stood out for needing the longest processing time: 59.60 seconds. The lengthier analysis time was probably caused by factors such as the volume of decompiled code and the use of obfuscation methods like XOR encryption. Still, Gemini 1.5 Flash took less than a minute to finish its analysis. Considering that it could take a human analyst several hours to manually reverse engineer such a binary, this is a noteworthy accomplishment.
Gemini identified the sample's backdoor functionality, which is intended to exfiltrate data and establish a connection with command-and-control (C2) servers hosted on Russian domains, and correctly concluded that it was malicious. The analysis reveals numerous indicators of compromise (IOCs), including probable C2 server URLs, mutexes used for process synchronisation, modified registry entries, and suspicious file names. This information lets security teams quickly triage and address the threat.
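The original prompts are not shown in the post; as an illustrative extension of the sketch above, the following hedged example asks Gemini 1.5 Flash to return the IOCs as JSON so they can be fed straight into triage tooling. The JSON field names, model version string, and the response_mime_type setting are assumptions rather than the pipeline's actual configuration.

```python
# Illustrative sketch: request the IOCs as a JSON object instead of free text.
# Field names and the JSON-output setting are assumptions for this example.
import json

import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # assumed project/region
model = GenerativeModel("gemini-1.5-flash-001")  # assumed model version string

def extract_iocs(decompiled_c: str) -> dict:
    """Ask Gemini 1.5 Flash for a verdict plus IOCs as a JSON object."""
    prompt = (
        "From the decompiled code below, return a JSON object with the keys "
        "'verdict', 'c2_urls', 'mutexes', 'registry_keys', and 'file_names'.\n\n"
        + decompiled_c
    )
    response = model.generate_content(
        prompt,
        generation_config=GenerationConfig(response_mime_type="application/json"),
    )
    return json.loads(response.text)
```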
Example 4: A Cryptocurrency Miner
In this example, Gemini 1.5 Flash analyses the decompiled code of a cryptominer called colto.exe. It's important to note that the model receives no further metadata or context from VirusTotal; it gets only the decompiled code as input. Gemini 1.5 Flash completed a thorough analysis in 12.95 seconds, recognising the malware as a cryptominer, highlighting its obfuscation strategies, and extracting important IOCs such as the file location, wallet address, download URL, and mining pool.
Example 5: Using an Agnostic Approach to Understand Legitimate Software
In this example, Gemini 1.5 Flash analyses 3DViewer2009.exe, a legitimate 3D viewer program, in 16.72 seconds. Knowing a program's functionality can be useful for security purposes even if it is goodware. As in the previous examples, the model receives only the decompiled code for analysis, with no extra metadata from VirusTotal such as whether the binary is digitally signed by a trusted institution. While standard malware detection algorithms frequently consider this information, Google Cloud is taking a code-centric approach.
Gemini 1.5 Flash identifies the main function of the application, which is to load and display 3D models, as well as the particular kind of 3D data it works with: DTM. The analysis highlights the use of custom file classes for data management, configuration file loading, and rendering with OpenGL. This level of comprehension may make it easier for security teams to distinguish genuine software from malware that tries to replicate its behaviour.
This functional-only, agnostic method of code analysis may prove especially helpful when examining digitally signed binaries, which may not necessarily receive the same level of security scrutiny as unsigned files. This creates new opportunities to spot possibly harmful activity even in software that is meant to be trusted.
Taking a Closer Look at a Zero-Hour Keylogger
This example demonstrates the real value of looking for malicious activity in code: it can identify threats that conventional security solutions miss. When the executable AdvProdTool.exe (87 KB) was first uploaded and analysed, it evaded every anti-virus engine, sandbox, and detection system on VirusTotal. But Gemini 1.5 Flash reveals its true nature. The model analyses the decompiled code in 4.7 seconds, recognising it as a keylogger and even disclosing the IP address and port where stolen data is exfiltrated.
The analysis highlights how the code uses OpenSSL to create a secure TLS connection to that IP address on port 443. Crucially, Gemini calls out the use of keyboard input capture functions and their link to data transmission over the secure channel.
This example demonstrates the ability of code analysis to detect zero-hour threats in the early phases of development, as this keylogger appears to be. It also draws attention to a crucial benefit of Gemini 1.5 Flash: examining the basic operation of code can uncover malicious intent even when it is concealed by metadata or detection evasion strategies.
Overview of Workflow
Gemini 1.5 Flash
Google's malware analysis pipeline comprises three essential steps: unpacking, decompilation, and Gemini 1.5 Flash code analysis. The first two stages are driven by two key processes: automated unpacking and large-scale decompilation. Google Cloud uses its in-house cloud-based malware analysis service, Mandiant Backscatter, to dynamically unpack incoming binaries.
Next, a cluster of Hex-Rays Decompilers running on Google Compute Engine processes the unpacked binaries. Gemini can analyse both decompiled and disassembled code, but Google's pipeline uses decompilation.
Given the token window limits of large language models, the deciding factor was that decompiled code is 5–10 times more concise than disassembled code, making it a more efficient alternative. Gemini 1.5 Flash is finally used to analyse this decompiled code.
By coordinating this workflow on Google Cloud, Google Cloud handles a vast volume of binaries, including the full daily flood of over 500,000 new binaries submitted to VirusTotal.
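The post names Mandiant Backscatter and Hex-Rays but does not show how the stages are wired together. The sketch below is a hypothetical orchestration loop in which unpack_with_backscatter and decompile_with_hexrays are placeholder functions standing in for those proprietary stages; only the final Gemini 1.5 Flash call uses a public API, and the project, model string, and prompt are assumptions.

```python
# Hypothetical sketch of the three-stage pipeline: unpack, decompile, analyse.
# The first two helpers are placeholders for proprietary services; only the
# Gemini call below uses a real, public API.
from pathlib import Path

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # assumed
model = GenerativeModel("gemini-1.5-flash-001")  # assumed model version string

def unpack_with_backscatter(binary_path: Path) -> Path:
    """Placeholder: dynamically unpack the binary and return the unpacked file."""
    raise NotImplementedError("wire this to your unpacking service")

def decompile_with_hexrays(unpacked_path: Path) -> str:
    """Placeholder: decompile the binary and return its pseudo-C source."""
    raise NotImplementedError("wire this to your decompiler cluster")

def analyze_binary(binary_path: Path) -> str:
    """Run the full unpack -> decompile -> Gemini analysis chain for one file."""
    decompiled = decompile_with_hexrays(unpack_with_backscatter(binary_path))
    response = model.generate_content(
        "Analyse this decompiled code and describe its behaviour:\n\n" + decompiled
    )
    return response.text

# for sample in Path("incoming").glob("*.exe"):
#     print(analyze_binary(sample))
```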
Read more on Govindhtech.com
0 notes
codeonedigest · 5 months ago
Video
youtube
Unlocking the Cloud: Your First Postgres Database on Google VM 
 Check out this new video on the CodeOneDigest YouTube channel! Learn how to create Virtual Machine in Google Cloud Platform, Setup Google Compute Engine VM. Install Postgres Database in GCE Virtual Machine. #codeonedigest @codeonedigest @googlecloud @GoogleCloud_IN @GoogleCloudTech @GoogleCompute @GooglecloudPL #googlecloud #googlecomputeengine #virtualmachine #nodejsapi
0 notes
industryglobalnews · 5 years ago
Link
The #Japanese government has offered a contract worth 30 billion yen ($273 million) to @AmazonWebServices to help move human resource systems and document management tools onto the #cloud. #HereAtAWS #ProudAmazonian #BePeculiar #cloudservices #cloudcomputing #iaas #telecomunicaciones #telecom #telecoms #telecommunications #carrier #amazonwebservices #googlecomputeengine #softlayer #cloudfest #metisfiles #datacloud #datacloudeurope #cloudexpo #cloudexpoeurope #cloudexpoasia Read More: https://www.industryglobalnews24.com/amazon-to-build-cloud-for-japanese-government
0 notes
lemacksmedia · 6 years ago
Text
Nerdcore Party Convention site launch
https://lemacksmedia.com/news/05/02/nerdcore-party-convention-site-launch/
Nerdcore Party Convention site launch
Nerdcore Party Convention site launches today!
Nerdcore Party Convention is a music and entertainment convention hosted by YouTubers JT Music and Rockit Gaming. The 2019 convention is held in Nashville, Tennessee, at the White Avenue Studio! Lemacks Media is happy to have worked on this project for Nerdcore Party Convention, JT Music, and Rockit Gaming as they host this exclusive event for their biggest fans, and we are excited for future events!
Developed on WordPress and hosted from Lemacks Media on our Google Cloud Compute Engine Network and Content Delivery Network with Apache Mod Pagespeed today! The Nerdcore Party Convention website additionally uses Mailgun’s API for email automation and reliable deliverability, Google reCAPTCHA v3 to prevent spammers from submitting forms, a free SSL Certificate from Lemacks Media via Let’s Encrypt, and OneSignal Push Notifications. Lemacks Media manages The Nerdcore Party Convention Google Business listing, Bing Places, as well as Facebook Business while utilizing Google Analytics and Facebook Pixel. WooCommerce integration with FooEvents has been implemented on the site for attendee ticketing solutions, eliminating the need for a middle man like Ticketmaster or other ticket sales platforms that take a percentage of the ticket sales.
Check out the Nerdcore Party Convention website today by visiting https://nerdcorepartycon.com .
Looking for a new custom website or application? At Lemacks Media, we provide an ever widening array of options and solutions to fit the needs of customers and clients at nearly every budget! We also have a growing network of local North Carolina developers/programmers, as well as partners across the United States, to help you implement your new solutions when the time comes! Contact Lemacks Media to find out how we can help you grow your business and online presence with a new custom website, application, or other solutions like Google G Suite!
Visit Lemacks Media https://lemacksmedia.com/news/05/02/nerdcore-party-convention-site-launch/ for updates and more content. #Google #GoogleCloud #GoogleCloudCompute #GoogleCloudComputeEngine #GoogleCloudPlatform #GoogleComputeEngine #GoogleMaps #GoogleSearch #Hosting #LemacksMedia #ModPageSpeed #PageSpeed #SearchEngineOptimization #Seo #SSLCertificate #WebDesign #WebHosting #WebsiteHosting #YouTube
0 notes
yostivanich · 12 years ago
Link
Google delivered some news for users of its Cloud Platform stable of services at its I/O event on Wednesday. Its Compute Engine service, which competes with Amazon Web Services, will now be available to all users, not just those willing to shell out $400 for support. But it also announced the addition of the most commonly requested feature for its App Engine platform cloud: support for the PHP programming language.

Platform clouds like App Engine are designed to make life easier for developers by building and hosting application environments. App Engine, which launched in 2008, already supports Python, Java and Google's own programming language Go. The fact that Google is adding PHP support before newer and ostensibly hipper languages such as Ruby or Node.js reflects PHP's ongoing importance.

Since PHP's creation in 1995, it has become both a blessing and a bane to developers. Lately, perhaps, more of a bane. Last year Stack Exchange co-founder Jeff Atwood cataloged a number of calls for the death of PHP, calling on developers to help make the alternatives good enough to replace it. But while Atwood chose Ruby for his new company Discourse, the language lives on for a number of reasons.

One of the biggest reasons PHP sticks around is that so many people know how to use it. It also underpins many popular applications, including WordPress, Drupal and Magento. Another plus: it's very easy to deploy. Developers, even novices, can quickly get it up and running on almost any web server. Ruby and Python can take a little more work, especially if you're a newbie.

Ironically, the cloud technology that was supposed to help get rid of PHP once and for all is now extending its life. Platform clouds like App Engine and Heroku were first created, in part, to make it easier to deploy and host Python and Ruby applications. But soon companies saw the opportunity to apply the platform model to PHP as well. Startups like Orchestra (later acquired by Engine Yard) and PHPFog (now AppFog) emerged. Zend jumped in with its own PHP platform cloud offering. Now most of the major platform clouds, including App Engine and Heroku, support PHP. It makes sense: even companies and developers who are dumping PHP still have old apps that they need to run, and they want to run all their apps in one place. Now, thanks to Google, it will be that much easier to keep that PHP code kicking around for a few more years.
0 notes
govindhtech · 11 months ago
Text
Introduction to the new Vertex AI Model Monitoring
Model Monitoring Vertex AI
To provide a more adaptable, extensible, and consistent monitoring solution for models deployed on any serving infrastructure, even those not affiliated with Vertex AI such as Google Kubernetes Engine, Cloud Run, and Google Compute Engine, Google is pleased to introduce the new Vertex AI Model Monitoring, a re-architecture of Vertex AI's model monitoring features.
The goal of the newly released Vertex AI Model Monitoring is to help clients continuously monitor their model performance in production by centralizing the management of model monitoring capabilities. This is made possible by:
Support for models (e.g., GKE, Cloud Run, even multi-cloud & hybrid-cloud) housed outside of Vertex AI
Unified job management for online and batch prediction monitoring
Simplified metrics visualization and setup linked to the model rather than the endpoint
This blog post will provide you with an overview of the fundamental ideas and functionalities of the recently released Vertex AI Model Monitoring, as well as demonstrate how to utilise it to keep an eye on your production models.
Overview of the recently developed Vertex AI Model Monitoring
In a standard model version monitoring procedure, prediction request-response pairs (input feature and output prediction pairs) are captured and gathered. These inference logs are used by scheduled or on-demand monitoring jobs to evaluate the model's performance. The monitoring job's design can include the inference schema, monitoring metrics, objective thresholds, alert channels, and scheduling frequency. When anomalies are found, notifications can be delivered to the model owners via a variety of channels, including Slack, email, and more. The model owners can then investigate the anomaly and begin a fresh training cycle.
The new Vertex AI Model Monitoring represents the model version with the Model Monitor resource and the related monitoring task with the Model Monitoring job resource. (Image credit: Google Cloud)
The Model Monitor is the monitoring representation of a particular model version in the Vertex AI Model Registry. A Model Monitor can store the set of monitoring objectives you specify for the model, as well as the default monitoring configurations for the production dataset (referred to as the target dataset) and the training dataset (referred to as the baseline dataset).
A Model Monitoring job represents one batch execution of the Model Monitoring setup. Every job analyses the input, computes the data distribution and related descriptive metrics, determines how far the distributions of features and predictions deviate from the training distribution, and can raise an alert based on thresholds you set. A Model Monitoring job with one or more configurable monitoring objectives can be scheduled for continuous monitoring over a sliding time window or run on demand.
Now that you are aware of some of the main ideas and features of the recently released Vertex AI Model Monitoring, let’s look at how you can use them to enhance the way your models are monitored in real-world scenarios. This blog post explains specifically how to monitor a referenced model that is registered but not imported into the Vertex AI Model Registry using the new Vertex AI Model Monitoring feature.
Using Vertex AI Model Monitoring to keep an eye on an external model
Assume for the moment that you develop a customer lifetime value (CLV) model to forecast a customer’s value to your business. The model is being used in production within your own environment (e.g., GKE, GCE, Cloud Run), and you would like to keep an eye on its quality using the recently released Vertex AI Model Monitoring.
To use the new Vertex AI Model Monitoring with a model that is not hosted on Vertex AI (also known as a referenced model), you must first prepare the baseline and target datasets. In this case, the baseline dataset holds the training feature values, and the target dataset holds the corresponding production values at a specific point in time.
After that, you can put the dataset in BigQuery or Cloud Storage. Vertex AI Model Monitoring is compatible with multiple data sources, including BigQuery and Cloud Storage.
Subsequently, you register a Reference Model in the Vertex AI Model Registry and create the related Model Monitor. With the Model Monitor you select the model to monitor and the related model monitoring schema, which contains the feature names and their data types. Additionally, for governance and reproducibility, the generated model monitoring artefacts are recorded in Vertex AI ML Metadata against the Reference Model. Below is an illustrative sketch of how the Python Vertex AI SDK can be used to create a Model Monitor.
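The original illustration is not reproduced in this post, so the following is a hedged sketch of what creating a Model Monitor for the CLV example might look like with the preview ml_monitoring module of the Vertex AI Python SDK. The module path, class and parameter names follow the preview API as published and may have changed; the project, region, model resource name, and feature names are assumptions.

```python
# Hedged sketch: create a Model Monitor for the CLV model with the preview SDK.
# Class names and parameters follow the ml_monitoring preview module and may
# differ; project, region, model resource name, and feature names are assumed.
import vertexai
from vertexai.resources.preview import ml_monitoring

vertexai.init(project="my-project", location="us-central1")  # assumed

# Schema describing the CLV model's inputs and output (assumed feature names).
schema = ml_monitoring.spec.ModelMonitoringSchema(
    feature_fields=[
        ml_monitoring.spec.FieldSchema(name="recency_days", data_type="float"),
        ml_monitoring.spec.FieldSchema(name="purchase_frequency", data_type="float"),
        ml_monitoring.spec.FieldSchema(name="customer_segment", data_type="categorical"),
    ],
    prediction_fields=[
        ml_monitoring.spec.FieldSchema(name="predicted_clv", data_type="float"),
    ],
)

# Create the Model Monitor for the registered (reference) model version.
model_monitor = ml_monitoring.ModelMonitor.create(
    display_name="clv-model-monitor",
    model_name="projects/my-project/locations/us-central1/models/123456789",  # assumed
    model_version_id="1",
    model_monitoring_schema=schema,
)
```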
After creating the Model Monitor and preparing your baseline and target datasets, you are ready to define and execute the Model Monitoring job. When you define the job, you must decide what to monitor and how the model should be monitored. The new Vertex AI Model Monitoring allows you to specify multiple monitoring objectives.
Use the newly released Vertex AI Model Monitoring to monitor input feature drift, output prediction drift, and feature attribution drift for both numerical and categorical data against thresholds that you specify. Depending on the type of data in your features, you can assess the difference between the training (baseline) and production (target) feature distributions using the Jensen-Shannon divergence or the L-infinity distance. An alert is raised whenever the distance for any feature exceeds the corresponding threshold.
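For intuition about the two drift metrics named above, here is a small standalone sketch (outside Vertex AI) that computes the Jensen-Shannon divergence for a numerical feature and the L-infinity distance for a categorical one. The binning choice and the 0.3 alert threshold are illustrative assumptions, not Vertex AI defaults.

```python
# Illustrative drift metrics: JS divergence (numerical) and L-infinity (categorical).
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_divergence(baseline: np.ndarray, target: np.ndarray, bins: int = 10) -> float:
    """Jensen-Shannon divergence between two numerical feature samples."""
    edges = np.histogram_bin_edges(np.concatenate([baseline, target]), bins=bins)
    p, _ = np.histogram(baseline, bins=edges)
    q, _ = np.histogram(target, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    # scipy returns the JS *distance* (the square root of the divergence).
    return float(jensenshannon(p, q) ** 2)

def l_infinity(baseline_counts: dict, target_counts: dict) -> float:
    """L-infinity distance between two categorical feature distributions."""
    categories = set(baseline_counts) | set(target_counts)
    p_total = sum(baseline_counts.values())
    q_total = sum(target_counts.values())
    return max(
        abs(baseline_counts.get(c, 0) / p_total - target_counts.get(c, 0) / q_total)
        for c in categories
    )

# Example: flag drift if the metric crosses an assumed threshold of 0.3.
rng = np.random.default_rng(0)
score = js_divergence(rng.normal(100.0, 20.0, 5000), rng.normal(120.0, 25.0, 5000))
print("feature drifted" if score > 0.3 else "feature stable", round(score, 3))
```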
With the new Vertex AI Model Monitoring, you can execute a single monitoring job on demand or schedule recurring jobs for continuous monitoring. Additionally, you can receive monitoring results through Gmail and Slack, two notification channels supported by the new Vertex AI Model Monitoring.
How Telepass uses the new Vertex AI Model Monitoring to keep an eye on ML models
Telepass Italy
Telepass has become a prominent player in the quickly changing toll and mobility services market in Italy and several other European nations. In recent years, Telepass decided to adopt MLOps to strategically accelerate the development of machine learning solutions. Over the last year, Telepass has successfully deployed a solid MLOps platform, allowing for the deployment of multiple ML use cases.
So far, the Telepass team has used this MLOps framework to create, test, and smoothly deploy over 80 training pipelines that run monthly through continuous deployment. These pipelines cover more than ten different use cases, such as data-driven customer clustering methods, propensity modelling for customised client interactions, and accurate churn prediction for projecting customer attrition.
Notwithstanding these successes, Telepass recognised that it was missing a system for detecting feature drift and an event-driven re-training mechanism triggered by anomalies in the data distribution. As one of the first users of the new Vertex AI Model Monitoring, Telepass teamed up with Google Cloud to address this need and incorporate monitoring into its existing MLOps framework to automate the re-training procedure.
According to Telepass:
"By strategically integrating the new Vertex AI Model Monitoring with our existing Vertex AI infrastructure, our team has reached previously unheard-of levels of model quality assurance and MLOps efficiency. By enabling prompt retraining, we continuously improve performance and meet or exceed the expectations of our stakeholders."
Notice of Preview
This product or feature is covered by the "Pre-GA Offerings Terms" found in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are offered "as is" and may have limited support. See the launch stage descriptions for further details. Although the new Vertex AI Model Monitoring is designed to monitor production model serving, please do not rely on it for business-critical workloads or for handling private or sensitive data until it reaches general availability (GA).
Read more on Govindhtech.com
0 notes
govindhtech · 11 months ago
Text
Reduce the Google Compute Engine Cost with 5 Tricks
Google Compute Engine Cost
Compute Engine provides several options for cutting expenses, such as optimising your infrastructure and taking advantage of discounts. In this two-part blog post, Google Cloud shares some useful advice to help you reduce your Google Compute Engine cost. This guide has something for everyone, whether you work for a huge organisation trying to optimise its budget or a small business just getting started with cloud computing.
Examine your present budgetary plan
Before you embark on a journey to optimise your Google Compute Engine cost, it helps to have a map of your present circumstances and spending structure, so you can make well-informed decisions about your next course of action. That map is the billing panel in the Google Cloud console. It provides a detailed breakdown of your spending, tracing each expense to a specific SKU. You can use it to examine the overall financial picture of your organisation or to determine how much using a given product costs for a given project.
Taking a closer look at your spending can reveal resources you are still paying for but no longer require. After all, nothing saves money better than simply not spending it.
Examine the automated suggestions
On the page where your virtual machines are listed, have you noticed the lightbulbs next to some of your machines? These are Google Cloud's automated suggestions for ways to cut costs. Recommendation Hub, a new tool, covers the following categories: cost, security, performance, reliability, management, and sustainability. Based on its understanding of your fleet structure, the recommendation system can suggest actions you might consider. Google Cloud's main objective is to help you cut costs without sacrificing fleet performance. (Image credit: Google Cloud)
A machine can be scaled down according to its utilisation, or its machine type can be changed (for example, from N1 to E2). When you click on one of the recommendations, you get a summary of the proposed modification along with the expected cost savings. You can then choose whether to apply the change. Recall that the instance must be restarted for the modification to take effect. (Image credit: Google Cloud)
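For teams that prefer to pull these suggestions programmatically rather than click through the console, here is a hedged sketch using the public Recommender API Python client. The project, zone, and recommender ID are assumptions used to illustrate the call shape.

```python
# Hedged sketch: list Compute Engine machine-type recommendations via the
# Recommender API. Project and zone are assumed; the recommender ID names the
# Compute Engine machine-type (rightsizing) recommender.
from google.cloud import recommender_v1

client = recommender_v1.RecommenderClient()
parent = (
    "projects/my-project/locations/us-central1-a/"
    "recommenders/google.compute.instance.MachineTypeRecommender"
)
for rec in client.list_recommendations(parent=parent):
    # Each recommendation carries a description and a projected cost impact.
    print(rec.description, rec.primary_impact.cost_projection.cost)
```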
Check your disk types
Each virtual machine in your fleet must have at least one persistent disk attached. Google Cloud offers a variety of disk types with varying features and performance. The available types are:
Hyperdisk
With a full range of data durability and administration features, Hyperdisk is a scalable, high-performance storage solution built for the most demanding mission-critical applications.
Hyperdisk Storage Pools 
Hyperdisk Storage Pools are pre-purchased aggregations of capacity, throughput, and IOPS that you can reserve in advance and allocate to your applications as required.
Persistent Disk 
Persistent Disk is the default storage option for your virtual machines. It may be regional or zonal, and it comes in four variants:
Standard
The equivalent of a desktop HDD. Offers the least expensive storage, with slower I/O performance.
SSD
A speed-focused option with excellent I/O performance, albeit at a higher cost per gigabyte.
Balanced
The default setting for newly created compute instances; it strikes a compromise between “Standard” and “SSD.”
Extreme
Suitable for the most demanding workloads. Lets you provision the disk's IOPS in addition to its size.
Local SSD
A Local SSD is physically attached to the host that runs your virtual machine. Extremely fast, but ephemeral.
Since Persistent Disk is the most widely used type of storage, let's concentrate on it. The Balanced disk, which offers a good compromise between performance and cost, is the default disk type when you create a new virtual machine. Although this works well in many situations, it is not always the best choice.
For instance, stateless apps that are part of auto-scaling deployments and keep all relevant data in an external cache or database do not need fast disk I/O. These apps are excellent candidates for switching to Standard disks, which, depending on the region, can be up to three times cheaper per gigabyte than Balanced disks.
A list of the disks used in your project can be obtained with:
gcloud compute disks list --format="table(name, type, zone, sizeGb, users)"
To change a disk's type, you must clone the disk to the new type and then update the virtual machines that use it so they attach the new disk.
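As a rough sketch of that cloning step with the Compute Engine Python client, the example below creates a Standard copy of an existing Balanced disk. The project, zone, and disk names are assumptions, and the affected VM still has to be updated to attach the new disk.

```python
# Hedged sketch: clone an existing Balanced disk into a new Standard (pd-standard)
# disk. Project, zone, and disk names are assumed for illustration.
from google.cloud import compute_v1

project, zone = "my-project", "us-central1-a"  # assumed
client = compute_v1.DisksClient()

new_disk = compute_v1.Disk(
    name="app-disk-standard",
    source_disk=f"projects/{project}/zones/{zone}/disks/app-disk-balanced",
    type_=f"projects/{project}/zones/{zone}/diskTypes/pd-standard",
)
operation = client.insert(project=project, zone=zone, disk_resource=new_disk)
operation.result()  # wait for the clone to complete
print("Created disk:", new_disk.name)
```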
Free up unused disk space
Staying with storage, price is influenced by more than just the disk type. You should also consider how disk utilisation affects your budget. Whether you use 20%, 70%, or 100% of a 100 GB persistent disk allocated to your project, you will be billed for the full 100 GB. Even if your application does not use Persistent Disks for data storage, you may still want to monitor your boot disks closely.
Unless your stateless program really needs a disk with many gigabytes of free space, think about shrinking your disks to match your actual needs. People frequently create 20 GB disks when they only require 12 GB, because they like round numbers. Save money and act more like a machine.
Commit to using committed use discounts (CUDs)
This advice applies to more products than just Compute Engine. If you can commit to using a specific amount of resources for one or three years, you can receive a significant discount. A range of committed use discounts (CUDs) gives you substantially lower prices for vCPUs, memory, GPUs, local SSDs, sole-tenant nodes, and software licences. With Flex CUDs, you are not even limited to allocating your vCPU and memory commitment to a specific project, region, or machine series.
Committed use discounts are offered on a number of Google Cloud products. If you're satisfied with Google Cloud and have no intention of switching providers anytime soon, you should seriously consider using CUDs wherever you can to save a lot of money. For Compute Engine, you can buy CUDs straight from the Google Cloud console.
Read more on govindhtech.com
0 notes
codeonedigest · 5 months ago
Video
youtube
Unlocking the Google Cloud: Step-by-Step Guide to Hosting Web API on GCE...
  Check out this new video on the CodeOneDigest YouTube channel! Learn how to create Virtual Machine in Google Cloud Platform, Setup Google Compute Engine VM. Deploy & run JS API application in GCE Virtual Machine. #codeonedigest @codeonedigest @googlecloud @GoogleCloud_IN @GoogleCloudTech @GoogleCompute @GooglecloudPL #googlecloud #googlecomputeengine #virtualmachine #nodejsapi
0 notes
codeonedigest · 1 year ago
Video
youtube
Master the Google Cloud Run Serverless Service | Run Nodejs API in Cloud... Full Video Link -           https://youtu.be/59jF_IaQHfE Check out this new video on the CodeOneDigest YouTube channel! Learn how to setup google cloud run #serverless service. Run #nodejs API in #cloudrun service. #codeonedigest @codeonedigest @googlecloud @GoogleCloud_IN @GoogleCloudTech @GoogleCompute @GooglecloudPL #googlecloud #googlecomputeengine #virtualmachine #nodejsapi
0 notes
codeonedigest · 1 year ago
Video
How to Master Google Cloud Run Serverless Service | Run Nodejs API in Cl... Full Video Link -          https://youtu.be/P4RDFZSppx8 Check out this new video on the CodeOneDigest YouTube channel! Learn how to setup google cloud run #serverless service. Run #nodejs API in #cloudrun service. #codeonedigest @codeonedigest @googlecloud @GoogleCloud_IN @GoogleCloudTech @GoogleCompute @GooglecloudPL #googlecloud #googlecomputeengine #virtualmachine #nodejsapi
0 notes
codeonedigest · 1 year ago
Video
youtube
Master Google Cloud: Deploying Node JS APIs on VM Full Video Link -     https://youtu.be/gxZ-iJNCbAM     Check out this new video on the CodeOneDigest YouTube channel! Learn how to create Virtual Machine in Google Cloud Platform, Setup Google Compute Engine VM & Deploy run JS APIs in VM. #codeonedigest @codeonedigest @googlecloud @GoogleCloud_IN @GoogleCloudTech @GoogleCompute @GooglecloudPL #googlecloud #googlecomputeengine #virtualmachine #nodejsapi
0 notes
lemacksmedia · 6 years ago
Text
Native Roaming Photography site launch
https://lemacksmedia.com/news/05/06/native-roaming-photography-site-launch/
Native Roaming Photography site launch
Native Roaming Photography site launches today!
Native Roaming Photography is a traveling (roaming) photography company led by Deadra Lynn of Kansas. Daedra Lynn specializes in western wedding, couples & family, as well as seniors' photo sessions, traveling the country to capture their moments! Lemacks Media is happy to have worked on this project for Daedra Lynn and her company Native Roaming!
Developed on WordPress and hosted from Lemacks Media on our Google Cloud Compute Engine Network and Content Delivery Network with Apache Mod Pagespeed today! The Native Roaming Photography website additionally uses Mailgun’s API for email automation and reliable deliverability, Google reCAPTCHA v3 to prevent spammers from submitting forms, a free SSL Certificate from Lemacks Media via Let’s Encrypt. Lemacks Media manages Native Roaming Photography Google Business listing, Bing Places, as well as Facebook Business while utilizing Google Analytics and Facebook Pixel. Additional integrations include leveraging Native Roaming’s current solutions from Honeybook for client bookings and Pic-Time for digital image hosting and printing.
Native Roaming Photography is also utilizing Google G Suite Basic Edition business solutions, sold and supported by Lemacks Media for email and more.
Check out the Native Roaming Photography website today by visiting https://nativeroaming.com .
Visit Lemacks Media https://lemacksmedia.com/news/05/06/native-roaming-photography-site-launch/ for updates and more content. #GSuite #Google #GoogleBusiness #GoogleCloud #GoogleCloudCompute #GoogleCloudComputeEngine #GoogleCloudPlatform #GoogleComputeEngine #GoogleSearch #Hosting #LemacksMedia #ModPageSpeed #PageSpeed #SearchEngineOptimization #Seo #SmallBusiness #SSL #SSLCertificate #SSLSecurity #WebDesign #WebDevelopment #WebHosting #WebsiteHosting
0 notes
lemacksmedia · 6 years ago
Text
Best Day Ever, A Dog Boutique site launch
https://lemacksmedia.com/news/04/20/best-day-ever-a-dog-boutique-site-launch/
Best Day Ever, A Dog Boutique site launch
The Best Day Ever, A Dog Boutique site launches today!
The Best Day Ever is a premium dog boutique in Wilmington, North Carolina, located on Racine Drive in Racine Commons, next door to Blue Surf Cafe! Lemacks Media is happy to have worked on this project for Best Day Ever as they start their new journey providing premium, healthy, and organic dog food to the Wilmington/Cape Fear community!
Developed on WordPress and hosted from Lemacks Media on our Google Cloud Compute Engine Network and Content Delivery Network with Apache Mod Pagespeed today! The Best Day Ever website additionally uses Mailgun’s API for email automation and reliable deliverability, Google reCAPTCHA v3 to prevent spammers from submitting forms, a free SSL Certificate from Lemacks Media via Let’s Encrypt, and OneSignal Push Notifications. Lemacks Media manages The Best Day Ever Google Business listing, Bing Places, as well as Facebook Business while utilizing Google Analytics and Facebook Pixel. WooCommerce integration with Square Point-of-Sale has been implemented on the site, and features including: dog food purchases, custom paracord leashes, subscription dog food delivery, and more will be coming soon!
Check out the Best Day Ever, A Dog Boutique website today by visiting https://bestdayeverdogboutique.com and visit them at Best Day Ever located at 250 Racine Dr #4, Wilmington, NC 28403!
Visit Lemacks Media https://lemacksmedia.com/news/04/20/best-day-ever-a-dog-boutique-site-launch/ for updates and more content. #Google #GoogleBusiness #GoogleCloud #GoogleCloudCompute #GoogleCloudComputeEngine #GoogleCloudPlatform #GoogleComputeEngine #GoogleSearch #LemacksMedia #ModPageSpeed #PageSpeed #SearchEngineOptimization #Security #Seo #SSLCertificate #WebDesign #WebDevelopment #WebHosting #WebsiteHosting
0 notes
lemacksmedia · 6 years ago
Text
Marine Mud Run site launch
https://lemacksmedia.com/news/01/14/marine-mud-run-site-launch/
Marine Mud Run site launch
Marine Mud Run site launches today!
The Marine Mud Run is a community charity support project presented by St. Joe Valley Marine Corps League Detachment 095. Established by Marine Corps 1stSGT (R) Sam Alameda in 2004, the Marine Mud Run event is held annually in South Bend/Mishawaka, IN to support the Michiana Toys for Tots drive. Marine Mud Run launches today on WordPress/WooCommerce with custom event ticketing, a custom URL shortening service, digital and email marketing solutions, and social media integrations, leveraging hosting on Lemacks Media's Google Cloud Compute Engine network! Check out the Marine Mud Run website, live today at https://marinemud.us!
Marine Mud Run site by Lemacks Media
Visit Lemacks Media https://lemacksmedia.com/news/01/14/marine-mud-run-site-launch/ for updates and more content. #Charity #GoogleCloudComputeEngine #GoogleCloudPlatform #GoogleComputeEngine #LemacksMedia #ModPageSpeed #Philanthropy #SearchEngineOptimization #Seo #SmallBusiness #WebDevelopment #WebHosting #WebsiteHosting
0 notes
lemacksmedia · 6 years ago
Text
Alexander York Fitness site launch
https://lemacksmedia.com/news/01/09/alexander-york-fitness-site-launch/
Alexander York Fitness site launch
Alexander York Fitness site launches today!
Alexander York Fitness, the business passion project of Entrepreneur Alex Croteau, launches today on WordPress/WooCommerce leveraging hosting on Lemacks Media’s Google Cloud Compute Engine network! Check out Alexander York Fitness’ website, live today at https://alexanderyorkfitness.com!
Alexander York Fitness homepage preview
Alexander York Fitness about preview
Alexander York Fitness online preview
Alexander York Fitness in-person preview
Alexander York Fitness store preview
Need a website for yourself or your business? Request a free quote today!
Visit Lemacks Media https://lemacksmedia.com/news/01/09/alexander-york-fitness-site-launch/ for updates and more content. #GoogleCloudComputeEngine #GoogleCloudPlatform #GoogleComputeEngine #LemacksMedia #ModPageSpeed #SearchEngineOptimization #Seo #SmallBusiness #WebDevelopment #WebHosting #WebsiteHosting
0 notes