#TechNews2024
maastechbd · 5 months ago
Databricks Raises $10bn in the Biggest US Venture Deal This Year
news-of-news · 6 months ago
New License Requirement for WhatsApp Group Admins: A Creeping Invasion of Your Privacy
In a shocking move, WhatsApp group administrators are now being forced to register and pay a fee to the government in order to continue managing their groups. This draconian regulation requires all group admins to obtain a license from the Post and Telecommunication Regulatory Authority, making it easier for authorities to track and control online communication.
What does this mean for users? Group admins will now face the burden of costly registration, and the government will have unprecedented control over private conversations and group activities. Whether you run a family chat, a community group, or a business forum, your every move could be monitored.
This new rule is part of a broader push to regulate and restrict digital spaces, raising concerns about privacy, freedom of expression, and the growing power of authorities to control what happens in private online spaces. Prepare for a digital world where your freedom to communicate may come with a price—and a license.
#WhatsAppAdminLicense #GovernmentControl #PrivacyInvasion #OnlineCensorship #SurveillanceState #DigitalFreedomUnderThreat #OnlineRegulation #BigBrotherIsWatching #WhatsAppUsers #TechNews2024 #WhatsAppBann #NOFN #NEWS_OF_NEWS #NOFN_News_Of_News
knowledge-wale · 7 months ago
Xiaomi has raised the delivery goal for its all-electric SU7 by 20%, a sign of the sedan's strong demand.
CNEV Post reports that during its earnings call, the new vehicle maker revealed an updated 2024 SU7 delivery target, raising its goal from 100,000 to 120,000 units. https://t.ly/F6qwK
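As a quick sanity check on the numbers, a 20% raise from 100,000 units does give the reported 120,000-unit goal (the helper below is illustrative, not from Xiaomi):

```python
def raised_target(base_units: int, increase_pct: float) -> int:
    """Apply a percentage increase to a delivery target."""
    return round(base_units * (1 + increase_pct / 100))

# 100,000 units raised by 20% gives the reported 120,000-unit goal.
print(raised_target(100_000, 20))  # → 120000
```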
govindhtech · 1 year ago
Zurich Instruments launches SHF+ quantum computing platform
Zurich Instruments has developed SHF+, a platform built specifically for quantum computing technology.
The SHF+ promises better performance for qubits, the fundamental units of quantum computing, which translates into higher precision when executing quantum algorithms. Lower noise levels mean less interference during measurements and better coherence, an essential qubit property.
SHF+ is aimed primarily at researchers building high-quality qubits and large-scale quantum processors. Zurich Instruments is working with leading laboratories to make sure the platform can handle the demands of developing quantum technology.
Zurich Instruments’ SHF+
Intended for use with quantum computing systems.
Seeks to enhance qubit performance, resulting in more precise quantum algorithms.
Accomplishes this through:
Lower noise levels: less interference during measurements.
Enhanced qubit coherence: an essential qubit property.
Targets researchers who are:
Creating high-quality qubits.
Building large-scale quantum computing systems.
Zurich Instruments is working with top labs to make sure the platform can adapt to the changing requirements of quantum computing.
Limits of the information currently available:
The SHF+'s full technical specifications have not been made public.
The platform's specific uses and features are not fully explained.
The Engineering Toolkit You Need for a Quantum Edge
The SHF+ product line sets a new benchmark for high-fidelity qubit control and readout. Thanks to a new analogue front end, the SHF+ instruments offer superior analogue performance for your lab, with an even higher signal-to-noise ratio (SNR) and lower phase noise. The redesigned front end ships with the SHFQC+ Qubit Controller, SHFSG+ Signal Generator, and SHFQA+ Quantum Analyzer, making Zurich Instruments the best option for pursuing quantum advantage.
Improved Fidelity
Thanks to a 10 dB better SNR, the signal outputs of the SHF+ products are among the best on the market. For qubit control, improved SNR is associated with a lower effective qubit temperature and higher gate fidelity; for qubit readout, higher SNR means less measurement-induced dephasing. Furthermore, the new fast output muting feature lets you silence the output channels between pulses for measurements on even the most sensitive qubits.
Significantly improved phase noise suppresses phase errors in the control of long-lived qubits. Zurich Instruments focused specifically on phase noise at low offset frequencies, because this has the greatest effect when pulses are spaced out in time.
When these crucial parameters are at their best, the fidelity of a quantum computing algorithm can be maximised.
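For readers unfamiliar with decibel figures such as the 10 dB SNR improvement quoted above, the conversion to linear ratios is standard: a gain of x dB is a power ratio of 10^(x/10) and an amplitude ratio of 10^(x/20). A generic sketch, not vendor-specific code:

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a decibel gain to a linear power ratio."""
    return 10 ** (db / 10)

def db_to_amplitude_ratio(db: float) -> float:
    """Convert a decibel gain to a linear amplitude ratio."""
    return 10 ** (db / 20)

# A 10 dB SNR improvement is a 10x power ratio (~3.16x in amplitude).
print(db_to_power_ratio(10))      # → 10.0
print(db_to_amplitude_ratio(10))  # ~3.162
```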
Faster Workflows
All SHF+ devices ship with LabOne Q, the software foundation for quantum computing that speeds up your progress in the lab. LabOne Q provides high-level coverage of the entire experimental workflow and handles all instrument synchronisation and programming. With LabOne Q's extensive example collection, you can spend more time on your quantum engineering discoveries and less time programming.
Tested in Premier Laboratories
Real-world qubit measurements are the best test available. To make sure the technical specifications of the new instruments translate into exceptional performance gains in the lab, Zurich Instruments cooperated with some of the top labs in the world, located in Switzerland, Korea, Germany, and the US. Would you also like to unlock this level of qubit control in your lab? Contact Zurich Instruments to arrange a demo!
Important SHF+ Series Platform Features:
Broad Range of Frequencies:
The wide frequency range covered by the SHF+ series is essential for applications needing extreme speed and precision.
Superior Signal Accuracy:
Because of its excellent signal fidelity, this platform is perfect for sensitive measurements in cutting-edge research domains like quantum computing.
Integrated Solutions:
The SHF+ series provides integrated solutions that streamline setup and cut down on the need for extra equipment by combining several functions into a single device.
Interface That’s Easy to Use:
The SHF+ series is user-friendly, with a straightforward interface that frees researchers and engineers to concentrate on their investigations rather than being distracted by complex instrumentation.
Support for Advanced Software:
The platform is enhanced by the capabilities of the hardware through the use of complex software that offers extensive control and analysis tools.
Uses for Quantum Computing:
The SHF+ series is ideal for creating and testing quantum computing systems thanks to its precise control and measurement capabilities.
High-Frequency Electronics:
It encourages the advancement of high-frequency electronics research and development, encompassing radar systems and communication technologies.
Scientific Investigations:
The platform is useful in many fields of scientific research where accurate and consistent measurements are essential.
FAQs
What is SHF+?
Zurich Instruments created the SHF+ platform especially for quantum computing technology.
What are the benefits of using SHF+?
It seeks to enhance the performance of qubits, the fundamental units of quantum computing. This may result in: lower noise levels, meaning reduced interference throughout the measurement process and increased precision; and improved qubit coherence, an essential qubit characteristic that improves quantum algorithm performance.
How does SHF+ compare to other quantum computing platforms?
It is challenging to directly compare SHF+ with other platforms in the absence of additional information about its functionality.
What is the cost of SHF+?
Its pricing details have not been made public.
When will SHF+ be commercially available?
Its scheduled release date has not yet been disclosed.
How will SHF+ contribute to the advancement of quantum computing?
It provides a platform that enhances qubit coherence and reduces noise, which could help scientists create quantum computers that are more dependable and potent.
Who is SHF+ for?
It targets researchers who are creating high-fidelity qubits and building large-scale quantum computing systems.
Note that the material in this FAQ is incomplete. It can be updated as Zurich Instruments releases further information, giving a more complete picture of SHF+.
Read more on Govindhtech.com
sdreatechprivatelimited · 1 year ago
Explore the limitless possibilities of IIoT automation! 🤩
thetechmaster · 1 year ago
t00l-xyz-ai-news · 6 months ago
dwibitech · 8 months ago
Top iOS Features: Why iPhone Users Love Them
siliconsphere · 1 year ago
📱 Explore the future of mobile tech! Our blog offers insights into the latest trends and innovations. #MobileTech2024
#MobileTech2024 #DigitalLandscape #TechTrends2024 #Innovation #SmartTechnology #futuretech #TechNews2024 #gadgets #techworld
govindhtech · 1 year ago
Ultimate Guide to Supply Chain Security Best Practices
Supply chain security
Contemporary software development frameworks and methodologies prioritise shared ownership among software stakeholders in addition to product delivery speed and dependability.
Secure Software Supply Chain
In addition to the concept of shifting left on security, many other DevOps approaches help produce software that is more secure. Practices that can enhance software security include increased stakeholder participation, work visibility, reproducible builds, automated testing, and gradual modifications. In fact, the Accelerate State of DevOps Report 2022 found that the use of CI/CD aids the implementation of security procedures, and that cultures with higher levels of trust are more likely to embrace techniques that fortify the software supply chain.
Modern development frameworks, however, do not give organisations the direction they need to comprehend software hazards, evaluate their capacity to identify and address threats, and put mitigations in place. They also frequently concentrate only on the code and internal organisational procedures, overlooking outside factors that may affect the integrity of applications. An attack that compromises an open-source software package, for instance, affects any code that depends on it, directly or indirectly. Such attacks on the software supply chain have grown significantly since 2020.
Software Supply Chain Security
A software supply chain is made up of all the code, personnel, procedures, and organisational structures that go into creating and delivering software, both internally and externally to your company. It consists of:
The tools you use to develop, build, package, install, and run your software, along with their dependencies.
Procedures and guidelines for testing, reviewing, monitoring, providing feedback, communicating, and approving access to the system.
Systems you trust to design, build, store, and run your dependencies and software.
Given the scope and intricacy of the software supply chain, there are many ways to make unauthorised changes to the software that you provide to your consumers. Several attack vectors are present throughout the software life cycle. While some attacks, like the one on the SolarWinds build system, are targeted, other risks are indirect and slip into the supply chain through carelessness or process flaws.
In December 2021, for instance, the Google Open Source Insights team noted in a blog post on the remote execution vulnerability in Apache log4j that more than 17,000 packages in Maven Central were impacted. The majority of the impacted packages did not depend directly on the vulnerable log4j-core package; they pulled it in through their dependencies.
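The transitive exposure described above is a reachability problem over a reverse-dependency graph: any package from which the vulnerable package is reachable through "depends on" edges is affected. A minimal sketch with hypothetical package names (the real Maven Central graph is vastly larger):

```python
from collections import deque

def affected_packages(dependents: dict[str, list[str]], vulnerable: str) -> set[str]:
    """Return every package that depends on `vulnerable`, directly or
    transitively. `dependents` maps each package to the packages that
    depend on it (a reverse-dependency graph)."""
    seen: set[str] = set()
    queue = deque([vulnerable])
    while queue:
        pkg = queue.popleft()
        for dep in dependents.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# app-b and app-d never declare log4j-core, yet both are affected.
graph = {
    "log4j-core": ["lib-a"],
    "lib-a": ["app-b", "lib-c"],
    "lib-c": ["app-d"],
}
print(affected_packages(graph, "log4j-core"))
```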
Security Supply Chain
Process flaws that allow harmful code to inadvertently enter the supply chain include the absence of security requirements for production deployment or code review. Similarly, if you package and deploy apps from systems outside your trusted build system and artefact repositories, or build from source code outside your trusted version control system, malicious code may enter your software.
The 2021 State of the Software Supply Chain report recorded more open source and supply chain attacks:
The number of software supply chain attacks increased 650% in 2021.
Open source packages were downloaded 73% more in 2021 than in 2020.
Popular open source projects tend to have the highest frequency of vulnerabilities.
Understanding your organisation's security posture is crucial for safeguarding the integrity of your software, as it determines your ability to identify, address, and resolve security risks.
Frameworks for assessments and compliance requirements
Government regulations that are particular to supply chain security have been created as a result of growing concerns about supply chain security. These policies include:
The United States Executive Orders on America's Supply Chains and on Improving the Nation's Cybersecurity.
The European Union's Network and Information Security 2 (NIS 2) Directive.
Organisations can evaluate their security posture and learn about threat mitigation with the aid of new frameworks that are being developed.
The open-source Supply-chain Levels for Software Artifacts (SLSA) framework, modelled on Google's internal software security procedures.
Frameworks created by government bodies, such as:
NIST's Secure Software Development Framework (SSDF) and the UK's Cyber Assessment Framework.
These frameworks structure well-established software security techniques to make it easier to identify security problems and determine how to reduce them.
On Google Cloud, safeguard your software supply chain
On Google Cloud, Software Delivery Shield offers a fully managed software supply chain security solution. It integrates best practices, including those in the NIST SSDF and SLSA frameworks, and you can adopt its components progressively in line with your needs and objectives.
Maintaining the security of the software supply chain is a challenging task for contemporary businesses. Improving overall security starts with securing the software supply chain, including build artefacts such as container images. Google Cloud is introducing software supply chain security analytics for your Google Kubernetes Engine workloads in the GKE Security Posture dashboard, giving you integrated, centralised visibility into your applications.
Google Cloud's integrated GKE Security Posture dashboard offers expert advice to help enhance the security posture of your GKE clusters and containerised workloads. It includes workload configuration checks and vulnerability insights, makes clear which workloads are affected by security issues, and offers practical advice on how to fix them.
GKE Security Posture
GKE security posture dashboard transparency
Within the GKE Security Posture dashboard, Google Cloud is introducing a new "Supply Chain" card to increase transparency and control over your software supply chain. This functionality is now in public preview and lets you visualise supply chain risks related to your GKE workloads.
This first release offers two important insights:
Out-of-date images: find any image that hasn't been updated in the last 30 days, which could expose you to new vulnerabilities.
Latest tag: get information about images that still use the generic "latest" tag, which impedes accurate version management and traceability.
Google Cloud's Binary Authorization service scans the images running in your GKE clusters. The "Supply Chain" card shows an overview of the issues, and the "Concerns" tab of the GKE Security Posture dashboard lets you drill down for more information.
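The two checks the card performs amount to a staleness test and a tag test per image. A minimal sketch of that logic (the `find_concerns` helper and image names are illustrative, not the dashboard's actual implementation):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=30)

def find_concerns(images, now=None):
    """Flag images not pushed in the last 30 days and images using the
    generic ':latest' tag. `images` is a list of
    (reference, last_pushed) pairs."""
    now = now or datetime.now(timezone.utc)
    concerns = []
    for ref, pushed in images:
        if now - pushed > STALE_AFTER:
            concerns.append((ref, "stale image"))
        if ref.rsplit(":", 1)[-1] == "latest":
            concerns.append((ref, "latest tag"))
    return concerns
```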
To view the supply chain concerns, take the following steps:
Open the Google Cloud console and navigate to the GKE Security Posture page. Note: If you haven’t already, you must enable Security Posture.
Select “Enable Binary Authorization API” from the “Supply Chain” card, and then click “Enable.”
Select “Enable” on the “Supply Chain” pop-up that appears next.
Within fifteen minutes, issues with “image freshness” or “latest tag” will show up on the “Supply Chain” card.
Select a concern to view its details. A list of the workloads that are impacted by the selected issue will appear on the “Affected Workloads” page.
Start now
As part of its continuous effort to improve workload security, Google Cloud is releasing this initial version of the GKE Security Posture supply chain insights. More advanced supply chain concerns are planned for the upcoming months, which will strengthen security and increase workload transparency.
Read more on Govindhtech.com
govindhtech · 1 year ago
How Mantle Simplifies Equity Management with Gemini
Mantle’s Equity Management
Every startup has a fantastic idea and a tonne of paperwork. When it comes to tracking equity, a company’s paperwork for options, shares, SAFEs, and other agreements is likely the most crucial layer. It is the one source of truth that auditors and attorneys will carefully review to ensure that the business operates ethically and that reported figures are correct.
The conventional connection between platforms and documents
However, managing documents is a laborious task that requires users to manually enter data from files into platforms. These platforms are frequently handled as independent entities and integrate poorly with other business systems. Founders know the pattern: to begin using any data-intensive, auditable platform correctly, you must first read your files, enter the information into the platform, and then "attach a file." Although this is a terrific way to arrange documentation next to data, traditional platforms never treat files as integrated parts of the user experience; instead, they are treated as adjacent datasets.
Mantle
Mantle is a next-generation equity platform for contemporary founders. By integrating platforms and documents, it overcomes these difficulties, decreasing the need for manual data entry, saving time, and reducing human error.
Mantle enhances the accuracy and efficiency of the platform by using Vertex AI to extract data from documents. This allows customers to concentrate on their primary business tasks, secure in the knowledge that their equity data is current and correct.
Using Gemini to handle documents as data
Image credit to Google Cloud
Vertex AI data extraction tools have reduced these operations from hours to minutes, substantially boosting your confidence that the data you view on the platform is correct and up to date.
Mantle uses Gemini to expedite these processes so that the important information in the documents is understood. Mantle can quickly transform the documents in your data room into a cap table for your onboarding and review procedures, notifying users when platform values deviate from document values.
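The deviation check described above amounts to a field-by-field comparison between platform records and document-extracted values. A minimal sketch of that idea (hypothetical field names; not Mantle's API):

```python
def find_deviations(platform: dict, document: dict) -> list[str]:
    """Report every field whose platform value differs from the value
    extracted from the source document."""
    deviations = []
    for field in sorted(set(platform) | set(document)):
        p, d = platform.get(field), document.get(field)
        if p != d:
            deviations.append(f"{field}: platform={p!r} document={d!r}")
    return deviations

# A 48-month vesting term on the platform vs 36 in the signed document:
print(find_deviations(
    {"shares": 10_000, "vesting_months": 48},
    {"shares": 10_000, "vesting_months": 36},
))
```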
Following onboarding, Mantle can "template" and produce documentation for standard workflows. Spreadsheets and worksheets can be uploaded, processed, and imported for particular cases such as mass updates. This means your worksheet doesn't need to be modified or recreated in a platform-specific format.
Vertex AI for data extraction
Image credit to Google Cloud
The Mantle process begins within seconds. It can categorise documents and examine them for particular information. Data such as purchase amounts, dates, and vesting schedules are extracted and processed in real time. A prompt library for producing, experimenting with, and testing prompts based on data definitions makes this practical and manageable at scale.
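To illustrate the kinds of fields involved, here is a toy extractor built on regular expressions. The patterns and field names are hypothetical, purely for illustration; per the text above, Mantle's actual pipeline uses Gemini prompts, not regexes:

```python
import re

# Illustrative patterns for a few equity-document fields.
FIELDS = {
    "purchase_amount": re.compile(r"\$([\d,]+(?:\.\d{2})?)"),
    "date": re.compile(r"\b(\d{4}-\d{2}-\d{2})\b"),
    "vesting_months": re.compile(r"vesting over (\d+) months", re.I),
}

def extract_fields(text: str) -> dict:
    """Pull the first match for each known field out of document text."""
    out = {}
    for name, pattern in FIELDS.items():
        m = pattern.search(text)
        if m:
            out[name] = m.group(1)
    return out

print(extract_fields(
    "Purchase price: $25,000.00 on 2023-04-01, vesting over 48 months."
))
```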
By always presenting document findings for inspection and verification, Mantle feels like an assistant helping you complete your work more quickly.
Gemini is central to Mantle's dedication to accuracy, privacy, and user-centric solutions, positioning Mantle as the platform of choice for entrepreneurs seeking dependable and efficient equity management, and saving time for founders, attorneys, auditors, and other stakeholders involved in business equity management.
Mantle and Google Cloud: better together with generative AI
Mantle uses Google Cloud to automate the time-consuming procedures of traditional document workflows and streamline equity administration. Two key Google technologies, Vertex AI and Gemini, are the foundation of this collaboration.
Vertex AI: Accurately extracting data
Vertex AI, Google Cloud's unified machine learning platform, is essential to Mantle's data extraction procedure. Utilising cutting-edge machine learning models, it intelligently pulls important data from corporate documents, including option and shareholder agreements. Doing away with manual data entry saves time and decreases the possibility of human error.
Gemini: Converting documents into understanding
Gemini, Google's most recent large language model, elevates document integration to a new level. It helps Mantle comprehend the context and meaning contained in documents, going beyond mere data extraction. As a result, Mantle can produce insights and automatically add pertinent data such as purchase amounts, dates, and vesting schedules to the platform.
A complementary pair
Together, Vertex AI and Gemini power Mantle's creative equity management strategy: Gemini helps convert the data that Vertex AI gathers into actionable insights. By removing the need to manually enter data and read documents, this combination frees users to concentrate on their primary business tasks.
The advantages of Google Cloud and Mantle
Using Google Cloud technology, Mantle provides a number of important advantages:
Enhanced productivity
Founders, solicitors, and auditors can save countless hours by automating data extraction and document processing.
Amazing accuracy
By removing the possibility of human error, Vertex AI and Gemini guarantee that the data in Mantle is correct and current.
Simple user interface
Managing equity data and gaining insights is made simple for users by Mantle’s user-friendly platform.
Mantle and Google Cloud are ushering in a new era of accuracy and efficiency in equity management. Using AI and machine learning, Mantle frees professionals and entrepreneurs to concentrate on what really matters.
Read more on Govindhtech.com
govindhtech · 1 year ago
Galaxy Technologies Gains Accessible Communication
Galaxy technologies
Mobile technology is a potent instrument, able to provide connection, creativity, entertainment, and knowledge. Because it is crucial to how people interact with friends, family, and the outside world, Samsung believes everyone should have fair access to it.
At Samsung, they create meaningful, human-centred innovations that empower people and offer life-enriching opportunities. By offering a range of Galaxy features, such as easy-to-use gestures, audio assistants, and vision upgrades, they aim to make technology accessible to individuals with diverse abilities.
In honour of Global Accessibility Awareness Day this year, let’s examine a few of these aspects and the ways in which they are still breaking down barriers.
Enhancing the Accessibility of the Galaxy Experience
Relumino Mode Samsung
Relumino Mode, which was developed within Samsung’s internal incubator, aims to enhance the quality of life for individuals with low vision by making text and images on screens more visible. Users can more clearly distinguish content on their smartphones thanks to this function, which also improves the screen’s contrast, brightness, and sharpness of image outlines and shapes.
Samsung worked with academics, engineers, programmers, testers, advisers with low vision, and other stakeholders to understand the needs and perspectives of Samsung users in order to create an inclusive visual display solution. Years of research and development went into creating Relumino Mode, which aims to help Samsung achieve its mission of “Screens for All” by enhancing the viewing experience for people with low vision. Starting with the Samsung Galaxy S24 series, this new capability is accessible.
Relumino Samsung
Relumino Mode makes it easier for people with low vision to interact with the world and consume the content that is most important to them, whether it’s following a ball during a sporting event or seeing tiny print on a news programme.
Turning on Relumino
Giving TalkBack Audio Descriptions
TalkBack, commonly referred to as Voice Assistant, is a feature that makes it easier for those with low vision to get the most out of their devices without having to look at their screens. It turns Galaxy tablets and smartphones into user-friendly audio interfaces. TalkBack can speak comments to users as they move around their devices, highlighting or selecting items like emails, notifications, and menus. TalkBack shortcuts can also be turned on for smoother navigation.
Getting about the screen is easy. TalkBack gives consumers convenient control over their devices with simple motions like Swipe Left, Double Tap, and Use Two Fingers To Scroll.
TalkBack
1) Open the Settings application and choose Accessibility. 2) Press TalkBack. 3) Toggle TalkBack on by tapping the switch. (Image credit to Samsung)
Utilising Live Captions to Bring Media to Life
For those who are hard of hearing, live captioning makes it easy to follow their preferred audio and video content. Real-time transcription of audio is done by this feature as it is played on the device. Users can enjoy audio messages, voicemails, podcasts, phone conversations, video calls, and videos more fully when they have live captioning enabled.
The following languages are supported by live captioning: Hindi, Italian, Japanese, French, German, and Spanish.
Turning on Live Captions
1) Go into Device Settings. 2) Press the Accessibility button. 3) Press the button for improved hearing. 4) Select "Live Captioning." (Image credit to Samsung)
Promoting Inclusivity with Wearables and Accessories
Enhancing the World Through Ambient Audio
The Ambient Sound setting on the Galaxy Buds 2 Pro adjusts background noise at five different amplification levels to meet a variety of demands and scenarios. Users can personalise their sound experience and hearing with this function. Users may engage in social conversations while being aware of their surroundings thanks to the Ambient Sound function, which can be used to enhance the volume of a conversation or increase road noise at a crosswalk.
Configuring the background noise
1) Place the two buds in your ears. 2) Launch the Galaxy Wearable app. 3) Press the earphone settings. 4) Press the Accessibility button. 5) Press the button for ambient sound.
An inventive method of using your Galaxy Watch without touching it is through Universal Gestures. Without having to push down or forcefully touch the screen, users can browse the Galaxy interface, access apps, scroll messages, and more with only four simple gestures: Make Fist, Make Fist Twice, Pinch, and Double Pinch.
Turning on Universal Gestures
1) Go into Device Settings. 2) Press the Accessibility button. 3) Navigate to the Interaction and Dexterity section. 4) Activate Universal Gestures.
Samsung has recently improved the accessibility features on its Galaxy devices. By removing obstacles to communication, these features aim to make the user experience more inclusive.
Here are a few examples of how Galaxy technologies are making communication more accessible:
Live Translate
During phone calls, this tool translates text and audio discussions in real time. This is an excellent tool for encouraging conversation amongst speakers of different languages and breaking down language barriers.
Interpreter
This function produces written translations for in-the-moment discussions. When having multilingual talks in person or virtually, this can be useful.
Circle to Search
Using motions like circling, highlighting, or scribbling, users may utilise this AI-powered functionality to search for content on their device from anywhere. This can be very useful for users who have limited dexterity.
These are just a handful of the ways Samsung Galaxy technologies are facilitating easier communication. For additional details on how to enable these capabilities, check the Accessibility settings on your Galaxy device.
Read more on Govindhtech.com
govindhtech · 1 year ago
SYCL Capable Multi-Layer Perceptrons for Intel GPU
SYCL Standouts
Intel is pleased to introduce the first SYCL implementation of fully-fused Multi-Layer Perceptrons on Intel GPUs that support Intel Xe Matrix Extensions (XMX) instructions, with an open-sourced repository for the implementation. The implementation offers many features, including high-performance computing, compatibility with PyTorch, adaptability to different neural network architectures, multi-resolution hash encoding, and cross-platform use.
The implementation beats the CUDA PyTorch version running on Nvidia’s H100 GPU by up to a factor of 19, and it beats the pre-made Intel Extension for PyTorch (IPEX) implementation running on the same Intel GPU by up to a factor of 30.
Multi-Layer Perceptron
Multi-Layer Perceptrons (MLPs) serve as the primary neural network architecture for many modern Machine Learning (ML) applications, such as representing the solution operator of partial differential equations, determining the density or colour function of objects in Neural Radiance Fields (NeRFs), and substituting neural ray tracing for classical ray tracing. MLPs are characterised by their fully connected layers, in which each neuron is connected to every neuron in the layers above and below. Because each neuron's output is independent of its neighbours in the same layer, MLPs are well suited to fully-fused implementations.
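As a reference point for what is being fused, a plain (unfused) MLP forward pass looks like this. This is a NumPy sketch of the arithmetic only, not Intel's SYCL kernels:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a fully connected MLP with ReLU hidden
    layers. Each neuron reads only the previous layer's outputs; this
    independence within a layer is what makes layer fusion possible."""
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        a = a @ W + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            a = np.maximum(a, 0.0)
    return a
```

In a fully-fused implementation, the loop body for all layers runs inside one GPU kernel, keeping intermediate activations in registers and shared local memory instead of writing them back to global memory between layers.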
Intel proudly presents the first SYCL implementation of fully-fused MLPs on Intel GPUs supporting Intel Xe Matrix Extensions (XMX) instructions, along with an open-sourced implementation repository. By fusing the operations in each layer of the MLP, this implementation minimises slow global memory accesses and maximises data reuse within the general register file and shared local memory. Using a roofline model, Intel shows that this leads to a notable rise in arithmetic intensity and better performance, particularly for inference. The study also demonstrates the effectiveness of Intel's SYCL implementation in three key domains: Neural Radiance Fields, Physics-Informed Machine Learning, and Image Compression.
Multi-Layer Perceptron
This work presents a SYCL implementation of Multi-Layer Perceptrons (MLPs) optimised for the Intel Data Centre GPU Max 1550. The approach maximises data reuse within the general register file and shared local memory by fusing operations in each layer of the MLP, minimising slow global memory accesses and increasing efficiency. Using a basic roofline model, Intel shows that this leads to a notable rise in arithmetic intensity and better performance, particularly for inference.
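The roofline argument can be stated in one line: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity (FLOPs per byte moved), so fusing layers, which raises arithmetic intensity, lifts a memory-bound kernel toward the compute roof. A generic sketch with made-up numbers, not the Max 1550's actual specifications:

```python
def attainable_gflops(arithmetic_intensity: float,
                      peak_gflops: float,
                      bandwidth_gb_s: float) -> float:
    """Roofline model: performance is capped either by peak compute or
    by memory bandwidth times arithmetic intensity (FLOPs per byte),
    whichever is lower."""
    return min(peak_gflops, bandwidth_gb_s * arithmetic_intensity)

# Raising arithmetic intensity from 2 to 50 FLOPs/byte moves a kernel
# from memory-bound (200 GFLOP/s) to compute-bound (1000 GFLOP/s).
print(attainable_gflops(2.0, 1000.0, 100.0))   # → 200.0
print(attainable_gflops(50.0, 1000.0, 100.0))  # → 1000.0
```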
Intel Extension for PyTorch
Comparing its method to a similar CUDA implementation for MLPs, Intel demonstrates that its implementation on the Intel Data Centre GPU beats the CUDA code on Nvidia’s H100 GPU by a factor of up to 2.84 in inference and 1.75 in training. The study also demonstrates the effectiveness of Intel’s SYCL implementation in three key domains: Neural Radiance Fields, Physics-Informed Machine Learning, and Image Compression. In all cases, Intel’s approach beats the CUDA PyTorch version on Nvidia’s H100 GPU by up to a factor of 19, and the off-the-shelf Intel Extension for PyTorch (IPEX) implementation on the same Intel GPU by up to a factor of 30.
SYCL Features
Intel’s approach offers several advantages. First is high-performance computation: efficient operation on Intel Data Centre GPUs enables high-throughput training and inference. The technique also provides Python bindings that integrate smoothly with the PyTorch environment, allowing users to include GPU-accelerated MLPs in PyTorch applications. It offers flexibility by supporting a range of neuron topologies and networks with numerous hidden layers, meeting various performance needs and use cases. It also incorporates Multi-Resolution Hash Encoding, which enables the network to handle high-frequency features efficiently, and is built to run on a variety of Intel GPUs, enhancing the framework’s adaptability and usability across platforms.
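Multi-resolution hash encoding can be sketched in a few lines. The toy below is a 1-D illustration of the general idea (finer bins per level, each bin index hashed into a small feature table); it does not reflect Intel’s actual API, and all parameters are invented:

```python
import random

def make_tables(levels, table_size, seed=0):
    """Random per-level feature tables (stand-ins for learned features)."""
    rng = random.Random(seed)
    return [[rng.uniform(-1.0, 1.0) for _ in range(table_size)]
            for _ in range(levels)]

def hash_encode(x, tables, base_res=2):
    """Toy 1-D multi-resolution hash encoding for x in [0, 1)."""
    feats = []
    for level, table in enumerate(tables):
        res = base_res * 2 ** level           # resolution doubles per level
        idx = int(x * res)                    # bin index at this level
        h = (idx * 2654435761) % len(table)   # multiplicative hash
        feats.append(table[h])
    return feats

tables = make_tables(levels=4, table_size=16)
print(len(hash_encode(0.37, tables)))  # 4: one feature per level
```

Fine levels change their bin index for small movements of x, which is what lets a small MLP downstream represent high-frequency detail.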
SYCL Achievement
Intel’s fully-fused MLP implementation improves the results of a number of popular AI tasks. To illustrate these performance advantages, Intel compared its SYCL implementation on an Intel Data Centre GPU Max 1550 with a CUDA implementation on an Nvidia H100 GPU, and with PyTorch using both the CUDA backend and the Intel Extension for PyTorch (IPEX).
The results demonstrate the success of Intel’s approach: in Intel’s tests, the implementation outperforms the PyTorch implementations by up to a factor of 30, and outperforms an analogous CUDA implementation for MLPs of width 64 by up to a factor of 2.84 in inference and 1.75 in training.
Intel also demonstrated the effectiveness of its solution in three key domains: Neural Radiance Fields (NeRF), Image Compression, and Physics-Informed Machine Learning. Intel’s method showed significant gains in all three, with speedups of up to 30 times over standard PyTorch implementations and up to 2.84 times over highly optimised CUDA versions.
Considering the Future
Intel intends to further optimise its approach in the future, with a particular emphasis on using registers more effectively to reduce stalls. Furthermore, by enabling the loads of multiple weight matrices into shared local memory (SLM) and lowering its utilisation, Intel may be able to reduce the number of barriers required. Additional areas of focus will be increasing occupancy for small batch sizes and optimising the merging of the final matrix products into the backward pass.
In addition to further performance optimisation, Intel intends to investigate using its ESIMD SYCL extension for the implementation, and to generalise the library to other data types and wider network widths.
Read more on Govindhtech.com
0 notes
govindhtech · 1 year ago
Text
Nodeshift Offers Affordable AI Cloud with Intel Tiber Cloud!
A San Francisco-based firm called Nodeshift is making waves in the industry with its ground-breaking cloud-based solutions at a time when AI development is often associated with exorbitant costs. Nodeshift is democratising the development of AI by providing a worldwide cloud network for training and running AI models at a fraction of the cost of market leaders like Google Cloud and Microsoft Azure.
Nodeshift
Terraform can help automate deployments
Thanks to its integration with Terraform, you can automate the deployment of GPU and compute virtual machines, as well as storage. Your existing AWS, GCP, and Azure expertise transfers directly.
Steps x NodeShift on GitHub
Use the GitHub Actions pipeline to create resources on NodeShift and launch your code straight onto GPU and compute virtual machines.
What People Say About NodeShift
With its new decentralised paradigm, NodeShift is poised to reshape cloud services, altering the dynamics of the market and opening up new avenues for innovation.
The NodeShift founders have been chosen to participate in Intel’s startup accelerators, Intel Ignite and Intel Liftoff. Their goal is to build a strong foundation for growth by collaborating with seasoned business owners, mentors, and engineers. This will help expedite the advancement of decentralisation technology and expand NodeShift’s operations worldwide.
The NodeShift platform makes it simple for developers to build business apps in the cloud securely and at significant cost savings. It leverages technical breakthroughs by applying cryptography to distributed computing.
With years of experience implementing Palantir’s business SaaS platform in the cloud, KestrelOx1 is excited to support the NodeShift project and team as they push the boundaries of what is possible in the cloud and ensure data security.
Nodeshift’s participation in the Intel Liftoff Programme is transformative. This strategic partnership gives the company ongoing access to state-of-the-art hardware and software, allowing it to continually improve its cloud services.
Intel Liftoff
“The close collaboration with the Intel Liftoff Programme has significantly improved and accelerated our own development and success in the market.” The co-founder of Nodeshift, Mihai Mărcuță.
Developing and training cloud-based AI systems can be extremely expensive for small and medium-sized businesses. Leading suppliers’ exorbitant prices frequently strain budgets to the breaking point, inhibiting innovation. With savings of up to 80% over popular cloud services from Google, AWS, and Azure, Nodeshift takes on this challenge head-on.
The Cost-Cutting Method of Nodeshift
Nodeshift’s creative use of existing, underutilised processing and storage resources is the key to its remarkable cost effectiveness. Instead of building its own data centres, it uses a network of geographically dispersed virtual machines purchased from both large and small telecom providers. This model improves scalability and flexibility in addition to cutting costs. At Nodeshift, security and data protection come first.
Because Nodeshift is SOC 2 certified and follows strict standards such as GDPR, it can offer a high level of data security and privacy.
Strengthened by the Tiber Developer Cloud at Intel
As a participant in the Intel Liftoff Programme, Nodeshift has unmatched access to the Intel Tiber Developer Cloud, where cutting-edge hardware and the software tools needed for AI development are readily available. This collaboration lets Nodeshift engineers thoroughly test and improve their products, ensuring they remain at the forefront of technology.
Access to the newest AI accelerator technologies, such as Intel Gaudi 2 and Gaudi 3, and modern CPU security features, such as Intel SGX, helps Nodeshift ensure strong security and performance for AI applications. By drawing on Intel’s expertise to develop its solutions further, it keeps them a step ahead of the competition.
Gaining More Recognition and Trustworthiness
Apart from the technology assistance, Nodeshift benefits from Intel’s powerful startup-acceleration machinery. In addition to increased media presence, this means introductions to mentors, prospective clients, and key industry figures. A presence at Intel events makes Nodeshift more visible in the market and improves its standing with investors, employees, and clients.
Nodeshift’s story demonstrates how creative thinking and smart alliances can revolutionise the tech sector. By combining Intel’s resources with its distinctive approach to cloud infrastructure, Nodeshift is opening AI development to a wider audience at a lower cost and paving the way for new kinds of technical breakthroughs.
Intel Ignite
“Decentralisation will be the foundation of NodeShift’s new paradigm, which will redefine cloud services and alter the dynamics of the market. This will present new opportunities for innovation.” The NodeShift founders were selected for Intel’s startup accelerators, Intel Ignite and Intel Liftoff, where they collaborate with seasoned business owners, mentors, and engineers to build a strong foundation for growth. As a result, NodeShift will be able to expand its operations internationally and accelerate the advancement of decentralised technologies.
Read more on Govindhtech.com
0 notes
govindhtech · 1 year ago
Text
Cloud maturity models for efficiency and excellence
Cloud maturity models
Global business leaders ask their teams, “Are we using the cloud effectively?” and “Are we spending too much on cloud computing?” Managing cloud cost is a valid concern: 82% of respondents to a 2023 Statista poll named it as a major challenge.
Security, governance, and resource and skill shortages also top respondents’ concerns. Cloud maturity models can help organisations overcome these concerns, establish their cloud strategy, and confidently utilise the cloud.
Macro- and service-level cloud maturity models (CMMs) assess an organization’s readiness for cloud adoption. They evaluate how well an organisation uses cloud services and resources, and how it can increase security and efficiency.
Cloud migration: why?
Real-time analytics, microservices, and APIs, which benefit from cloud computing’s flexibility and scalability, are pressuring organisations to transition to the cloud. Cloud skills and maturity are crucial to digital transformation, and cloud adoption has huge potential. According to a Deloitte report, 99% of cloud leaders consider the cloud as the foundation of their digital strategy. McKinsey sees a USD 3 trillion opportunity.
A successful approach requires a comprehensive cloud maturity assessment. This assessment determines what measures the organisation must take to fully realise cloud benefits, and identifies current limitations such as legacy tech that needs upgrading and workflows that need altering. Cloud maturity models pair well with this assessment.
Organisations must choose a cloud maturity model that suits their needs from the many available. Many organisations start with a three-phase cloud maturity evaluation employing cloud adoption, cloud security, and cloud-native models.
Cloud adoption maturity model
This approach measures an organization’s overall cloud maturity. It assesses the organisation’s technology, internal knowledge, culture, DevOps team, cloud migration initiatives, and more. Because these stages are linear, an organisation must complete one step before moving on to the next.
Legacy: Early adopters lack cloud-ready applications, workloads, services, and infrastructure.
Ad hoc: Next is ad hoc maturity, which likely means the organisation has started using cloud technologies like IaaS, the lowest-level form of cloud resource control. IaaS users pay as they go for on-demand computing, network, and storage services over the internet.
Repeatable: Companies have increased cloud spending at this point. This may entail creating a Cloud Centre of Excellence (CCoE) and assessing the scalability of initial cloud investments. Most crucially, the company has automated the migration of apps, workstreams, and data to the cloud.
Optimised: Cloud environments perform efficiently, and every new use case follows the organization’s established foundation.
Advanced cloud: Most of the company’s workstreams are now cloud-based. Everything functions smoothly and stakeholders know the cloud can drive company goals.
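Because the stages are linear, determining an organisation’s level amounts to finding the last stage in an unbroken run of completed stages. A minimal sketch (stage names taken from the list above, the helper logic is assumed):

```python
STAGES = ["legacy", "ad hoc", "repeatable", "optimised", "advanced"]

def current_stage(completed):
    """Return the highest stage reached without skipping any stage.

    completed: set of stage names the organisation has finished;
    'legacy' is the implicit starting point.
    """
    level = "legacy"
    for stage in STAGES[1:]:
        if stage not in completed:
            break          # stages are linear: a gap stops progression
        level = stage
    return level

print(current_stage({"ad hoc", "repeatable"}))  # repeatable
```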
Cloud security maturity model
Any company moving to the cloud must optimise security. With strong rules and postures, cloud providers may make the cloud more secure than on-premises data centres. Prioritising cloud security is critical because public cloud breaches can take months to fix and have major financial and reputational ramifications.
Cloud service providers (CSPs)
Cloud service providers (CSPs) and clients share responsibility for security. CSPs certify the security of their infrastructure, but clients building on it can introduce misconfigurations or other vulnerabilities. CSPs and clients must collaborate to secure environments.
IBM is a member of the Cloud Security Alliance, which has a popular cloud security maturity model. Organisations aiming to improve cloud security can use the approach.
Organisations may not need the full model, but they can use its components. Its five steps focus on the organization’s level of security automation.
No automation: Security personnel manually find and fix issues using dashboards.
Simple SecOps comprises infrastructure-as-code (IaC) deployments and account federation.
This phase adds more federation and multi-factor authentication (MFA), although most automation is still run manually.
Guardrails: The organisation expands its automation library into numerous account guardrails, that is, cloud governance policies applied across accounts.
Automation everywhere: Everything is integrated into IaC and MFA, and federation is widespread.
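In code, a guardrail is simply a policy evaluated automatically against account state. The sketch below shows the shape of such a check; the field names and policies are invented for illustration and are not from the CSA model:

```python
def check_guardrails(account):
    """Evaluate toy governance policies against an account description."""
    findings = []
    if not account.get("mfa_enabled", False):
        findings.append("MFA is not enforced")
    if account.get("public_buckets", 0) > 0:
        findings.append("publicly accessible storage buckets found")
    if not account.get("managed_by_iac", False):
        findings.append("resources not managed as infrastructure-as-code")
    return findings

print(check_guardrails({"mfa_enabled": True, "public_buckets": 2}))
```

At the “automation everywhere” level, checks like these run continuously across every account rather than being triggered by hand from a dashboard.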
Cloud native maturity models
While the first two maturity models assess readiness, the cloud-native maturity model (CNMM) assesses an organization’s capacity to build cloud-native apps and workloads. According to Deloitte, 87% of cloud leaders support cloud-native development.
Before using this model, corporate executives should understand their aims, like with other models. The organization’s maturity level will depend on these goals. Business executives must also evaluate their enterprise apps to determine the best cloud migration plan.
Most “lifted and shifted” apps can run in the cloud but may not benefit from it fully. Cloud-mature companies generally choose cloud-native apps for their most crucial tools and services.
The Cloud Native Computing Foundation proposed a model
Level 1: Build: An organisation is in pre-production, building a proof-of-concept (POC) application with limited organisational support. Business leaders understand cloud native’s benefits, and team members have basic technical knowledge despite being new to it.
Level 2: Teams invest in training and new skills, and subject-matter experts emerge. Developing a DevOps approach brings cloud engineers and developers together. This organisational transformation creates new teams, agile project groups, and feedback and testing loops.
Level 3: Scale: A cloud-native strategy is the goal. Stakeholder buy-in, competency, and cloud-native focus are all expanding. The company is implementing shift-left policies and training all personnel on security. This level has strong centralisation and clear roles, although bottlenecks might slow the process.
Level 4: Improve: All services use the cloud. Leadership and the team prioritise cloud cost optimisation. The organisation seeks ways to improve and streamline procedures. Self-service tools are pushing cloud expertise from developers to all employees. Multiple groups use Kubernetes to deliver and manage containerised apps. A solid base allows decentralisation to begin.
Level 5: Optimize: The business trusts the IT team and all employees are aware of the cloud-native environment. Self-sufficient teams own services. DevOps and DevSecOps are skilled, operational, and scalable. Teams are comfortable experimenting and using data to make business decisions. Accurate data procedures improve optimisation and enable FinOps adoption. The company has a versatile foundation, easy operations, and met original targets.
What benefits my company?
The benefits and extent of a cloud migration depend on an organization’s cloud maturity level. Not every organisation will, or wants to, reach the top maturity level in all three models. However, Gartner predicts that 70% of workloads will be in the cloud by 2024, making it likely that organisations without cloud maturity will struggle to compete.
The cloud benefits an organisation as its cloud infrastructure, security, and cloud-native application posture mature. An organisation can maximise cloud benefits and efficiency by assessing current cloud capabilities and developing a maturity strategy.
Using IBM to advance cloud maturity
Using IBM Instana Observability, organisations can advance toward cloud maturity and ensure smooth application and infrastructure migration during the planning, migration, and execution phases. Instana helps organisations mature their cloud environments and processes by creating performance baselines, right-sizing infrastructure, finding bottlenecks, and monitoring the end-user experience.
Digital transformation requires more than transferring apps, infrastructure, and services to the cloud. To discover possible issues that could affect cloud resources and application performance, organisations require a rigorous cloud monitoring strategy that tracks key performance metrics including response time, resource utilisation, and error rates.
Instana gives complete, real-time cloud environment visibility. IT teams may proactively monitor and manage AWS, Microsoft Azure, and Google Cloud Platform cloud resources.
The IBM Turbonomic platform optimises compute, storage, and network resources across stacks to reduce overprovisioning and boost ROI. The Turbonomic platform’s AI-powered automation can reduce costs and maintain performance with automatic, ongoing cloud optimisation for cloud-first, hybrid, and multicloud strategies.
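Right-sizing of the kind described above can be reduced to a simple heuristic: scale resource allocation so observed utilisation lands near a target. The sketch below is an illustrative stand-in, not the Turbonomic algorithm:

```python
import math

def rightsize(cpu_util, current_vcpus, target_util=0.6):
    """Recommend a vCPU count that brings average utilisation
    close to target_util, never going below one vCPU."""
    needed = math.ceil(current_vcpus * cpu_util / target_util)
    return max(1, needed)

# A 16-vCPU VM averaging 15% utilisation is heavily overprovisioned.
print(rightsize(0.15, 16))  # 4
```

Real platforms fold in memory, storage, and network signals, plus performance constraints, rather than a single CPU average.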
Read more on Govindhtech.com
0 notes
govindhtech · 1 year ago
Text
Guided SAP S/4HANA Deploy Automation Reduces Complexity
SAP S/4HANA
Google Cloud is pleased to announce that Guided Deployment Automation for SAP on Google Cloud is now generally available. This new Workload Manager feature expedites the deployment of SAP workloads on Google Cloud by letting users define what they want to deploy. Deployment automation, best practices, and expert advice are then integrated directly into the console.
SAP S/4HANA Cloud
When deploying SAP S/4HANA on Google Cloud, clients gain several major benefits from this service:
Efficiency
End-to-end automation streamlines the laborious, error-prone deployment process by automating infrastructure provisioning, operating system configuration, high-availability cluster setup, and installation of the selected application.
Reliability
Built-in checks and safeguards help ensure you automatically adhere to best practices and the most recent architecture guidelines from both SAP and Google Cloud, without requiring you to manually work through countless pages of documentation.
Flexibility
Choose to deploy with “one click” straight from the console, or generate and download the corresponding Terraform and Ansible files to add to, or further customise, existing deployment pipelines.
Customers are responsible for paying for any underlying resources or other services created and used in the deployment, such as discs and virtual machine instances, but this deployment service is free to use.
How does it operate?
Terraform Cloud
After you select from the list of supported SAP products and versions, the guided interface helps you customise and configure your workload. The corresponding infrastructure as code (Terraform, Ansible) is generated based on your selections. When you deploy straight from the console, a Cloud Build job is created to run the Terraform and provision the necessary resources in your project. Although the exact resources created depend on your configuration, the following diagram depicts the high-level architecture for a distributed high-availability deployment. Further details about the resources created during deployment are included in the documentation.
Apart from the resources needed for your SAP workload, a temporary virtual machine instance is also provisioned to coordinate and execute Ansible. Ansible completes the remaining steps in the deployment process, handling the following tasks:
Configuration of the operating system
HANA installation and first backup
Setting up OS Clusters
HANA System Replication (HSR) Enablement
S/4HANA installation
Executing the initial database load
Installation of the necessary agents (Google Cloud’s SAP Agent, SAP Host Agent)
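The post-provisioning phase is essentially an ordered pipeline: each task above must succeed before the next begins. A minimal sketch of that control flow (step names abbreviated, logic assumed; the real work is performed by Ansible):

```python
def run_pipeline(steps):
    """Run (name, action) steps in order; abort on the first failure."""
    completed = []
    for name, action in steps:
        if not action():
            raise RuntimeError(f"step failed: {name}")
        completed.append(name)
    return completed

steps = [
    ("configure OS", lambda: True),
    ("install HANA", lambda: True),
    ("enable HSR", lambda: True),
    ("install S/4HANA", lambda: True),
]
print(run_pipeline(steps))
```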
S/4HANA Cloud
Implement a SAP S/4HANA workload
Before starting, make sure you have completed the prerequisites for using Workload Manager’s deployment service. Deploying SAP workloads requires a few additional steps, such as transferring the necessary SAP installation files to Cloud Storage.
Then, in the console, go to Workload Manager Deployment, found via the search bar at the top or nested under Compute in the left navigation pane. To get started, click the Create SAP Deployment option at the top.
Deployment Basics page
This page collects basic deployment information and lets you select among supported applications and architectures. The choices you make here pre-populate some of the subsequent inputs and help determine which fields appear on the following tabs.
Location & Networking Tab
On this tab, specify the region, zone, and network to use, and where the system should be installed. If you have set up a Shared VPC, you can also select the network from the host project.
External internet access is required during the deployment process. If the selected network does not currently have access, you can choose to create external IP addresses. Lastly, you can pick an existing DNS or have a new one created automatically.
Database Tab
Next, configure the HANA database layer. Here you can enter the HANA SID and modify the instance number and virtual machine names. Secret Manager is fully integrated to safely store any credentials used during deployment. You can select from a list of approved public operating systems or pick your own custom image.
Next, select the storage type you want from the list of certified HANA machine shapes. Disc volumes and sizes are computed automatically using the best practices for the size you have selected.
Application Page
On the Application tab, you repeat a similar procedure, entering data for the application layer and central services. You can make separate selections for the ASCS and the application servers, for example choosing different operating systems or machine sizes for each.
You will select the certified machine shapes from the list and indicate the number of application servers that should be placed in each zone.
Preview Tab
To avoid errors later in the process, this final page not only summarises your choices but also performs additional proactive checks, for example on quotas. You will also see a list of the APIs and services required for the deployment.
Click Create at the bottom of the page to initiate the deployment in the console. Alternatively, click Download Equivalent Terraform to generate and download the equivalent infrastructure as code, which you can deploy in your existing automation pipeline or customise further.
Clicking Create returns you to the deployment dashboard. You will receive a message when the deployment completes, which can take two to three hours. You can monitor the status and see real-time logs by selecting the deployment and then the links for the Terraform or Ansible logs on the next screen.
Post-deployment tasks
Following deployment, you can use standard tooling like HANA Studio or the SAP GUI to connect to your SAP S/4HANA system by entering the credentials you provided during configuration.
Read more on Govindhtech.com
0 notes