#generativeAImodel
govindhtech · 6 months ago
NVIDIA Earth-2 NIM Microservices Exposed For Faster Forecast
Faster Predictions: NVIDIA Introduces Earth-2 NIM Microservices to Deliver Higher-Resolution Simulations 500x Faster. Weather technology firms can now create and deploy AI models for snow, ice, and hail predictions with the new NVIDIA NIM microservices.
At SC24, NVIDIA unveiled two new NIM microservices that can deliver climate change modeling simulation results in NVIDIA Earth-2 up to 500x faster.
NVIDIA Earth-2 NIM microservices
High-resolution, AI-enhanced, accelerated climate and weather models with interactive visualization.
Climate Digital Twin Cloud Platform
NVIDIA Earth-2 simulates and visualizes weather and climate predictions at a global scale with previously unheard-of speed and accuracy by combining the capabilities of artificial intelligence (AI), GPU acceleration, physical models, and computer graphics. The platform is made up of reference implementations and microservices for simulation, visualization, and artificial intelligence.
Users may employ AI-accelerated models to optimize and simulate real-world climate and weather outcomes with NVIDIA NIM microservices for Earth-2.
The Development Platform for Climate Science
GPU-Optimized and Accelerated Climate Simulation
To increase simulated days per day (SDPD), the Earth-2 development platform is tuned for GPU-accelerated numerical climate simulations at the km-scale.
Data Federation and Interactive Weather Visualization
Extremely large-scale, high-fidelity, interactive projections of global weather conditions are made possible by NVIDIA Omniverse. A data federation engine built into Omniverse Nucleus provides transparent data access across external databases and real-time feeds.
A digital twin platform called Earth-2 is used to model and visualize climate and weather phenomena. To help with forecasting extreme weather occurrences, the new NIM microservices give climate technology application developers cutting-edge generative AI-driven capabilities.
While maintaining data security, NVIDIA NIM microservices aid in the quick deployment of foundation models.
The frequency of extreme weather events is rising, which raises questions about readiness and safety for disasters as well as potential financial effects.
Nearly $62 billion in natural disaster insurance losses occurred in the first half of this year; according to Bloomberg, that is 70% greater than the 10-year average.
The CorrDiff NIM and FourCastNet NIM microservices are being made available by NVIDIA to assist weather technology firms in producing more accurate, higher-resolution forecasts more rapidly. Compared to conventional systems, the NIM microservices also offer far greater energy efficiency.
New CorrDiff NIM Microservices for Higher-Resolution Modeling
CorrDiff is NVIDIA's generative AI model for super resolution at the kilometer scale. At GTC 2024, it demonstrated its potential by super-resolving typhoons over Taiwan. CorrDiff was trained on numerical simulations from the Weather Research and Forecasting (WRF) model to produce weather patterns at 12x higher resolution.
Meteorologists and companies depend on high-resolution forecasts that resolve weather to within a few kilometers. The insurance and reinsurance sectors, for example, depend on detailed meteorological data to evaluate risk profiles. However, achieving this level of precision with conventional numerical weather prediction models like WRF or High-Resolution Rapid Refresh is frequently too expensive and time-consuming to be feasible.
Compared to conventional high-resolution numerical weather prediction on CPUs, the CorrDiff NIM microservice is 500 times faster and 10,000 times more energy-efficient. CorrDiff also now operates at 300x greater scale: it super-resolves (enhances the resolution of) weather data for the whole United States and forecasts precipitation events, such as snow, ice, and hail, with kilometer-scale visibility.
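NIM microservices ship as containers that expose HTTP inference endpoints. As a rough illustration of what invoking such a service could look like, here is a Python sketch; the endpoint path, port, and payload fields are hypothetical placeholders, not the documented Earth-2 CorrDiff API.

```python
import requests

# Hypothetical endpoint and payload shape -- consult the Earth-2 NIM
# documentation for the actual API contract.
CORRDIFF_URL = "http://localhost:8000/v1/infer"

payload = {
    "input_file": "gfs_analysis_2024070100.nc",  # coarse input state (placeholder name)
    "variables": ["precipitation", "wind_10m"],  # fields to downscale
    "downscale_factor": 12,                      # 12x resolution gain, per the article
}

resp = requests.post(CORRDIFF_URL, json=payload, timeout=600)
resp.raise_for_status()
with open("corrdiff_output.nc", "wb") as f:
    f.write(resp.content)  # high-resolution forecast fields
```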
Enabling Large Sets of Forecasts With New FourCastNet NIM Microservice
High-resolution predictions are not necessary for all use cases; some applications benefit more from larger forecast ensembles at coarser resolution. Due to computational limitations, state-of-the-art numerical models such as IFS and GFS can provide only 50 and 20 ensemble members, respectively.
The FourCastNet NIM microservice, available now, provides global, medium-range coarse forecasts. Starting from the initial assimilated state supplied by operational weather centers such as the National Oceanic and Atmospheric Administration or the European Centre for Medium-Range Weather Forecasts, providers can generate forecasts for the following two weeks 5,000 times faster than with conventional numerical weather models.
By estimating extreme weather risks across much larger ensembles, climate tech providers can now anticipate the likelihood of low-probability events that current computational processes miss.
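Ensemble forecasting of this kind works by running the model many times from slightly perturbed initial conditions and reading probabilities off the spread of outcomes. A schematic NumPy sketch of the idea follows; the perturbation scale and the model call are illustrative assumptions, not FourCastNet's actual interface.

```python
import numpy as np

def run_model(state: np.ndarray) -> np.ndarray:
    """Placeholder for one FourCastNet-style autoregressive forecast step."""
    return state  # the real model would advance the atmospheric state in time

def ensemble_forecast(initial_state: np.ndarray, members: int = 50, noise: float = 1e-3):
    """Perturb one assimilated initial state into an ensemble and run each member."""
    rng = np.random.default_rng(seed=0)
    forecasts = []
    for _ in range(members):
        perturbed = initial_state + noise * rng.standard_normal(initial_state.shape)
        forecasts.append(run_model(perturbed))
    return np.stack(forecasts)  # spread across members approximates forecast uncertainty

ens = ensemble_forecast(np.zeros((721, 1440)))  # e.g. a 0.25-degree global grid
print(ens.shape)  # (50, 721, 1440)
```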
Read more on govindhtech.com
otiskeene · 1 year ago
TELUS International Launches Experts Engine To Provide Highly Accurate, Predictably Faster And Cost-optimized Training Data For Generative AI Models
Leader in digital customer experience solutions TELUS International introduces Experts Engine, a managed sourcing solution that pairs generative AI (GenAI) tasks with human expertise. This breakthrough expedites the production of superior training datasets for sophisticated models, including large language models (LLMs). As a component of TELUS International's AI Data Platform, it integrates management and job execution capabilities and is supported by a machine learning model that profiles more than a million contributors worldwide.
Within TELUS International's AI Community, Experts Engine optimizes task allocation, ensuring that the appropriate experts undertake certain tasks based on their domain knowledge and task complexity. This approach preserves quality control workflows for various GenAI activities, eliminates over-allocation of expertise, and improves efficiency by tailoring the matching algorithm.
The importance of training data in enhancing model quality and safety is emphasized by Siddharth Mall, vice president of product at AI Data Solutions. The introduction of Experts Engine shows TELUS International's dedication to resolving customer issues by fusing human insight for dependability and automation for efficiency, promoting objective and inclusive content across markets and applications.
Read More - https://www.techdogs.com/tech-news/business-wire/telus-international-launches-experts-engine-to-provide-highly-accurate-predictably-faster-and-cost-optimized-training-data-for-generative-ai-models
govindhtech · 6 months ago
RIKEN And Cleveland Clinic Use Qiskit For Quantum Innovation
IBM Introduces Its Most Cutting-Edge Quantum Computers, Advancing Quantum Advantage and New Scientific Value. Qiskit, the world's most performant quantum software, can accurately execute certain circuits with up to 5,000 two-qubit operations on IBM quantum computers. Rensselaer Polytechnic Institute advances quantum-centric supercomputing, while RIKEN and Cleveland Clinic use Qiskit to combine quantum and classical resources to investigate novel, scientifically important challenges.
Qiskit services from IBM, Algorithmiq, Qedma, QunaSys, Q-CTRL, and Multiverse Computing may boost performance while making it easier to create next-generation algorithms.
IBM today revealed quantum hardware and software advances to run complicated algorithms on IBM quantum computers with unprecedented speed, accuracy, and scalability at its first-ever IBM Quantum Developer Conference.
Qiskit may now be used to precisely execute certain classes of quantum circuits with up to 5,000 two-qubit gate operations on IBM Quantum Heron, the company’s most powerful quantum processor to date and available in IBM’s worldwide quantum data centers. These features now allow users to further investigate how quantum computers might address scientific issues in a variety of fields, including high-energy physics, chemistry, materials science, and life sciences.
As IBM and its partners get closer to quantum advantage and IBM’s cutting-edge, error-corrected system, which is scheduled for 2029, this further advances the era of quantum utility and continues to meet milestones on IBM’s Quantum Development Roadmap.
Certain mirrored kicked Ising quantum circuits with up to 5,000 gates may be executed with the combined enhancements of IBM Heron and Qiskit. This is about twice as many gates as IBM’s 2023 demonstration of quantum usefulness. Through this effort, IBM’s quantum computers’ performance is significantly enhanced beyond what can be achieved using brute-force conventional simulation techniques.
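For a sense of what such a circuit looks like, here is a hedged Qiskit sketch of a mirrored kicked Ising circuit: alternating layers of two-qubit RZZ interactions and single-qubit RX "kicks", followed by the inverse of the whole sequence so the ideal output is verifiable. The qubit count, depth, and angles below are arbitrary illustrations, not the parameters of IBM's experiment.

```python
from qiskit import QuantumCircuit

def kicked_ising_layer(qc: QuantumCircuit, theta_zz: float, theta_x: float) -> None:
    """One Trotter step: brickwork RZZ couplings plus a transverse-field RX kick."""
    n = qc.num_qubits
    for start in (0, 1):  # even-pair layer, then odd-pair layer
        for i in range(start, n - 1, 2):
            qc.rzz(theta_zz, i, i + 1)
    qc.rx(theta_x, range(n))

n_qubits, n_steps = 100, 25          # illustrative values only
qc = QuantumCircuit(n_qubits)
for _ in range(n_steps):
    kicked_ising_layer(qc, theta_zz=0.5, theta_x=0.7)

mirrored = qc.compose(qc.inverse())  # mirroring makes the ideal output known
print(mirrored.count_ops())          # tally of RZZ (two-qubit) and RX gates
```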
The 2023 utility experiment, published in Nature, reported speed in terms of processing time per data point, which came to 112 hours. Run on the newest IBM Heron processor with the same data points, the same experiment is 50 times faster, finishing in 2.2 hours.
To make it easier for developers to create intricate quantum circuits with speed, precision, and stability, IBM has further developed Qiskit into the most powerful quantum software in the world. This is supported by data collected and posted on arXiv.org using Benchpress, an open-source benchmarking tool that IBM used to evaluate Qiskit on 1,000 tests, most of which were from third parties. The results showed that Qiskit was the most dependable and high-performing quantum software development kit when compared to other platforms.
New Software Tools to Advance Development of Next-Generation Algorithms
With additional Qiskit services like generative AI-based capabilities and software from IBM partners, the IBM Quantum Platform is further broadening possibilities and enabling a growing network of specialists from many sectors to develop next-generation algorithms for scientific research.
This includes tools like the Qiskit Transpiler Service, which powers the effective optimization of quantum circuits for quantum hardware with AI; Qiskit Code Assistant, which assists developers in creating quantum code using generative AI models based on IBM Granite; Qiskit Serverless, which runs initial quantum-centric supercomputing approaches across quantum and classical systems; and the IBM Qiskit Functions Catalog, which makes services from IBM, Algorithmiq, Qedma, QunaSys, Q-CTRL, and Multiverse Computing available for features like managing quantum noise performance and simplifying the development of quantum algorithms by abstracting away the complexities of quantum circuits.
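As a rough local analogue of that transpilation workflow, Qiskit's preset pass managers optimize a circuit for a specific backend before execution. The sketch below assumes a `qiskit-ibm-runtime` install with saved IBM Quantum credentials; the AI-powered Transpiler Service itself is a separate cloud offering.

```python
from qiskit import QuantumCircuit
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService

service = QiskitRuntimeService()   # uses saved account credentials
backend = service.least_busy(operational=True, simulator=False)

qc = QuantumCircuit(5)
qc.h(0)
for i in range(4):
    qc.cx(i, i + 1)                # a small entangling circuit as a stand-in

# Optimization level 3 applies the heaviest circuit rewriting and layout search.
pm = generate_preset_pass_manager(optimization_level=3, backend=backend)
isa_circuit = pm.run(qc)           # ready for the backend's native gate set
print(isa_circuit.depth())
```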
By taking steps towards quantum-centric supercomputing approaches, Algorithmiq's tensor-network error mitigation (TEM) algorithm, accessible through the IBM Qiskit Functions Catalog, provides the fastest quantum runtime the company has yet offered to users, while providing state-of-the-art error mitigation for circuits at utility scale.
Given the recent breakthroughs Algorithmiq has achieved in integrating quantum computers with GPU post-processing, the company is expanding TEM's capabilities to support circuits with up to 5,000 entangled quantum gates, a milestone for scaling quantum experiments and solving complicated problems. This may pave the way for quantum calculations and simulations that were previously limited by noise constraints.
Qedma's goal is to provide services that enable its customers to run the longest and most complicated quantum circuits, and advancements in IBM quantum hardware and software are essential to that goal. Together with its own successes in error mitigation, delivered through Qedma's service in the IBM Qiskit Functions Catalog, the company is eager to continue empowering users worldwide to develop algorithms on today's quantum systems and produce results that are increasingly accurate and valuable to science.
Qiskit Fuels Quantum and Classical Integration Towards Future of Computing
IBM’s vision of quantum-centric supercomputing, the next step in high-performance computing, aims to combine sophisticated quantum and classical computers running parallelized workloads to easily deconstruct complex problems using powerful software. This will allow each architecture to solve specific portions of an algorithm for which it is most appropriate. Such software is being developed to rapidly and easily piece issues back together, enabling the execution of algorithms that are difficult or impossible for each computer paradigm to do alone.
The Cleveland Clinic, a renowned academic medical center and biomedical research institution with an on-site and utility-scale IBM Quantum System One, and RIKEN, a national scientific research institute in Japan, are investigating algorithms for electronic structure problems that are essential to chemistry.
These endeavors mark the beginning of quantum-centric supercomputing techniques for properly simulating complicated chemical and biological systems, a job that has long been thought to need fault-tolerant quantum computers.
Methods based on the parallel classical processing of individual quantum computer samples are early examples of these kinds of operations. Researchers from IBM and RIKEN have carried out sample-based quantum diagonalizations in quantum-centric supercomputing environments, building on earlier methods like QunaSys’s QSCI method. These methods use quantum hardware to precisely model the electronic structure of iron sulfides, a compound that is widely found in organic systems and nature.
The Cleveland Clinic is using this same technique, which is now available as a deployable Qiskit service, to investigate how it might be applied to quantum-centric simulations of noncovalent interactions, the molecule-to-molecule bonds that are crucial to many processes in chemical, biological, and pharmaceutical science.
This study exemplifies the success of the research collaboration, which combines the world-renowned healthcare and life sciences expertise of Cleveland Clinic with IBM's cutting-edge technology. Using state-of-the-art tools like Qiskit, the two organizations are working together to push beyond established scientific limits and discover novel therapies for patients worldwide.
Intermolecular interactions are crucial for possible future applications in drug discovery and design, and researchers were able to study them for the first time on the on-site IBM Quantum System One at Cleveland Clinic by utilizing IBM's sophisticated electronic structure algorithm for quantum computing.
RIKEN Center for Computational Science
Through the Japan High Performance Computing-Quantum (JHPC-Quantum) project, which is being carried out by the RIKEN Center for Computational Science (R-CCS), its supercomputer, Fugaku, will be integrated with an on-premises IBM Quantum System Two powered by an IBM Quantum Heron processor to create a quantum-HPC hybrid computing platform. The director of the Quantum-HPC Hybrid Platform Division at the RIKEN Center for Computational Science stated: “It will strongly support the initiative’s goal of demonstrating quantum-centric supercomputing approaches by using this platform as a first step towards this new computing architecture in the era of quantum utility.”
Read more on govindhtech.com
govindhtech · 6 months ago
Develop Haunted Sanctuary Effects Using AI And NVIDIA RTX
There are ghosts at the “Haunted Sanctuary,” constructed using RTX and AI: creator Sabour Amirazodi builds eerily intricate Halloween displays for his house using ComfyUI and Adobe Firefly.
About Haunted Sanctuary
“Haunted Sanctuary” is the annual Halloween display Amirazodi builds at his home: a projection-mapped, AI-assisted exhibit combining 3D scenes, compositing, and digital characters inspired by his family.
Creator Sabour Amirazodi, a tech marketing and workflow expert at NVIDIA, is one of the artists using AI to improve and speed up their creative pursuits.
Using his more than two decades of multi-platform expertise in media production and location-based entertainment, he creates an amazing Halloween exhibit called the Haunted Sanctuary each year to adorn his house.
The project is a huge endeavor that calls on a variety of skills, including compositing, editing in Adobe After Effects and Premiere Pro, projection mapping, and the development and assembly of 3D scenes. Amirazodi’s NVIDIA RTX 6000 GPU and the NVIDIA Studio content production platform were used to speed up the development process.
As part of the exhibit this year, Amirazodi used new AI procedures in ComfyUI, Adobe Firefly, and Photoshop to produce digital images that were influenced by his family.
Give ’em Pumpkin to Talk About
ComfyUI is a node-based interface that uses text prompts to create visuals and videos. Because of its great degree of customization, users may build workflows, change parameters, and see results right away. It can also integrate several AI models and third-party extensions for further control.
The procedure, for instance, calls for supplying a prompt describing the specifics and attributes of the intended picture, plus a negative prompt to help eliminate unwanted visual effects.
Because he wanted his digital creations to closely resemble his family, Amirazodi began with IP Adapters, which employ reference photos to guide the generated content.
He then adjusted the parameters to give each character the appearance and feel he wanted.
ComfyUI can produce visuals from prompts up to 60% faster thanks to NVIDIA TensorRT acceleration.
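ComfyUI graphs are built interactively, but the same prompt/negative-prompt idea can be sketched in a few lines with Hugging Face `diffusers`. This is an analogous illustration using an off-the-shelf SDXL checkpoint, not Amirazodi's actual ComfyUI workflow.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait of a friendly ghost family, volumetric fog, haunted mansion, cinematic lighting",
    negative_prompt="blurry, deformed hands, text, watermark",  # suppresses unwanted artifacts
    num_inference_steps=30,
).images[0]
image.save("haunted_portrait.png")
```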
In Darkness, Let There Be Light
The Adobe Firefly series of creative generative AI models supports creative processes and provides fresh approaches to ideation and creation. The models were trained on NVIDIA GPUs using licensed content, such as Adobe Stock images, and public domain content whose copyright has expired, so they are designed to be safe for commercial usage.
Amirazodi had to enlarge the backdrop in order to get the correct fit for the digital pictures.
By using the Crop tool to expand the picture’s border, artists may use Adobe Photoshop’s Generative Fill tool, Generative Expand, to automatically fill the empty area with information that complements the original image.
Additionally, Photoshop has “Neural Filters,” which save artists from hours of laborious, manual labor by enabling them to experiment with innovative concepts and make intricate picture alterations in a matter of seconds.
By using a slider, artists may effortlessly experiment with face features like lighting angles and gaze direction while using Smart Portrait Neural Filters. By modifying the colors, textures, depth blur, and facial expressions, Amirazodi used the tool to add the finishing touches to his photos.
NVIDIA RTX GPUs speed up Photoshop’s Neural Filters and support AI-based activities. Tasks in content production, gaming, and daily living are already being accelerated and automated by AI, and these speedups are only increased with a machine that has an NVIDIA RTX or GeForce RTX GPU installed.
Read more on Govindhtech.com
govindhtech · 10 months ago
Updates to Azure AI, Phi 3 Fine tuning, And gen AI models
Introducing new generative AI models, Phi 3 fine tuning, and other Azure AI enhancements to enable businesses to scale and personalise AI applications.
All sectors are being transformed by artificial intelligence, which also creates fresh growth and innovation opportunities. But developing and deploying artificial intelligence applications at scale requires a reliable and flexible platform capable of handling the complex and varied needs of modern companies and allowing them to construct solutions grounded in their organisational data. Microsoft is happy to share the following enhancements, which enable developers to use the Azure AI toolchain to swiftly and more freely construct customised AI solutions:
Developers can rapidly and simply customise the Phi-3-mini and Phi-3-medium models for cloud and edge scenarios with serverless fine-tuning, eliminating the need to schedule computing.
Updates to Phi-3-mini allow developers to create with a more performant model without incurring additional costs. These updates include a considerable improvement in core quality, instruction-following, and organised output.
This month, OpenAI (GPT-4o mini), Meta (Llama 3.1 405B), and Mistral (Large 2) shipped their newest models to Azure AI on the same day, giving clients more options and flexibility.
Value unlocking via customised and innovative models
Microsoft unveiled the Microsoft Phi-3 line of compact, open models in April. Compared to models of the same size and the next level up, Phi-3 models are Microsoft's most powerful and economical small language models (SLMs). As developers attempt to customise AI systems to match unique business objectives and increase the quality of responses, fine-tuning a small model is a wonderful alternative that does not sacrifice efficiency. Developers may now use their data to fine-tune Phi-3-mini and Phi-3-medium, enabling them to create AI experiences that are more affordable, safe, and relevant to their users.
Phi-3 models are well suited for fine-tuning to improve base model performance across a variety of scenarios, such as learning a new skill or task (e.g., tutoring) or improving consistency and quality of the response (e.g., tone or style of responses in chat/Q&A). This is because of their small compute footprint and compatibility with clouds and edges. Phi-3 is already being modified for new use cases.
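As a concrete illustration of local customisation, a minimal parameter-efficient fine-tuning setup for Phi-3-mini might look like the following with Hugging Face `transformers` and `peft`. The LoRA target module names are an assumption based on Phi-3's fused-attention layout; verify them against the checkpoint you download.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "microsoft/Phi-3-mini-4k-instruct"  # public checkpoint id at time of writing
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Train small adapter matrices instead of all 3.8B base parameters.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["qkv_proj", "o_proj"],  # assumed names for Phi-3's attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
# From here, pass `model` to a standard Trainer/SFTTrainer loop on your dataset.
```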
Microsoft and Khan Academy are collaborating to enhance resources for educators and learners worldwide. As part of the partnership, Khan Academy is experimenting with Phi-3 to enhance math tutoring and leverages Azure OpenAI Service to power Khanmigo for Teachers, a pilot AI-powered teaching assistant for educators in 44 countries. A study from Khan Academy, which includes benchmarks from an improved version of Phi-3, shows how various AI models perform when assessing mathematical accuracy in tutoring scenarios.
According to preliminary data, Phi-3 fared better than the majority of other top generative AI models at identifying and fixing mathematical errors made by students.
Additionally, Microsoft has optimised Phi-3 for the device. To provide developers with a strong, reliable foundation for creating apps with safe, secure AI experiences, the company launched Phi Silica in June. Built specifically for the NPUs in Copilot+ PCs, Phi Silica expands upon the Phi family of models. This state-of-the-art small language model (SLM), designed for the Neural Processing Unit (NPU) and shipping inbox, is exclusive to Microsoft Windows.
Today, you may test Phi 3 fine tuning in Azure AI
Azure AI’s Models-as-a-Service (serverless endpoint) feature is now widely accessible. Additionally, developers can now rapidly and simply begin developing AI applications without having to worry about managing underlying infrastructure thanks to the availability of Phi-3-small via a serverless endpoint.
The multi-modal Phi-3 model, Phi-3-vision, was unveiled at Microsoft Build and may be accessed via the Azure AI model catalogue. It will also soon be accessible through a serverless endpoint. While Phi-3-vision (4.2B parameter) has also been optimised for chart and diagram interpretation and may be used to produce insights and answer queries, Phi-3-small (7B parameter) is offered in two context lengths, 128K and 8K.
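Once a model such as Phi-3-small is deployed as a serverless endpoint, calling it reduces to a few lines. A sketch using the `azure-ai-inference` package follows; the endpoint URL and key are placeholders you obtain from your own Azure AI deployment.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.inference.ai.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),              # placeholder
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a concise assistant."),
        UserMessage(content="Summarize what a serverless model endpoint is."),
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```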
The community’s response to Phi-3 has been excellent. Last month, Microsoft launched an update for Phi-3-mini that significantly enhances core quality and post-training. After the model was retrained, support for structured output and instruction following improved significantly. Support for <|system|> prompts was also added, along with enhanced reasoning capability and better multi-turn conversation quality.
They also keep enhancing the safety of Phi-3. In order to increase the safety of the Phi-3 models, Microsoft used an iterative “break-fix” strategy that included vulnerability identification, red teaming, and several iterations of testing and improvement. This approach was recently highlighted in a research study. By using this strategy, harmful content was reduced by 75% and the models performed better on responsible AI benchmarks.
Increasing model selection: around 1,600 models are already accessible in Azure AI. Microsoft is dedicated to providing the widest range of open and frontier models together with cutting-edge tooling through Azure AI in order to assist clients in meeting their specific cost, latency, and design requirements. Since the debut of the Azure AI model catalogue last year, over 1,600 models from providers such as AI21, Cohere, Databricks, Hugging Face, Meta, Mistral, Microsoft Research, OpenAI, Snowflake, Stability AI, and others have been added, the widest collection to date. This month, Mistral Large 2, Meta Llama 3.1 405B, and OpenAI's GPT-4o mini arrived via Azure OpenAI Service.
Keeping up the momentum, Microsoft is happy to announce that Cohere Rerank is now accessible on Azure. Using Azure AI's robust infrastructure, businesses can easily, consistently, and securely integrate Cohere's enterprise-ready language models and state-of-the-art semantic search technology into their applications. This integration lets users deliver better search results in production by combining the scalability and flexibility of Azure with Cohere's highly effective and performant language models.
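Rerank takes a query plus a set of candidate documents and returns them reordered by semantic relevance. A minimal sketch with Cohere's Python SDK is shown below against Cohere's own API for brevity; the Azure-hosted flavour uses Azure-issued endpoint credentials instead.

```python
import cohere

co = cohere.Client("<your-api-key>")  # placeholder credential

docs = [
    "To renew VPN access, open the self-service portal and request the VPN role.",
    "Printer drivers are installed automatically via the device management agent.",
    "Passwords expire every 90 days and can be reset from the login screen.",
]

result = co.rerank(
    model="rerank-english-v3.0",
    query="how do I get my VPN access extended?",
    documents=docs,
    top_n=2,
)
for hit in result.results:
    print(hit.index, round(hit.relevance_score, 3))  # best matches first
```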
With Cohere Rerank, Atomicwork, a digital workplace experience platform and a seasoned Azure user, has greatly improved its IT service management platform. Atomicwork has enhanced search relevancy and accuracy by incorporating the model into Atom AI, their AI digital assistant, hence offering quicker, more accurate responses to intricate IT help enquiries. Enterprise-wide productivity has increased as a result of this integration, which has simplified IT processes.
Read more on govindhtech.com
govindhtech · 10 months ago
Discover AI-Assisted Art With Adobe Firefly On NVIDIA RTX PCs
Uncovering AI-assisted art with Adobe Firefly programs using NVIDIA RTX: RTX PCs and workstations, with over 100 AI-powered capabilities, open up countless opportunities for content makers.
Applications from Adobe Creative Cloud that leverage NVIDIA RTX GPUs are meant to boost users’ creativity by enabling them to work more quickly and concentrate on their work. With their smooth integration into current creative workflows, these tools offer power and precision together with increased productivity.
Consider the Light
Generative AI uses existing data to learn and produce new data in the form of text or visuals. It facilitates the generation of material that accurately visualizes and matches user descriptions while also helping to open up new creative possibilities.
The Adobe Firefly series of creative generative AI models supports creative processes by providing fresh approaches to ideation and creation. The models were trained on NVIDIA GPUs using licensed content, such as Adobe Stock images, and public domain content whose copyright has expired. They are intended to be safe for usage in commercial settings.
Adobe Firefly Photoshop
Adobe’s most widely used creative programs include Adobe Firefly functionality.
With the help of straightforward description prompts, the Generative Fill tool in Adobe Photoshop makes it simple to add content from photos. Users can also upload a sample image to obtain image results that are more similar to their intended output with the most recent Reference Image feature, which is presently in beta.
With the Crop tool, artists can use Generative Expand to stretch the border of their image and fill up larger canvases with newly added content that automatically blends in with the original.
Neural filters that are accelerated by RTX, like Photo Restoration, allow for intricate modifications including applying artificial intelligence to style transfers and coloring monochrome images. Based on research from NVIDIA, the Smart Portrait filter enables non-destructive manipulation using filters.
Using the most recent version of the Adobe Firefly Vector Model, Adobe Illustrator’s Generative Shape Fill (beta) enables users to quickly fill shapes with colour and detail in their own unique fashions, speeding up design workflows. Designers may quickly generate a vast array of editable and scalable vector graphic alternatives by simply matching the style and colour of their own artwork with Generative Shape Fill.
With just a text prompt, designers may quickly experiment with unique colour schemes and themes for their vector artwork with Adobe Illustrator’s Generative Recolour function.
NVIDIA and Adobe will keep collaborating to enable sophisticated generative AI models, with an emphasis on deep integration into the programs used by the top creators worldwide.
Taking Action on Video
Adobe Premiere Pro is a well regarded and potent video editing software.
With the aid of RTX acceleration, the Enhance Speech tool employs AI to enhance dialogue snippets and eliminate extraneous sounds, resulting in a polished recording. On RTX PCs, it is up to 4.5 times quicker.
Another tool in Adobe Premiere is Auto Reframe, which intelligently reframes video footage for various aspect ratios by using GPU acceleration to find and track the most relevant elements in a film. Prior to starting the video editing process, Scene Edit Detection automatically locates the original edit locations in a video.
Graphics
In many visual effects and compositing operations, separating a foreground object from a backdrop is an essential step.
A new feature in Adobe After Effects isolates an object using a matte, allowing for selective effect application to the foreground and background replacement.
Artists can apply strokes to specific parts of the background and foreground elements by using the Roto Brush tool. With fewer clicks and cleaner cutouts, After Effects creates a segmentation border between the foreground and background objects using that information.
Producing 3D Product Images
Adobe’s answer for 3D content authoring, texturing, and rendering is the Substance 3D Collection, which enables users to quickly produce incredibly photorealistic 3D content, including models, materials, and lighting.
Visualizing designs and products in the context of a location is appealing, but finding the ideal setting for the goods to live in can take some effort. This problem is resolved by the Adobe Firefly-powered Generative Background tool in Substance 3D Stager, which enables artists to swiftly examine generated backdrops in which to place 3D models.
Stager can automatically adjust the lighting and perspective to the created background after an environment has been chosen.
AI-Powered Material Authoring
The Substance 3D Collection includes Adobe Substance 3D Sampler, which is intended to convert photos of surfaces and objects into photorealistic 3D models, high-dynamic range ambient lighting, and physically based rendering (PBR) materials. Sampler is making it simpler than ever for artists to experiment with variations while generating materials for anything from product visualization projects to the newest AAA games, thanks to the recent release of new generative processes powered by Adobe Firefly.
With Sampler’s Text-to-Texture tool, users can create tiled graphics by providing comprehensive text prompts. Then, using any Sampler filter or the machine learning-powered Image-to-Material functionality, these created images can be tweaked and turned into photorealistic PBR materials.
Similar to this, Image-to-Texture allows tiled textures to be created from reference photos, offering an additional means of inspiring and producing variations from already visual information.
Using text prompts, Sampler’s Text-to-Pattern tool creates tiling patterns that can be used as base colours or as inputs for different filters, such the Cloth Weave filter, which creates unique fabric materials. RTX GPU-powered generative AI technologies in the Substance 3D Collection are all intended to speed up ideation and creation for 3D artists.
Captivating Photographic Elements
The AI-powered Raw Details tool in Adobe Lightroom enhances the image without affecting its original quality by producing sharper detail and more precise edge renditions, improving colour rendering, and reducing artefacts. When fine details are visible on large monitors or printouts, this function comes in handy.
Super Resolution doubles the linear resolution while producing an improved image that is comparable to Raw Details. This indicates that the enlarged image will contain four times as many pixels overall, or twice the width and height of the original image. This is particularly helpful for boosting cropped imagery’s resolution.
With a single click, users can construct intricate masks for speedier editing thanks to AI-powered, RTX-accelerated masking tools like Select Subject, which removes individuals from an image, and Select Sky, which captures the sky.
Read more on govindhtech.com
govindhtech · 10 months ago
Meta Unveils Llama 3.1: A Challenger in the AI Arena
Meta launches new Llama 3.1 models, including anticipated 405B parameter version.
Meta released Llama 3.1, a multilingual LLM collection. Llama 3.1 includes pretrained and instruction-tuned text in/text out open source generative AI models with 8B, 70B, and 405B parameters.
Today, IBM watsonx.ai will offer the instruction-tuned Llama 3.1-405B, the largest and most powerful open source language model available and competitive with the best proprietary models. It can be set up on-site, in a hybrid cloud environment, or on the IBM cloud.
Llama 3.1 follows the April 18 debut of Llama 3 models. Meta stated in the launch release that “[their] goal in the near future is to make Llama 3 multilingual and multimodal, have longer context, and continue to improve overall performance across LLM capabilities such as reasoning and coding.”
Llama 3.1’s debut today shows tremendous progress towards that goal, from dramatically enhanced context length to tool use and multilingual features.
A significant step towards open, responsible, accessible AI innovation
Meta and IBM launched the AI Alliance in December 2023 with over 50 global initial members and collaborators. The AI Alliance unites leading business, startup, university, research, and government organisations to guide AI’s evolution to meet society’s requirements and complexities. Since its formation, the Alliance has grown to over 100 members.
Additionally, the AI Alliance promotes an open community that helps developers and researchers accelerate responsible innovation while maintaining trust, safety, security, diversity, scientific rigour, and economic competitiveness. To that aim, the Alliance supports initiatives that develop and deploy benchmarks and evaluation standards, address society-wide issues, enhance global AI capabilities, and promote safe and useful AI development.
Llama 3.1 gives the global AI community an open, state-of-the-art model family and development ecosystem to explore, experiment, and responsibly scale new ideas and techniques. The release features strong new models, system-level safety safeguards, cyber security evaluation methods, and improved inference-time guardrails. These resources promote generative AI trust and safety tool standardisation.
How Llama 3.1-405B compares to top models
The April release of Llama 3 highlighted upcoming Llama models with “over 400B parameters” and some early model performance evaluation, but their exact size and details were not made public until today’s debut. Llama 3.1 improves all model sizes, but the 405B open source model matches leading proprietary, closed source LLMs for the first time.
Looking beyond numbers
Performance benchmarks are not the only factor when comparing the 405B to other cutting-edge models. Llama 3.1-405B may be built upon, modified, and run on-premises, unlike its closed source contemporaries, which can change their model without notice. That level of control and predictability benefits researchers, businesses, and other entities that seek consistency and repeatability.
Effective Llama-3.1-405B usage
IBM, like Meta, believes open models improve product safety, innovation, and the AI market. An advanced 405B-parameter open source model offers unique potential and use cases for organisations of all sizes.
Aside from inference and text creation, which may require quantisation or other optimisation approaches to execute locally on most hardware systems, the 405B can be used for:
Synthetic data can fill the gap in pre-training, fine-tuning, and instruction tuning when data is limited or expensive. The 405B generates high-quality task- and domain-specific synthetic data for LLM training (a minimal generation sketch follows this list). IBM’s Large-scale Alignment for chatBots (LAB) phased-training approach quickly updates LLMs with synthetic data while conserving model knowledge.
The 405B model’s knowledge and emergent abilities can be distilled into a smaller model, combining the capabilities of a big “teacher” model with the quick, cost-effective inference of a “student” model (such as an 8B or 70B Llama 3.1); a sketch of the core distillation loss also appears after this list. Knowledge distillation, particularly instruction tuning on synthetic data produced by bigger GPT models, was essential to effective Llama-based models like Alpaca and Vicuna.
LLM-as-a-judge: The subjectivity of human preferences and the inability of benchmarks to approximate them make LLM evaluation difficult. The Llama 2 research report showed that larger models can impartially measure response quality in other models. Learn more about LLM-as-a-judge’s efficacy in this 2023 article.
A powerful domain-specific fine-tune: Many leading closed models allow fine-tuning only on a case-by-case basis, for older or smaller model versions, or not at all. Meta has made Llama 3.1-405B accessible for continued pre-training (to update the model’s general knowledge) and for domain-specific fine-tuning, coming soon to the watsonx Tuning Studio.
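As a sketch of the synthetic-data pattern from the first item above, a large "teacher" model served behind an OpenAI-compatible endpoint can be prompted to emit labeled training examples. The base URL and served model name below are deployment-specific placeholders.

```python
from openai import OpenAI

# Any OpenAI-compatible server (e.g. a hosted 405B deployment) works here.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="placeholder")

resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-405B-Instruct",  # placeholder served-model name
    messages=[{
        "role": "user",
        "content": "Generate 3 question-answer pairs about invoice processing, as a JSON list.",
    }],
    temperature=0.9,  # higher temperature encourages varied synthetic examples
)
print(resp.choices[0].message.content)  # parse and add to the fine-tuning corpus
```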
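And for the distillation pattern in the second item, the core of most recipes is a soft-target loss that pushes the student's token distribution toward the teacher's. A minimal PyTorch sketch of that loss term:

```python
import torch.nn.functional as F
from torch import Tensor

def distillation_loss(student_logits: Tensor, teacher_logits: Tensor,
                      temperature: float = 2.0) -> Tensor:
    """KL divergence between temperature-softened teacher and student distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```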
Meta AI “strongly recommends” using a platform like IBM watsonx for model evaluation, safety guardrails, and retrieval augmented generation to deploy Llama 3.1 models.
Every llama 3.1 size gets upgrades
The long-awaited 405B model may be the most notable component of Llama 3.1, but it’s hardly the only one. Llama 3.1 models share the dense transformer design of Llama 3, but they are much improved at all model sizes.
Longer context windows
All pre-trained and instruction-tuned Llama 3.1 models have context lengths of 128,000 tokens, a 1600% increase over the 8,192 tokens in Llama 3. Llama 3.1’s context length is identical to the enterprise version of GPT-4o, substantially longer than GPT-4 (or ChatGPT Free), and comparable to Claude 3’s 200,000-token window. Because it can be installed on the user’s own hardware or through a cloud provider, Llama 3.1’s context length is not constrained in situations of high demand, and Llama 3.1 has few usage restrictions.
An LLM can consider or “remember” a certain amount of tokenised text (called its context window) at any given moment. To continue, a model must trim or summarise a conversation, document, or code base that exceeds its context length. Llama 3.1’s extended context window lets models have longer discussions without forgetting details and ingest larger texts or code samples during training and inference.
Text-to-token conversion doesn’t have a fixed “exchange rate,” but 1.5 tokens per word is a good estimate. Thus, Llama 3.1’s 128,000-token context window holds roughly 85,000 words. The Hugging Face Tokeniser Playground lets you test multiple tokenisation models on text inputs.
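You can check whether a document fits the window by counting tokens directly with the model's tokenizer. A sketch with `transformers` follows; the Llama 3.1 checkpoints are gated on Hugging Face, so access approval is assumed, and any Llama-3-family tokenizer gives similar counts.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

with open("contract.txt") as f:   # placeholder document
    text = f.read()

n_tokens = len(tokenizer(text)["input_ids"])
print(f"{n_tokens} tokens; fits in the 128K window: {n_tokens <= 128_000}")
```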
Llama 3.1 models benefit from Llama 3’s new tokeniser, which encodes language more effectively than Llama 2.
Protecting safety
Meta has cautiously and thoroughly expanded context length in line with its responsible innovation approach. Previous experimental open source attempts produced Llama derivatives with 128,000 or even 1M token windows. These projects demonstrate Meta’s open model commitment; however, they should be approached with caution: without strong countermeasures, long context windows “present a rich new attack surface for LLMs,” according to recent research.
Fortunately, Llama 3.1 adds inference guardrails. The release includes direct and indirect prompt injection filtering from Prompt Guard and updated Llama Guard and CyberSec Eval. CodeShield, a powerful inference time filtering technology from Meta, prevents LLM-generated unsafe code from entering production systems.
As with any generative AI solution, models should be deployed on a secure, private, and safe platform.
Multilingual models
Pretrained and instruction-tuned Llama 3.1 models of all sizes are multilingual. In addition to English, Llama 3.1 models speak Spanish, Portuguese, Italian, German, and Thai. Meta said “a few other languages” are undergoing post-training validation and may be released.
Optimised for tools
Meta optimised the Llama 3.1 Instruct models for “tool use,” allowing them to interface with applications that enhance the LLM’s capabilities. Training comprises creating tool calls for specific search, picture production, code execution, and mathematical reasoning tools, as well as zero-shot tool use—the capacity to effortlessly integrate with tools not previously encountered in training.
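Recent `transformers` releases can render a tool schema into Llama 3.1's chat template, which is how zero-shot tool use is typically exercised. A hedged sketch follows; the weather tool is a made-up example, and the gated-checkpoint assumption from above applies.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What's the weather in Denver right now?"}],
    tools=[weather_tool],        # schema is injected into the chat template
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # the model is trained to answer with a structured tool call
```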
Starting Llama 3.1
Meta’s latest version allows you to customise state-of-the-art generative AI models for your use case.
IBM supports Llama 3.1 to promote open source AI innovation and give clients access to best-in-class open models in watsonx, including third-party models and the IBM Granite model family.
IBM Watsonx allows clients to deploy open source models like Llama 3.1 on-premises or in their preferred cloud environment and use intuitive workflows for fine-tuning, prompt engineering, and integration with enterprise applications. Build business-specific AI apps, manage data sources, and expedite safe AI workflows on one platform.
Read more on govindhtech.com
govindhtech · 10 months ago
NVIDIA’s SIGGRAPH 2024 Pioneering Simulation & AI Research
Mile-High AI: Simulation and Gen AI Advancements to Be Showcased by NVIDIA Research at SIGGRAPH
SIGGRAPH 2024
A variety of rendering, simulation, and generative AI innovations are being brought by NVIDIA to SIGGRAPH 2024, the world’s leading computer graphics conference, which is being held in Denver from July 28 to August 1.
NVIDIA Research has published more than 20 papers outlining new developments in inverse rendering technologies and synthetic data generators that can be used to train next-generation models. By improving picture quality and opening up new possibilities for creating 3D representations of imagined or real-world environments, NVIDIA’s AI research is improving simulation.
The studies centre on physics-based simulation, increasingly lifelike AI-powered rendering, and diffusion models for visual generative AI. These include partnerships with academic institutions in the United States, Canada, China, Israel, and Japan; they also involve two technical Best Paper Award winners and researchers from firms like Adobe and Roblox.
These projects will contribute to the development of tools that businesses and developers may use to construct intricate virtual settings, characters, and items. The creation of synthetic data can subsequently be used to create compelling visual narratives, support scientists’ comprehension of natural occurrences, or help with simulation-based training for autonomous cars and robotics.
Text to Image Generation and Texture Painting Are Improved by Diffusion Models
The time it takes to bring ideas to life can be decreased by using diffusion models, a common technique for turning text prompts into images. Diffusion models can assist artists, designers, and other creators in quickly creating visuals for storyboards or production.
The capabilities of these generative AI models are being advanced by two articles coauthored by NVIDIA.
Researchers at Tel Aviv University and NVIDIA collaborated to create ConsiStory, a tool that makes it simpler to create several images with a single main character. This is a crucial feature for use cases involving narrative, like creating a storyboard or drawing a comic strip. By using a method known as subject-driven shared attention, the researchers’ approach cuts the time required to produce consistent visuals from thirteen minutes to just thirty seconds.
Last year, at SIGGRAPH’s Real-Time Live event, NVIDIA researchers earned the Best in Show award for their AI models that create personalised textured materials based on text or image cues. This year, they will be presenting a work that enables interactive texture painting on 3D meshes using 2D generative diffusion models. This will allow artists to create intricate textures in real time using any reference image.
Getting Things Started in Physics-Based Simulation
Physics-based simulation, a collection of methods to make virtual characters and objects move like real-world items, is helping graphics researchers close the gap between virtual and real-world objects.
Advancements in the subject are highlighted in a number of NVIDIA Research articles, such as SuperPADL, a project that addresses the difficulty of modelling intricate human gestures using text prompts (see the video up top).
The researchers showed how the SuperPADL framework can be trained to reproduce the motions of more than 5,000 skills using a combination of supervised and reinforcement learning, and it can run in real time on a consumer-grade NVIDIA GPU.
An approach to neural physics described in another NVIDIA article uses artificial intelligence (AI) to learn the behaviour of things, whether they are represented as a 3D mesh, a NeRF, or a solid object created by a text-to-3D model, as they are moved within an environment.
SIGGRAPH papers
In a report published in cooperation with researchers at Carnegie Mellon University, a novel type of renderer is developed that can handle fluid dynamics, electrostatics, and thermal analysis in addition to modelling actual light. The technique, which was recognised as one of the top five papers at SIGGRAPH, presents new possibilities for accelerating engineering design cycles because it is simple to parallelize and doesn’t require laborious model cleanup.
Further simulation papers present a pipeline that makes fluid simulation ten times faster, as well as an improved method for modelling hair strands.
Increasing the Bar for Diffraction Simulation and Rendering Realism
In a separate group of publications, NVIDIA presents novel methods for simulating diffraction effects, which are utilised in radar modelling to train self-driving cars, up to 1,000 times quicker than current methods for modelling visible light.
In a publication, researchers from NVIDIA and the University of Waterloo address the optical phenomena known as free-space diffraction, which occurs when light disperses or bends around the edges of objects. With up to 1,000x acceleration, the team’s approach can be integrated with path-tracing techniques to improve the accuracy of reproducing diffraction in intricate settings. The model could be used to replicate longer wavelengths of radio waves, radar, or sound in addition to visible light.
Path tracing generates a lifelike image by sampling many pathways, or multi-bounce light beams moving through a scene. ReSTIR is a route-tracing method that was initially presented at SIGGRAPH 2020 by NVIDIA and academics from Dartmouth College. Two SIGGRAPH publications enhance the sampling quality of ReSTIR, which has been essential in bringing path tracing to real-time rendering products such as games.
In one of these articles, a partnership with the University of Utah, a novel approach to path reusing is shared, which leads to an effective sample count increase of up to 25x, hence improving the quality of the images. The other modifies a portion of the light’s path at random to enhance sample quality. This improves the performance of denoising techniques and reduces visual artefacts in the final output.
Educating AI to Understand 3D
At SIGGRAPH, NVIDIA researchers are also exhibiting versatile AI technologies for 3D design and representation.
A GPU-optimized framework for 3D deep learning that fits the size of the real world, called fVDB, is introduced in one study. The huge spatial scale and high resolution of city-scale 3D models and NeRFs, as well as the segmentation and reconstruction of large-scale point clouds, are made possible by the AI infrastructure provided by the fVDB framework.
An award-winning Best Technical Paper, coauthored alongside Dartmouth College academics, presents a framework for depicting the interactions between 3D objects and light. The idea integrates a wide range of appearances into one paradigm.
Additionally, a real-time algorithm that creates smooth, space-filling curves on 3D meshes is introduced by a collaboration between Adobe Research, the University of Toronto, and the University of Tokyo. This framework operates in seconds and gives users a great degree of control over the result to enable participatory design, whereas earlier methods took hours.
SIGGRAPH with NVIDIA
Attend SIGGRAPH to learn more about NVIDIA. Special events will include a fireside chat between Jensen Huang, the CEO and founder of NVIDIA, and Lauren Goode, a senior writer at WIRED, on robotics and artificial intelligence (AI) in industrial digitalization.
OpenUSD Day by NVIDIA, a full-day event that showcases how developers and industry leaders are adopting and expanding OpenUSD to construct 3D pipelines enabled by artificial intelligence, will also be presented by NVIDIA researchers.
With teams concentrating on AI, computer graphics, computer vision, robotics, and self-driving cars, NVIDIA Research employs hundreds of scientists and engineers globally. View more of their recent work.
SIGGRAPH 2024 location
From July 28 to August 1, 2024, SIGGRAPH 2024 will take place in the Colorado Convention Centre in Denver, Colorado, USA.
Read more on govindhtech.com
govindhtech · 1 year ago
With IBM Consulting, Casper Labs Builds Blockchain Solution
Blockchain-Powered Solutions by Casper Labs
Casper Labs, a provider of enterprise blockchain software and services, and IBM Consulting have announced a partnership to assist clients in utilizing blockchain technology to enhance the transparency and auditability of their artificial intelligence systems. In collaboration with IBM Consulting, Casper Labs will create a new solution built on blockchain technology that utilizes IBM watsonx.governance to create an extra layer of analytics and policy enforcement for managing AI training data across enterprises.
From the original model creator to the end user organization, several organizations are involved in the training, development, and deployment of generative AI models. The outputs of various organizations change as a result of new data sets being integrated or models being altered, and numerous organizations must be able to monitor, audit, and correctly identify and fix problems in addition to tracking these changes. By minimizing the possibility of unneeded data sharing across organizational boundaries and facilitating the sharing of trusted context information through metadata in the ledger indicating model changes, blockchain technology can assist organizations in sharing their knowledge.
The goal of Casper Labs’ solution is to help train generative AI systems throughout organizations by monitoring and measuring highly serialized input and output data. It will be built on Casper, a tamper-resistant and highly serialized ledger, and will utilize IBM watsonx.governance and watsonx.ai.
Organizations can anticipate being able to better protect sensitive data stored in the solution from being accessible to external actors because of the hybrid nature of the Casper Blockchain and its permissioning system, which gives them control over who can access what data. Additionally, the solution will be designed to support version control by utilizing blockchain’s serialization capabilities. This will enable organizations to quickly and effectively roll back AI system iterations in the event that performance problems or biased outputs arise.
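The version-control property rests on a standard blockchain idea: each record's hash covers the previous record, so any retroactive edit to the model-change history is detectable. A toy Python illustration of that tamper-evidence mechanism follows; it is conceptual only and does not depict the Casper ledger's actual data model.

```python
import hashlib
import json
import time

def append_record(chain: list, record: dict) -> dict:
    """Append a model-version record whose hash also covers the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"timestamp": time.time(), "prev": prev_hash, **record}
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    entry["hash"] = digest
    chain.append(entry)
    return entry

ledger: list = []
append_record(ledger, {"model": "claims-assistant", "version": "1.2",
                       "training_data": "sha256:ab12..."})  # illustrative metadata
# Altering any earlier entry changes its hash and breaks every later "prev" link.
```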
Casper Labs will build the solution with help from IBM Consulting’s AI governance and technology experts. The solution is anticipated to be made available to clients in beta in the first quarter of 2024, and then more widely through Casper Labs’ channels and the IBM Cloud Marketplace.
“An AI system’s efficacy is ultimately as good as an organization’s ability to govern it,” said Shyam Nagarajan, Global Partner, Blockchain and Responsible AI Leader at IBM Consulting. Businesses require solutions that reduce risk, improve explainability, and cultivate trust. IBM said that they are excited to work with Casper Labs to develop a new solution that adds a crucial layer of transparency and risk mitigation for businesses implementing AI at scale, by bringing IBM Consulting and technology to the table.
The new solution is intended to assist businesses in deploying AI responsibly at scale throughout their ecosystem of technology and service providers, including financial services, healthcare, and retail. In addition, the solution seeks to provide:   
Compliance Dashboard: A centralized dashboard designed to help organizations monitor and manage AI systems used throughout the organization in order to support their processes for adhering to ethical standards.
Quality Control Toolkit: Tools to track the effectiveness and quality of AI systems, plus an interface to improve the comprehensibility and transparency of AI outputs.
Version Control: The capability of “rolling back” to earlier versions of an AI system that did not exhibit problems, in order to address performance or other issues.
Audit and Reporting System: A system for auditing AI procedures and producing thorough reports using context metadata gathered by Casper Labs’ ledger.
The CEO of Casper Labs, Mrinal Manohar, stated: “While generative AI has rightfully excited organizations for its transformative potential, its practical applications have been severely limited by an inability to monitor and react to the data feeding AI systems. With IBM’s assistance, we’re dedicated to providing a clearer path for correcting behavior in the event of hallucinations or performance problems, as well as a better means of understanding why AI systems behave as they do. AI’s long-term potential will be determined by how well organizations comprehend, manage, and respond to ever-larger AI training data sets.”
IBM Consulting and Casper Labs to Present AI Governance Solution in Davos
Mrinal Manohar and Shyam Nagarajan will present a live demonstration of this solution at The Hub in Davos on Tuesday, January 16.
Concerning Casper Labs
The industry leader in enterprise blockchain software is Casper Labs. In order to meet the scale and operational requirements of businesses, Casper Labs created the first layer-1 blockchain, which completely transparently recorded all business transactions. Casper Labs provides businesses and governments with software and services that increase revenue and drastically improve efficiency. Their goal is to construct the fundamental framework required for a completely new era of client value and commercial success.
Read more on Govindhtech.com