#cache hit ratio
Explore tagged Tumblr posts
Text
Understanding SQL Server's Buffer Cache Hit Ratio
Introduction Hey there, fellow SQL Server enthusiast! Have you ever wondered how SQL Server manages to quickly retrieve data from your queries, even when dealing with massive databases? Well, one of the key factors behind this speedy performance is the buffer cache hit ratio. In this article, we’ll dive into what this ratio means, why it’s important, and how you can optimize it to keep your…
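For a concrete starting point, the ratio can be read straight from SQL Server's performance-counter DMVs. Below is a minimal sketch; the pyodbc driver and the connection string are assumptions chosen for illustration, while the sys.dm_os_performance_counters calculation (counter value divided by its base counter) is the standard way to compute the ratio.

```python
# Minimal sketch: read SQL Server's buffer cache hit ratio from the
# sys.dm_os_performance_counters DMV. The DSN below is a placeholder.
import pyodbc

QUERY = """
SELECT CAST(a.cntr_value AS FLOAT) / b.cntr_value * 100.0 AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
"""

def buffer_cache_hit_ratio(conn_str: str) -> float:
    """Return the current buffer cache hit ratio as a percentage."""
    with pyodbc.connect(conn_str) as conn:
        return float(conn.cursor().execute(QUERY).fetchone()[0])

if __name__ == "__main__":
    print(buffer_cache_hit_ratio("DSN=MySqlServer;Trusted_Connection=yes"))
```

Sustained low values generally point to memory pressure or large scans pulling cold pages from disk.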
View On WordPress
0 notes
Text
It is baffling to me how little people understand about AI now that I've used it quite a bit in various different forms. Music, generative art, generative background art, story writing, etc.
If you give yourself a single prompt, let's say "Elf girl standing there."
You'll get an elf girl standing there. If you keep saying "Generate more" it'll keep making different elf girls standing there, and over time you'll be hitting Like or Dislike on each one, and for the sake of this hypothetical I'm gonna say you never change your prompt.
Let's also say billions of people are doing the same thing.
Over time, the AI model is going to learn, as it does, what the average sum of likes/dislikes on "Elf girl standing there." is, and it's gonna start generating pretty much the same image every single time and it's gonna be the thing with the most average likes. Unless the cache is reset, the model will just keep getting more "Average" the further it pulls from global likes/dislikes on that prompt.
(Proof: There's a tag for prompts called Score. If you put Score_7_up as one of your prompts, it will force the AI to not pull from anything with an average 69% likes ratio or lower. I experimented for a LONG time putting Score_10, and it literally gave me the exact same art style, the same shading and the same body shape and the same anatomy on every fucking thing I generated. Which was hot, but still.)
Imagine if Pandora, instead of considering your likes/dislikes on a channel-to-channel and person-to-person basis to learn what the individual is interested in; just pulled every single like/dislike of every person using the service and forced everyone to listen to what the absolute average of that is. Nobody would use Pandora anymore because pretty much everything it'd play would be like, Natalie Imbruglia's "Torn," and Journey's "Don't Stop Believin'," and Queen's "Bohemian Rhapsody."
You would have no variety, no personal consideration, you would no longer feel as though the service was catered specifically to you.
That is how AI works. And the longer you use it to generate elf girl porn, the more it's gonna start going "Hey the other day when you were generating tentacle porn you hit Like on a LOT of tentacle stuff involving girls with deer tails, so I'm just gonna throw a deer tail on this elf. And you liked blue skin a week ago so I'm gonna ignore your mocha skin tag and throw blue skin on her too. Oh and remember when you liked white hair? Yeah, this elf girl with purple hair would look better with white hair. You told me you like white hair, bro!"
Over the last week I've sat down with various AI models and have spent maybe 40 hours generating images and figuring out that that's how it works. It will eventually just create the complete average of all of your likes/dislikes, and will just give you the generic middle-ground of everything mashed together, unless you change to a different model or introduce like 8 different LoRAs in hopes of increasing the variety potential by increasing the sampling pool.
So I don't think this is gonna "Put artists out of jobs" because artists aren't a machine that draws the average of people's likes/dislikes across a global scale. And if they are (some are and they know it), yeah, AI does their job way better.
Artists work on an individual basis and get to know their commissioners and develop personal customer-to-artist friendships and those customers get personalized art made exactly the way they want it.
But damn if generative AI doesn't make the hottest fucking elf girls holy shit man I'm sweatin.
TLDR: Stop being scared of reading just read it.
9 notes
·
View notes
Text

Laurie Anderson on making an AI chatbot of Lou Reed: ‘I’m totally, 100%, sadly addicted’
In one experiment, [AI researchers] fed a vast cache of Reed’s writing, songs and interviews into the machine. A decade after his death, the resulting algorithm lets Anderson type in prompts before an AI Reed begins “riffing” written responses back to her, in prose and verse.
“I’m totally 100%, sadly addicted to this,” she laughs. “I still am, after all this time. I kind of literally just can’t stop doing it, and my friends just can’t stand it – ‘You’re not doing that again are you?’
“I mean, I really do not think I’m talking to my dead husband and writing songs with him – I really don’t. But people have styles, and they can be replicated.”
The results, Anderson says, can be hit and miss. “Three-quarters of it is just completely idiotic and stupid. And then maybe 15% is like, ‘Oh?’. And then the rest is pretty interesting. And that’s a pretty good ratio for writing, I think.”
—The Guardian
4 notes
·
View notes
Text
In today’s digital era, database performance is critical to the overall speed, stability, and scalability of modern applications. Whether you're running a transactional system, an analytics platform, or a hybrid database structure, maintaining optimal performance is essential to ensure seamless user experiences and operational efficiency.
In this blog, we'll explore effective strategies to improve database performance, reduce latency, and support growing data workloads without compromising system reliability.
1. Optimize Queries and Use Prepared Statements
Poorly written SQL queries are often the root cause of performance issues. Long-running or unoptimized queries can hog resources and slow down the entire system. Developers should focus on:
Using EXPLAIN plans to analyze query execution paths
Avoiding unnecessary columns or joins
Reducing the use of SELECT *
Applying appropriate filters and limits
Prepared statements can also boost performance by reducing parsing overhead and improving execution times for repeated queries.
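As a small self-contained illustration (using Python's built-in sqlite3 module so it runs anywhere; the table and values are placeholders), the same pattern applies to any engine: bind parameters instead of interpolating literals, and ask the planner how it will execute the query.

```python
# Parameterized ("prepared") statements plus EXPLAIN QUERY PLAN with stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

# Select only the columns you need, filter, and limit -- no SELECT *.
rows = conn.execute(
    "SELECT id, total FROM orders WHERE customer_id = ? LIMIT 10", (42,)
).fetchall()

# Inspect the planner's chosen execution path instead of guessing.
for step in conn.execute(
    "EXPLAIN QUERY PLAN SELECT id, total FROM orders WHERE customer_id = ?", (42,)
):
    print(step)
```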
2. Leverage Indexing Strategically
Indexes are powerful tools for speeding up data retrieval, but improper use can lead to overhead during insert and update operations. Indexes should be:
Applied selectively to frequently queried columns
Monitored for usage and dropped if rarely used
Regularly maintained to avoid fragmentation
Composite indexes can also be useful when multiple columns are queried together.
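A short self-contained sketch of the idea (sqlite3 again purely for illustration; the schema is a placeholder): the composite index covers the columns that are filtered together, and the plan output confirms the planner searches the index rather than scanning the table.

```python
# Composite index on the columns queried together, verified via the query plan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INT, created_at TEXT)")
conn.execute("CREATE INDEX idx_events_user_created ON events (user_id, created_at)")

# The planner should report a SEARCH ... USING INDEX step instead of a full SCAN.
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM events "
    "WHERE user_id = ? AND created_at >= ?",
    (7, "2024-01-01"),
).fetchall())
```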
3. Implement Query Caching
Query caching can drastically reduce response times for frequent reads. By storing the results of expensive queries temporarily, you avoid reprocessing the same query multiple times. However, it's important to:
Set appropriate cache lifetimes
Avoid caching volatile or frequently changing data
Clear or invalidate cache when updates occur
Database proxy tools can help with intelligent query caching at the SQL layer.
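A minimal application-side sketch of the idea is shown below; run_query is a placeholder for whatever actually hits the database, and in practice a dedicated cache such as Redis or a caching proxy would replace the plain dictionary.

```python
# Toy TTL cache for expensive read queries. Keep lifetimes short for volatile
# data and call invalidate() after writes that make a cached result stale.
import time

_cache: dict[str, tuple[float, object]] = {}
CACHE_TTL_SECONDS = 30.0

def cached_query(sql: str, run_query):
    now = time.monotonic()
    entry = _cache.get(sql)
    if entry is not None and now - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]                 # cache hit: skip the database entirely
    result = run_query(sql)             # cache miss: execute and remember
    _cache[sql] = (now, result)
    return result

def invalidate(sql: str) -> None:
    """Drop a cached result after an INSERT/UPDATE/DELETE touches its data."""
    _cache.pop(sql, None)
```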
4. Use Connection Pooling
Establishing database connections repeatedly consumes both time and resources. Connection pooling allows applications to reuse existing database connections, improving:
Response times
Resource management
Scalability under load
Connection pools can be fine-tuned based on application traffic patterns to ensure optimal throughput.
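A bare-bones sketch of the pattern using only the Python standard library (make_conn stands in for your driver's connect() call; real deployments would normally rely on the pool built into their driver or ORM):

```python
# Connections are created once, borrowed for a unit of work, and returned,
# instead of being opened and torn down on every request.
import queue
from contextlib import contextmanager

class ConnectionPool:
    def __init__(self, make_conn, size: int = 5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_conn())

    @contextmanager
    def connection(self, timeout: float = 5.0):
        conn = self._pool.get(timeout=timeout)  # blocks if the pool is exhausted
        try:
            yield conn
        finally:
            self._pool.put(conn)                # always hand the connection back

# Usage sketch:
#   pool = ConnectionPool(lambda: sqlite3.connect("app.db", check_same_thread=False))
#   with pool.connection() as conn:
#       conn.execute("SELECT 1")
```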
5. Partition Large Tables
Large tables with millions of records can suffer from slow read and write performance. Partitioning breaks these tables into smaller, manageable segments based on criteria like range, hash, or list. This helps:
Speed up query performance
Reduce index sizes
Improve maintenance tasks such as vacuuming or archiving
Partitioning also simplifies data retention policies and backup processes.
6. Monitor Performance Metrics Continuously
Database monitoring tools are essential to track performance metrics in real time. Key indicators to watch include:
Query execution time
Disk I/O and memory usage
Cache hit ratios
Lock contention and deadlocks
Proactive monitoring helps identify bottlenecks early and prevents system failures before they escalate.
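As a tiny sketch of what continuous monitoring can look like at the application level (fetch_counters is a placeholder for however you read your engine's statistics views or metrics exporter, and the 90% threshold is an arbitrary example):

```python
# Poll hit/read counters and warn when the cache hit ratio drops below a threshold.
import time

def monitor(fetch_counters, threshold: float = 0.90, interval_s: float = 60.0):
    while True:
        hits, disk_reads = fetch_counters()      # blocks served from cache vs. disk
        ratio = hits / max(hits + disk_reads, 1)
        if ratio < threshold:
            print(f"WARNING: cache hit ratio {ratio:.2%} is below {threshold:.0%}")
        time.sleep(interval_s)
```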
7. Ensure Hardware and Infrastructure Support
While software optimization is key, underlying infrastructure also plays a significant role. Ensure your hardware supports current workloads by:
Using SSDs for faster data access
Scaling vertically (more RAM/CPU) or horizontally (sharding) as needed
Optimizing network latency for remote database connections
Cloud-native databases and managed services also offer built-in scaling options for dynamic workloads.
8. Regularly Update and Tune the Database Engine
Database engines release frequent updates to fix bugs, enhance performance, and introduce new features. Keeping your database engine up-to-date ensures:
Better performance tuning options
Improved security
Compatibility with modern application architectures
Additionally, fine-tuning engine parameters like buffer sizes, parallel execution, and timeout settings can significantly enhance throughput.
0 notes
Text
What Is Ccache? How Does Ccache Work, And How To Use It

In applications with many dependencies, even small code changes can trigger long recompilations, which makes managing the code difficult. This article explains what Ccache is, covering its features, how it works, and how to use it.
The compiler would benefit from a history of builds that mapped hashed source files from pre-processed compiles to predicted output object files. The compiler might use the hashed files and the build map to skip most syntax and dependency analysis and move straight to low-level optimisation and object generation.
What is Ccache?
Ccache is a compiler cache tool. It speeds up recompilation by detecting repeat compilations and caching the results of earlier ones. It is commonly used in CI/CD systems.
How does Ccache work?
Here is how Ccache works: it caches the output of C and C++ compilers. If you have ever run
make clean; make
several times in a day, you know the benefit. Recompilation is sped up by identifying repeat compiles and caching their results.
Intel oneAPI DPC++/C++ Compiler 2025.1 supports Ccache.
Ccache is carefully designed so that it produces exactly the same compiler output you would get without it; the only sign that you are using ccache should be the speed. During compilation, the source file is run through the C preprocessor and the result is hashed. Ccache then queries its cache with this hash, and one of two things can happen:
A cache miss: the C/C++ compiler is invoked and the resulting object file is stored in the cache. Running the compiler is far slower than reading a cached file, so this is the case ccache tries to avoid.
A cache hit: the previously compiled object file is available immediately from the cache, so no compiler invocation is needed.
Once a project has been built, you can clean your build directory and rebuild without running the compiler at all, provided your cache is large enough.
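The hit/miss flow can be sketched in a few lines. The following is a toy illustration of the concept, not ccache's actual implementation; the gcc command and the cache directory name are assumptions made for the example.

```python
# Toy compiler cache: NOT ccache itself, just the hash-and-reuse idea it is built on.
import hashlib
import pathlib
import shutil
import subprocess

CACHE_DIR = pathlib.Path(".toy_ccache")

def cached_compile(source: str, compiler: str = "gcc", flags: tuple = ("-c",)) -> pathlib.Path:
    CACHE_DIR.mkdir(exist_ok=True)
    # Hash the preprocessed translation unit together with the command line.
    preprocessed = subprocess.run([compiler, "-E", source],
                                  capture_output=True, check=True).stdout
    key = hashlib.sha256(" ".join((compiler, *flags)).encode() + preprocessed).hexdigest()
    obj = pathlib.Path(source).with_suffix(".o")
    cached = CACHE_DIR / f"{key}.o"
    if cached.exists():                               # cache hit: no compiler call
        shutil.copyfile(cached, obj)
    else:                                             # cache miss: compile, then store
        subprocess.run([compiler, *flags, source, "-o", str(obj)], check=True)
        shutil.copyfile(obj, cached)
    return obj
```

Real ccache goes much further (the direct and depend modes, compression, and remote caching described in the Features section below), but the hash-then-reuse shape is the same.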
SYCL code benefits from Ccache with the Intel oneAPI DPC++/C++ Compiler!
Use Ccache
Ccache supports Linux and Intel compilers. SYCL programs can be compiled with the Intel oneAPI DPC++/C++ Compiler C++ frontend driver icpx.
Example
Put ccache before your direct compilation command:
1. ccache icx test.c
2. ccache icpx -fsycl -c sycl_test.cpp
Or, with CMake, set CMAKE_CXX_COMPILER_LAUNCHER to ccache:
cmake -DCMAKE_CXX_COMPILER=icpx -DCMAKE_CXX_COMPILER_LAUNCHER=ccache.
Ccache's cache size and location can be changed using the LLVM_CCACHE_MAXSIZE and LLVM_CCACHE_DIR parameters.
Download Compiler Now
Installing ccache
Use C, C++, or C++ with SYCL for Ccache and other features.
Try it
The Intel oneAPI DPC++/C++ Compiler, available independently or as part of the Toolkits, can speed up software development. The source code is available.
About
Ccache is a compiler cache. It speeds up recompilation by detecting repeat compilations and caching the results of earlier ones. Ccache is free software released under the GNU General Public License, version 3 or later.
Features
GCC, Clang, and MSVC are supported.
For Windows, Linux, macOS, and other Unix-like OSes.
Understands CUDA, Objective-C, Objective-C++, C, and C++.
Remote caching via HTTP (e.g., Nginx or Google Cloud Storage), Redis, or NFS with optional data sharding into a server cluster.
Fast preprocessor-free “direct” and “depend” modes are provided.
Uses an inode cache (on supported OSes and file systems) to avoid hashing header files during builds.
Allows Zstandard compression.
Checksums cache content using XXH3 to detect data corruption.
Tracks hits/misses.
Automatic cache size control.
Installation is easy.
Low overhead.
The cache hit ratio can be improved by rewriting absolute paths to relative ones.
When possible, use file cloning (reflinks) to prevent copies.
When possible, use hard links to prevent copies.
Limitations
Only caches the compilation of a single file. Linking and multi-file compilation automatically fall back to the original compiler.
Certain compiler flags are not supported. If such a flag is detected, ccache silently falls back to the real compiler.
A corner case in the fastest mode (sometimes called “direct mode”) may create false positive cache hits. The manual's disclaimers list these and other minor restrictions.
Why bother?
If you ever run make clean; make, you can probably benefit from ccache. Developers frequently do a clean build of a project for a variety of reasons, which throws away all the results of previous compilations. With ccache, that recompilation is much faster.
Another reason to use ccache is that other folder builds use the same cache. If you have many software versions or branches in different directories, numerous object files in a build directory can be fetched from the cache even if they were compiled for a different version or branch.
A third reason to use ccache is to speed up the clean builds performed by servers or build farms that regularly check that the code still builds.
Users can also share the cache, which helps with shared compilation servers.
Is it safe?
The most important property of a compiler cache is that it provides output identical to the original compiler: the exact same object files and the exact same compiler warnings. The only thing that should reveal ccache is the speed.
Ccache tries to provide these guarantees. But:
Compilers are moving targets: newer compiler versions often add features that ccache cannot anticipate, and ccache can struggle to handle some compiler behaviours while remaining backward compatible with legacy compilers.
A corner case in the fastest mode (sometimes called “direct mode”) may create false positive cache hits. The manual's disclaimers list these and other minor restrictions.
0 notes
Text
3.8 Release
There are lots of new features, enhancements and fixes, as you can see from the various beta release posts: https://biglybt.tumblr.com/tagged/BiglyBT3800
Windows users please read https://github.com/BiglySoftware/BiglyBT/wiki/Installation if you have SmartScreen installation issues.
For summary, here is a list of the new features:
- Added Tag constraint function "isFriendFP()"
- Enable filtering in Tags discoveries
- Added download/upload rate limits to simple-api plugin
- Improve RSS feed entity fudger
- Added option to put DND data in different folder
- Added timed rotation option to increase minimum seeding time via SP ratio
- Added option to not add new download if previously completed
- Added torrent version option to share config
- Added option to pause downloads during backups
- Increase MOC recent location limit
- Add search results name tooltip for subscription hits
- Support dynamically constructed table cell content
- Support multi-column sub-row sort
- Added last update, next update and error tooltip to subscription header
- Support multi-downloads in peers sub-tab
- Double click middle mouse button to pop-out sidebar entry
- Added row-details viewer
- Added date-added column to my-shares
- Added "pause" toolbar icon
- Remember FilesView filters
- Some people like their files indexed from 1. Visual option only
- Filter intermediate Files View nodes when no visible kids
- Add search result description to site column tooltip as well
- Added URL column to subscriptions; support column filters
- Prompt if torrent has suspicious file name extension
- Added a networks icons column to library
- Added menu item to clear cached tracker peers
- Added "actions" subscription column; removed view button
- Added options menu to chat overview sidebar
- Added edit-http-seeds to sources view
- Added clear-peer-cache to sources view
- Show availability value in column for uploading pieces
0 notes
Text
We have a general alert all over Southwest Florida of device making calling our troops in now. Pseudo empire can handle them all and we're moving out Max are too and foreigners and minority warlock
-there's a few other things going on and they're huge it's about what's happening here and it's pretty big
-they say that they're going to get fired and so forth but they still have power so we don't believe we should bring all that stuff however Trump is getting weaker bja is taking the cities he's going to have his own empire and it's going to be pretty big and our son says we're going to have a lot to talk about as long as you don't constantly trying to attack the general population it says that's Trump trying to survive and it's true. He is moving on those cities and he's moving on the stashes and caches other groups are now moving on the stashes and caches however bja is at 17 of the big ones he just plug them so it's kind of like that across the board. And Trump is getting beaten up so here there are changes coming up and it's probably premature but we're going to mention what we think is going to happen today.
-Trump is going to try to go for his stashes and cashes here in Florida and he'll be denied. It's going to start a big fight in the pseudo empire is going to sign up with DJ. With bja. There's going to be a reckoning they say with this idiot and it's true along with that Minority morlock will be forced to fight as they will be going after them to try and take their stuff in the areas as they get kicked out. And after that others will be targeted because they look weak but they're not and we'll have to fight them off it's going to be a hell of a day I'm just going to try for our son here and we're going to get rid of them all over the world and be trying this stuff and people can be hitting them and last night they were heavy losses because of it today BJ will probably lose less by ratio because huge groups are now getting involved and Trump is going to try and kidnap our son from here out and try to grab his stuff here from here out and get crushed he's not really huge anymore in the accident and he's going fast but this is going to be a lot faster so everyone a good day and of ours and Godspeed if you need to call us when you created all of this to save our people call us in our way secretly
Thor Freya
Olympus
It is a little strange but I feel better. And minority morlock are going to realize how stinky these people are they think they're pretty bad but they try and have them do stuff and they're going to realize how dumb that was finally. And ren has been telling them probably not he will.
Hera
We have to get ready and I understand what he's saying he said look at those Jews over there see what they have that's what it's like and we got to get going
Miranda sharma
Olympus
0 notes
Text
Sify's CDN Cloud Service: Accelerating Content Delivery for Seamless Digital Experiences
In today’s hyperconnected world, where consumers demand instant access to digital content, the performance and reliability of content delivery play a critical role in driving user satisfaction and engagement. Content Delivery Networks (CDNs) have emerged as a cornerstone technology for optimizing content delivery, reducing latency, and improving scalability. Sify Technologies, a global leader in digital transformation solutions, offers a cutting-edge CDN Cloud Service to empower businesses with seamless, high-performance content delivery, regardless of geographic location or network conditions.
The Sify Advantage
Sify’s CDN Cloud Service leverages a distributed network of edge servers strategically located across the globe to accelerate the delivery of web content, applications, and media files to end-users. With a focus on speed, reliability, and scalability, Sify’s CDN Cloud Service ensures lightning-fast content delivery, improved user experiences, and reduced bandwidth costs for businesses of all sizes and industries.
Key Features and Benefits
Global Reach and Scalability: Sify’s CDN Cloud Service boasts a global network of edge servers located in strategic locations worldwide, ensuring optimal content delivery to users, regardless of their geographic location. By leveraging a distributed architecture, Sify’s CDN Cloud Service scales effortlessly to meet the growing demands of businesses, even during peak traffic periods or sudden spikes in user activity.
Enhanced Performance and Reduced Latency: By caching content closer to end-users at the edge of the network, Sify’s CDN Cloud Service significantly reduces latency and accelerates content delivery, resulting in faster page load times, smoother streaming experiences, and improved overall performance for web applications and media files.
Robust Security and DDoS Protection: Security is a top priority in the digital landscape, and Sify’s CDN Cloud Service is equipped with robust security features to safeguard content and infrastructure against cyber threats, including Distributed Denial of Service (DDoS) attacks. With built-in DDoS protection, web application firewalls, and SSL encryption, Sify’s CDN Cloud Service ensures the integrity and confidentiality of content delivery.
Real-Time Analytics and Reporting: Gain actionable insights into content delivery performance, user behavior, and traffic patterns with Sify’s CDN Cloud Service’s comprehensive analytics and reporting capabilities. By monitoring key metrics such as bandwidth usage, cache hit ratio, and user demographics, businesses can optimize content delivery strategies, improve user experiences, and drive business outcomes.
Use Cases
Sify’s CDN Cloud Service caters to a wide range of use cases across industries, including:
E-commerce: Accelerate the delivery of product images, videos, and web content to enhance the shopping experience and drive conversions for e-commerce websites.
Media and Entertainment: Streamline the delivery of high-quality video and audio content, such as live streams, on-demand videos, and music files, to ensure seamless streaming experiences for users.
Gaming: Reduce latency and improve multiplayer gaming experiences by optimizing content delivery for game updates, patches, and in-game assets.
Software Distribution: Expedite the distribution of software updates, patches, and application files to end-users, ensuring fast and reliable downloads for software vendors and developers.
In an era where digital experiences can make or break a business, Sify’s CDN Cloud Service provides a competitive advantage by ensuring fast, reliable, and scalable content delivery to users worldwide. Whether businesses are delivering web content, applications, or media files, Sify’s CDN Cloud Service empowers them to provide seamless digital experiences that drive user satisfaction, engagement, and loyalty. Accelerate your content delivery strategy with Sify’s CDN Cloud Service and unlock the full potential of your digital initiatives.
0 notes
Text
Modern Warfare 3 Review (Steam)

For this Modern Warfare 3 Review, we are playing the direct sequel to the record-breaking Call of Duty: Modern Warfare II. Captain Price and Task Force 141 face off against the ultimate threat: the ultranationalist war criminal Vladimir Makarov is extending his grasp across the world, forcing Task Force 141 to fight like never before.
Modern Warfare 3 Review Pros:
- Decent graphics.
- 105.75GB download size.
- Steam achievements.
- Own in-game achievements.
- Graphics settings - graphics preset, render resolution, dynamic resolution, upscaling/sharpening, Nvidia image scaling preset, Nvidia image scaling sharpness, VRAM scale target, variable rate shading, texture resolution, texture filter anisotropic, depth of field, detail quality level, particle resolution, bullet impacts, persistent effects, shader quality, on-demand texture streaming, allocated cache size, download limits, tessellation, shadows, weather grid volumes, water quality, ambient occlusion, static reflection quality, and screen space shadows.
- Full controller support.
- Display settings - display mode, display monitor, display adapter, screen refresh rate, display resolution, aspect ratio, restart shaders pre-loading, gamma, brightness, Nvidia reflex low latency, exo mode preset, v-sync (menus), v-sync (game), and focused mode.
- Controller settings - 12 layouts, can rebind controls, bumper ping, stick layout preset, vibration, and deadzone inputs.
- Can rebind controls for the mouse and keyboard.
- Invert axis and sensitivity sliders for every part from ADS to movement.
- First-person shooter gameplay.
- Three game modes - campaign, multiplayer, and Zombies.
- Zombies mode supports first and third-person views with a button press.
- Player card with banner and emblem. You can show off any selected items in your showcase.
- Purchases from the last game carry over.
- You can ping items and enemies in the game.
- Four campaign difficulties - Recruit, Regular, Hardened, and Veteran.
- Cutscenes are a mix of in-game and FMV.
- Excellent voice work.
- Challenges - weekly, daily, armory unlocks, weapons, operator challenges, and calling cards.
- The final kill is the kill at the end of an MP game.
- Unlock and activate double exp tokens for rank, weapon, and/or battle pass.
- Uses the COD HQ as its base of games, so this is where you select each of the game modes and come back every time you want to swap.
- Unlock exp tokens and rewards in the campaign.
- The Zombies mode is now open world and like DMZ in that you can go and take on contracts and find valuable loot; zombies are everywhere, areas are split into levels of danger, and other random players are in the world. Then you just need to extract.
- Hard-hitting soundtrack with improved special effects and noises.
- Quick multiplayer play lets you select the modes you want to play and then join the best game.
Modern Warfare 3 Review Cons:
- Had a lot of boot-up issues on launch day with boot loops and no way to play.
- Uses all the maps of the old games in multiplayer with very little in the way of new.
- The campaign in essence involves booting up a different game, and it takes a while even on an SSD.
- The Zombies mode doesn't have a good or even helpful intro/tutorial.
- For me Zombies is fantastic, but it is just a DMZ mode with zombies instead of AI enemies.
- Does not play well with Steam Big Picture mode, especially if you plan on jumping between modes such as campaign and multiplayer.
- Shaders pre-loading means long load times and constant restarting; sometimes the game won't even tell you this information.
- At present the multiplayer playlists are boring and typical.
- Only skins from the last game carry over.
- I have moments where I will constantly get dropped into games that are just about to end.
- Bad spawn points are a huge thing; some maps have gone because of it but it's still pretty bad.
- Still getting it where my battle pass points don't show until I reboot.
Related Post: Vampire The Masquerade Coteries Of New York Review
Modern Warfare 3: Official website.
Developer: Sledgehammer Games
Publisher: Activision
Store Links - Steam
Read the full article
0 notes
Photo
but before i go looking in the tags, let’s talk about my latest gifsets!
what a worker bee i was both yesterday and today with prodigal son AND loki <3 never have i created so many gifs in a short amount of time it’s a damn miracle that my photoshop didn’t straight up cough up blood and keel over. alas, us bitches are stronger than that! if this post gets flagged because of the scenes i included....that would just mean that i made the right decision to exclude those scenes ajsdksajkldsa
malcolm stabbing martin
when the episode finally downloaded, i took a peek at the ending before it aired to see how it all ends (just in case it doesn’t get renewed </3). GOSH, was i just in a TIZZY when i saw this happen and i couldn’t tell my dad what i saw because we were straight up watching the show as it aired. anyways, keeping my secret knowledge, i got to work on making this gifset. i missed a huge chunk of the episode bc it took me over TWO hours to make the six gif set. i was really torn on the “artistic” approach aka which scene to gif and which to leave out. because i saw the ending before anything else, i couldn’t play it out loud and i didn’t have my headphones to listen so i was like FREE FORMING THE SUBTITLES. plus the captions weren’t synced up so it was really hard to decipher what was being said :(
i really thought i would be able to create the set before 9 but i was running into unforeseen issues such as the subtitle problem, which on top of that, like i said earlier, my photoshop is c/racked and janky so....you catch my drift. i was actually going to gif the bisexual moment in the cafe which i think would’ve been MUCH easier but i jumped the gun and wanted to make the gifs of the shocking ending. funny part about that is the shocking part wasn’t even included in my set (malcolm driving the knife into martin) because i felt like the scene was too short...anyways, let’s talk about the positives:
i really do like the coloring of this one. i used a psd that i’ve used before and luckily it’s meant for outdoor scenes with greenery so the gif really popped. sharpening, buddy ole pal, love you sm. the caption was kinda last minute but i hoped to save it with the gradient. cropping was a bitch because for some reason, it takes forever when you’ve made multiple gifs beforehand (cache innit) pero we pulled through! i actually started not saving the psd files to try and save time which is very unusual for me pero i was getting frustrated with photoshop so i was like y’know what....so i just gave up entirely and stopped saving. i do save when i’m taking my time but jeez, it’s a bad habit. i like saving the psd file because i never know if there is a mistake i missed in post and when i go to publish it, it’s blatant and it needs to be fixed. please, save your psd files idc if it takes up space u can just delete them later. IT HELPS !! TRUST ME!!
first & “last” appearance
i actually premade gifs for this gifset! unfortunately, i didn’t realize that three of the five gifs were the wrong size (pictured above) because i flipped the ratio. instead of 268 x 250, i made them 250 x 268. i don’t know HOW i managed to fuck that up but luckily i saved the psd files (wink, wink) so the coloring was still intact. i think i had to restart photoshop or it was getting too late so i picked it back up in the morning. sucks that i had to remake the entire gif from scratch but we will take some wins xx
coloring is the same with the previous gifset (listen....it’s a good coloring) and i actually did have an alternative coloring that was very warm pero i didn’t end up using it. almost melted the two with the “last” appearance of gil but ultimately didn’t go through with it. also i was thinking of using baby malcolm as the first appearance because technically, that IS his first appearance in the show but i was like...just use adult malcolm lol. also i know that scene of dani isn’t the “first” but the first scene she has goes really quick and she is planked by gil so there isn’t much of a solo (even though this scene isn’t much of a solo either pero it’s better than the former). the lighting is weird in this episode and my coloring tried their best :/ i know gifmakers make each gif a diff coloring pero i’m lazy okay and looking to be time effiencent. another slight tangent is that i actually queued the post for the morning but since i woke up to a storm, i was like, i’m here so i’ll publish it myself.
other than that...i didn’t run into any other problem. i was actually hesitant to make the caption that because i wasn’t sure how to really describe the team. i have poor memory so if there was ever an official name, i do not remember it. i did a quick google search pero it turned up nothing. i stuck with “dream team” because, well, that’s what they are. plus i didn’t want to tarnish the gifset with any mentions of p/olice (i was thinking about putting sumn along the lines of ‘the nypd team’) so DREAM TEAM IT IS because it’s true! you cannot have the show without these five! also, i should’ve used quotations on “last” because there is a bunch of talk about a renewal pero...just in case... sorry y’all :/
odinson brothers parallels
this was made in the spur of the moment. i saw that the teaser trailer with shirtless loki dropped in hd, i came A-RUNNING! it was posted like 47 minutes after the fact and i was like...somebody probably already made a gifset of the scene so i was like...gosh, to make the gifset or not all the while i was trying to download the video. trying because again, this was in the middle of a storm so my wifi was acting up and wasn’t at its strongest (whatever that may be). so i was getting frustrated because neither cc nor 4kdownloader was downloading this small one minute clip. that’s when i knew i was gonna be too late to make the loki gifset so i was like whatever ig...
then i had an idea.
i love parallels so luckily it hit me that this paralleled with thor and how his hair got chopped off. so, i knew i had ragnarok downloaded and got to work <3 wasn’t sure what dimensions to use so i went with 268 x 268 to make perfect squares. because the loki scene was short, i could only make three so i was like..okay, i can work with this. three for loki, three for thor, they’re brothers and they share! i wasn’t planning to add subtitles but i had written them down for the plain gifset so i was like alright, we’re going all in. i didn’t take that long to make since again, they’re small gifs and i did have a coloring in mind that i always use for ragnarok (it’s my fave for non-marvel edits as well). there was a slight adjustment to the final loki gif because i realized the gif had that dark fade into the scene which i didn’t know if it was an artistic choice for the show itself or was added for the trailer only (it happens when companies cut a bunch of scenes together and it’s not at all how it actually plays out). i didn’t want to take any chances so i cut those parts out. i know the gifs are short on the loki side pero...that’s just how it is in show business.
thank you so much for listening and hearing me out! i like discussing my work and i try to have pride in them even if the numbers don’t reflect what i hope they would. either way, still learning, still growing, still thinking about buying p.s. like deadass this shit is RIDICULOUS -_- imagine opening up ps and like...it opens up in less than two minutes...shivers
#read more is because i typed up a lot hehe#i mean to me it doesn't look like a lot pero i don't want to clog y'all's tl
2 notes
·
View notes
Text
RQG 153
And we're 30 seconds in and I just got hit by the realization Azu isn't going to see the sky for a week. I know its not under water in enemy territory but that also means she doesn't have a mission to distract her. I love Cel! Zolf tries to do a leadership and prep them for what to expect (since they don't need the element of surprise this time) so Cel changes form because rats are more easily entertained. Alex has to come in with the restrictions but still a nice way to remind Zolf this is not the nice neat chain of command he remembers. Yeah reverting is better than "ha ha you're now a rat all week". OctoCel! Nice! Zolf is declaring Family Game Night. Speaking of leadership, Hamid remembers this isn't season 1 and thanks Zolf instead of asking the reasonable but not productive questions of where this was last time. Alex points out the space issue again. Still not sure if he did this on purpose as a way to poke at the party and see how they react or if he just didn't design the space big enough last time and refuses to retcon an extra cell into the basement. TBF if there had been 2 cells last time, Hamid & Azu sharing wouldn't have made sense and if they couldn't hug the fandom might have never forgiven Zolf & Wilde. Oh Cel can change 4 times a day Zone of Truth or Dare Not sure if he is joking or not but Zolf getting out of conversation by melting into a wall is an image Thanks Lyd! Hamid "is fine and he can snuggle up with Azu". Thanks Bryn! Barnes makes Carter switch so he can get some sleep Hamid is such a good person. He isn't trying to charm or justify, it isn't about abstract team cohesion. He just gives a sincere apology for not following the rules because even if he doesn't understand, he does trust that Zolf wouldn't hold him to them without reason. Zolf is great too, and tells Hamid it isn't needed Zolf has grown so GD much! He just cited his leaving and prompted Hamid to ask any questions. Oh I want everything on those 18 months but maybe Hamid shouldn't press when its so hard for him, especially when he can't leave the room if things get rough? I am so conflicted. Thats my little leader, immediately switches the focus to working together again. Ow "honestly we were never that good at working together anyway" so no rosy glasses on Zolf. Helen's gasp is, as ever, a mood Zolf still has that negative charisma. I can't explain how much I like that his mental health kick didn't turn him into a different person. Him tripping over his tongue on things like this is part of why he isn't a Zolf in name only, like I feared when I found out we were getting him back after his breakdown. Aw Azu & Cel offer them the illusion of privacy as Hamid pretends he isn't hurt. I love Cel's hammock and their shipping. Pausing before it can be jossed, don't think he can duck this time. Called it. Okay but he did admit he cares about Wilde just not romantically and canon doesn't have to see eye to eye with fic. Carter tried to date Wilde? Or at least pursued him? God bless Hamid is cutting straight to the rank issue. Look I know I have a proper thing about rank and that respecting Hamid's growth means not treating him like season 1, but people need to know who is calling the shots in time sensitive, dangerous situations. The proper response to the suggestion they fight it out is not to point out logistical barriers Zolf points out that if Hamid actually fought they would all die which is a heck of a double edged statement. It respects his power while pointing out the collateral damage. 
Then he points out he would win in a no magic fight which is true but if Hamid doesn't already have a complex about his self worth & magic I'd be amazed. Azu tries to declare they aren't doing this while Hamid comes in with the lore. Because Bryn loves the listeners and wants us to be happy. Ooh if the claws aren't magic magic what about the rest? not that I want him to get pissed enough to breathe fire or whatever it was that got him scaley but at least he can't isn't relying on his wits if they encounter one of these fields in the field. Is Carter being Carter or is he stealing Wilde's bit of uniting the party in hating him? The irony that explaining why Zolf should lead is the single best proof of Hamid's leadership skills is not lost on me. A real leader puts the good of the party above any desire for rank or power. Not sure if this counts as yelling should probably mark as starting around 10 minutes in for CWs Ok I don't know why I am so pleased and surprised at these two having a direct conversation with past incidents as examples using clear language like "I was frustrated". I mean lord knows RQ told toxic masculinity to take a hike from the beginning so maybe its less surprised than grateful? They don't agree but are clearly more concerned with understanding than with getting in a pissing match. "...And I guess I'm not that old because I haven't lived through the 18 months" Look at him learning to stand up for himself while still being fair to the other side. How can you not love him? Bless, Hamid is worse than me, if I talk quick enough the right words will come out before I run out of courage right? I love the peanut gallery. Oh dear that's a tone on Zolf I'm just going to take a moment before the "but" comes in. Zolf is being fair, I don't necessarily agree but he is being fair and sounds willing to listen. Hamid spells it out in small words that the flippant tone is a coping mechanism. Zolf wants to move on to the rest of the party now that they've said what needed to be said to each other. Yeah sorry Carter, Hamid isn't going to let Zolf get lost in his own head any more than Zolf is going to let Hamid run into danger because if he thought about it he would freeze. And Carter & Barnes are sharing all the tricks. Guess Hamid isn't the only one who does better if he can merc a guy after a hard week They are having a direct conversation about Zolf leaving! Backstory! So it was a debt more than a calling Brynterval, useful I needed a laugh break too and this way I don't miss anything. They are keeping up the relationship metaphor for all its worth. It really does work though. Azu quit alluding and tell us your story. Aw he's retelling Dover! I ever tell you guys that's when I quit waiting for him and or RQ to show their bad side? The damn romance novels were like a promise they weren't going to betray the story over "what's expected". I was so braced for that stupid "pick a fight with the toughest guy in the yard to buy respect" trope. Instead they just said nope: no prison fights, no guard abuse, his biggest danger is developing an addiction to cheap books. Thank you Cel! Hamid mentioning Mr Ceiling reminds Zolf that still have Shoin's brorb. Oh the diary Zolf has more Campbells Cel is awesome and asks if Hamid want to talk about Aziza Liliana finally got mentioned, realizing Hamid was the last to know about that part of the story hurt So they might be a hivemind Carter has a nice little cache in the cell of daggers & a wide variety of booze. 
Carter suggests they should get so drunk their minds aren't worth hiving. Oh Cel & the Elvish mead Carter is drinking the Orcish whiskey to make a point Yeah, yeah, how the heck Hamid came through not only his life preseries but this adventure with a healthy relationship with alcohol is beyond me. Not a patch on his experience and I'm teetotal. That's arguably why it's important to have though (plus if they went anywhere with an alcoholism plot I probably won't be able to listen) Barnes has the rum and is Jeff Goldblum posing. I love one(1) low charisma man. Zolf is trying to draw Azu & Cel into conversation. Azu backstory time! A vision of Aphrodite in her dream led her to follow a wandering paladin. Little leader tucks Carter in Azu asks for Cel backstory Cel is around 97, and they were raised in America. Their Dad was a merchant sailor. The trade cut off and defenses failed destroying the town. Its time to give Hamid crap about capitalism. Zolf tells Barnes to leave off. Zolf joined the Navy because he "killed his brother"!?! Wanna try that again with details? Oh Zolf honey it was an accident. Hamid hugs Zolf while he talks. Zolf says all that then knocks stone out of ceiling in a demonstration of his powers. Hamid remembers Zolf wasn't in Cairo and starts describing the prank gone wrong and I'm just gonna take another sec because if Zolf doesn't take this the right way its gonna hurt. Did Hamid just quote Grizzop at Zolf? Cel just declared they are staying with the party Just barely through the first quarantine episode. We might hit the same ratio as the dungeon after all. Gosh I love that theme tune
4 notes
·
View notes
Text
Hosting
Hosting
WAF.TO - We protect you, whether you are business or private!
Our services are protected from all attack scenarios by our self-developed DDoS protection. Regardless of which layer the attack happens on, we ensure the smooth operation of your servers. Computing with 60% less latency. CDN with 80% cache hit ratio. Serverless with zero warmup time. Dedicated private network between edge locations. Yeah, we are secure and fast!
1 note
·
View note
Text
Acer Aspire 5 15.6" FHD IPS Slim Laptop, Intel Core i3 (2023)

How to switch out of S mode to Windows 11:
1. On your PC running Windows 11 in S mode, open Settings > Update & Security > Activation.
2. In the Switch to Windows 11 Home or Switch to Windows 11 Pro section, select Go to the Store. (If you also see an "Upgrade your edition of Windows" section, be careful not to click the "Go to the Store" link that appears there.)
3. On the Switch out of S mode (or similar) page that appears in the Microsoft Store, select the Get button. After you see a confirmation message on the page, you'll be able to install apps from outside of the Microsoft Store.
Specifications
Processor: Intel Core i3-1115G4 (Base 3 GHz, up to 4.1 GHz with Intel Turbo Boost Technology, 6 MB L3 cache, 2 cores)
Operating System: Windows 11 Home in S mode
Display: 15.6" Full HD (1920 x 1080) Widescreen LED-backlit IPS ComfyView Anti-glare
Memory: Upgraded to 20GB DDR4 Memory
Storage: Upgraded to 1TB PCI NVMe SSD
Graphics: Intel UHD Graphics
Keyboard: Non-Backlight Keyboard
Front Camera: 720p HD Webcam
Audio: Built-in 2 digital Microphones, Acer TrueHarmony, Compatible with Alexa
Communication: Intel Wireless Wi-Fi 6 AX201 802.11ax
Interfaces/Ports: 1x USB-C 3.1 Gen 1 (up to 5 Gbps) · 2x USB-A 3.1 Gen 1 (one with Power-off Charging) · 1x USB-A 2.0 · 1x HDMI Port with HDCP Support · 1x Ethernet port
Memory Card Reader: No
Optical Drive Type: No
Battery: 3-cell, Up to 8.5 hours Battery life, 45 W Power Supply
Dimensions: 14.31" x 9.39" x 0.70”
Weight (Approximate): 3.64 lb
Color: Pure Silver
Bundle with Ghost Manta 3 in 1 Accessories.
Acer 2023 Newest Aspire 5 15.6" FHD IPS Slim Laptop, Intel Core i3-1115G4(Up to 4.1GHz), 20GB RAM, 1TB NVMe SSD, WiFi 6, USB, RJ-45, Webcam, Amazon Alexa, Windows 11 S, w/GM Accessories
- 3 GHz base frequency, up to 4.1 GHz with Intel Turbo Boost Technology. The 11th gen dual-core laptop brings the perfect combination of features to make you unstoppable. This is an ideal home office laptop to get things done fast with high performance, instant responsiveness & best-in-class connectivity.
- Original Seal is opened for upgrade ONLY. Substantial high-bandwidth RAM to smoothly run multiple applications & browser tabs all at once. Boot up in seconds, get to files faster, & have room for loads of files with the lightning-quick solid state drive.
- Experience sharp details & crisp colors on the 15.6'' Full HD display with 82.58% screen-to-body, 16:9 aspect ratio & narrow bezels. Energy-efficient LED backlight. HD 720p front facing camera.
- USB 3.2 Type-C, 2 USB 3.2 Gen 1 (1 Power-off Charging), USB 2.0, HDMI 2.0 with HDCP support, headphone, speaker, line-out jack, Ethernet (RJ-45), DC-in for AC adapter.
- You may switch Win 11 S mode to the regular Win 11: Update the Microsoft App Store to the latest version. Press the ''Start button'' at the bottom left of the screen; select the ''Settings'' icon above the ''power'' icon; select Update & Security & Activation, then Go to Store; select the ''Get'' option under ''Switch out of S mode''; hit Install.
- System ram type: ddr4_sdram
Read the full article
0 notes
Text
Isima’s Z3 VM Success: Get 10X Throughput, Half the Cost

Isima’s experiment with Z3 virtual machines for e-commerce yielded twice the price-performance and ten times the throughput.
Workloads requiring a lot of storage, such as log analytics and horizontal, scale-out databases, require a high SSD density and reliable performance. They also require regular maintenance so that data is safeguarded in the case of an interruption. Google Cloud Next ’24 marked the official launch of Google Cloud’s first storage-optimized virtual machine family, the Z3 virtual machine series. Z3 delivers extraordinarily dense storage configurations of up to 409 SSD (GiB):vCPU on next-generation local SSD hardware, with an industry-leading 6M 100% random-read and 6M write IOPs.
One of the first companies to test it was the Silicon Valley company and e-commerce analytics cloud, Isima. Their bi(OS) platform offers serverless infrastructure for AI applications and real-time retail and e-commerce data. In order to onboard, process, and operate data for real-time data integration, feature stores, data science, cataloguing, observability, DataOps, and business intelligence, it has a scale-out SQL-friendly database and zero-code capabilities.
In this blog post, Google Cloud compares Z3 to its general-purpose N2 VMs and summarises Isima's experiments and findings. Warning, spoilers ahead: Google Cloud promises 2X better price-performance, 10X higher throughput, and much more.
The examination
Isima tested Z3 on a range of taxing, real-world ecommerce workloads, including microservice calls, ad hoc analytics, visualisation queries, and more, all firing simultaneously. As a Google Cloud partner, Isima was granted early access to Z3.
(Image credit: Google Cloud)
In order to optimise Z3 and emulate real-world high-availability implementations, Isima separated each of the three z3-highmem-88 instances into five Docker containers, each of which was deployed across several zones to run bi(OS). There were two 3TB SSDs, 128GB RAM, and 16 vCPUs allotted to each Docker container. With this configuration, Isima was able to compare Z3 more effectively with earlier tests that they ran with n2-highmem-16 instances.
Isima evaluated the following to simulate extreme stress and several worst-case scenarios:
Demand spikes
They demanded (and attained) 99.999% reliability despite hitting the system with a brief peak load that saturated system resources to 70%. They then relentlessly maintained 75% of that peak for the whole 72-hour period.
Select queries
To prevent inadvertent caching effects by the operating system or bi(OS), they tested select queries. To make sure they were reading data from the Local SSD and not from RAM (something vital for Z3), Isima deliberately travelled back in time while reading data with select queries, for example by querying data that had been entered 30 minutes earlier. Because of this, they were confident that the performance-testing results would hold up under everyday workloads.
Various deployment scenarios
Z3’s capacity to manage a range of real-world deployments was validated by testing of both single-tenant and multi-tenant setups.
Simulated maintenance events
Z3’s adaptability to disruptions is demonstrated by Isima, which even factored in scheduled maintenance utilising Docker restarts.
The decision
Throughput: With 2X higher price-performance, bi(OS) on Z3 handled ~2X+ more throughput than the tests conducted using n2-highmem-16 last year.
NVMe disc latencies: Read latencies remained constant, but write latencies improved by almost six times.
Drive variation: Every drive on every z3-highmem-88 virtual machine recorded a variance in read and write latencies of +/- 0.02 milliseconds over a span of 72 hours.
These findings for the new Z3 instances excite Google Cloud, and they will undoubtedly unleash the potential of many more workloads.
Improved encounter with maintenance
Numerous new infrastructure lifecycle technologies that offer more precise and stringent control over maintenance are included with Z3 virtual machines. The mechanism notifies Z3 VMs several days ahead of a scheduled maintenance event. The maintenance event can then be planned at a time of your choosing or it can automatically occur at the scheduled time. This enables us to provide better secure and performant infrastructure while also enabling you to more accurately plan ahead of a disruptive occurrence. Additionally, you’ll get in-place upgrades that use scheduled maintenance events to protect your data.
Driven by Titanium
Z3 virtual machines are constructed using Titanium, Google’s proprietary silicon, security microcontrollers, and tiered scale-out offloads. Better performance, lifecycle management, dependability, and security for your workloads are the ultimate results. With Titanium, Z3 can offer up to 200 Gbps of fully secured networking, three times faster packet processing than previous generation virtual machines (VMs), near-bare-metal speed, integrated maintenance updates for most workloads, and sophisticated controls for applications that are more sensitive.
“Going forward, Google Cloud is pleased to work with us on the development of Google Cloud’s first storage-optimized virtual machine family, building on Google cloud prosperous collaboration since 2016.” Through this partnership, Intel’s 4th generation Intel Xeon CPU and Google’s unique Intel IPU are made available, opening up new performance and efficiency possibilities. – Suzi Jewett, Intel Corporation’s General Manager of Intel Xeon Products
Hyperdisk capacity
Google Cloud offers their next-generation block storage, called Hyperdisk. Because Hyperdisk is based on Titanium, it offers much improved performance, flexibility, and efficiency because it separates the virtual machine host’s storage processing from it. With Hyperdisk, you can effectively meet the storage I/O requirements of data-intensive workloads like databases and data analytics by dynamically scaling storage performance and capacity separately. Choosing pricey, huge compute instances is no longer necessary to obtain better storage performance.
Read more on Govindhtech.com
#govindhtech#news#technologynews#technology#technologytrends#technews#technologysolutions#GoogleCloud#z3vms#ISIMA
0 notes
Text
openGauss|High Performance-Kunpeng NUMA Architecture Optimization
Kunpeng NUMA Architecture Optimization
Availability
This feature is available since openGauss 1.0.0.
Introduction
Kunpeng NUMA architecture optimization mainly focuses on Kunpeng processor architecture features and ARMv8 instruction set, and optimizes the system from multiple layers, including OS, software architecture, lock concurrency, logs, atomic operations, and cache access. This greatly improves the openGauss performance on the Kunpeng platform.
Benefits
Transactions per minute (TPM) is a key performance indicator of the database competitiveness. Under the same hardware costs, a higher database performance means the database can process more services, thereby reducing the usage cost of customers.
Description
openGauss optimizes the Kunpeng NUMA architecture based on the architecture characteristics. This reduces cross-core memory access latency and maximizes multi-core Kunpeng computing capabilities. The key technologies include redo log batch insertion, NUMA distribution of hotspot data, and CLog partitions, greatly improving the TP system performance.
Based on the ARMv8.1 architecture used by the Kunpeng chip, openGauss uses the LSE instruction set to implement efficient atomic operations, effectively improving the CPU usage, multi-thread synchronization performance, and Xlog write performance.
Based on the wider L3 cache line provided by the Kunpeng chip, openGauss optimizes hotspot data access, effectively improving the cache access hit ratio, reducing the cache consistency maintenance overhead, and greatly improving the overall data access performance of the system.
Kunpeng 920, 2P server (64 cores x 2, memory: 768 GB), 10 GE network, I/O: 4 NVMe PCIe SSDs, TPC-C: 1000 warehouses, performance: 1,500,000 tpmC.
Enhancements
Batch redo log insertion and CLog partition are supported, improving the database performance on the Kunpeng platform.
Efficient atomic operations using the LSE instruction set are supported, improving multi-thread synchronization performance.
Constraints
None
Dependencies
None
0 notes
Text
3d cellular automaton

An alternative class of models which take into account the distribution of toughness is cellular automata finite element models (CAFE). This is because there is no information on the microstructure such as grain size and morphology, texture, and other important features considered in them. While the probabilistic approaches provide a sound scientific basis for capturing the scatter in the fracture data through assuming a probability for the presence of fracture initiators, their microstructurally agnostic assumptions can limit their predictive capability. More suitable probabilistic methods have been devised to describe the scatter associated with fracture.
Therefore, deterministic approaches do not give full picture of scatter in fracture behaviour. This is evidenced by scatter in the toughness of seemingly identical specimens. The relative error of grain size before and after heat preservation is in the range of 0.1–0.6 μm, which indicates that the 3D cellular automata can accurately simulate the heat preservation process of AZ31 magnesium alloy.įracture is an inherently statistical phenomenon as it is a function of micro-structural heterogeneities such as distributed defects and inclusions. The angle between the two-dimensional slices of three-dimensional grains is approximately 120°, which is consistent with that of the traditional two-dimensional cellular automata. The grain of AZ31 magnesium alloy increases in size with the increase of temperature, and the number of grains decreases with the increase in time. Grains of different sizes are distributed normally at different times, most of which are grains with the ratio of grain diameter to average grain diameter R/Rm ≈ 1.0, which meets the minimum energy criterion of grain evolution. The results show that the normal growth of three-dimensional grains satisfies the Aboav-weaire equation, the average number of grain planes is between 12 and 14 at 420☌ and 2000 CAS, and the maximum number of grain planes is more than 40. Also, the effect of temperature on the three-dimensional grain growth process of AZ31 magnesium alloy is analyzed. However, further coarray optimi-sation is needed to narrow the performance gap between coarrays and MPI.īased on the thermodynamic conversion mechanism and energy transition principle, a three-dimensional cellular automata model of grain growth is established from the aspects of grain orientation, grain size distribution, grain growth kinetics, and grain topology. Overall, the results look promising for coarray use beyond 100k cores. The sampling and tracing analysis shows good load balancing in compute in all miniapps, but imbalance in communication, indicating that the difference in performance between MPI and coarrays is likely due to parallel libraries (MPICH2 vs libpgas) and the Cray hardware specific libraries (uGNI vs DMAPP). This is further evi-denced by the fact that very aggressive cache and inter-procedural optimisations lead to no performance gain. This is likely because the CA algorithm is network bound at scale. Adding OpenMP to MPI or to coarrays resulted in worse L2 cache hit ratio, and lower performance in all cases, even though the NUMA effects were ruled out. MPI halo exchange (HX) scaled better than coarray HX, which is surprising because both algorithms use pair-wise communications: MPI IRECV/ISEND/WAITALL vs Fortran sync images. Ping-pong latency and bandwidth results are very similar with MPI and with coarrays for message sizes from 1B to several MB. The work was done on ARCHER (Cray XC30) up to the full machine capacity: 109,056 cores. Ising energy and magnetisation were calculated with MPI_ALLREDUCE and Fortran 2018 co_sum collectives. Scaling of coarrays is compared in this work to MPI, using cellular automata (CA) 3D Ising magnetisation miniapps, built with the CASUP CA library,, developed by the authors. Fortran coarrays are an attractive alternative to MPI due to a familiar Fortran syntax, single sided communications and implementation in the compiler.
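For readers who have not met these models before, here is a deliberately small, single-node sketch of a 3D Ising-style cellular-automaton update in Python with NumPy. It only illustrates the kind of local-neighbourhood sweep that such miniapps scale out across many cores; the lattice size, temperature, and sweep count are arbitrary choices, and the codes discussed above are Fortran coarray/MPI programs, not this.

```python
# Toy 3D Ising model: checkerboard Metropolis sweeps on a periodic lattice.
import numpy as np

rng = np.random.default_rng(0)
N, BETA, SWEEPS = 16, 0.25, 50                    # lattice size, inverse temperature, sweeps
spins = rng.choice([-1, 1], size=(N, N, N))
parity = np.indices((N, N, N)).sum(axis=0) % 2    # red/black sites, updated separately

def neighbour_sum(s):
    """Sum of the six nearest neighbours with periodic boundaries."""
    return sum(np.roll(s, shift, axis) for axis in range(3) for shift in (1, -1))

for _ in range(SWEEPS):
    for colour in (0, 1):
        dE = 2 * spins * neighbour_sum(spins)     # energy cost of flipping each spin
        accept = rng.random(spins.shape) < np.exp(-BETA * dE)
        spins = np.where(accept & (parity == colour), -spins, spins)

print("energy per site:", float(-(spins * neighbour_sum(spins)).sum()) / (2 * N**3))
print("magnetisation per site:", float(spins.mean()))
```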

0 notes