#Mainframe Application
shiprasharma2927 · 2 years ago
Text
Mainframe Performance Optimization Techniques
Tumblr media
Mainframe performance optimization is crucial for organizations relying on these powerful computing systems to ensure efficient and cost-effective operations. Here are some key techniques and best practices for optimizing mainframe performance:
1. Capacity Planning: Understand your workload and resource requirements. Accurately estimate future needs to allocate resources efficiently. This involves monitoring trends, historical data analysis, and growth projections.
2. Workload Management: Prioritize and allocate resources based on business needs. Ensure that critical workloads get the necessary resources while lower-priority tasks are appropriately throttled.
3. Batch Window Optimization: Efficiently schedule batch jobs to maximize system utilization. Minimize overlap and contention for resources during batch processing windows.
4. Storage Optimization: Regularly review and manage storage capacity. Employ data compression, data archiving, and data purging strategies to free up storage resources.
5. Indexing and Data Access: Optimize database performance by creating and maintaining efficient indexes. Tune SQL queries to minimize resource consumption and improve response times (a short SQL sketch follows at the end of this post).
6. CICS and IMS Tuning: Tune your transaction processing environments like CICS (Customer Information Control System) and IMS (Information Management System) to minimize response times and resource utilization.
7. I/O Optimization: Reduce I/O bottlenecks by optimizing the placement of data sets and using techniques like buffering and caching.
8. Memory Management: Efficiently manage mainframe memory to minimize paging and maximize available RAM for critical tasks. Monitor memory usage and adjust configurations as needed.
9. CPU Optimization: Monitor CPU usage and identify resource-intensive tasks. Optimize code, reduce unnecessary CPU cycles, and consider parallel processing for CPU-bound tasks.
10. Subsystem Tuning: A mainframe environment runs various subsystems, such as DB2, CICS, and MQ, on top of z/OS. Each subsystem should be tuned for optimal performance based on its specific workload requirements.
11. Parallel Processing: Leverage parallel processing capabilities to distribute workloads across multiple processors or regions to improve processing speed and reduce contention.
12. Batch Processing Optimization: Optimize batch job execution by minimizing I/O, improving sorting algorithms, and parallelizing batch processing tasks.
13. Compression Techniques: Use compression algorithms to reduce the size of data stored on disk, which can lead to significant storage and I/O savings.
14. Monitoring and Performance Analysis Tools: Employ specialized tools and monitoring software to continuously assess system performance, detect bottlenecks, and troubleshoot issues in real-time.
15. Tuning Documentation: Maintain comprehensive documentation of configuration settings, tuning parameters, and performance benchmarks. This documentation helps in identifying and resolving performance issues effectively.
16. Regular Maintenance: Keep the mainframe software and hardware up-to-date with the latest patches and updates provided by the vendor. Regular maintenance can resolve known performance issues.
17. Training and Skill Development: Invest in training for your mainframe staff to ensure they have the skills and knowledge to effectively manage and optimize the system.
18. Cost Management: Consider the cost implications of performance tuning. Sometimes, adding more resources may be more cost-effective than extensive tuning efforts.
19. Capacity Testing: Conduct load and stress testing to evaluate how the mainframe handles peak workloads. Identify potential bottlenecks and make necessary adjustments.
20. Security Considerations: Ensure that performance optimizations do not compromise mainframe security. Balance performance improvements with security requirements.
Mainframe performance optimization is an ongoing process that requires constant monitoring and adjustment to meet evolving business needs. By implementing these techniques and best practices, organizations can maximize the value of their mainframe investments and ensure smooth and efficient operations.
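As a concrete companion to point 5, here is a minimal, hypothetical sketch of the kind of index and predicate tuning a DBA might apply. The table, column, and index names are invented for illustration, and on DB2 for z/OS any such change would be validated with EXPLAIN rather than assumed to help.

```python
# Hypothetical sketch for point 5: rewrite a predicate so it can use an index.
# Table, column, and index names are invented; on DB2 for z/OS you would run
# EXPLAIN before and after to confirm the access path actually improved.

slow_query = """
SELECT ORDER_ID, ORDER_TOTAL
FROM   ORDERS
WHERE  YEAR(ORDER_DATE) = 2023      -- function on the column blocks index use
"""

supporting_index = """
CREATE INDEX ORDERS_IX_DATE ON ORDERS (ORDER_DATE)
"""

tuned_query = """
SELECT ORDER_ID, ORDER_TOTAL
FROM   ORDERS
WHERE  ORDER_DATE BETWEEN '2023-01-01' AND '2023-12-31'   -- indexable range predicate
"""

if __name__ == "__main__":
    # Printed only; executing these would need a live DB2 connection (e.g. ibm_db).
    for label, sql in (("before", slow_query), ("index", supporting_index), ("after", tuned_query)):
        print(f"--- {label} ---{sql}")
```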
0 notes
simonh · 1 month ago
Video
Converting to IBM System/360, 1964
flickr
Converting to IBM System/360, 1964 by Colorcubic™ Via Flickr: colorcubic.com/2010/06/04/ibm-system360/
0 notes
enterprisemobility · 2 years ago
Text
Digital Transformation Services
Your Switch to a Digital Future – Digital Transformation Consulting Services
Being a leading name amongst Digital Transformation Company and Service providers, Enterprise Mobility has been handholding enterprises on their Digital Transformation journeys for two decades now
1 note · View note
govindhtech · 2 years ago
Text
The Benefits of Mainframe Application Modernization
Tumblr media
The rapid development of innovative technologies, combined with ever-rising consumer expectations and continuing disruptive market dynamics, is compelling businesses to place a greater emphasis than ever on digital transformation. In a recent survey conducted by the IBM Institute for Business Value in cooperation with Oxford Economics, 67% of executive respondents said their organizations need to transform quickly to keep up with the competition, while 57% reported that current market disruptions are placing unprecedented pressure on their IT.
Because digital transformation puts enormous demands on current applications and data, an enterprise’s heterogeneous technological environment, which may include cloud and mainframe computing, has to be modernized and integrated. It should come as no surprise that chief executive officers have listed the modernization of their companies’ technologies as one of their highest priorities. CEOs are looking to reinvent their goods, services, and operations in order to increase their organizations’ efficiencies, agility, and speed to market.
To run and build services consistently across their hybrid cloud environments, businesses want platforms that are flexible, secure, open, and tailored to their specific needs. Because mission-critical applications continue to rely on mainframe capabilities, the mainframe will remain an important component of this process. A hybrid "best-fit" approach, which typically spans both mainframes and the cloud, supports the modernization, integration, and deployment of applications. This improves business agility and addresses common client pain points: closing the talent gap, shortening time to market, improving access to mission-critical data across platforms, and optimizing costs.
According to a recent study by the IBM Institute for Business Value, more than seven in ten IT executives say that mainframe-based applications are an integral part of their business and technology strategy. A majority of respondents (68%) also consider mainframes an essential component of their hybrid cloud approach.
Modernization can be a difficult process, however, and businesses often face a variety of obstacles. A poll of CEOs found that about 70 percent believe their organizations' mainframe-based applications are outdated and need updating. The same survey found that businesses are twelve times more likely to leverage existing mainframe assets over the next two years than to rebuild their application estates from scratch, which can be prohibitively expensive, risky, or time-consuming. Among executives at companies currently modernizing their mainframe applications, the most commonly cited obstacle is a shortage of the necessary resources and expertise. When asked two years earlier, executives pointed to the high cost of mainframes as a key obstacle; that is no longer seen as the case, and executives now look to mainframes for other sources of value, such as resilience, optimization, and regulatory compliance.
Given that application modernization is essential for businesses pursuing a "best-fit" transformation spanning mainframe, cloud, and even generative AI, IT executives looking to revitalize their mainframe modernization should take a few crucial actions now:
Take an iterative approach
Consider your industry's characteristics and your workloads when planning how to integrate new and existing environments. Collaborate with your business counterparts to co-create a business case and a "best-fit" roadmap aligned with your strategic objectives and tailored to your needs. Rather than a big-bang effort that rips everything out and starts over, take a gradual, continuous approach to modernization.
Perform an analysis of your portfolio, then construct your strategy
Assess the capabilities that define the mainframe's role in your organization today, and how those capabilities connect to the broader hybrid cloud ecosystem. Also make cross-skilling employees within the business a priority, and rely on partners to fill any gaps in talent or resources, whether new or existing.
Leverage multiple entry points for application modernization
Use application programming interfaces (APIs) to provide simple access to existing mainframe programs and data. Give software developers a consistent experience by combining open-source technologies with a streamlined workflow that emphasizes agility. Build cloud-native applications on the mainframe, and containerize existing applications.
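As a rough illustration of the API entry point described above, the sketch below shows a distributed application calling a REST endpoint that fronts an existing mainframe transaction (for instance, one exposed through a gateway such as z/OS Connect). The host name, path, authentication scheme, and response fields are all hypothetical.

```python
# Hypothetical sketch: a distributed app calling a REST API that fronts an
# existing mainframe transaction. Endpoint, token, and field names are invented;
# a real deployment would expose the program through a gateway such as z/OS Connect.
import json
import urllib.request

API_URL = "https://mainframe-gateway.example.com/accounts/12345/balance"

def get_account_balance(url: str, token: str) -> dict:
    request = urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",   # assumes token-based auth at the gateway
            "Accept": "application/json",
        },
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        # The gateway maps the CICS/IMS program's output (e.g. a COBOL copybook)
        # into JSON, so the caller never needs to know a mainframe is behind it.
        return json.load(response)

if __name__ == "__main__":
    print(get_account_balance(API_URL, token="dummy-token"))
```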
Based on a 2021 survey update by the IBM Institute for Business Value (IBV), which conducted a double-blind poll of 200 IT executives in North America in April 2023, “Application modernization on the mainframe – Expanding the value of hybrid cloud transformation.”
0 notes
vax-official · 9 months ago
Text
You might have heard of 32-bit and 64-bit applications before, and if you work with older software, maybe 16-bit and even 8-bit computers. But what came before 8-bit? Was it preceded by 4-bit computing? Were there 2-bit computers? 1-bit? Half-bit?
Well outside that one AVGN meme, half-bit isn't really a thing, but the answer is a bit weirder in other ways! The current most prominent CPU designs come from Intel and AMD, and Intel did produce 4-bit, 8-bit, 16-bit, 32-bit and 64-bit microprocessors (although 4-bit computers weren't really a thing). But what came before 4-bit microprocessors?
Mainframes and minicomputers did. These were large computers intended for organizations instead of personal use. Before microprocessors, they used transistorized integrated circuits (or in the early days even vacuum tubes) and required a much larger space to store the CPU.
And what bit length did these older computers have?
A large variety of bit lengths.
There were 16-bit, 32-bit and 64-bit mainframes/minicomputers, but you also had 36-bit computers (PDP-10), 12-bit (PDP-8), 18-bit (PDP-7), 24-bit (ICT 1900), 48-bit (Burroughs) and 60-bit (CDC 6000) computers among others. There were also computers that didn't use binary encoding to store numbers, such as decimal computers or the very rare ternary computers (Setun).
And you didn't always evolve by extending the bit length; you could upgrade from an 18-bit computer to a more powerful 16-bit computer, which is what the developers of early UNIX did when they switched over from the PDP-7 to the PDP-11, or offer 32-bit over 36-bit, which happened when IBM phased out the IBM 7090 in favor of the System/360 or DEC phased out the PDP-10 in favor of the VAX.
154 notes · View notes
mariacallous · 3 months ago
Text
Elon Musk’s so-called Department of Government Efficiency (DOGE) has plans to stage a “hackathon” next week in Washington, DC. The goal is to create a single “mega API”—a bridge that lets software systems talk to one another—for accessing IRS data, sources tell WIRED. The agency is expected to partner with a third-party vendor to manage certain aspects of the data project. Palantir, a software company cofounded by billionaire and Musk associate Peter Thiel, has been brought up consistently by DOGE representatives as a possible candidate, sources tell WIRED.
Two top DOGE operatives at the IRS, Sam Corcos and Gavin Kliger, are helping to orchestrate the hackathon, sources tell WIRED. Corcos is a health-tech CEO with ties to Musk’s SpaceX. Kliger attended UC Berkeley until 2020 and worked at the AI company Databricks before joining DOGE as a special adviser to the director at the Office of Personnel Management (OPM). Corcos is also a special adviser to Treasury Secretary Scott Bessent.
Since joining Musk’s DOGE, Corcos has told IRS workers that he wants to pause all engineering work and cancel current attempts to modernize the agency’s systems, according to sources with direct knowledge who spoke with WIRED. He has also spoken about some aspects of these cuts publicly: "We've so far stopped work and cut about $1.5 billion from the modernization budget. Mostly projects that were going to continue to put us down the death spiral of complexity in our code base," Corcos told Laura Ingraham on Fox News in March.
Corcos has discussed plans for DOGE to build “one new API to rule them all,” making IRS data more easily accessible for cloud platforms, sources say. APIs, or application programming interfaces, enable different applications to exchange data, and could be used to move IRS data into the cloud. The cloud platform could become the “read center of all IRS systems,” a source with direct knowledge tells WIRED, meaning anyone with access could view and possibly manipulate all IRS data in one place.
Over the last few weeks, DOGE has requested the names of the IRS’s best engineers from agency staffers. Next week, DOGE and IRS leadership are expected to host dozens of engineers in DC so they can begin “ripping up the old systems” and building the API, an IRS engineering source tells WIRED. The goal is to have this task completed within 30 days. Sources say there have been multiple discussions about involving third-party cloud and software providers like Palantir in the implementation.
Corcos and DOGE indicated to IRS employees that they intended to first apply the API to the agency’s mainframes and then move on to every other internal system. Initiating a plan like this would likely touch all data within the IRS, including taxpayer names, addresses, social security numbers, as well as tax return and employment data. Currently, the IRS runs on dozens of disparate systems housed in on-premises data centers and in the cloud that are purposefully compartmentalized. Accessing these systems requires special permissions and workers are typically only granted access on a need-to-know basis.
A “mega API” could potentially allow someone with access to export all IRS data to the systems of their choosing, including private entities. If that person also had access to other interoperable datasets at separate government agencies, they could compare them against IRS data for their own purposes.
“Schematizing this data and understanding it would take years,” an IRS source tells WIRED. “Just even thinking through the data would take a long time, because these people have no experience, not only in government, but in the IRS or with taxes or anything else.” (“There is a lot of stuff that I don't know that I am learning now,” Corcos tells Ingraham in the Fox interview. “I know a lot about software systems, that's why I was brought in.")
These systems have all gone through a tedious approval process to ensure the security of taxpayer data. Whatever may replace them would likely still need to be properly vetted, sources tell WIRED.
"It's basically an open door controlled by Musk for all American's most sensitive information with none of the rules that normally secure that data," an IRS worker alleges to WIRED.
The data consolidation effort aligns with President Donald Trump’s executive order from March 20, which directed agencies to eliminate information silos. While the order was purportedly aimed at fighting fraud and waste, it also could threaten privacy by consolidating personal data housed on different systems into a central repository, WIRED previously reported.
In a statement provided to WIRED on Saturday, a Treasury spokesperson said the department “is pleased to have gathered a team of long-time IRS engineers who have been identified as the most talented technical personnel. Through this coalition, they will streamline IRS systems to create the most efficient service for the American taxpayer. This week the team will be participating in the IRS Roadmapping Kickoff, a seminar of various strategy sessions, as they work diligently to create efficient systems. This new leadership and direction will maximize their capabilities and serve as the tech-enabled force multiplier that the IRS has needed for decades.”
Palantir, Sam Corcos, and Gavin Kliger did not immediately respond to requests for comment.
In February, a memo was drafted to provide Kliger with access to personal taxpayer data at the IRS, The Washington Post reported. Kliger was ultimately provided read-only access to anonymized tax data, similar to what academics use for research. Weeks later, Corcos arrived, demanding detailed taxpayer and vendor information as a means of combating fraud, according to the Post.
“The IRS has some pretty legacy infrastructure. It's actually very similar to what banks have been using. It's old mainframes running COBOL and Assembly and the challenge has been, how do we migrate that to a modern system?” Corcos told Ingraham in the same Fox News interview. Corcos said he plans to continue his work at IRS for a total of six months.
DOGE has already slashed and burned modernization projects at other agencies, replacing them with smaller teams and tighter timelines. At the Social Security Administration, DOGE representatives are planning to move all of the agency’s data off of legacy programming languages like COBOL and into something like Java, WIRED reported last week.
Last Friday, DOGE suddenly placed around 50 IRS technologists on administrative leave. On Thursday, even more technologists were cut, including the director of cybersecurity architecture and implementation, deputy chief information security officer, and acting director of security risk management. IRS’s chief technology officer, Kaschit Pandya, is one of the few technology officials left at the agency, sources say.
DOGE originally expected the API project to take a year, multiple IRS sources say, but that timeline has shortened dramatically down to a few weeks. “That is not only not technically possible, that's also not a reasonable idea, that will cripple the IRS,” an IRS employee source tells WIRED. “It will also potentially endanger filing season next year, because obviously all these other systems they’re pulling people away from are important.”
(Corcos also made it clear to IRS employees that he wanted to kill the agency’s Direct File program, the IRS’s recently released free tax-filing service.)
DOGE’s focus on obtaining and moving sensitive IRS data to a central viewing platform has spooked privacy and civil liberties experts.
“It’s hard to imagine more sensitive data than the financial information the IRS holds,” Evan Greer, director of Fight for the Future, a digital civil rights organization, tells WIRED.
Palantir received the highest FedRAMP approval this past December for its entire product suite, including Palantir Federal Cloud Service (PFCS) which provides a cloud environment for federal agencies to implement the company’s software platforms, like Gotham and Foundry. FedRAMP stands for Federal Risk and Authorization Management Program and assesses cloud products for security risks before governmental use.
“We love disruption and whatever is good for America will be good for Americans and very good for Palantir,” Palantir CEO Alex Karp said in a February earnings call. “Disruption at the end of the day exposes things that aren't working. There will be ups and downs. This is a revolution, some people are going to get their heads cut off.”
15 notes · View notes
bah-circus · 3 months ago
Note
HAIIII could we request beatrix lebeau (slime rancher) and subspace (phighting!) ? both acrobats ! just beatrix if you don't wanna do multiple requests , we don't mind .3c !! /gen your packs r oh so cool and I love them dearly btw ,,
Thank you in advance !! - @toxin-filled-bahs
Of course dear audience! We have heard your request and have found a suitable performer for you! We hope this performance suits your needs, but you are free to make any adjustments you wish.
❣︎For Our Next Act, Please Welcome,,,❣︎
Beatrix Lebeau & Subspace & Bonus Medkit!!!
Tumblr media
°·⊱ Name: Beatrix LeBeau, Bee, Blossom, Reena, Felicity, April
°·⊱ Age: 22
°·⊱ Race/Species: Human
°·⊱ Source: Slime Rancher 1 & 2
°·⊱ Role: Caregiver, Soother
────── · · · · ──────
°·⊱ Sex: Male
°·⊱ Gender: transFem, SpringAdored, Bloomlexic, Vernal Sunsettic, Sungender, Icarusalis 
°·⊱ Pronouns: Shi/Hir; Ae/Aem; Fleu/Fleur; Sun/Suns
°·⊱ Sexuality: Sapphic 
°·⊱ Personality: Beatrix is a very ‘down-to-earth’ type of rancher. Likes to keep things well organized and well planned. Finds large cityscapes to be very overwhelming, and prefers laid back and scenic areas. 
────── · · · · ──────
°·⊱ Nicknames/Titles: The Lonely Rancher, [Prn] Who Explores, [Prn] Who Protects Slimes
°·⊱ Likes: Hard Work, Chores, Slime, Pet Care, Reading, Baking, Mochi Miles, Listening to Mochi explain Anything, Carrots, Plort Collectors, Her Twin, Silence
°·⊱ Dislikes: Jerks, Cityscapes, High Tech [Confusing], Veggie Chips, Tar Slimes [Wants to learn to reverse/cure them], Plort Market [Evil]
°·⊱ Emoji Sign-Off: 🐝🌷🐇💐🥣🌤️
°·⊱ Typing Quirk: Has a thick country / southern accent that shows through in how shi talks in almost all circumstances, very laid back typing and talking style. 
°·⊱ Faceclaim: 1 | 2
Tumblr media Tumblr media Tumblr media
°·⊱ Name: Subspace, Specimen, Creator, Nox, Abris, Leonis, Turing, Toxin, Mainframe, Ash, Hollow, Lethe
°·⊱ Age: nullage (it just doesn't care so it stopped counting)
°·⊱ Race/Species: Robloxian, Inphernal / Demon
°·⊱ Source: Phighting! (Roblox)
°·⊱ Role: Prosecutorflux, audeomate, BPD Holder (if applicable)
────── · · · · ──────
°·⊱ Sex: Null
°·⊱ Gender: RXgender, Deosueial, missingsourcegender, spitegender, honeyhimhypic, rotgender, boycorpse, fleshripped, facewoundic, canidevolin, seruadoric, acheantoxic, poisongender, !gender, mischeiviposic, gummybatgender, lovegoric, genderthreat, b?y
°·⊱ Pronouns: it/its; rot/rots; decay/decays; dea/death; vial/vials; tox/toxic; bloom/blooms; ny/nym; cy/cyr; go/gore; he/hem/hemo; wou/wound; h?/h?m; bio/bios; 🧪/🧪s; ☢️/☢️s; rad/radium; radi/radioactive; gli/glitch; pix/pixel; .exe/.exes; ..?/..?s; creepy/creeping; voi/void; cru/crux; rev/revive; gut/guts; kill/kills; end/ends; carc/carcass;  vile/viles; h+/h-m; vi/vital; no/non; that thing/that thing’s; quoi/quoir; sie/hir; ?/?s; !!/!!s; 01/01s; ⚗️/⚗️s; 💊/💊s
°·⊱ Sexuality: Abrosexual
°·⊱ Personality: Energetic, proud, boastful, obsessive, talks a lot, annoying (on purpose, but also not), 
────── · · · · ──────
°·⊱ Nicknames/Titles: Creator of Biografts, Blackrock’s Scientist, The Toxic One, (prn) who poisons, Spec, Noxxy, Abri, Leon, Tox
°·⊱ Likes: Poison, Medkit, energy drinks, candy, loud music, EDM, metal, masks, Biograft, fighting, video games, snakes, technology, inventing, poisonous/toxic flowers and plants, Blackrock, annoying people, science
°·⊱ Dislikes: Medkit, dogs, juice, losing, non-deadly flowers, minimalism, being tired, Medkit again, boring things, math (even though tox is good at it)
°·⊱ Emoji Sign-Off: 🧪☢️⚗️
°·⊱ Typing Quirk: //TYPES IN ALL CAPS LIKE THIS!!!!!!!!//
°·⊱ Extra: Has a love/hate relationship with Medkit
°·⊱ Faceclaim: 1 | 2
Tumblr media Tumblr media Tumblr media
°·⊱ Name: Medkit, Aster, Aegis, Hannibal, Cypher, Shard, Servo, Remedy, Nano
°·⊱ Age: 39
°·⊱ Race/Species: Robloxian, Inphernal / Demon
°·⊱ Source: Phighting! Roblox
°·⊱ Role: Observer, Archivist, Academic
────── · · · · ──────
°·⊱ Sex: Intersex
°·⊱ Gender: Galactic Transneu, Frilledcure, Bxy
°·⊱ Pronouns: he/him; gli/glitch; doc/docs; heal/healths; syr/syringe; pill/pills; RX/RXs; bru/bruise; rad/rads; bio/bios; haz/hazards; h?/h?m; vio/violent; cy/cyan; heal/heals; hie/hier; mal/ware/malwares; h-/h+m; via/vial, vi/virus; 👁️‍🗨️/👁️‍🗨️s
°·⊱ Sexuality: Achillean, Idiotsexual/hj
°·⊱ Personality: Stuck-up and Sarcastic. Medkit is asocial, preferring to stay to himself. Gli does not show his emotions well, and tends to snap at others fairly quickly. Though below it all heal does care, RX just doesn’t know how to show it. 
────── · · · · ──────
°·⊱ Nicknames/Titles: Blackrock Deserter, Meddy, Med, Remi, [Prn] Who Opposses Subspace, One of The Lost Temple, Member of the Church of the TRUE EYE, [Prn] Who Heals with Bullets
°·⊱ Likes: Subspace, Chess, Sword [Person], Black Coffee, Classical Music, Organization, Money, Engineering, Unseasoned Food, New Energy Sources
°·⊱ Dislikes: Subspace, Juice, Childish Things, Being a Doctor, Boombox, Biograft, Loud Music, Chaos, Being called Doctor, Crows
°·⊱ Emoji Sign-Off: 💉🩹💊🩵🧪☕
°·⊱ Typing Quirk: This Should Do Nicely For A Typing Quirk, Yes? Everything Capitalized. 
°·⊱ Faceclaim: 1 | 2
Tumblr media Tumblr media Tumblr media
An absolutely lovely request!! Thank you! It's extremely kind to hear you enjoy the packs that we provide here!! Bulb and I try our best to put as much care as possible into each one!! It's a little funny seeing you here for a Phighting! introject actually because we had plans to request one from you eventually!! Small world we live in <3 - Pest Swarm
Tumblr media Tumblr media
12 notes · View notes
seastarblue · 6 months ago
Text
Bold the Facts Tag!
YAY another one! Thanks for the tag @sunflowerrosy !
Last time was Kaiden’s, so I’m doing… Arbor today! where Kaiden is the protag of Interwoven, Arbor is the protag of AGGTRG! (A Golem’s Guide to Regaining Goodness)
Personal
Financial: wealthy / moderate / unsure / poor / in extreme poverty
Medical: fit / moderate / sickly / disabled / non-applicable (Golem = not able to be defined by humanoid standards)
Class: upper / middle / working / unsure / other
Education: qualified /unqualified / studying / other
Criminal record: yes, for major crimes / yes, for minor crimes / no / has committed crimes but not caught yet / yes, but charges were dismissed
Family
Children: has a child or children / has no children (Fiamma his feral dragon daughter 🥹)
Relationship with family: close with sibling(s) / not close with sibling(s) /has no siblings / sibling(s) is deceased / adding a has no family here
Affiliation: orphaned / abandoned / adopted / found family / disowned / raised by birth parent(s) / not applicable (He doesn’t remember :>)
Traits/Tendencies
Introverted / ambivalent / extroverted
Disorganized / organized / in between
Close-minded / open minded / in between
Calm / anxious / in between / contextual / energetic
Disagreeable / agreeable / in between
Cautions / reckless / in between / contextual
Patient / impatient / in between / contextual
Outspoken / reserved / in between / contextual
Leader / follower / in between / contextual
Empathetic / vicious bastard / in between / contextual
Optimistic / pessimistic / in between
Traditional / modern / in between
Hard working / lazy / in between
Cultured / uncultured / in between / unknown
Loyal / disloyal / unknown / contextual
Faithful / unfaithful / unknown / contextual
Beliefs
Faith: monotheistic / polytheistic / agnostic / atheist / adding an unsure (which normally I’d consider agnostic but in this case? Nah.)
Belief in ghosts or spirits: yes / no / don’t know / don’t care / in a matter of speaking
Belief in an afterlife: yes / no / don’t know / don’t care / in a matter of speaking
Artistic skills: excellent / good / moderate / poor / none (he can write but that’s the extent of it)
Technical skills: excellent / good / moderate / poor / none
Habits
Drinking alcohol: never / special occasions / sometimes / frequently / tried it / alcoholic / former borderline alcoholic turned sober
Smoking: tried it / trying to quit / already quit / never / rarely / sometimes / frequently / chain smoker ([Smoking… there was a time I was on fire…?] ahh character)
Recreational drugs: tried some / never / special occasions / sometimes / frequently / addict
Medicinal drugs: never / no longer needs medication / some medication needed / frequently / to excess / definitely needs some psych meds but doesn’t have access
Unhealthy food: never / special occasions / sometimes / frequently / binge eater (he doesn’t eat :>)
Splurge spending: never / rarely / sometimes / frequently / shopaholic
Gambling: never / rarely/ sometimes / frequently / compulsive gambler
———
no pressure tagging the Tag Game List! Lemme know if you’d like on/off:
@sableglass @dioles-writes @viridis-icithus @allaboutmagic @paeliae-occasionally
@astor-and-the-endless-ink @vsnotresponding @nightlylaments @ancientmyth @vesanal
@thebookishkiwi @verdant-mainframe @threedaysgross @fifis-corner @bamber344
and as always, open tag!
17 notes · View notes
trivialbob · 2 years ago
Text
Tumblr media
For work I have to sign in through a virtual desktop. I hate it.
Compared to pre-virtual desktop days it takes me longer to get signed in. There's often a lag time between pressing a keyboard button and a letter or number appearing on my screen. This delay is a small fraction of a second. But I notice it, and if I type many characters the delay becomes more noticeable.
Death by a thousand cuts.
This morning the virtual desktop system isn't working at all. Under the old way, where we had each application installed on our laptops, if something wasn't working with, for example, the mainframe connection, I could still get into Outlook, and vice versa. Virtual desktop is all or zero.
And I'm a zero today.
As I wait I got out my air compressor and started to clean things. Computer keyboards. Vacuum cleaners. That ridge around the top of the Instant Pots. Coffee makers (I have three different types).
I find it soothing (though noisy) getting things clean down to the smallest nooks and crannies.
I probably *cough* should have worn a mask though.
52 notes · View notes
foundationhq · 1 year ago
Text
Tumblr media
ACCESS GRANTED TO SITE-φ.
Welcome, 𝐻𝐴𝑅𝑃𝐸𝑅. 𝚃𝚑𝚎 𝙰𝚍𝚖𝚒𝚗𝚒𝚜𝚝𝚛𝚊𝚝𝚘𝚛 is pleased to scout you for the role of [𝑄𝑈𝑂𝑇𝐸 𝑈𝑁𝑄𝑈𝑂𝑇𝐸].
A Succinctly Candid Perspective. A Subtly Crafted Pretense. A Superbly Charismatic Person. Matias — “Loch” — is an amazing find for the Foundation, as well as for us reading your application. It is not lost on us that this role was one of the Foundation's newest acquisitions, and yet your take on him was so intimate, familiar to us. The thorough rapport one feels for a relationship spanning years, every quip so quotable. Loch’s vision, a most singular scope found in his story, elevated him to gleeful heights. Where that zest for anomalies comes from is removed from just fantastical fancy. There is real love in there, protective and strong for the causes he believes in and the people he adores. Really, an absolute treat of an app. We’d let him hack our mainframe any day. Please don’t, though. We need that. We are so incredibly happy to invite you into the Foundation.
Please refer to our checklist for primary onboarding, and have your account ready in 24 hours. The flight to Site-φ leaves on the dot. And 𝚃𝚑𝚎 𝙰𝚍𝚖𝚒𝚗𝚒𝚜𝚝𝚛𝚊𝚝𝚘𝚛 doesn't like to be kept waiting.
5 notes · View notes
shiprasharma2927 · 2 years ago
Text
Tumblr media
Explore modern mainframe migration strategies for a future-ready IT landscape. Embrace agility, cost-efficiency, and innovation.
0 notes
readingsquotes · 1 year ago
Text
"This piece aims to identify the pitfalls in thinking about what is being called an ‘algorithmic genocide’ in Gaza. I’d like to push against the exceptionalism afforded to AI; for example pieces which set military uses of AI as distinct from previous iterations of techno-warfare. Rather, the spectre of ‘artificial intelligence’ is a reification—a set of social relations in the false appearance of concrete form; something made by us which has been cast as something outside of us. And the way in which AI has been talked about in the context of a potentially ‘AI-enabled’ genocide in Gaza poses a dangerous distraction. All of the actually interesting and hard problems about AI, besides all the math, lie in its capacity as an intangible social technology and rhetorical device which elides human intention, creating the space of epistemic indeterminacy through which people act.
...The data does not “speak for itself”, neither in the context of academic research or in military applications.
Any ML model is, from its beginning, bound to a human conceptual apparatus.
...
The reification of AI, which happens at all points on the political spectrum, is actively dangerous in the context of its being taken to its most extreme conclusion: in the ‘usage' of ‘AI’ for mass death, as in the case of Gospel (‘Habsora’, הבשורה, named after the infallible word of God) and Lavender. This reification gives cover for politicians and military officers to make decisions about human lives, faking a hand-off of responsibility to a pile of linear algebra and in doing so handing themselves a blank check to do whatever they want. The extent to which these “AI systems” are credible or actually used is irrelevant, because the main purpose they serve is ideological, with massive psychological benefits for those pressing the buttons. Talking about military AI shifts the focus from the social relations between people to the technologies used to implement them, a mystification which misdirects focus and propagates invincibility.
There are things which are horrifying and exceptional about the current genocide, but the deployment of technology is not in itself one of those things; the usage of data-driven methods to conduct warfare is neither ‘intelligent’ nor ‘artificial’, and moreover not even remotely novel. As prior reporting from Ars Technica has shown about the NSA’s SKYNET program in Pakistan, Lavender is not even the first machine learning-driven system of mass assassination. I recently read Nick Turse’s Kill Anything That Moves: The Real American War in Vietnam (2013) and was struck by the parallels to the current campaign of extermination in Gaza, down to the directed-from-above obsession with fulfilling ‘body count’ as well as the creation of anarchic spaces in which lower-level operatives are afforded opportunities to carry out atrocities which were not explicitly ordered, an observation which has also been made of the Shoah. Thinking about it in this way allows us to fold AI into other discourses of technological warfare over the past century, such as the US’s usage of IBM 360 mainframe computers in Vietnam to similarly produce lists of targets under Operation Igloo White. Using technology as rhetorical cover for bureaucratized violence is not new.
The Lavender piece by Yuval Abraham states that IDF soldiers rapidly rubber-stamped bombing targets “despite knowing that the system makes what are regarded as ‘errors’ in approximately 10 percent of cases”. But even if the error rate were 0.005% it wouldn’t matter, because the ‘precision’ canard is just laundering human intent through a justification-manufacturing apparatus which has zero technical component. Abraham reports that “sources who have used Lavender in recent months say human agency and precision were substituted by mass target creation and lethality,” but in reality exactly zero human agency has been removed. He writes that “once the list was expanded to include tens of thousands of lower-ranking operatives, the Israeli army figured it had to rely on automated software and artificial intelligence…AI did most of the work instead”, but this verbiage is a perverse reversal of cause and effect to create post-hoc justification.
...
Another line from the Gospel piece reads “the increasing use of AI based systems like Habsora allows the army to carry out strikes on residential homes where a single Hamas member lives on a massive scale”. Emphasis mine—that word ‘allows’ is the hinge upon which this whole grotesque charade rests. The algorithm isn’t choosing anything; the choices already happened in the compiling and labeling of the dataset. The collecting and categorizing of data—which data on individuals’ social media or GPS movements or purchasing activity is to be used, which to be excluded—is in itself the construction of an elaborate ideological apparatus."
..
The purpose of a system is what it does, and science is a thing which people do
...We can expect the laundering of agency, whitewashed through the ideological device of 'the algorithm', to begin to be deployed in the arena of international law, given the ways in which Israel is already trying to sidestep the ‘genocidal intent’ it has been charged with at the ICJ. "The fetish of AI as a commodity allows companies and governments to sell it, particularly Israel, which still enjoys a fairly glowing reputation in the ML/AI industry and research world."
2 notes · View notes
watching-pictures-move · 1 year ago
Text
Put On Your Raincoats | Randy: The Electric Lady (Schuman & Strong, 1980)
Tumblr media
This is the second movie I’ve seen co-directed by Zachary Strong, and like Little Showoffs, there’s a deconstructive streak. That movie was about exploring the fantasies of its participants, alternating between interviews and enactments, and eventually pulling back the curtain to show the work that goes into making these things. That movie’s overall attitude was fairly warm and supportive. This one’s, maybe less so. Here we find ourselves at some kind of institute of sexual studies where scientists are doing some very scientific research on the science of orgasms. For science, you see.
How it’s presented is any number of beautiful women in the throes of ecstasy, subject to stimuli either external or self-applied, while strapped to electrodes and the like, so they can be observed. For science, you see. Now, one might not take too much issue being conflated with the nice looking ladies in the cast, but one might object a little more to their relative lack of agency. The movie is softening the blow, but it’s still poking fun at you the viewer. The height of these jabs comes during a sequence where the subjects are forced to watch specially edited "commercial porno films" and masturbate, and what we see of the films is played so quickly and cut so incoherently, that one wonders how anyone can get off on it, which I suppose is the folly of skipping to the good parts. One of the scientists admits that pornography doesn’t do anything for him, as “there’s never any story” and you never get to know the characters, and the movie’s satire comes into focus. And even a character’s eventual sexual actualization is defined by a number of preprogrammed stimuli and positions.
The plot eventually turns to the extraction of “Orgasmine”, which allows two of the scientists to experience sex with each other entirely in their minds without actual physical interaction. One can speculate what sort of applications this might have in the name of science, but one should be wary lest it be used for less altruistic motives, such as world domination. Which may or may not be the motive of a mad scientist played by uberMILF Juliet Anderson, who may or may not want “enough Orgasmine to control the vurld!”.
Listen, you get Anderson chewing the scenery with a shitty German accent and a femdom lite routine, and this is automatically a good movie. You get the cute as a button Desiree Cousteau as the titular character (yes, Randy is a girl’s name here) and you have an even better movie. You get Lori Blue and Jesie St. James in the supporting cast and I certainly ain’t complaining. But what takes this to the next level is the forceful style with which this is executed. There’s an emphasis on sensory overload, like a sequence that cuts from Cousteau’s garden fantasy to her masturbating frantically to a pair of scientists fucking while we’re hit with colour effects and punchcards explode out of a computer (another scientist warns them “There’s no fucking in the sex institute!”). Or the porno sequence where we’re hit with a barrage of explicit imagery while punk rock blares on the soundtrack. Or the dissolve-heavy scene where Cousteau and Blue masturbate together while reminiscing about past lovers, which has probably the best cross-cutting during sex scenes I’ve seen in porno.
The horror movie aesthetics and mix of coldness and camp invite comparisons with another pornographic favourite of mine, Nightdreams, although this leans heavier on the camp than the coldness. The lab setting, which includes a sassy mainframe computer and a mechanized dildo, on top of all the monitors, tape machines and cold interiors, help give this a distinct visual identity. So much so that the heavily degraded print, which alternated between giving the movie a gummy candy green and steely blue colour palette, didn’t manage to sink it. Ideally we’ll get a restoration of this at some point, but in any case it’s well worth a look.
5 notes · View notes
ultramaga · 1 year ago
Text
Like OS/390, z/OS combines a number of formerly separate, related products, some of which are still optional. z/OS has the attributes of modern operating systems but also retains much of the older functionality originated in the 1960s and still in regular use—z/OS is designed for backward compatibility.
Major characteristics
z/OS supports[NB 2] stable mainframe facilities such as CICS, COBOL, IMS, PL/I, IBM Db2, RACF, SNA, IBM MQ, record-oriented data access methods, REXX, CLIST, SMP/E, JCL, TSO/E, and ISPF, among others.
z/OS also ships with a 64-bit Java runtime, C/C++ compiler based on the LLVM open-source Clang infrastructure,[3] and UNIX (Single UNIX Specification) APIs and applications through UNIX System Services – The Open Group certifies z/OS as a compliant UNIX operating system – with UNIX/Linux-style hierarchical HFS[NB 3][NB 4] and zFS file systems. These compatibilities make z/OS capable of running a range of commercial and open source software.[4] z/OS can communicate directly via TCP/IP, including IPv6,[5] and includes standard HTTP servers (one from Lotus, the other Apache-derived) along with other common services such as SSH, FTP, NFS, and CIFS/SMB. z/OS is designed for high quality of service (QoS), even within a single operating system instance, and has built-in Parallel Sysplex clustering capability.
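As a small, hedged illustration of how those long-standing facilities meet modern scripting, the sketch below submits a trivial batch job to JES over the standard z/OS FTP interface mentioned in the quote. The host, credentials, and JCL are placeholders, and it assumes the FTP server is configured to accept FILETYPE=JES.

```python
# Hypothetical sketch: submitting a batch job to JES via the z/OS FTP server.
# Host, credentials, and the JCL are placeholders; assumes the server allows
# SITE FILETYPE=JES (the conventional way to submit jobs over FTP on z/OS).
import io
from ftplib import FTP

JCL = (
    "//HELLO    JOB (ACCT),'HELLO WORLD',CLASS=A,MSGCLASS=X\n"
    "//STEP1    EXEC PGM=IEFBR14\n"
)

def submit_job(host: str, user: str, password: str, jcl: str) -> str:
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.sendcmd("SITE FILETYPE=JES")   # switch from dataset mode to job submission
        reply = ftp.storlines("STOR HELLO.jcl", io.BytesIO(jcl.encode("ascii")))
        return reply                       # typically echoes the assigned job ID

if __name__ == "__main__":
    print(submit_job("zos.example.com", "IBMUSER", "secret", JCL))
```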
Actually, that is wayyy too exciting for a bedtime story! I remember using the internet on a unix terminal long before the world wide web. They were very excited by email, but it didn't impress me much. Browsers changed the world.
Tumblr media
23K notes · View notes
dmacardshop7 · 10 days ago
Text
Understanding the DMA Card: Applications, Benefits, and Selection Guide
What is a DMA Card? An Overview of Direct Memory Access
Defining the DMA Card and Its Purpose
A Direct Memory Access (DMA) card is a specialized hardware component that allows certain peripherals to access system memory independently of the central processing unit (CPU). This capability enables high-speed data transfer without placing additional computational burden on the CPU. Traditional data transfer methods rely on the CPU to oversee every byte of data transmission, which can create bottlenecks, especially in high-demand applications such as gaming and data analysis. DMA cards streamline these processes, providing a more efficient way to move data between devices and memory.
The DMA card integrates directly with a computer’s motherboard through slots like PCIe, allowing it to interface with various peripherals, including graphics cards, storage devices, and more. By managing data transfers itself, the DMA card significantly enhances system performance and responsiveness.
Key Technologies Behind DMA Implementation
The core technology behind DMA cards relies on a separate DMA controller, which acts as an intermediary between the memory and peripheral devices. This controller can execute transactions such as reading or writing data directly without CPU intervention. For users, this results in improved processing speeds and reduced latencies. The process typically unfolds in several steps:
Request: The peripheral device sends a request to the DMA controller to transfer data.
Grant: After processing, the controller grants the request and manages the data transfer directly to or from the memory.
Completion: The DMA controller signals the peripheral that the transfer has been completed.
This method of operation frees up the CPU to perform other tasks, allowing for a multi-tasking environment that does not compromise on speed and performance.
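To make the request/grant/completion flow concrete, here is a toy software simulation. The classes are invented purely to illustrate that the CPU-side code never copies the payload itself; it is not a model of any real controller's register interface.

```python
# Toy simulation of the DMA request/grant/completion handshake described above.
# Illustrative only: real DMA programming targets a specific controller's
# registers and descriptors, not Python objects.

class Memory:
    def __init__(self, size: int):
        self.cells = bytearray(size)

class DmaController:
    """Moves whole blocks between a device and memory on the device's behalf."""
    def __init__(self, memory: Memory):
        self.memory = memory

    def transfer(self, data: bytes, dest: int) -> None:
        self.memory.cells[dest:dest + len(data)] = data   # block copy, no per-byte CPU loop

class Peripheral:
    def __init__(self, dma: DmaController):
        self.dma = dma
        self.transfer_done = False

    def send(self, data: bytes, dest: int) -> None:
        # 1. Request: the peripheral asks the controller to move its data.
        # 2. Grant/transfer: the controller copies directly into memory.
        self.dma.transfer(data, dest)
        # 3. Completion: the controller signals the peripheral (here a flag;
        #    on real hardware this would typically raise an interrupt).
        self.transfer_done = True

if __name__ == "__main__":
    ram = Memory(64)
    nic = Peripheral(DmaController(ram))
    nic.send(b"packet-payload", dest=0)   # the "CPU" code never touches the bytes
    print(bytes(ram.cells[:14]), nic.transfer_done)
```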
Historical Context and Evolution of DMA Cards
The concept of direct memory access has roots dating back to the early days of computing. Initially designed for mainframe systems, the technology was recognized for its efficiency in offloading tasks from the CPU. As computers evolved, so did the implementation of DMA. The introduction of PCIe technology in the early 2000s allowed for faster data communication, further enhancing the capabilities of DMA cards.
Over time, advancements in semiconductor technology led to the development of smaller, more powerful DMA cards, such as those equipped with FPGA (Field Programmable Gate Array) functionalities. These modern cards offer customizable performance settings, making them ideal for gaming, video rendering, and complex data processing tasks.
Applications of DMA Cards in Modern Computing
Use Cases in Gaming: Enhancing Performance with DMA
In the gaming industry, performance is paramount. Game developers and players alike benefit from the high-speed data transfer provided by DMA cards. For example, DMA cards enable faster loading times by allowing game data to be streamed directly into memory without CPU intervention. This can lead to significant improvements in frame rates and overall gameplay experience.
Moreover, DMA technology is increasingly used to read game memory in real time from separate hardware. Because such reads and writes involve no CPU activity on the host, DMA cards can operate stealthily, which makes them difficult for traditional anti-cheat systems to detect.
Implementing DMA Cards for Data Transfer Efficiency
DMA cards are not limited to gaming; they play a vital role in data-centric applications across industries. For instance, in video production, the use of DMA significantly accelerates the transfer of large video files from capture devices to editing software. By streamlining this process, creatives can focus more on their craft and less on technical delays.
Additionally, in machine learning applications, DMA cards can facilitate rapid data retrieval, improving training times for complex models. As datasets grow in size and complexity, the efficiency offered by DMA becomes increasingly essential.
Industry-Specific Applications: From Gaming to Industrial Automation
Beyond gaming and data processing, DMA cards find utility in various sectors such as industrial automation, telecommunications, and scientific research. In industrial settings, they are used to manage data flow from sensors to processing units without overwhelming the CPU. This efficient data management enables timely responses in systems critical for safety and compliance.
In the telecommunications sector, DMA technology supports the demanding data transfer requirements of modern networks. Here, DMA cards improve throughput and reduce latency, allowing service providers to deliver high-quality streaming services and manage large volumes of concurrent users.
Benefits of Using DMA Cards
Increased Data Transfer Speeds
The primary advantage of using DMA cards is the significant increase in data transfer speeds. By bypassing the CPU for direct memory access, data can be moved more quickly and efficiently. This is especially valuable in applications requiring high bandwidth, such as video streaming or data-heavy software applications.
Reducing CPU Load for Improved Performance
With DMA cards handling data transfers independently, the CPU is free to execute other processes. This reduction in CPU load can lead to enhanced system performance, especially in multitasking environments where many applications run simultaneously. Users can experience smoother performance as the system becomes more responsive under load.
Enhanced Reliability and Support for High-Volume Tasks
DMA cards are designed to handle high data volumes reliably. In industries where data integrity and loss prevention are paramount, such as finance and healthcare, the use of DMA cards ensures that transfers occur without interruption or error. Their built-in redundancy and error-checking capabilities further enhance their reliability in mission-critical applications.
Choosing the Right DMA Card for Your Needs
Key Features to Consider: Speed, Compatibility, and Support
When selecting a DMA card, several factors come into play. First and foremost, assess the speed ratings of the card, which indicate how rapidly data can be processed. Next, compatibility with existing hardware is crucial; ensure that the card fits your motherboard’s slot type (PCIe, USB, etc.) and is compatible with your operating system.
Additionally, factor in manufacturer support for firmware updates and customization options. A card with robust support can provide long-term value as technologies evolve.
Comparison of Popular DMA Card Models
The market for DMA cards is populated with various models, each catering to specific needs. Popular choices include FPGA-based cards, known for their flexibility and performance. For instance, the 75T and 35T models offer broad compatibility and high throughput, making them suitable for gamers and data scientists alike. Comparisons typically weigh factors such as speed, supported features, and user reviews.
Understanding Firmware and Customization Options
Many modern DMA cards come with customizable firmware settings, offering users the ability to tweak performance profiles according to their requirements. Understanding how to leverage these options can lead to improved performance and reliability. For users with specific needs, such as gameplay optimization or intensive data processing, detailed knowledge of firmware can make a significant difference.
Best Practices for Using and Maintaining DMA Cards
Installation Tips and Common Pitfalls
Proper installation is critical to harnessing the full potential of DMA cards. Always refer to the manufacturer’s guidelines. Common pitfalls include improper seating of the card in its slot and incompatible peripheral setups. Ensuring a secure connection minimizes issues and maximizes performance.
Regular Maintenance for Optimal Performance
Like any hardware, DMA cards benefit from regular maintenance. Keep software and firmware updated to capitalize on performance enhancements and security patches. Regularly monitoring temperatures and ensuring good air circulation can prevent overheating and prolong the lifespan of the card.
Upgrading: When is it Time to Replace Your DMA Card?
Signs that it may be time to upgrade your DMA card include consistently low performance metrics, compatibility issues with new hardware, or lack of manufacturer support for updates. Evaluating your current needs against the capabilities of your card can help inform your decision to upgrade.
1 note · View note