#predictive modeling software
cyber-soul-smartz · 1 year ago
Text
21st Century Project Planning: Blueprint for Unparalleled Success
Discover the secrets to mastering project planning and achieving unparalleled success! Dive into our latest article for expert insights and practical tips. Don't miss out—subscribe now to stay updated on the best strategies for professional development!
Mastering Project Planning: Crafting the Blueprint for Unparalleled Success in the 21st Century Imagine venturing on a journey without a map, a compass, or even a clear destination in mind. The chance of reaching your goal would be slim to none. This scenario mirrors the challenges faced by project managers who dive into execution without a solid plan in place. The planning phase of project…
0 notes
lumenore-datalytics · 1 month ago
Text
How AI-Powered Analytics Is Transforming Healthcare in 2025
In healthcare, seconds save lives. Imagine AI predicting a heart attack hours before symptoms strike or detecting cancer from a routine scan. This isn't science fiction: AI-powered analytics in healthcare is making this a reality, turning data into life-saving insights.
Tumblr media
By analyzing vast amounts of data, AI healthcare analytics help decode hidden patterns, improving diagnoses and personalizing treatments in ways that were unimaginable until a few years ago. The global healthcare analytics market is projected to hit $167 billion by 2030, growing at a 21.1% CAGR, a sign that data is becoming the foundation of modern medicine.
From real-time analytics in healthcare to AI-driven insights, the industry is witnessing a revolution—one that enhances patient care, optimizes hospital operations, and accelerates drug discovery. The future of healthcare is smarter, faster, and data-driven. 
What Is AI-Powered Analytics in Healthcare?
AI-powered analytics uses artificial intelligence and machine learning to analyze patient data, detect patterns, and predict health risks. This empowers healthcare providers to make smarter, faster, and more personalized decisions. Here’s how this data revolution is reshaping healthcare:
1. Early Diagnosis and Predictive Analytics 
AI-powered analytics can analyze massive datasets to identify patterns beyond human capability. Traditional diagnostic methods often rely on visible symptoms, but AI can detect subtle warning signs long before they manifest. 
For example, real-time analytics in healthcare is proving life-saving in sepsis detection. Hospitals that employ AI-driven early warning systems have reported a 20% drop in sepsis mortality rates as these systems detect irregularities in vitals and trigger timely interventions. 
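As a rough illustration of how such an early-warning system scores vitals, here is a toy logistic risk model in Python. The weights, bias, and example readings are invented for this sketch; real systems learn their parameters from clinical data rather than having them set by hand.

```python
import math

# Toy early-warning score: a logistic model over a few vital signs.
# The weights and bias below are illustrative only, not clinically derived.
WEIGHTS = {"heart_rate": 0.04, "resp_rate": 0.12, "temp_c": 0.9}
BIAS = -40.0  # chosen so typical healthy vitals score low

def sepsis_risk(vitals):
    """Return a 0-1 risk score from a dict of vital-sign readings."""
    z = BIAS + sum(WEIGHTS[k] * vitals[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

healthy = {"heart_rate": 72, "resp_rate": 14, "temp_c": 36.8}
deteriorating = {"heart_rate": 128, "resp_rate": 28, "temp_c": 39.5}

assert sepsis_risk(healthy) < 0.5 < sepsis_risk(deteriorating)
```

The point of the sketch is the shape of the system, not the numbers: a continuous score over routine measurements lets a hospital trigger an intervention threshold hours before symptoms become obvious.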
2. Personalized Treatment Plans 
AI-powered analytics can customize plans for individual patients based on genetic data, medical history, and lifestyle. This shift towards precision medicine eliminates the conventional one-size-fits-all approach. 
AI also enables real-time patient monitoring and adjusting treatments based on continuous data collection from wearable devices and electronic health records (EHRs). This level of personalization is paving the way for safer, more effective treatments. 
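The continuous-monitoring idea can be sketched as a rolling-baseline anomaly check over a wearable's readings. The window size, threshold, and heart-rate series below are invented for illustration, not clinical values.

```python
from collections import deque
import statistics

def anomaly_flags(stream, window=20, threshold=3.0):
    """Flag readings that deviate strongly from the recent baseline."""
    recent = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(recent) >= 5:
            mean = statistics.fmean(recent)
            sd = statistics.pstdev(recent) or 1e-9
            flags.append(abs(x - mean) / sd > threshold)
        else:
            flags.append(False)  # not enough history yet
        recent.append(x)
    return flags

# A steady resting heart rate with one sudden spike:
hr = [70, 71, 69, 72, 70, 71, 70, 69, 71, 70, 140, 70]
flags = anomaly_flags(hr)
assert flags[10] is True and sum(flags) == 1
```

Because the baseline is computed per patient from their own recent data, the same code flags what is anomalous *for that person*, which is the essence of the personalization described above.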
3. Smarter Hospital Operations 
Hospitals generate 2,314 exabytes of data annually, yet much of it remains underutilized. AI-powered analytics is changing that by optimizing hospital operations to reduce inefficiencies and improve patient flow management. 
For instance, Mount Sinai Hospital in New York uses AI-powered analytics for patient care by predicting life-threatening complications before they escalate. A clinical deterioration algorithm analyzes patient data daily, flagging the 15 highest-risk patients for immediate intervention by an intensive care rapid response team. Beyond emergency care, AI also helps prevent falls, detect delirium, and identify malnutrition risks, ensuring proactive treatment. 
4. Drug Discovery and Development 
Developing a new drug is expensive and time-consuming, often taking 10-15 years and costing over $2.6 billion. However, AI-powered analytics is significantly reducing both time and costs by analyzing millions of chemical compounds, predicting potential drug candidates, and streamlining clinical trials faster than traditional methods. 
During the COVID-19 pandemic, AI played a crucial role in identifying potential antiviral treatments by rapidly analyzing millions of drug interactions – a process that would have taken human researchers years. Additionally, AI is now being used to repurpose existing drugs, optimize trial designs, and predict patient responses, making pharmaceutical development faster, more efficient, and data-driven. 
5. 24/7 Patient Support with AI Chatbots and Virtual Assistants 
Tumblr media
A survey by Accenture estimates that AI applications, including chatbots, could save the U.S. healthcare system around $150 billion annually by 2026. These savings stem from improved patient access and engagement, as well as a reduction in costs linked to in-person medical visits. AI-driven healthcare analytics is making healthcare more efficient, patient-centric, and responsive to individual needs. 
Challenges in AI-Driven Healthcare
Despite its potential to revolutionize healthcare, AI-powered healthcare data & analytics come with challenges that must be addressed for widespread adoption. Some of the challenges are: 
Data Privacy and Security: Healthcare systems handle sensitive patient data, making them prime targets for cyberattacks. Ensuring robust encryption, strict access controls, and compliance with HIPAA and GDPR is critical to maintaining patient trust and regulatory adherence. 
Bias in AI Models: If AI systems are trained on biased datasets, they can perpetuate healthcare disparities, thereby leading to misdiagnoses and unequal treatment recommendations. Developing diverse, high-quality datasets and regularly auditing AI models can help mitigate bias. 
Regulatory Compliance: AI-driven healthcare solutions must align with strict regulations to ensure ethical use. Organizations must work closely with regulatory bodies to maintain transparency and uphold ethical AI practices. 
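The regular bias audit suggested above can start as simply as comparing a model's error rate across patient subgroups. A toy sketch (the predictions, outcomes, and threshold are invented):

```python
# Compare a model's error rate across subgroups; a large gap is a
# signal the model may be treating groups unequally.
def error_rate(pairs):
    """pairs: list of (prediction, actual) tuples."""
    return sum(pred != actual for pred, actual in pairs) / len(pairs)

# Illustrative (prediction, actual) pairs per subgroup:
results = {
    "group_a": [(1, 1), (0, 0), (1, 1), (0, 1), (1, 1)],
    "group_b": [(0, 1), (1, 0), (0, 0), (0, 1), (1, 1)],
}
rates = {g: error_rate(p) for g, p in results.items()}
gap = max(rates.values()) - min(rates.values())
assert gap > 0.25  # disparity exceeds an (illustrative) audit threshold
```

Real audits look at more than raw error rates (false-negative rates, calibration, and so on), but even this minimal check would catch the kind of disparity that leads to unequal treatment recommendations.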
What’s Next in Smart Healthcare?
AI-Powered Surgeries: Robotic assistance enhances precision and reduces risks.
Smart Wearables: Track vital signs in real-time and alert patients to anomalies.
Mental Health Tech: Predictive tools offer proactive support and personalized therapy.
Why It Matters
AI isn’t replacing doctors—it’s augmenting their decision-making with data-driven insights. Healthcare systems that adopt analytics will see:
Improved patient outcomes
Reduced costs
Streamlined operations
0 notes
picontrols · 1 month ago
Text
Bridging the Skill Gap with Process Control Simulation Training
Tumblr media
"Why is it so hard to find skilled workers for industrial automation?"
"How do we train new employees without risking downtime or safety?"
"Is there a way to upskill our team without pulling them off active projects?"

If you've ever asked these questions, you're not alone. The skills gap in industrial sectors—especially in process control and automation—is a growing concern for plant managers, HR teams, and training coordinators. The good news? Process control simulation training is becoming a game-changer. It's not just about learning theory; it's about giving your team hands-on experience in a risk-free, highly realistic environment.

In this blog, let's explore how process control simulation training is helping companies bridge the skills gap, boost productivity, and future-proof their workforce.
🔧 What Is Process Control Simulation Training?
Process control simulation training uses software-based tools (and sometimes hardware-integrated systems) to simulate real-world industrial processes, such as chemical reactions, fluid flow, heating systems, or batch operations.
Employees interact with digital twins of systems rather than learning on a live plant or production line (which can be costly and risky). They can also practice controlling variables and troubleshooting simulated failures in a controlled, safe learning space.
📉 The Reality of the Skills Gap
Here's the harsh truth: as experienced engineers retire and tech continues to evolve, there's a growing mismatch between what employers need and what job seekers can do.
According to various industry reports:
Over 50% of manufacturers say they struggle to find qualified talent.
Many graduates enter the workforce without practical exposure to control systems, instrumentation, or advanced automation.
On-the-job training often means learning under pressure, which increases risk and slows down productivity.
That's where process control simulation comes in to level the playing field.
💡 Why Simulation Training Works So Well
Let's break it down—why is simulation training such a powerful tool for skill development?
1. Hands-On Without the Risk
Operators and engineers can learn to manage pumps, valves, sensors, and PID controllers without shutting down an actual plant or risking equipment failure.
✅ Outcome: Teams gain confidence and skills faster, without the anxiety of making real-world mistakes.
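As a rough sketch of the kind of exercise a simulator presents, here is a minimal PI control loop driving a made-up first-order tank model in Python. The gains, dynamics, and setpoint are invented for illustration; a real training simulator models the actual plant.

```python
def simulate_tank(setpoint=5.0, steps=200, dt=0.5, kp=1.2, ki=0.3):
    """PI control of a toy tank: level rises with inflow and drains at a
    rate proportional to level. Purely illustrative dynamics."""
    level, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - level
        integral += error * dt
        inflow = max(0.0, kp * error + ki * integral)  # valve can't go negative
        level += (inflow - 0.4 * level) * dt           # first-order drain
    return level

final = simulate_tank()
assert abs(final - 5.0) < 0.1  # settles near the setpoint
```

A trainee can safely re-run this with bad gains, a stuck valve, or a changed drain rate and watch the loop misbehave, exactly the kind of mistake you never want to make on live equipment.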
2. Real-Time Feedback and Learning
Simulation platforms offer instant feedback so learners can see every decision's cause and effect. Did a parameter spike? Was the valve response too slow? The trainee can adjust, repeat, and refine.
✅ Outcome: Faster learning curves and better problem-solving abilities.
3. Customized to Industry Needs
Whether you're in oil and gas, food processing, pharmaceuticals, or energy, process control simulation training can be tailored to match the systems your team uses every day.
✅ Outcome: No more generic training—only relevant, job-specific practice.
4. Supports All Experience Levels
From entry-level technicians to experienced engineers learning new platforms, simulation training meets people where they are.
✅ Outcome: Continuous professional development becomes scalable.
🧠 What Skills Are Developed?
Here are just a few areas where process control simulation builds competence:
Instrument calibration
Process variable tuning (temperature, flow, pressure)
PLC and SCADA integration
Alarm handling and fault detection
Start-up and shutdown procedures
Troubleshooting under abnormal conditions
It also enhances soft skills like decision-making, attention to detail, and collaboration using group-based simulations.
🏭 Real-World Benefits for Companies
Let's not forget the big picture—this isn't just a learning tool. It's a strategic investment.
✔️ Shorter onboarding time for new hires
✔️ Reduced operational downtime from human error
✔️ Higher retention and employee satisfaction
✔️ Stronger compliance with safety regulations
✔️ Better preparedness for automation upgrades
Companies using process control simulation in their training programs are more agile, efficient, and better positioned for growth.
🚀 Getting Started with Simulation Training
Are you ready to close the skills gap in your team? Here's how to begin:
Choose the right platform – Look for simulation tools like Simulink, DCS emulators, or virtual PLC trainers.
Assess your team's needs – Identify the processes or skills most in need of improvement.
Design a structured training path – Combine simulations with assessments and guided instruction.
Track progress – Use KPIs to measure learning outcomes and improvements over time.
Encourage a culture of learning – Make training ongoing, not just a one-time event.
Final Thoughts
Bridging the skill gap doesn't have to mean expensive hires or risky learning curves. With process control simulation training, you can give your team the skills they need quickly, safely, and effectively. As industries evolve, the companies that invest in their people through innovative training tools will be the ones that lead the way. So, if you're ready to turn your team into top-tier operators and problem-solvers, process control simulation might be your best bet. Count on the skilled software developers at PiControl Solutions LLC to design and implement process control simulation tools and train your team.
0 notes
stephypublisher1 · 1 year ago
Text
Lateral Rock Characteristic Evidence of Hydrocarbon Restoration from Post-Stack Impedance Inversion: A Case Study of ZED-field Offshore Niger Delta
Tumblr media
Lateral predictions of rock properties descriptive of a reservoir have been studied using a model-based seismic inversion approach that converts input seismic data into an acoustic impedance structure to optimize hydrocarbon recovery in the field. This was achieved by integrating wireline logs with 3D post-stack seismic data acquired from the ZED-Field offshore Niger Delta. The inversion workflow employed in this study comprises forward modelling of reflection coefficients, starting from a low-frequency impedance model derived from well logs, and convolution of the reflection coefficients with a source wavelet extracted from the seismic input. P-impedance analysis at reservoir C5000, delineated from the control well, gave a near-perfect correlation of 0.998109 (≈99.8%) between the original P-impedance log, the initial P-impedance model log, and the inverted P-impedance log, with an estimated error of 0.0616932 (about 6.17%). The seismic inversion produced an acoustic impedance volume with P-impedance values ranging from 3801 to 11073 (m/s)·(g/cc), generally increasing with depth. Notably, a low-impedance zone was observed at the reservoir window that can be profiled laterally away from the existing wells. Impedance slices extracted from the impedance volume at the top and base of reservoir C5000 clearly showed low impedance values away from the existing wells, which are evidence of new hydrocarbon prospects (hydrocarbon-charged sands) in the field that can be explored for improved hydrocarbon recovery and field development. The realized attribute could therefore supply litho-fluid information within the current reservoirs and help define probable low-impedance hydrocarbon regions that can be tested against the input seismic, boosting the interpretation of reservoir attributes, with feasible integration into seismic stratigraphy for improved reservoir prediction away from the existing wells.
This information can be invaluable in delineating more prospective reservoir zones in the field, thereby enhancing optimum field development and aiding reservoir-management decisions.
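The forward-modelling step described above (reflection coefficients derived from an impedance series, convolved with a source wavelet) can be sketched in a few lines of Python. The impedance log below is invented, with a low-impedance layer standing in for a reservoir like C5000, and a Ricker wavelet standing in for the extracted source wavelet.

```python
import numpy as np

def reflectivity(impedance):
    """Reflection coefficients at each interface of an impedance series."""
    z = np.asarray(impedance, dtype=float)
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def ricker(freq=25.0, dt=0.002, length=0.128):
    """Ricker wavelet, a common stand-in for an extracted source wavelet."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * freq * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Illustrative impedance log in (m/s)*(g/cc), with a low-impedance layer:
imp = np.array([6000.0] * 50 + [4000.0] * 20 + [9000.0] * 50)
rc = reflectivity(imp)                          # spikes at the two interfaces
trace = np.convolve(rc, ricker(), mode="same")  # synthetic seismic trace
```

The negative reflection coefficient at the top of the low-impedance layer (here -0.2) is exactly the kind of signature the study profiles laterally to spot hydrocarbon-charged sands away from the wells.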
0 notes
otiskeene · 2 years ago
Text
Guidewire Partners With Swiss Re To Reduce Operational Friction Across Insurance Parties
Tumblr media
Swiss Re Reinsurance Solutions and Guidewire (NYSE: GWRE) have forged a strategic alliance to use technology to improve connections within the insurance sector. The partnership, which has its roots in a common dedication to insurance innovation and quality, seeks to lessen operational friction for all parties involved in the insurance value chain—risks, insureds, insurers, reinsurers, and intermediaries.
A range of analytics solutions, connectors, and data transmission methods are provided as part of the partnership. The introduction of Swiss Re Reinsurance Solutions' own data models and risk insights into the Guidewire cloud platform marks the beginning of this endeavor. Guidewire's Analytics Manager will facilitate the integration and enable the incorporation of pertinent findings into essential insurance activities.
In the face of a dynamic environment and growing complexity in risk assessment, insurers are quickly incorporating advanced analytics into their claims and underwriting processes. This tendency is anticipated to pick up even more speed as generative AI becomes more widely used. To share information, gain insights, and ease risk transfer, primary carriers and reinsurers must be able to communicate with one another seamlessly. The goal of Guidewire and Swiss Re's partnership is to reduce friction in insurance procedures by optimizing data availability and predictive model deployment.
Swiss Re Reinsurance Solutions' Chief Executive Officer, Russell Higginbotham, expressed enthusiasm about the collaboration and emphasized Guidewire's worldwide experience in developing technology and analytics for the property and casualty (P&C) insurance industry. The partnership intends to improve the insurance sector's capacity to effectively transfer risks and provide superior customer service.
Read More - https://bit.ly/46dcDNl
0 notes
ellipsus-writes · 3 months ago
Text
Tumblr media
Each week (or so), we'll highlight the relevant (and sometimes rage-inducing) news adjacent to writing and freedom of expression. This week:
Inkitt’s AI-powered fiction factory
Inkitt started in the mid-2010s as a cozy platform where anyone could share their writing. Fast forward twenty twenty-fuckkkkk, and like most startups, it’s pivoted hard into AI-fueled content production with the soul of an algorithm.
Tumblr media
Pictured: Inkitt preparing human-generated work for an AI-powered flume ride to The Unknown.
Here’s how it works: Inkitt monitors reader engagement with tracking software, then picks popular stories to publish on its premium app, Galatea. From there, stories can get spun into sequels, spinoffs, or adapted for GalateaTV… often with minimal author involvement. Authors get an undisclosed cut of revenue, but for most, it’s a fraction of what they’d earn with a traditional publisher (let alone self-publishing).
“They prey on new writers who have no idea what they’re doing,” said the writer of one popular Galatea series.
Many, many authors have side-eyed or outright decried the platform as inherently predatory for years, due to nebulous payout promises. And much of the concern centers on contracts that don’t require authors’ consent for editorial changes or AI-generated “additions” to the original text.
Now, Inkitt has gone full DiSrUpTiOn, leaning heavily on generative AI to ghostwrite, edit, generate audiobook narration, and design covers, under the banner of “democratizing storytelling.” (AI? In my democratized storytelling platform? It’s more likely than you think.)
Tumblr media
Pictured: Inkitt’s CEO looking at the most-read stories.
But Inkitt’s CEO doesn’t seem too concerned about what authors think: “His business model doesn’t need them.”
Tumblr media
The company recently raised $37 million, with backers including former CEOs of Sony, Penguin, and HarperCollins, proving once again that publishing loves a disruptor… as long as it disrupts creatives, not capital. And more AI companies are mushrooming up to chase the same vision: “a vision of human-created art becoming the raw material for AI-powered, corporate-owned content-production machines—a scenario in which humans would play an ever-shrinking role.”
(Not to say we predicted this, but…)
Welcome to the creator-industrial complex.
Tumblr media
Publishers to AI: Stop stealing our stuff (please?)
Major publishers—including The New York Times, The Washington Post, The Guardian, and Vox Media—have launched a "Support Responsible AI" campaign, urging the U.S. government to regulate AI's use of copyrighted content.
Like last month's campaigns by the Authors Guild and the UK's Society of Authors, there's a website where you can (and should!) contact your representatives to say, “Hey, maybe stop letting billion-dollar tech giants strip-mine journalism.”
The campaign’s ads carry slogans like “Stop AI Theft” and “AI Steals From You Too” and call for legislation that would force AI companies to pay for the content they train on and clearly label AI-generated content with attribution. This follows lobbying by OpenAI and Google to make it legal to scrape and train on copyrighted material without consent.
The publishers assert they are not explicitly anti-AI, but advocate for a “fair” system that respects intellectual property and supports journalism.
But… awkward, The Washington Post—now owned by Jeff Bezos—has reportedly already struck a deal with OpenAI to license and summarize its content. So, mixed signals.
Still, as the campaign reminds us: “Stealing is un-American.”
(Unless it’s profitable.)
Tumblr media
#WarForever
We at Ellipsus love a good meme-turned-megaproject. Back in January, the-app-formerly-known-as-Twitter user @lolt64 tweeted a cryptic line about "the frozen wastes of europa,” the earliest reference to the never-ending war on Jupiter’s icy moon.
A slew of bleak dispatches from weary, doomed soldiers entrenched on Europa’s ice fields snowballed (iceberged?) into a sprawling saga, yes-and-ing with fan art, vignettes, and memes under the hashtag #WarForever.
It’s not quite X’s answer to Goncharov: It turns out WarForever is some flavor of viral marketing for a tabletop RPG zine. But the internet ran with it anyway, with NASA playing the Scorsese of the stars.
Tumblr media
In a digital hellworld increasingly dominated by AI slopification, data harvesting, and “content at scale,” projects like WarForever are a blessed reminder that creativity—actual, human creativity—perseveres.
Even on a frozen moon. Even here.
Tumblr media
Let us know if you find something other writers should know about, (or join our Discord and share it there!)
- The Ellipsus Team xo
Tumblr media
330 notes · View notes
stardust-n-glitterglue · 14 days ago
Text
Gonna yap about glitterglueau a lil
Tumblr media
⭐ Star's a diversity hire at the Pizza Plex (different location than Security Breach, there's no Vanny/og virus in my au) and I'm gonna be making some little 1-10 page comics/pictures to kinda follow his time there
Essentially, he's a disabled LGBT+ guy who was hired to work on redesigns for the art and character costumes around the Plex. (In the future, things he designs can be purchased in the gift shops) He has a form of dysautonomia (which seemingly randomly makes his entire body go limp, or causes him to straight up pass out), as well as seizures and a few other issues. When he's not mid pass out spell, he's actually weirdly strong for being such a little guy (farm kid who grew up tiny) and he really enjoys physical activities.
Part of his contract includes on site housing, since he can't drive due to medical issues and paying for transportation would be more expensive long term.
He ends up being based out of the daycare due to its lack of flashing lights, and the fact that the Plex security bot tends to be there most often during his "down time".
🌚 speaking of security bot, enter Eclipse
Tumblr media
He's the newest of the three "celestial" model animatronics, taller than Sun and Moon by a foot, and made with heavier materials to make him more formidable. He has four arms, primarily to make him a more efficient grappler for removing threats, and spends the majority of his time in the rafters above the daycare surveying the cameras remotely - just in case someone comes trying to get a kid out who isn't theirs.
He's smart and methodical, but hasn't been online all that long yet, so he doesn't always understand slang or more human centric interactions. He can come across as very intense, but a lot of it is him trying to gauge the situation and predict when he's going to be needed.
He's very protective, especially of the daycare (children and other attendants included) and isn't so sure about this new hire he's being assigned to keep an eye on. He has researched about the health limitations of people sharing the disorders ahead of time, so he's hoping that will go smoothly.
🌙 you see Moon, skittering across the nap area to prepare for the day
Tumblr media
He's a low energy extrovert who delights in helping with the daycare's quieter kids - the ones who need a little more help or guidance than others. He's usually on the quieter side himself, able to move around near silently, and is the reigning hide and seek champion of the SuperStar Daycare, but he enjoys singing and can be heard if you get close at all.
He's a little awkward more often than not, with a nervous laugh that a lot of people find just a bit off putting, but very kind and patient. He tends to slouch forward to look less intimidating to the kids, but some feel like the posture isn't beating the "creepy monster" allegations. He's got to be unsettling enough to scare off the nightmares, you know? Besides, before Eclipse came online, he was the main security bot and needed to be a bit more intimidating.
He's hesitant about getting this new human coworker, they've had plenty that haven't worked out and he's loath to have to help another person get used to being around him just for them to leave again.
☀️ Sun tidies up the craft area, looking towards the door with something that could be excitement...or trepidation? Hard to tell sometimes
Tumblr media
A high energy extrovert with a BIG personality, he loves working with the kids (most days) but really misses being a dedicated theater bot. The little ones are wonderful and he loves spending time with them, but it's just not what he was originally made for.
He's extremely sweet with kids, if a bit overwhelming at times, and very focused on making sure they have as much fun as safely as possible! He is the de facto "medic" of the daycare, after all. The one with the most updated software about healthcare and first aid.
With adults, it's a completely different story. He's passive aggressive at best, and downright verbally scathing at the slightest provocation. He doesn't touch anyone, he doesn't want to get decommissioned thank you, but he's definitely giving "if not for the laws of this land" vibes.
He's very much not looking forward to this little trespasser in HIS daycare, even if it is a direct order from upper management. You can't trust them either as far as he's concerned
95 notes · View notes
nostalgebraist · 3 months ago
Text
So, about this new "AI 2027" report...
I have not read the whole thing in detail, but my immediate reaction is kind of like what I said about "Bio Anchors" a while back.
Like Bio Anchors – and like a lot of OpenPhil reports for that matter – the AI 2027 report is mainly a very complex estimation exercise.
It takes a certain way of modeling things as a given, and then does a huge amount of legwork to fill in the many numeric constants in an elaborate model of that kind, with questions like "is this actually a reasonable model?" and "what are the load-bearing assumptions here?" covered as a sort of afterthought.
For instance, the report predicts a type of automated R&D feedback loop often referred to as a "software intelligence explosion" or a "software-only singularity." There has been a lot of debate over the plausibility of this idea – see Eth and Davidson here for the "plausible" case, and Erdil and Barnett here for the "implausible" case, which in turn got a response from Davidson here. That's just a sampling of very recent entries in this debate, there's plenty more where that came from.
Notably, I don't think "AI 2027" is attempting to participate in this debate. It contains a brief "Addressing Common Objections" section at the end of the relevant appendix, but it's very clear (among other things, simply from the relative quantity of text spent on one thing versus another) that the "AI 2027" authors are not really trying to change the minds of "software intelligence explosion" skeptics. That's not the point of their work – the point is making all these detailed estimates about what such a thing would involve, if indeed it happens.
And the same holds for the rest of their (many) modeling assumptions. They're not trying to convince you about the model, they're just estimating its parameters.
But, as with Bio Anchors, the load-bearing modeling assumptions get you most of the way to the conclusion. So, despite the name, "AI 2027" isn't really trying to convince you that super-powerful AI is coming within the decade.
If you don't already expect that, you're not going to get much value out of these fiddly estimation details, because (under your view) there are still-unresolved questions – like "is a software intelligence explosion plausible?" – whose answers have dramatically more leverage over your expectations than facts like "one of the parameters in one of the sub-sub-compartments of their model is lognormally distributed with 80% CI 0.3 to 7.5."
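(As an aside, a stated 80% CI like 0.3 to 7.5 does fully pin down a lognormal: the 10th and 90th percentiles are exp(mu ± z·sigma) with z ≈ 1.2816, so two lines of algebra recover both parameters. A quick sketch:)

```python
import math

# Recover lognormal parameters from a stated 80% CI of (0.3, 7.5).
lo, hi = 0.3, 7.5
z = 1.2816  # standard-normal 90th percentile
mu = (math.log(lo) + math.log(hi)) / 2
sigma = (math.log(hi) - math.log(lo)) / (2 * z)

median = math.exp(mu)             # geometric midpoint of the CI: 1.5
mean = math.exp(mu + sigma**2 / 2)  # pulled well above the median by the tail
```

Note that the implied mean (about 3.3) is more than double the median (1.5): with tails this heavy, a handful of distributional choices like this one carry enormous leverage over the model's output, which is exactly the point being made above.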
---
Maybe this is obvious, I dunno? I've just seen some reactions where people express confusion because the whole picture seems unconvincing and under-motivated to them, and I guess I'm trying to explain what I think is going on.
And I'm also worried – as always with this stuff – that there are some people who will look at all those pages and pages of fancy numbers, and think "wow! this sounds crazy but I can't argue with Serious Expert Research™," and end up getting convinced even though the document isn't really trying to convince them in the first place.
---
Now, if you do buy all the assumptions of the model, then yes, I guess this seems like a valuable exercise. If you are literally Daniel Kokotajlo, and hence believe in all the kind of stuff that Daniel Kokotajlo believes, then it makes sense to do all this legwork to "draw in the fine details" of that high-level view. And yeah, if you think the End Times are probably coming in a few years (but you might be able to do something about that at the margins), then you probably do want to get very precise about exactly how much time you have left, and when it will become too late for this or that avenue for change.
(Note that while I don't agree with him about this stuff, I do respect Kokotajlo a lot! I mean, you gotta hand it to him... not only did he predict what we now call the "Gen AI boom" with eerie accuracy way back in 2021, he was also a whistleblower who refused to sign OpenAI's absurd you-can't-talk-about-the-fact-that-you-can't-talk-about-it non-disparagement agreement, thereby bringing it into public view at last.)
But, in short, this report doesn't really touch on the reasons I disagree with short timelines. It doesn't really engage with my main objections, nor is it trying to do so. If you don't already expect "AI" in "2027" then "AI 2027" is not going to change your view.
81 notes · View notes
picontrols · 2 months ago
Text
Is Model Predictive Control the Future of Process Automation?
Tumblr media
Did you know that nearly 70% of industrial processes still use old control methods? These methods can be slow and costly. But advanced strategies like Model Predictive Control (MPC) are changing that. MPC can predict and adjust in real-time, making it more effective than older systems.
This technology helps make processes more efficient, lowers costs, and improves product quality. In this article, we'll look at what Model Predictive Control is, how it works, and how different industries use it. You'll get a clear idea of its role in automation today.
Understanding the Basics of Process Control
To get what Model Predictive Control does, we first need to understand process control. Process control helps machines and systems run as planned in industries.
Old Control Methods and Their Drawbacks
Feedback control has been used for many years. But it's not perfect, especially for systems that are hard to manage. One expert said, "Traditional control methods' limits are clear, pushing for new tech."
Older methods can’t see future problems or change behavior before issues happen.
The Move Toward Smarter Control Systems
To get better results, industries have moved to new ways of control. These smarter systems use new tools and tech. By using them, companies can control their systems better and work more smoothly.
Model Predictive Control: A Smarter Way to Automate
Model Predictive Control (MPC) is a big step forward in how we control systems. It changes how we think about speed, accuracy, and output. MPC uses smart guesses and fast actions to help industries stay in control and improve accuracy.
How MPC Predicts and Improves Process Flow
MPC looks at what happened before to guess what might happen next. Experts say it's a strong way to boost how well systems work. Because MPC can change things before problems happen, it keeps everything running better.
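As a toy illustration of that predict-and-adjust loop (a from-scratch sketch, not any vendor's controller), here is a minimal receding-horizon MPC in Python: a simple plant model, a brute-force search over short control sequences, and only the first move applied before re-planning:

```python
import itertools

def mpc_step(x, a=0.9, b=0.5, setpoint=10.0, horizon=4,
             controls=(-1.0, 0.0, 1.0)):
    """Search every short control sequence, simulate the plant model
    forward, and return the first move of the best-scoring sequence."""
    best_u, best_cost = None, float("inf")
    for seq in itertools.product(controls, repeat=horizon):
        xp, cost = x, 0.0
        for u in seq:
            xp = a * xp + b * u           # predict the next state
            cost += (xp - setpoint) ** 2  # penalize deviation from target
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

# Receding horizon: apply only the first move, then re-plan each step.
x = 0.0
for _ in range(30):
    x = 0.9 * x + 0.5 * mpc_step(x)
print(round(x, 2))   # 4.79 -- as close to the setpoint as this actuator allows
```

Real controllers replace the brute-force search with a fast optimizer, but the loop is the same: predict, score, act on the first move, repeat.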
Why MPC Is Better Than Old Feedback Systems
MPC has many strong points compared to old systems. It works well in tough settings and can manage many things at once. Here are some key benefits:
Better Efficiency: MPC helps systems work faster and waste less.
Higher Output: It makes it easier to meet targets by adjusting in real-time.
Smooth Control: MPC keeps machines running with fewer stops or mistakes.
Solving Hard Problems with Smart Tech
MPC uses smart systems to solve hard problems. It’s great for settings with lots of moving parts. With MPC, everything stays in line and meets the rules that matter.
Real-Life Success: How Industries Use MPC
MPC is already showing results in real cases. It helps fine-tune systems in big industries. Here are a few examples:
Oil Refining: Better Flow and Less Waste
In oil plants, MPC improves how crude oil is handled. Plants can make more while using less energy and creating less waste.
Power Plants: Clean and Steady Output
Power plants use MPC to stay within emission rules. It helps balance how much power is made and how much pollution is produced. This means they can meet demand and protect the environment.
Starting with MPC: Challenges and Tips
It’s not always easy to get started with MPC. But there are ways to make it work. As one expert put it, "MPC's benefits are worth the effort."
With good planning and support, companies can make the switch and start seeing results.
Conclusion: Embracing the Future of Process Automation
As more industries start using Model Predictive Control, they’ll see big changes. MPC can make work faster, cheaper, and better. It also helps companies do more with less.
Trying Model Predictive Control can help you stay ahead. This tool is shaping the future of automation by making systems smarter and more reliable.
For any business that wants to grow and work better, Model Predictive Control is a smart choice. It’s a key tool for success in modern industry.
0 notes
booksinmythorax · 1 year ago
Text
So, in the midst of... you know, everything, life at the library goes on and I wanted to talk about the difference between Libby and Hoopla.
For those not in the know, Libby and Hoopla are both apps/software that libraries can use to offer digital items to our patrons. Libby does ebooks (including graphic novels) and audiobooks.
Tumblr media
Hoopla does ebooks, audiobooks, digital comics (weekly issues, not just trades or graphic novels), movies, TV shows, and music.
Tumblr media
A little while back, my library system had to cut down on the number of Hoopla items patrons can check out per month. This caused a little bit of a stir - people like Hoopla! And they should! It's really cool! But the reason we had to cut back there and not with Libby was because the ways we pay for Libby and Hoopla are different.
Libby uses a pay-per-license model. This means that when we buy an ebook or audiobook on Libby, it's like we're buying one copy of a physical item. Except, because publishers are vultures, it's often much more expensive than buying one copy of the physical book - unless it's an audiobook, in which case buying the CDs might very well be more expensive than buying the digital license on Libby. That's why you might have to wait on a list for a Libby title that's really popular: we only have licenses for so many "copies". These licenses can be in perpetuity (i.e. you pay once and you can use that copy forever) or, more commonly, for a limited length of time like a year. Once that time is up, we decide whether to pay for the license for each copy again.
Hoopla uses a pay-per-circulation model. There's no waiting: once you, the patron, decide you'd like to check something out, you can do so immediately and we pay Hoopla a smaller amount of money to essentially "rent" the license from them. Cool, right?
Except that the pay-per-circ model adds up. If we have access to a brand new or popular title on Libby and Hoopla, and the Libby copy has a long waiting list, patrons might hop over to Hoopla to check it out immediately. If enough people do this, we might end up paying more overall for the Hoopla item on a per-circulation basis than we did for the license on the Libby item. That's why libraries typically limit the number of Hoopla checkouts patrons can use per month: because otherwise, we can't predict the amount we'll be paying Hoopla in the same way we can predict the amount we'll pay Libby.
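The budgeting difference between the two models is easy to see in a few lines. These prices are made up for illustration - real license and per-circulation fees vary by title and vendor:

```python
def libby_cost(license_price, copies):
    """Pay-per-license: a fixed, predictable cost no matter how many
    checkouts each copy gets during the license term."""
    return license_price * copies

def hoopla_cost(per_circ_fee, checkouts):
    """Pay-per-circulation: no waiting lists, but cost scales with use."""
    return per_circ_fee * checkouts

# Hypothetical numbers: three $60 licenses vs a $2.50 per-circ fee.
print(libby_cost(60.00, copies=3))        # 180.0 -- capped for the term
print(hoopla_cost(2.50, checkouts=50))    # 125.0 -- cheaper, so far
print(hoopla_cost(2.50, checkouts=120))   # 300.0 -- now the pricier option
```

The license cost is fixed up front; the per-circ cost is open-ended, which is exactly why libraries cap monthly Hoopla checkouts.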
Let me be clear: If a library offers a digital service and it would be helpful to you, please use it. Don't deny yourself a service you need or would enjoy in some misguided attempt to save your library some cash. We want to offer digital services, not least because ebooks and audiobooks have accessibility features that print books often don't. If your library has Libby and Hoopla and you get utility out of both, use both!
That said, if you're upset with the lower number of checkouts on Hoopla or the limited number of titles or copies available to you on Libby, you know who you should talk to? Your elected officials. Local, state, and federal. Because those folks are the ones who decide how much money we get, and what we can spend it on.
Don't go to them angry, either, because then we'll get scolded for not using the funds they "gave" us appropriately. (If you're a frequent library user, you might be shocked at how anti-library many local government officials already are.) Write your officials an email, call them, or show up at a board meeting and say you like the services the library offers, but you'd love it if we had enough money to buy more books on Libby or offer more checkouts on Hoopla. Tell them directly that this is how you would like your tax dollars to be spent.
If anybody has questions about how Hoopla or Libby work, I'm happy to answer them! Just wanted to make sure we had a baseline understanding.
185 notes · View notes
asestimationsconsultants · 3 months ago
Text
Future-Ready Forecasts | Why Smart Projects Trust a Commercial Estimating Service
Introduction
In the fast-paced world of commercial construction, uncertainty is a constant. Market fluctuations, labor shortages, changing regulations, and unpredictable global events can all affect the costs and timelines of construction projects. To mitigate these risks, forward-thinking developers and contractors rely on a commercial estimating service to provide accurate forecasts. These services help in not only predicting costs but also in preparing for the challenges that may arise during the construction process. This article explores the importance of using commercial estimating services for future-ready forecasts and why they are trusted by the most successful construction projects.
Understanding the Role of Commercial Estimating Services
A commercial estimating service provides detailed cost breakdowns for a project, considering every possible expense, from materials and labor to permits and equipment. However, the value of such a service extends beyond simple cost prediction. Estimators use advanced software, historical data, and market intelligence to predict future costs with a high degree of accuracy. These forecasts enable project stakeholders to plan better and make informed decisions before, during, and after construction.
Staying Ahead of Market Fluctuations
The construction industry is subject to numerous market fluctuations, such as shifts in the prices of raw materials, fuel, and labor. A commercial estimating service stays ahead of these changes by continuously monitoring market trends and adjusting cost predictions accordingly. For example, if steel prices are expected to rise due to a global shortage, estimators can incorporate these changes into their forecasts. By anticipating price increases or decreases, the estimating service helps projects stay on budget and avoid unpleasant surprises.
Predicting Labor Costs and Availability
Labor costs are one of the most volatile aspects of any construction project. From changes in union agreements to labor shortages and immigration laws, labor costs can fluctuate significantly. A commercial estimating service uses historical data and market analysis to predict potential labor cost increases and the availability of skilled workers. With this information, project managers can plan for workforce needs more effectively, adjusting timelines and budgets to reflect potential changes in labor availability.
Mitigating the Risks of Supply Chain Disruptions
Supply chain disruptions have become a significant concern for the construction industry, especially in the wake of the COVID-19 pandemic. Materials may be delayed, and shipping costs may increase due to global supply chain issues. A commercial estimating service can help anticipate these challenges by incorporating potential delays and cost increases into their forecasts. By factoring in these risks, estimators help project managers plan ahead and prepare contingency strategies, ensuring the project continues to move forward even when external factors disrupt the supply chain.
Using Technology to Improve Forecast Accuracy
Modern commercial estimating services leverage cutting-edge technology, such as 3D modeling, Building Information Modeling (BIM), and artificial intelligence, to improve the accuracy of cost forecasts. These technologies allow estimators to simulate different scenarios and predict outcomes more precisely. For instance, BIM allows the team to visualize the project in three dimensions, helping to identify potential cost issues before construction begins. AI algorithms can analyze vast amounts of historical data to predict cost trends and provide more accurate future forecasts.
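To give a flavor of the simplest possible version of such a forecast (a deliberately naive sketch - real estimating software uses far richer models and data), here is a straight-line trend fit over past unit prices, extrapolated one period ahead:

```python
def forecast_next(costs):
    """Ordinary least-squares line through past prices; predict one ahead."""
    n = len(costs)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(costs) / n
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(xs, costs))
    slope /= sum((xi - mean_x) ** 2 for xi in xs)
    return mean_y + slope * (n - mean_x)

steel_per_ton = [780, 800, 825, 845]   # hypothetical quarterly prices
print(forecast_next(steel_per_ton))    # 867.5 -- next quarter's projection
```

Production tools layer seasonality, supplier quotes, and scenario analysis on top of this kind of baseline, but the principle - fit history, project forward - is the same.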
Ensuring Accurate Scheduling and Cost Control
Estimating services do more than just predict costs—they also help in maintaining project schedules. By providing detailed cost breakdowns and timelines, commercial estimating services help ensure that projects are completed on time and within budget. Accurate scheduling based on future-ready forecasts reduces the risk of delays, which are often caused by unforeseen costs. If material prices increase or if additional labor is required, estimators can adjust the project’s schedule accordingly, ensuring that the work progresses smoothly.
Providing Strategic Decision-Making Support
A commercial estimating service also supports strategic decision-making by offering comprehensive cost analyses. These services provide developers and contractors with a clear picture of the financial health of a project, enabling them to make informed decisions about project scope, material selection, and financing. By using accurate cost forecasts, project stakeholders can evaluate different strategies, choose the most cost-effective options, and ensure that their decisions align with long-term financial goals.
Improving Project Collaboration
Construction projects often involve multiple stakeholders, including developers, architects, contractors, and subcontractors. A commercial estimating service facilitates collaboration by providing a single, reliable source of cost information. When all stakeholders have access to accurate forecasts, they can work together more effectively, align their expectations, and avoid misunderstandings about budget and timeline. Clear communication based on trusted cost estimates leads to smoother project execution and better overall results.
Conclusion
A commercial estimating service is more than just a tool for predicting project costs; it is an essential resource for ensuring that construction projects remain future-ready. By leveraging technology, monitoring market fluctuations, and providing accurate labor cost predictions, these services help mitigate risks and provide developers and contractors with the insights needed to make informed decisions. With the increasing complexity of modern construction projects, the importance of using a commercial estimating service to forecast future costs and manage risks cannot be overstated. Projects that trust commercial estimating services are better prepared to navigate the uncertainties of the construction world and successfully deliver projects on time and within budget.
0 notes
mariacallous · 9 months ago
Text
On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a “confabulation” or “hallucination” in the AI field.
Upon its release in 2022, OpenAI claimed that Whisper approached “human level robustness” in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.
The fabrications pose particular risks in health care settings. Despite OpenAI’s warnings against using Whisper for “high-risk domains,” over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children’s Hospital Los Angeles are among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.
Nabla acknowledges that Whisper can confabulate, but it also reportedly erases original audio recordings “for data safety reasons.” This could cause additional issues, since doctors cannot verify accuracy against the source material. And deaf patients may be highly impacted by mistaken transcripts since they would have no way to know if medical transcript audio is accurate or not.
The potential problems with Whisper extend beyond health care. Researchers from Cornell University and the University of Virginia studied thousands of audio samples and found Whisper adding nonexistent violent content and racial commentary to neutral speech. They found that 1 percent of samples included “entire hallucinated phrases or sentences which did not exist in any form in the underlying audio” and that 38 percent of those included “explicit harms such as perpetuating violence, making up inaccurate associations, or implying false authority.”
In one case from the study cited by AP, when a speaker described “two other girls and one lady,” Whisper added fictional text specifying that they “were Black.” In another, the audio said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” Whisper transcribed it to, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”
An OpenAI spokesperson told the AP that the company appreciates the researchers’ findings and that it actively studies how to reduce fabrications and incorporates feedback in updates to the model.
Why Whisper Confabulates
The key to Whisper’s unsuitability in high-risk domains comes from its propensity to sometimes confabulate, or plausibly make up, inaccurate outputs. The AP report says, "Researchers aren’t certain why Whisper and similar tools hallucinate," but that isn't true. We know exactly why Transformer-based AI models like Whisper behave this way.
Whisper is based on technology that is designed to predict the next most likely token (chunk of data) that should appear after a sequence of tokens provided by a user. In the case of ChatGPT, the input tokens come in the form of a text prompt. In the case of Whisper, the input is tokenized audio data.
The transcription output from Whisper is a prediction of what is most likely, not what is most accurate. Accuracy in Transformer-based outputs is typically proportional to the presence of relevant accurate data in the training dataset, but it is never guaranteed. If there is ever a case where there isn't enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model will fall back on what it “knows” about the relationships between sounds and words it has learned from its training data.
According to OpenAI in 2022, Whisper learned those statistical relationships from “680,000 hours of multilingual and multitask supervised data collected from the web.” But we now know a little more about the source. Given Whisper's well-known tendency to produce certain outputs like "thank you for watching," "like and subscribe," or "drop a comment in the section below" when provided silent or garbled inputs, it's likely that OpenAI trained Whisper on thousands of hours of captioned audio scraped from YouTube videos. (The researchers needed audio paired with existing captions to train the model.)
There's also a phenomenon called “overfitting” in AI models where information (in this case, text found in audio transcriptions) encountered more frequently in the training data is more likely to be reproduced in an output. In cases where Whisper encounters poor-quality audio in medical notes, the AI model will produce what its neural network predicts is the most likely output, even if it is incorrect. And the most likely output for any given YouTube video, since so many people say it, is “thanks for watching.”
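That "most likely output" fallback can be shown with a toy bigram counter - a deliberate oversimplification (Whisper is a Transformer, not a bigram table), but the failure has the same shape: with no other evidence, the most frequent continuation in the training data wins:

```python
from collections import Counter, defaultdict

# Toy "training data": the kind of caption lines scraped from video sites.
corpus = [
    "thanks for watching",
    "thanks for watching",
    "thanks for the update",
    "like and subscribe",
]

# Count which word follows which -- a crude stand-in for learned weights.
bigrams = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict_next(word):
    """With no audio evidence to go on, emit whatever the training
    data says most often comes next -- or nothing if the word is unseen."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("thanks"))   # for
print(predict_next("for"))      # watching -- the overfit favorite
```

Garbled audio plays the role of the missing evidence here: the model can't condition on what it can't hear, so the training-data favorite comes out instead.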
In other cases, Whisper seems to draw on the context of the conversation to fill in what should come next, which can lead to problems because its training data could include racist commentary or inaccurate medical information. For example, if many examples of training data featured speakers saying the phrase “crimes by Black criminals,” when Whisper encounters a “crimes by [garbled audio] criminals” audio sample, it will be more likely to fill in the transcription with “Black."
In the original Whisper model card, OpenAI researchers wrote about this very phenomenon: "Because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself."
So in that sense, Whisper "knows" something about the content of what is being said and keeps track of the context of the conversation, which can lead to issues like the one where Whisper identified two women as being Black even though that information was not contained in the original audio. Theoretically, this erroneous scenario could be reduced by using a second AI model trained to pick out areas of confusing audio where the Whisper model is likely to confabulate and flag the transcript in that location, so a human could manually check those instances for accuracy later.
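The flagging idea can be sketched very simply. This is a hypothetical illustration using per-token confidence scores rather than a trained second model, and it does not use any real Whisper API:

```python
def flag_low_confidence(tokens, probs, threshold=0.5):
    """Return (token, prob) pairs the model itself was unsure about,
    so a human can re-check those spots against the audio."""
    return [(t, p) for t, p in zip(tokens, probs) if p < threshold]

# Hypothetical per-token confidences from a decoder pass.
tokens = ["he", "took", "a", "big", "piece"]
probs  = [0.90, 0.40, 0.95, 0.30, 0.80]
flagged = flag_low_confidence(tokens, probs)
print(flagged)   # [('took', 0.4), ('big', 0.3)]
```

A model's own confidences are an imperfect confabulation signal - it can be confidently wrong - which is why the article's suggestion of a separately trained verifier is the stronger version of this idea.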
Clearly, OpenAI's advice not to use Whisper in high-risk domains, such as critical medical records, was a good one. But health care companies are constantly driven by a need to decrease costs by using seemingly "good enough" AI tools—as we've seen with Epic Systems using GPT-4 for medical records and UnitedHealth using a flawed AI model for insurance decisions. It's entirely possible that people are already suffering negative outcomes due to AI mistakes, and fixing them will likely involve some sort of regulation and certification of AI tools used in the medical field.
87 notes · View notes
Note
I have a question: can an AC/20 use high velocity rounds, or would that just destroy the barrel and feeding loader for using a round like that? Been looking into getting ammo for the AC/20 that is not the standard ones. Trying to make my Marauder be a sniper one for long range shooting, since it's configured for long range sniping.
I would look into the Hypervelocity Autocannon line. It offers increased range over the standard models, though at the cost of less ammo per ton. There is no AC/20 variant, unfortunately. But the HVAC/10 does have a 33% greater maximum range than a standard AC/10. And even with the ammunition being more bulky, it still loads 8 shots per ton compared to an AC/20's 5 shots per ton. So you only lose 20% of your maximum potential damage per ton. Also, while the HVAC/10 weighs the same as your AC/20 (14 tons for both options) the HVAC/10 takes up 40% less internal space - you could use that space to upgrade your 'Mech with an Endo-Composite chassis...
Hang on.
[20 minutes elapses]
All right. I have taken the liberty of designing you a full refit for your Marauder, with some help from my sibkin Xerxes.
Marauder MAD-3R-LS
Tumblr media Tumblr media Tumblr media Tumblr media
To explain - the refit features upgraded weapons (all with extended range over your current loadout). The ERPPCs are effective out to nearly 700 meters. They are backed up by an HVAC/10 autocannon, effective out to 600 meters, fed by 3 tons of ammunition - enough for 4 continuous minutes of fire - in a CASE II protected ammunition bin (meaning you need not fear an ammo explosion any longer). As backup weapons, the standard medium lasers have been swapped for Clan ER medium lasers, giving you 66% increased range and 40% higher energy output for the same weight and bulk.
In terms of armor, there was enough space to fit Clan ferro-fibrous armor, meaning you now have slightly increased armor protection for less weight. We did have to make the cockpit slightly smaller, but the controls are now fully customizable, so you should not have any issues piloting - the cockpit is also now armored, with additional exterior armor to protect you from side and rear ambushes. We also managed to fit in a Clan endo-steel chassis, saving you further weight for no decrease in the chassis strength. We added in Clan double heat sinks and a Radical Heat Sink systems, as well as a combat computer and modifications to the weapons themselves. With all of those combined, you no longer have to worry about heat build-up except in the most intense situations - which should help with your comfort both in combat and when traveling.
Since you seem to be doing a lot of traveling on your own, and getting into quite a few battles on your own, we also added a supercharger, a Clan active probe (with some software patches to extend detection range to 210 meters), and a Clan ECM suite - in short, you can now move faster, detect enemies at much longer ranges, and the enemy will find it more difficult to hit you in combat - and you can effect a measure of stealth using the ECM suite to spoof enemy sensors.
Finally, we managed to keep the increased rotational ability of the arms, and mounted the HVAC on a directional mount, just like the original Marauder - meaning you can still fire behind you if needed. In addition, we increased the torso rotation speed and tolerances, meaning you can torso twist further in the same amount of time - useful if you find yourself surrounded. We added a predictive battle analysis system that should help alert you to enemy moves before they make them, letting you act faster in combat. Finally, we added multi-trac and variable targeting programming to the targeting computer, as well as accurized your weapons (and stabilized the HVAC/10 for firing on the move) - loosely translated, this means you can target multiple opponents at the same time with less issue, and you will be much more likely to hit them, even while moving.
In terms of comfort, we added improved communications equipment. This will allow you to somewhat cut through enemy ECM jamming and link up to any orbiting satellites to get a better picture of your surroundings in combat (as well as tune into local broadcasts for entertainment while out on mission). And finally, we used the remaining space in the head to fit an extensive set of amenities. A full-size hideaway bed, a compact but advanced kitchen unit, a fold-out toilet, a compact shower, and a half-ton of cargo space to carry your own additional supplies with you. We also added a rumble seat - in case you ever find yourself with a passenger.
I have taken the liberty of sending you the refit files in question. If you bring your 'Mech to any Sea Fox licensed MechTech shop, present the waiver included in the files, and they will refit your Marauder to these exact specifications.
I have also arranged for the entire cost of the refit to be covered with no debt on your part.
Enjoy.
11 notes · View notes
skyfallscotland · 6 months ago
Text
Twisted Love, by Ana Huang 📷
“I never claimed to be Prince Charming, and my love isn’t a fairy-tale type of love. I’m a fucked-up person with fucked-up morals. I won’t write you poems or serenade you beneath the moonlight. But you are the only woman I have eyes for."
Tumblr media
I was so close to DNF'ing this, and honestly, I hate-read my way to the end. It's a shame really, because I feel like Huang isn't a bad writer per se, but her characters are completely intolerable (to me) and she needed to make better choices about what was included in this book.
Firstly, this book has every trope you could possibly imagine and I am not exaggerating. This is every wattpad story ever written crammed into one (too long tbh) book. Brother's best friend, grumpy-sunshine, billionaire CEO who doesn't like anyone else, crazy ex-boyfriend, one bed, family members out for your money, family members who wanted to kill you, oh shit actually you're adopted—everything. EVERYTHING. It's too much.
And even if we put that aside...let's move onto the characters.
Ava: the girl with so much trauma she has night terrors and a mysterious past she can't remember, whose father acts like he hates her, whose ex stalks and manhandles her, and oh yeah, she's SO nice and SO happy and just the BEST PERSON EVER all the fucking time, because none of that affected her. At all. ✔️ Check.
Alex: What isn't Alex Volkov? No seriously, what can't he do? And that's not a compliment.
He drove the same way he walked, talked, and breathed—steady and controlled, with an undercurrent of danger warning those foolish enough to contemplate crossing him that doing so would be their death sentence.
Alex’s parents had died when he was young and left him a pile of money he’d quadrupled the value of when he came into his inheritance at age eighteen. Not that he’d needed it, because he’d invented a new financial modeling software in high school that made him a multimillionaire before he could vote. With an IQ of 160, Alex Volkov was a genius, or close to it. He was the only person in Thayer’s history to complete its five-year joint undergrad/ MBA program in three years, and at age twenty-six, he was the COO of one of the most successful real estate development companies in the country. He was a legend, and he knew it.
“I’m not bragging. I have hyperthymesia, or HSAM. Highly superior autobiographical memory. Look it up.”
Stop. Please, I'm begging you.
And if you thought that might have just been her thoughts about him, well...
I didn’t do sweet nothings or lovemaking. I fucked a certain way, and only a specific type of woman was into that shit. Not hard-core BDSM, but not soft. No kissing, no face-to-face contact. Women agreed, then tried to change it up halfway through, after which I’d stop and show them the door.
You like to take a woman from behind and throw in some dirty talk and degradation babe, it's really not that deep 🥴
It's giving ✨i'm not like other guys✨
So anyway after we filter through at least 3178920 predictable plots and sideplots and just sideways journeys that didn't really need to be in here, finally we get to a third-act breakup (his choice) after which he decides he doesn't like it (also his choice) and decides to stalk her. For over a year.
“I’ll file a restraining order against you. Have you arrested for stalking.” “You can try, but I can’t guarantee my friends in the British government will comply.” His face darkened. “And if you think I’m leaving you alone and unprotected anywhere, you don’t know me at all.”
Ummm bro, the only danger to her here is you, are you kidding me? And sunny old Ava who was literally stalked by her last boyfriend (and it was a whole damn plot point) is like you know what, I love this guy who's stalking me! I'll give him another chance! Sure!
But wait, wait, wait, only after he serenades her with a love song. I'm not kidding. Oh, and you guessed it—voice of an angel, because there's nothing Alex Volkov can't do.
Personally I feel like ten years have passed since I picked up this book yesterday and some chick was stranded in the rain on the side of the road.
Also, minus ten points for
thick, and hard as a steel pipe—
Tumblr media
Just...no. Just no.
I'd love to have something more positive to say but I really don't have anything. The side characters were more tolerable than the main characters and that's the only reason I'm wondering if I should subject myself to the next book in the series, but honestly? I really don't think I can. I wish I'd picked up one of the fanfics on my TBR instead 😶
32 notes · View notes
do-you-have-a-flag · 2 months ago
Text
putting aside the ethics of 'A.I' videos in their creation/usage/waste/economics, just on a purely technical level one thing i find interesting is no matter if the result looks photorealistic or like 3d CGI- it's all technically 2d image generation.
unless specifically used as an add-on in 3d rendering software, of course, pretty much every ai video you see online is 2d art. the space rendered is a single plane - think of it like doing a digital painting on a single layer. the depth/perspective is an illusion, rendered frame by frame as best the model can predict from the data it has been fed.
obviously videos of 3d models in animation are a 2d file. like a pixar movie. but in video games you do have a fully rendered 3d character in a 3d rendered space, that's why glitches that clip through environments are so funny. it's efficient to have stock animations and interaction conditions programmed onto rigged dolls and sets.
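The distinction is concrete: a 3D renderer carries real coordinates through a projection step every frame, while a generative video model emits 2D pixels directly. Here is textbook perspective projection for contrast (generic math, not any particular engine):

```python
def project(point3d, focal=1.0):
    """Perspective-project a 3D point onto the 2D image plane.
    A game engine does this for every vertex, every frame, so depth
    and occlusion are computed, not guessed."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

# The same point, twice as far away, lands at half the screen offset --
# that geometric consistency is what a frame-by-frame 2D generator
# has to imitate statistically instead of computing.
print(project((2.0, 1.0, 4.0)))   # (0.5, 0.25)
print(project((2.0, 1.0, 8.0)))   # (0.25, 0.125)
```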
by contrast, if you were to use a generative ai in a similar context, it would be animating a series of illustrations in real time - of sounds and scenarios. the complexity required for narrative consistency, plus the human desire to fuck up restrictions, hits up against a much more randomised set of programming. how would it deal with continuity of setting and personality? obviously chatbots already exist, but as the recent Fortnite Darth Vader debacle shows, there are limits to slapping a skin on a stock chatbot rather than building one custom.
i just think that there's so many problems that come from trying to make an everything generator that don't exist in the mediums it is trying to usurp because those mediums have a built in problem solving process that is inherent to the tools and techniques that make them up.
but also also, very funny to see algorithmic 2D pixel generation slapped with every label - "this photo, this video, this 3d render" - when at best it's describable as CGI. let's call it what it is.
but i could of course be wrong in my understanding of this technology, so feel free to correct me if you have better info, but my basic understanding of this tech is: binary code organised by -> human programming code to create -> computer software code that -> intakes information from data sets to output -> pixels and audio waveforms
9 notes · View notes