#AI Face Recognition solutions
third-eyeai · 3 months
Embracing the Future: Facial Recognition in Corporate Offices
In today’s technologically advanced world, corporate offices are continually seeking innovative solutions to enhance efficiency, security, and employee satisfaction. Facial recognition technology has emerged as a promising tool, especially in the domains of attendance tracking, time and attendance management, and payroll processing. Here, we explore current trends, challenges, functionality, benefits, and solutions related to facial recognition systems in corporate settings.
Current Trends in Facial Recognition Technology
Contactless Solutions: With heightened awareness of hygiene, contactless systems are in demand. Facial recognition offers a hands-free method to authenticate employees, minimizing physical contact.
Improved Accuracy: Advanced machine learning algorithms are continuously enhancing the accuracy and reliability of facial recognition systems, reducing errors and increasing trust.
Mask Detection: The capability to recognize faces with masks has become crucial, especially post-pandemic. Systems are being adapted to accurately identify individuals even when partially covered.
Anti-Spoofing Measures: To counteract spoofing attempts using photos or videos, modern systems incorporate advanced anti-spoofing technologies, ensuring only live faces are recognized.
Privacy and Regulation: As privacy concerns rise, systems are being developed to comply with stringent data protection regulations like GDPR and CCPA, ensuring responsible usage of biometric data.
Integration with AI and Analytics: Facial recognition is being integrated with AI to provide insightful analytics on employee attendance, punctuality, and even mood analysis, aiding in better management decisions.
Cloud-Based Solutions: Cloud-based facial recognition systems offer scalability and remote accessibility, making it easier to manage attendance data across multiple locations.
Customization and Flexibility: Businesses are seeking customizable solutions that can be tailored to their specific needs, ensuring seamless integration with existing systems.
Challenges and Issues in Corporate Offices
Privacy Concerns: Employees may be wary of how their biometric data is used and stored, raising concerns about surveillance and misuse of personal information.
Legal and Regulatory Compliance: Navigating the complex landscape of biometric data laws and ensuring compliance can be challenging for organizations.
Accuracy in Varied Conditions: Ensuring high accuracy in different lighting conditions, angles, and when faces are partially obscured remains a significant challenge.
Security Vulnerabilities: Facial recognition systems can be susceptible to spoofing and hacking, necessitating robust security measures to protect sensitive data.
Integration Complexity: Integrating facial recognition systems with existing HR and payroll software can be complex and may require significant customization.
User Acceptance: Gaining acceptance and trust from employees regarding the use of facial recognition for attendance tracking can be difficult.
Cost of Implementation: The initial cost of deploying facial recognition systems, including hardware and software, can be prohibitive for some organizations.
How Facial Recognition Attendance Systems Work
Enrollment: Employees are enrolled by capturing their facial images, which are then converted into unique faceprints stored in a database.
Capture Attendance: As employees arrive, their faces are scanned by a camera. The system detects and aligns the face for accurate recognition.
Feature Extraction: Key facial features are extracted to create a faceprint that is compared against stored templates.
Matching: The system matches the extracted faceprint with the database to verify the identity.
Attendance Recording: Upon a successful match, the system logs the attendance with a timestamp in the centralized database.
Real-Time Feedback: Employees receive instant feedback confirming their attendance has been recorded.
Data Management: Attendance data is integrated with time and attendance management systems, providing comprehensive records for payroll processing.
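Under the hood, the feature-extraction, matching, and recording steps usually reduce to comparing embedding vectors. The sketch below is a minimal illustration of that matching-and-logging loop, assuming an upstream model has already converted each face image into a numeric faceprint; the class name, threshold, and vectors are all invented for the example:

```python
import math
from datetime import datetime

def cosine_similarity(a, b):
    # Similarity between two faceprints (embedding vectors), in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class AttendanceSystem:
    def __init__(self, threshold=0.9):
        self.templates = {}   # employee_id -> enrolled faceprint
        self.log = []         # (employee_id, timestamp) attendance records
        self.threshold = threshold

    def enroll(self, employee_id, faceprint):
        # Enrollment: store the faceprint captured when the employee signs up.
        self.templates[employee_id] = faceprint

    def record_attendance(self, faceprint, when=None):
        # Matching: compare the captured print against every stored template,
        # then log a timestamped record only on a confident match.
        best_id, best_score = None, -1.0
        for emp_id, template in self.templates.items():
            score = cosine_similarity(faceprint, template)
            if score > best_score:
                best_id, best_score = emp_id, score
        if best_score >= self.threshold:
            self.log.append((best_id, when or datetime.now()))
            return best_id   # real-time feedback: attendance confirmed
        return None          # below threshold: no record is written
```

In practice the threshold trades false accepts against false rejects, and a real deployment would compare embeddings produced by a trained face-recognition model rather than hand-made vectors.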
Benefits of Facial Recognition Systems
Automated Attendance Tracking: Reduces administrative overhead by automating the attendance recording process.
Contactless Operation: Enhances hygiene and safety by minimizing physical contact.
Accurate Timekeeping: Ensures precise tracking of working hours, reducing errors associated with manual entry.
Elimination of Time Theft: Prevents buddy punching and other forms of attendance fraud.
Real-Time Monitoring: Provides managers with real-time data on employee attendance patterns.
Efficient Payroll Management: Integrates seamlessly with payroll systems, automating wage calculations based on accurate attendance data.
Enhanced Security: Reduces the risk of unauthorized access with high-precision facial recognition.
Employee Satisfaction: Improves overall employee experience by simplifying the attendance process and ensuring timely, accurate payroll.
Solutions for Corporate Offices
Time and Attendance Management
Comprehensive Integration: Integrate facial recognition systems with existing HR and time management software to streamline operations.
Flexible Scheduling: Accommodate various work schedules, including remote and shift-based work, with accurate time tracking.
Compliance Tracking: Ensure adherence to labor laws and company policies regarding working hours and breaks.
Payroll Management
Automated Calculations: Utilize accurate attendance data to automate payroll calculations, minimizing errors.
Timely Payments: Ensure employees are paid on time, enhancing satisfaction and reducing disputes.
Cost Efficiency: Reduce administrative costs and the risk of payroll fraud through automated processes.
Detailed Reporting: Generate comprehensive reports on payroll expenses and attendance metrics for informed decision-making.
Regulatory Compliance: Maintain compliance with tax laws and payroll regulations, reducing the risk of penalties.
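As a toy illustration of how automated calculations flow from attendance data, a day's wages can be computed directly from the clock-in and clock-out stamps the recognition system logs; the rate, overtime threshold, and multiplier below are invented for the example:

```python
from datetime import datetime

HOURLY_RATE = 25.00         # illustrative flat rate
OVERTIME_THRESHOLD = 8.0    # hours per day before overtime applies
OVERTIME_MULTIPLIER = 1.5

def daily_pay(clock_in: datetime, clock_out: datetime) -> float:
    """Compute one day's wages from a pair of attendance timestamps."""
    hours = (clock_out - clock_in).total_seconds() / 3600.0
    regular = min(hours, OVERTIME_THRESHOLD)
    overtime = max(hours - OVERTIME_THRESHOLD, 0.0)
    return round(regular * HOURLY_RATE
                 + overtime * HOURLY_RATE * OVERTIME_MULTIPLIER, 2)
```

A 9:00-to-18:30 day, for instance, yields eight regular hours plus 1.5 overtime hours, and the same timestamps can feed the reporting and compliance checks described above.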
By addressing these challenges and leveraging the benefits, facial recognition technology can revolutionize attendance tracking, time and attendance management, and payroll processing in corporate offices, leading to more efficient and secure workplace operations.
Tektronix Technologies' face recognition technology offers users precision far beyond the human eye, supports multiple attributes, age ranges, and complex environments, and can be deployed on cloud, edge, or embedded platforms, including smartphone unlocking and access control, delivering accurate, fast, and highly effective results across industries.
fireflysummers · 9 months
Heroes, Gods, and the Invisible Narrator
Slay the Princess as a Framework for the Cyclical Reproduction of Colonialist Narratives in Data Science & Technology
An Essay by FireflySummers
All images are captioned.
Content Warnings: Body Horror, Discussion of Racism and Colonialism
Spoilers for Slay the Princess (2023) by @abby-howard and Black Tabby Games.
If you enjoy this article, consider reading my guide to arguing against the use of AI image generators or the academic article it's based on.
Introduction: The Hero and the Princess
You're on a path in the woods, and at the end of that path is a cabin. And in the basement of that cabin is a Princess. You're here to slay her. If you don't, it will be the end of the world.
Slay the Princess is a 2023 indie horror game by Abby Howard and published through Black Tabby Games, with voice talent by Jonathan Sims (yes, that one) and Nichole Goodnight.
The game starts with you dropped without context in the middle of the woods. But that’s alright. The Narrator is here to guide you. You are the hero, you have your weapon, and you have a monster to slay.
From there, it's the player's choice exactly how to proceed--whether that be listening to the voice of the narrator, or attempting to subvert him. You can kill her as instructed, or sit and chat, or even free her from her chains.
It doesn't matter.
Regardless of whether you are successful in your goal, you will inevitably (and often quite violently) die.
And then...
You are once again on a path in the woods.
The cycle repeats itself, the narrator seemingly none the wiser. But the woods are different, and so is the cabin. You're different, and worse... so is she.
Based on your actions in the previous loop, the princess has... changed. Distorted.
Had you attempted a daring rescue, she is now a damsel--sweet and submissive and already fallen in love with you.
Had you previously betrayed her, she has warped into something malicious and sinister, ready to repay your kindness in full.
But once again, it doesn't matter.
Because no matter what you choose, no matter how the world around you contorts under the weight of repeated loops, it will always be you and the princess.
Why? Because that’s how the story goes.
So says the narrator.
So now that we've got that out of the way, let's talk about data.
Chapter I: Echoes and Shattered Mirrors
The problem with "data" is that we don't really think too much about it anymore. Or, at least, we think about it in the same abstract way we think about "a billion people." It's gotten so big, so seemingly impersonal, that it's easy to forget that the contemporary concept of "data" in the West is a phenomenon only a couple of centuries old [1].
This modern conception of the word describes the ways that we translate the world into words and numbers that can then be categorized and analyzed. As such, data has a lot of practical uses, whether that be putting a rover on Mars or tracking the outbreak of a viral contagion. However, this functionality makes it all too easy to overlook the fact that data itself is not neutral. It is gathered by people, sorted into categories designed by people, and interpreted by people. At every step, there are people involved, such that contemporary technology is embedded with systemic injustices, and not always by accident.
The reproduction of systems of oppression is most obvious from the margins. In his 2019 article As If, Ramon Amaro describes the Aspire Mirror (2016), a speculative design project by Joy Buolamwini that contended with the fact that a standard facial recognition algorithm library had been trained almost exclusively on white faces. The simplest workaround was to artificially lighten darker skin tones for the algorithm to recognize, which Amaro uses to illustrate the way that technology is developed with an assumption of whiteness [2].
This observation applies across other intersections as well, such as trans identity [3]. Automatic gender recognition has been colloquially dubbed "The Misgendering Machine" [4] for its insistence on classifying people into a strict gender binary based only on physical appearance.
This has also popped up in my own research, brought to my attention by the artist @b4kuch1n who has spoken at length with me about the connection between their Vietnamese heritage and the clothing they design in their illustrative work [5]. They call out AI image generators for reinforcing colonialism by stripping art with significant personal and cultural meaning of their context and history, using them to produce a poor facsimile to sell to the highest bidder.
All this describes an iterative cycle that defines normalcy through a white, western lens, with a limited range of acceptable diversity. Within this cycle, AI feeds on data gathered under colonialist ideology, then produces artifacts that reinforce existing systemic bias. When those artifacts are, in turn, fed back to the machine as data, the bias becomes all the more severe, and the range of acceptability narrower [2, 6].
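That narrowing can be caricatured with a toy model, not drawn from any of the cited work: suppose each training round retains under-represented examples at a discounted rate relative to the majority before the pool is renormalized and re-fed. The represented share then shrinks on every iteration:

```python
def iterate_share(minority_share, retention=0.8, rounds=5):
    """Track the minority fraction of a training pool across rounds,
    where each round keeps only `retention` of minority examples
    (relative to the majority) before renormalizing the pool."""
    history = [minority_share]
    for _ in range(rounds):
        kept_minority = minority_share * retention
        kept_majority = 1.0 - minority_share
        minority_share = kept_minority / (kept_minority + kept_majority)
        history.append(minority_share)
    return history
```

Starting from a 30 percent share, five rounds at 80 percent retention leave under a fifth of the pool: the bias compounds even though no single round looks drastic.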
Luciana Parisi and Denise Ferreira da Silva touch on a similar point in their article Black Feminist Tools, Critique, and Techno-poethics but on a much broader scale. They call up the Greek myth of Prometheus, who was punished by the gods for his hubris for stealing fire to give to humanity. Parisi and Ferreira da Silva point to how this, and other parts of the “Western Cosmology” map to humanity’s relationship with technology [7].
However, while this story seems to celebrate the technological advancement of humanity, there are darker colonialist undertones. It frames the world in terms of the gods and man, the oppressor and the oppressed; but it provides no other way of being. So instead the story repeats itself, with so-called progress an inextricable part of these two classes of being. This doesn’t bode well for visions of the future, then–because surely, eventually, the oppressed will one day be the machines [7, 8].
It’s… depressing. But it’s only really true, if you assume that that’s the only way the story could go.
“Stories don't care who takes part in them. All that matters is that the story gets told, that the story repeats. Or, if you prefer to think of it like this: stories are a parasitical life form, warping lives in the service only of the story itself.” ― Terry Pratchett, Witches Abroad
Chapter II: The Invisible Narrator
So why does the narrator get to call the shots on how a story might go? Who even are they? What do they want? How much power do they actually have?
With the exception of first-person writing, the narrator is usually invisible. This is different from an unreliable narrator: with an unreliable narrator, the audience must at some point become aware of their presence for the story to function as intended. An invisible narrator is never meant to be seen.
In Slay the Princess, the narrator would very much like to be invisible. Instead, he has been dragged out into the light, because you (and the inner voices you pick up along the way), are starting to argue with him. And he doesn’t like it.
Despite his claims that the princess will lie and cheat in order to escape, as the game progresses it’s clear that the narrator is every bit as manipulative–if not more so, because he actually knows what’s going on. And, if the player tries to diverge from the path that he’s set before them, the correct path, then it rapidly becomes clear that he, at least to start, has the power to force that correct path.
While this is very much a narrative device, the act of calling attention to the narrator is important beyond that context. 
The Hero’s Journey is the true monomyth, something to which all stories can be reduced. It doesn’t matter that the author, Joseph Campbell, was a raging misogynist whose framework flattened cultures and stories to fit a western lens [9, 10]. It was used in Star Wars, so clearly it’s a universal framework.
The metaverse will soon replace the real world and crypto is the future of currency! Never mind that the organizations pushing it are suspiciously pyramid shaped. Get on board or be left behind.
Generative AI is pushed as the next big thing. The harms it inflicts on creatives and the harmful stereotypes it perpetuates are just bugs in the system. Never mind that the evangelists for this technology speak over the concerns of marginalized people [5]. That’s a skill issue, you gotta keep up.
Computers will eventually, likely soon, advance so far as to replace humans altogether. The robot uprising is on the horizon [8]. 
Who perpetuates these stories? What do they have to gain?
Why is the only story for the future replications of unjust systems of power? Why must the hero always slay the monster?
Because so says the narrator. And so long as they are invisible, it is simple to assume that this is simply the way things are.
Chapter III: The End...?
This is the part where Slay the Princess starts feeling like a stretch, but I’ve already killed the horse so I might as well beat it until the end too.
Because what is the end result here?
According to the game… collapse. A recursive story whose biases narrow the scope of each iteration ultimately collapses in on itself. The princess becomes so sharp that she is nothing but blades to eviscerate you. The princess becomes so perfect a damsel that she is a caricature of the trope. The story whittles itself away to nothing. And then the cycle begins anew.
There’s no climactic final battle with the narrator. He created this box, set things in motion, but he is beyond the player’s reach to confront directly. The only way out is to become aware of the box itself, and the agenda of the narrator. It requires acknowledgement of the artificiality of the roles thrust upon you and the Princess, the false dichotomy of hero or villain.
Slay the Princess doesn’t actually provide an answer to what lies outside of the box, merely acknowledges it as a limit that can be overcome. 
With regards to the less fanciful narratives that comprise our day-to-day lives, it’s difficult to see the boxes and dichotomies we’ve been forced into, let alone what might be beyond them. But if the limit placed is that there are no stories that can exist outside of capitalism, outside of colonialism, outside of rigid hierarchies and oppressive structures, then that limit can be broken [12].
Denouement: Doomed by the Narrative
Video games are an interesting artistic medium, due to their inherent interactivity. The commonly accepted mechanics of the medium, such as flavor text that provides in-game information and commentary, are an excellent example of an invisible narrator. Branching dialogue trees and multiple endings can help obscure this further, giving the player a sense of genuine agency… which provides an interesting opportunity to drag an invisible narrator into the light.
There are a number of games that have explored the power differential between the narrator and the player (The Stanley Parable, Little Misfortune, Undertale, Buddy.io, OneShot, etc…)
However, Slay the Princess works well here because it not only emphasizes the artificial limitations that the narrator sets on a story, but the way that these stories recursively loop in on themselves, reinforcing the fears and biases of previous iterations. 
Critical data theory probably had nothing to do with the game’s development (Abby Howard if you're reading this, lmk). However, it works as a surprisingly cohesive framework for illustrating the ways that we can become ensnared by a narrative, and the importance of knowing who, exactly, is narrating the story. Although it is difficult or impossible to conceptualize what might exist beyond the artificial limits placed by even a well-intentioned narrator, calling attention to them and the box they’ve constructed is the first step in breaking out of this cycle.
“You can't go around building a better world for people. Only people can build a better world for people. Otherwise it's just a cage.” ― Terry Pratchett, Witches Abroad
Epilogue
If you've read this far, thank you for your time! This was an adaptation of my final presentation for a Critical Data Studies course. Truthfully, this course posed quite a challenge--I found the readings of philosophers such as Kant, Adorno, Foucault, etc... difficult to parse. More contemporary scholars were significantly more accessible. My only hope is that I haven't gravely misinterpreted the scholars and researchers whose work inspired this piece.
I honestly feel like this might have worked best as a video essay, but I don't know how to do those, and don't have the time to learn or the money to outsource.
Slay the Princess is available for purchase now on Steam.
Screencaps from ManBadassHero Let's Plays: [Part 1] [Part 2] [Part 3] [Part 4] [Part 5] [Part 6]
Post Dividers by @cafekitsune
Citations:
Rosenberg, D. (2018). Data as word. Historical Studies in the Natural Sciences, 48(5), 557-567.
Amaro, Ramon. (2019). As If. e-flux Architecture. Becoming Digital. https://www.e-flux.com/architecture/becoming-digital/248073/as-if/
What Ethical AI Really Means by PhilosophyTube
Keyes, O. (2018). The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on human-computer interaction, 2(CSCW), 1-22.
Allred, A.M., Aragon, C. (2023). Art in the Machine: Value Misalignment and AI “Art”. In: Luo, Y. (eds) Cooperative Design, Visualization, and Engineering. CDVE 2023. Lecture Notes in Computer Science, vol 14166. Springer, Cham. https://doi.org/10.1007/978-3-031-43815-8_4
Amaro, R. (2019). Artificial Intelligence: warped, colorful forms and their unclear geometries.
Parisi, L., Ferreira da Silva, D. Black Feminist Tools, Critique, and Techno-poethics. e-flux, Issue #123. https://www.e-flux.com/journal/123/436929/black-feminist-tools-critique-and-techno-poethics/
AI - Our Shiny New Robot King | Sophie from Mars by Sophie From Mars
Joseph Campbell and the Myth of the Monomyth | Part 1 by Maggie Mae Fish
Joseph Campbell and the N@zis | Part 2 by Maggie Mae Fish
How Barbie Cis-ified the Matrix by Jessie Gender
ausetkmt · 2 months
The first time Karl Ricanek was stopped by police for “driving while Black” was in the summer of 1995. He was twenty-five and had just qualified as an engineer and started work at the US Department of Defense’s Naval Undersea Warfare Center in Newport, Rhode Island, a wealthy town known for its spectacular cliff walks and millionaires’ mansions. That summer, he had bought his first nice car—a two-year-old dark green Infiniti J30T that cost him roughly $30,000 (US).
One evening, on his way back to the place he rented in First Beach, a police car pulled him over. Karl was polite, distant, knowing not to seem combative or aggressive. He knew, too, to keep his hands in visible places and what could happen if he didn’t. It was something he’d been trained to do from a young age.
The cop asked Karl his name, which he told him, even though he didn’t have to. He was well aware that if he wanted to get out of this thing, he had to cooperate. He felt at that moment he had been stripped of any rights, but he knew this was what he—and thousands of others like him—had to live with. This is a nice car, the cop told Karl. How do you afford a fancy car like this?
What do you mean? Karl thought furiously. None of your business how I afford this car. Instead, he said, “Well, I’m an engineer. I work over at the research centre. I bought the car with my wages.”
That wasn’t the last time Karl was pulled over by a cop. In fact, it wasn’t even the last time in Newport. And when friends and colleagues shrugged, telling him that getting stopped and being asked some questions didn’t sound like a big deal, he let it lie. But they had never been stopped simply for “driving while white”; they hadn’t been subjected to the humiliation of being questioned as law-abiding adults, purely based on their visual identity; they didn’t have to justify their presence and their choices to strangers and be afraid for their lives if they resisted.
Karl had never broken the law. He’d worked as hard as anybody else, doing all the things that bright young people were supposed to do in America. So why, he thought, can’t I just be left alone?
Karl grew up with four older siblings in Deanwood, a primarily Black neighbourhood in the northeastern corner of Washington, DC, with a white German father and a Black mother. When he left Washington, DC, at eighteen for college, he had a scholarship to study at North Carolina A&T State University, which graduates the largest number of Black engineers in the US. It was where Karl learned to address problems with technical solutions, rather than social ones. He taught himself to emphasize his academic credentials and underplay his background so he would be taken more seriously amongst peers.
After working in Newport, Karl went into academia, at the University of North Carolina, Wilmington. In particular, he was interested in teaching computers to identify faces even better than humans do. His goal seemed simple: first, unpick how humans see faces, and then teach computers how to do it more efficiently.
When he started out back in the ’80s and ’90s, Karl was developing AI technology to help the US Navy’s submarine fleet navigate autonomously. At the time, computer vision was a slow-moving field, in which machines were merely taught to recognize objects rather than people’s identities. The technology was nascent—and pretty terrible. The algorithms he designed were trying to get the machine to say: that’s a bottle, these are glasses, this is a table, these are humans. Each year, they made incremental, single-digit improvements in precision.
Then, a new type of AI known as deep learning emerged—the same discipline that allowed miscreants to generate sexually deviant deepfakes of Helen Mort and Noelle Martin, and the model that underpins ChatGPT. The cutting-edge technology was helped along by an embarrassment of data riches—in this case, millions of photos uploaded to the web that could be used to train new image recognition algorithms.
Deep learning catapulted the small gains Karl was seeing into real progress. All of a sudden, what used to be a 1 percent improvement was now 10 percent each year. It meant software could now be used not just to classify objects but to recognize unique faces.
When Karl first started working on the problem of facial recognition, it wasn’t supposed to be used live on protesters or pedestrians or ordinary people. It was supposed to be a photo analysis tool. From its inception in the ’90s, researchers knew there were biases and inaccuracies in how the algorithms worked. But they hadn’t quite figured out why.
The biometrics community viewed the problems as academic—an interesting computer-vision challenge affecting a prototype still in its infancy. They broadly agreed that the technology wasn’t ready for prime-time use, and they had no plans to profit from it.
As the technology steadily improved, Karl began to develop experimental AI analytics models to spot physical signs of illnesses like cardiovascular disease, Alzheimer’s, or Parkinson’s from a person’s face. For instance, a common symptom of Parkinson’s is frozen or stiff facial expressions, brought on by changes in the face’s muscles. AI technology could be used to analyse these micro muscular changes and detect the onset of disease early. He told me he imagined inventing a mirror that you could look at each morning that would tell you (or notify a trusted person) if you were developing symptoms of degenerative neurological disease. He founded a for-profit company, Lapetus Solutions, which predicted life expectancy through facial analytics, for the insurance market.
His systems were used by law enforcement to identify trafficked children and notorious criminal gangsters such as Whitey Bulger. He even looked into identifying faces of those who had changed genders, by testing his systems on videos of transsexual people undergoing hormonal transitions, an extremely controversial use of the technology. He became fixated on the mysteries locked up in the human face, regardless of any harms or negative consequences.
In the US, it was 9/11 that, quite literally overnight, ramped up the administration’s urgent need for surveillance technologies like face recognition, supercharging investment in and development of these systems. The issue was no longer merely academic, and within a few years, the US government had built vast databases containing the faces and other biometric data of millions of Iraqis, Afghans, and US tourists from around the world. They invested heavily in commercializing biometric research like Karl’s; he received military funding to improve facial recognition algorithms, working on systems to recognize obscured and masked faces, young faces, and faces as they aged. American domestic law enforcement adapted counterterrorism technology, including facial recognition, to police street crime, gang violence, and even civil rights protests.
It became harder for Karl to ignore what AI facial analytics was now being developed for. Yet, during those years, he resisted critique of the social impacts of the powerful technology he was helping create. He rarely sat on ethics or standards boards at his university, because he thought they were bureaucratic and time consuming. He described critics of facial recognition as “social justice warriors” who didn’t have practical experience of building this technology themselves. As far as he was concerned, he was creating tools to help save children and find terrorists, and everything else was just noise.
But it wasn’t that straightforward. Technology companies, both large and small, had access to far more face data and had a commercial imperative to push forward facial recognition. Corporate giants such as Meta and Chinese-owned TikTok, and start-ups like New York–based Clearview AI and Russia’s NTech Labs, own even larger databases of faces than many governments do—and certainly more than researchers like Karl do. And they’re all driven by the same incentive: making money.
These private actors soon uprooted systems from academic institutions like Karl’s and started selling immature facial recognition solutions to law enforcement, intelligence agencies, governments, and private entities around the world. In January 2020, the New York Times published a story about how Clearview AI had taken billions of photos from the web, including sites like LinkedIn and Instagram, to build powerful facial recognition capabilities bought by several police forces around the world.
The technology was being unleashed from Argentina to Alabama with a life of its own, blowing wild like gleeful dandelion seeds taking root at will. In Uganda, Hong Kong, and India, it has been used to stifle political opposition and civil protest. In the US, it was used to track Black Lives Matter protests and Capitol rioters during the uprising in January 2021, and in London to monitor revellers at the annual Afro-Caribbean carnival in Notting Hill.
And it’s not just a law enforcement tool: facial recognition is being used to catch pickpockets and petty thieves. It is deployed at the famous Gordon’s Wine Bar in London, scanning for known troublemakers. It’s even been used to identify dead Russian soldiers in Ukraine. The question whether it was ready for prime-time use has taken on an urgency as it impacts the lives of billions around the world.
Karl knew the technology was not ready for widespread rollout in this way. Indeed, in 2018, Joy Buolamwini, Timnit Gebru, and Deborah Raji—three Black female researchers—had published a study, alongside collaborators, comparing the accuracy of face recognition systems built by IBM, Face++, and Microsoft. They found the error rates for light-skinned men hovered at less than 1 percent, while that figure touched 35 percent for darker-skinned women. Karl knew that New Jersey resident Nijer Parks spent ten days in jail in 2019 and paid several thousand dollars to defend himself against accusations of shoplifting and assault of a police officer in Woodbridge, New Jersey.
The thirty-three-year-old Black man had been misidentified by a facial recognition system used by the Woodbridge police. The case was dismissed a year later for lack of evidence, and Parks later sued the police for violation of his civil rights.
A year after that, Robert Julian-Borchak Williams, a Detroit resident and father of two, was arrested for a shoplifting crime he did not commit, due to another faulty facial recognition match. The arrest took place in his front garden, in front of his family.
Facial recognition technology also led to the incorrect identification of American-born Amara Majeed as a terrorist involved in Sri Lanka’s Easter Day bombings in 2019. Majeed, a college student at the time, said the misidentification caused her and her family humiliation and pain after her relatives in Sri Lanka saw her face, unexpectedly, amongst a line-up of the accused terrorists on the evening news.
As his worlds started to collide, Karl was forced to reckon with the implications of AI-enabled surveillance—and to question his own role in it, acknowledging it could curtail the freedoms of individuals and communities going about their normal lives. “I think I used to believe that I create technology,” he told me, “and other smart people deal with policy issues. Now I have to ponder and think much deeper about what it is that I’m doing.”
And what he had thought of as technical glitches, such as algorithms working much better on Caucasian and male faces while struggling to correctly identify darker skin tones and female faces, he came to see as much more than that.
“It’s a complicated feeling. As an engineer, as a scientist, I want to build technology to do good,” he told me. “But as a human being and as a Black man, I know people are going to use technology inappropriately. I know my technology might be used against me in some manner or fashion.”
In my decade of covering the technology industry, Karl was one of the only computer scientists to ever express their moral doubts out loud to me. Through him, I glimpsed the fraught relationship that engineers can have with their own creations and the ethical ambiguities they grapple with when their personal and professional instincts collide.
He was also one of the few technologists who comprehended the implicit threats of facial recognition, particularly in policing, in a visceral way.
“The problem that we have is not the algorithms but the humans,” he insisted. When you hear about facial recognition in law enforcement going terribly wrong, it’s because of human errors, he said, referring to the over-policing of African American males and other minorities and the use of unprovoked violence by police officers against Black people like Philando Castile, George Floyd, and Breonna Taylor.
He knew the technology was rife with false positives and that humans suffered from confirmation bias. So if a police officer believed someone to be guilty of a crime and the AI system confirmed it, they were likely to target innocents. “And if that person is Black, who cares?” he said.
He admitted to worrying that the inevitable false matches would result in unnecessary gun violence. He was afraid that these problems would compound the social malaise of racial or other types of profiling. Together, humans and AI could end up creating a policing system far more malignant than the one citizens have today.
“It’s the same problem that came out of the Jim Crow era of the ’60s; it was supposed to be separate but equal, which it never was; it was just separate . . . fundamentally, people don’t treat everybody the same. People make laws, and people use algorithms. At the end of the day, the computer doesn’t care.”
Excerpted from Code Dependent: Living in the Shadow of AI by Madhumita Murgia. Published by Henry Holt and Company. Copyright © 2024 by Madhumita Murgia. All rights reserved.
mariacallous · 1 year
The US government should create a new body to regulate artificial intelligence—and restrict work on language models like OpenAI’s GPT-4 to companies granted licenses to do so. That’s the recommendation of a bipartisan duo of senators, Democrat Richard Blumenthal and Republican Josh Hawley, who launched a legislative framework yesterday to serve as a blueprint for future laws and influence other bills before Congress.
Under the proposal, developing face recognition and other “high risk” applications of AI would also require a government license. To obtain one, companies would have to test AI models for potential harm before deployment, disclose instances when things go wrong after launch, and allow audits of AI models by an independent third party.
The framework also proposes that companies should publicly disclose details of the training data used to create an AI model and that people harmed by AI get a right to bring the company that created it to court.
The senators’ suggestions could be influential in the days and weeks ahead as debates intensify in Washington over how to regulate AI. Early next week, Blumenthal and Hawley will oversee a Senate subcommittee hearing about how to meaningfully hold businesses and governments accountable when they deploy AI systems that cause people harm or violate their rights. Microsoft president Brad Smith and the chief scientist of chipmaker Nvidia, William Dally, are due to testify.
A day later, senator Chuck Schumer will host the first in a series of meetings to discuss how to regulate AI, a challenge Schumer has referred to as “one of the most difficult things we’ve ever undertaken.” Tech executives with an interest in AI, including Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia, make up about half the almost-two-dozen-strong guest list. Other attendees represent those likely to be subjected to AI algorithms and include trade union presidents from the Writers Guild and union federation AFL-CIO, and researchers who work on preventing AI from trampling human rights, including UC Berkeley’s Deb Raji and Humane Intelligence CEO and Twitter’s former ethical AI lead Rumman Chowdhury.
Anna Lenhart, who previously led an AI ethics initiative at IBM and is now a PhD candidate at the University of Maryland, says the senators’ legislative framework is a welcome sight after years of AI experts appearing in Congress to explain how and why AI should be regulated.
“It's really refreshing to see them take this on and not wait for a series of insight forums or a commission that's going to spend two years and talk to a bunch of experts to essentially create this same list,” Lenhart says.
But she’s unsure how any new AI oversight body could host the broad range of technical and legal knowledge required to oversee technology used in many areas from self-driving cars to health care to housing. “That’s where I get a bit stuck on the licensing regime idea,” Lenhart says.
The idea of using licenses to restrict who can develop powerful AI systems has gained traction in both industry and Congress. OpenAI CEO Sam Altman suggested licensing for AI developers during testimony before the Senate in May—a regulatory solution that might arguably help his company maintain its leading position. A bill proposed last month by senators Lindsey Graham and Elizabeth Warren would also require tech companies to secure a government AI license but only covers digital platforms above a certain size.
Lenhart is not the only AI or policy expert skeptical of the government licensing for AI development. In May the idea drew criticism from both libertarian-leaning political campaign group Americans for Prosperity, which fears it would stifle innovation, and from the digital rights nonprofit Electronic Frontier Foundation, which warns of industry capture by companies with money or influential connections. Perhaps in response, the framework unveiled yesterday recommends strong conflict of interest rules for staff at the AI oversight body.
Blumenthal and Hawley’s new framework for future AI regulation leaves some questions unanswered. It's not yet clear if oversight of AI would come from a newly-created federal agency or a group inside an existing federal agency. Nor have the senators specified what criteria would be used to determine if a certain use case is defined as high risk and requires a license to develop.
Michael Khoo, climate disinformation program director at environmental nonprofit Friends of the Earth, says the new proposal looks like a good first step but that more details are necessary to properly evaluate its ideas. His organization is part of a coalition of environmental and tech accountability organizations that, via a letter to Schumer and a mobile billboard due to drive circles around Congress next week, are calling on lawmakers to prevent energy-intensive AI projects from making climate change worse.
Khoo agrees with the legislative framework’s call for documentation and public disclosure of adverse impacts, but says lawmakers shouldn’t let industry define what’s deemed harmful. He also wants members of Congress to demand businesses disclose how much energy it takes to train and deploy AI systems and consider the risk of accelerating the spread of misinformation when weighing the impact of AI models.
The legislative framework shows Congress considering a stricter approach to AI regulation than taken so far by the federal government, which has launched a voluntary risk-management framework and nonbinding AI bill of rights. The White House struck a voluntary agreement in July with eight major AI companies, including Google, Microsoft, and OpenAI, but also promised that firmer rules are coming. At a briefing on the AI company compact, White House special adviser for AI Ben Buchanan said keeping society safe from AI harms will require legislation.
faaun · 2 years
i want to be a machine learning engineer but some of u guys r making it embarrassing actually. long but IMO important explanation below. We have bigger issues to deal w and better things to focus on.
like our planet is dying and the commercialisation of massive AI models and training the models themselves releases like hundreds of thousands of tonnes of carbon emissions. and this includes very "nonessential" models that don't tend to contribute much to society (re: new fancy image generation toy). but u have decided your new career path is "AI artist" (glorified prompt-writer?) .
and just as bad, some of you have decided the biggest issue w AI is those people, the glorified prompt writers!! you draw more attention to it instead of focusing on the real problems behind AI and the ethics of training models! about the harm it causes to the planet, about web-scraping limitations basically not existing (stolen art falls under this domain), copyright laws to do with AI, the way facial recognition deals with race, about the boundaries between letting AI learn and develop in an "unbiased" way vs preventing sociopolitical damage at the cost of (potentially) further progress.
conversely, there is nowhere NEAR enough focus about how AI can help us overcome some of our fundamental problems. i love machine learning bc i find it - specifically the maths behind it - fascinating and i believe one day it could help us make very cool advancements, as it already has. i think the mathematical architectures and processes behind creating new deep learning models are beautiful. i also know the damage capitalists will inevitably do - they always wield powerful, beautiful new tools as weapons.
AND HERE YOU ARE FALLING FOR IT! it's very frustrating to watch!! if you're angry on behalf of artists, i'm begging you to protect the rights of artists and be mad at greedy companies instead of villanising a tool that can help us immensely! learn about AI ethics, learn about how it is present in our lives, what we should try to stop, what we should promote.
if you "boycott AI" as a whole with no desire to gain more literacy on the topic other than "steals art therefore bad", you will have to be against your translate app, your search engine, your email spam filter, almost everything on your phone that categorises anything (i.e. pretty much all of your search functions), NPC enemies in games, your medical diagnostic tools, your phone's face unlock, your maps app, online banking, accessibility tools that help blind and deaf people, new advancements in genetic sequencing and protein folding and treating cancer and modelling new solutions in physics and so on and so on.
the issue isn't all AI as a whole. the issue is A) how companies are using it and B) how a lot of you guys are getting mad at the concept of AI instead of responding to A.
tciddaemina · 1 year
AI and fanworks - a dissection
In an attempt to address some of the fears and uncertainty going around in the fan communities at the moment, I wanted to have a look at how AI interacts with art and fandom, and more specifically how it relates to theft in these fields. When AI comes up there has been a lot of knee-jerk defensiveness and hurt from artists and writers in the fan community, and honestly, that’s fair enough. This is a field that has advanced rapidly, which has entered the scene with all the force of an explosion. I don’t think anyone was expecting it to hype up this much or so suddenly be splashed everywhere all at once, becoming the topic of debate in nearly every discipline and field, be it law and academia, digital art, and even fanfic writing. 
It’s everywhere, it’s all over the place, and to be honest, it’s just a bit of a mess.  
Currently we’ve had a strong outpouring of reaction, lashing out at everything to do with AI and its involvement in creative works. The thing is though, this whole issue of AI in art - especially its use on and in relation to fanworks - is something that's complicated and layered and isn’t composed so much of one single offense being committed, but rather multiple stacked issues all tangled together, each of which is simultaneously eliciting reactions that all end up jumbled together. 
What I want to do a bit here is unpick the different layers of how AI is causing offense and why people are reacting the way they are, and in doing so, hopefully give us the tools to advance the conversation a little and explain more clearly why we don’t want AI used in certain ways.  
There are, in essence, three layers of potential ‘theft’ in this AI cake. These are:
The training data that goes into the AI.
The issue regarding ‘styles’ and the question of theft in regards to that. 
Use of AI for the completion of another person’s story. 
This last one is the one that’s doing the rounds right now in the fanfic community and is causing a lot of strife and upset after a screenshot of a TikTok of someone saying that they were using ChatGPT to finish abandoned and/or incomplete fanfictions.
It’s a concept that has caused a lot of anger, a lot of hurt, and a lot of grief for authors who are facing the concept of this being done to their work, and honestly that’s fair enough. As a fic writer myself, I also find the concept highly uncomfortable and have strong feelings about it. Maybe this person was trolling, maybe they were not, but the fact exists that this is increasingly becoming a subject of debate. 
So, let's go through this mess step by step, see how exactly ‘AI theft’ is being done, what different levels this ‘theft’ is operating at, and what, if anything, can be done about it. And I’m not claiming to have a magic solution, but just that in understanding a problem, you have a better toolkit to explain to others why it is a problem and how to ask for the specific things that might help improve the situation. 
Training datasets - yeah, there’s some problems here
One thing we need to talk about going into this section is how AI actually functions - AI here being specifically image and word generators.
Text generators such as ChatGPT and Sudowrites are essentially word calculators. When someone feeds in an input, a prompt saying ‘write me this email’ or ‘write me a script for so and so story’ they essentially behave in the same manner as an autocomplete on your phone. Part of a sentence gets written, and based on the hundreds of thousands of pieces of writing they examined in their training data, they write what the next most expected thing in the sentence would be. 
For art generators, it comes down more to pattern recognition. A picture on a screen is just a pattern of pixels, and by feeding them huge amounts of training images you can teach them what the pattern for say ‘an apple’ looks like. When prompted, they can then use that learned pattern to spit out the image of an apple, one that doesn’t look specifically like any single image that went in. In essence it is a new ‘original’ apple, one that never existed specifically just like that before.    
All of these systems require huge amounts of training data to work. The more data they have, the better they can recognise patterns and spit out output that looks pretty or makes sense, or seems to properly match the prompt that was put in.
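To make the “word calculator” idea concrete, here is a minimal toy sketch in plain Python. The three-sentence corpus is made up for illustration; real models like ChatGPT use neural networks trained on vastly larger datasets and predict over probabilities rather than raw counts, but the complete-the-next-expected-word loop is analogous:

```python
from collections import Counter, defaultdict

# Toy "word calculator": count which word most often follows each word in a
# tiny training corpus, then autocomplete by always picking the most
# frequent successor - the same shape of prediction loop as an autocomplete.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def autocomplete(prompt, length=4):
    words = prompt.split()
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break  # never seen this word followed by anything
        words.append(options.most_common(1)[0][0])  # most expected next word
    return " ".join(words)

print(autocomplete("the cat"))  # → "the cat sat on the cat"
```

Notice the output is not a copy of any one training sentence, yet it is entirely determined by what went into the training data, which is why the size and sourcing of that data matters so much.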
In both these situations, writing or art, you can put in a prompt and get out an output that was created on demand and is different from even the pieces of work it might have been trained on, something that is ‘original’. This is, I think, where a lot of the back and forth regarding use of generated writing and art as a whole comes from and why there’s a lot of debate about whether it counts as theft or not. 
And honestly, whether generated content counts as ‘art’ or not, whether it counts as original, and whether the person who prompted it ‘owns it’ or the company who created the generator does isn’t a debate I want to get into here. Hello Future Me did a very good breakdown of this issue and the legalities of AI regarding copyright on YouTube, and I would recommend watching that if you want to learn more. Basically, it’s a murky pond with a lot of nuance and it’s not the debate we’re looking at here, so we’re just going to set it aside and focus on the other part of this instead - the training data. 
Because yeah, currently the vast majority of these generative ‘AI’ models have been trained on materials taken without the creators’ consent. They need huge amounts of training data to work, and a great deal of it has been sourced in an unethical way. We’ve seen this with OpenAI scraping AO3 for writing text, to use as training material for their software - and being caught out with it by the whole knot thing. We know that Stable Diffusion was trained on the works of thousands of digital artists without permission, using their art to help educate its image generation software, now resulting in a class action lawsuit by artists against it. 
And that fucking sucks.  
It sucks on multiple levels. If someone creates a piece of art, copyright law and shit requires that someone wanting to make use of it ask permission. And that’s fair. Copyright law does have its issues, but it does provide some protection to artists and their work. Using someone else's work without permission is not okay, especially when you’re using it for your own profit. There’s a couple of different aspects we need to unpick here, so let’s unpick them.
Yes, a generated work might not be a specific copy of any work that was taken and put into it, but honestly that doesn’t even matter - the lack of consent on the part of the people whose work is being used in this situation is unethical. Web scraping has always been a thing, but there’s a difference between people gathering data for research purposes and a corporation gathering the work of independent small artists and writers without their consent for use in generating their own profits. Because yes, these systems are currently free to use but you know that monetisation is an any day now kind of thing, right? Eventually these generative softwares will be paywalled and used as a means of earning profit for the companies that created them, and that work will be done off the back of all the artists/writers whose work was put in.  
Already, on the level of being a regular writer or artist, this is pretty shitty (particularly when you consider that generated works are being eyed up by corporations as a way to avoid having to pay proper artists and writers for their work). As a fanworks author, the thought is even shittier. Part of what is lovely about the fan community is the fact that it’s all done for free. Fanworks exist in a specific legal framework that prevents monetisation, yes, and for the most part that’s something that’s celebrated. The joy of creating art or writing a story - of creating and sharing it with a community, just for the sake of doing it and not because of profit, is an integral part of what makes being a creator of fanworks so amazing. So to have something that you’ve created for free and shared for the love of it - specifically with the intent of never having it monetised - fed into a machine that will, in the end, be used to turn corporate profits is souring. People are entirely right to not feel pleased at that concept.  
AI techbros would argue that there isn’t a way to create large generative models like this with big enough datasets unless they collect them the way they do, but is it really?
Dance Diffusion is a music generation program that creates music from purely public domain, volunteered, and licenced material. It uses only works where the musicians have either consented to be part of the dataset, or have made their work available broadly for public use. And yes, the reason that they’ve taken this route is probably because the music industry is cutthroat with copyright and they’d be sued out of existence otherwise, but at the very least it proves that taking an ethical approach to data sourcing is entirely possible.
Already some generative software for images are working to do the same. Adobe Firefly is an image generator that runs entirely on its own stock photos, licenced photos, and images that are public domain. This already seems like a more ethical direction, one which is essentially no different than a company using public domain images and modifying them for commercial purpose without using AI. Yes, it has a more constrained data set and therefore might be less developed, but honestly if you can’t create a quality product without doing it ethically, you just shouldn’t make that product.
This is the element of consent missing from many of the big generative softwares right now. The only reason this isn’t the mode of approach being used in selecting what goes into training data is because techbros are testing the waters of what they can get away with, because yeah, taking this step takes more effort and is more expensive, even if it is demonstrably more ethical, and god forbid anything standing in the way of silicon valley profit margins.
So yes, there is unethical shit going on in regards to how people’s art and writing is being put into training sets without their consent, for the final result of generating corporate profits, and as a fanwork author it’s particularly galling when such a model might be making use of work you specifically created in good faith with a desire for it not to be commercialized. You have a right to be upset about this and to have a desire for your work not to be used.
Work that is used in training AI should have to be volunteered, obtained with a licence and the artist’s consent, or be works of public domain. It has to be an opt-in system, end of story.
If you take away this issue of stolen training works, the ethical dilemma of generated works is already a lot more palatable. Yeah, it might not be the same as making a work from scratch yourself and some might argue it’s not ‘real art’, but at that point it just becomes another tool people can use without this bad-taste-in-the-mouth feeling that you’re using software exploiting stolen works from non-consenting artists and writers. (The undervaluing of artists’ and writers’ work is something I acknowledge, but that’s a different debate and we’re not going to get into it here – though it will be touched upon briefly in the next section.)
The thing is though, this issue isn’t one we can readily fix as individuals. This is something that’s going to come down to government regulation and legislation. If you feel strongly about it, that’s the areas you need to put pressure - learn about what regulation is being done, make known your support for regulations that do promote more ethical practice, and if you’re someone who is generating artistic works, when make your consumer vote count and do it using software that more ethically sources its training data.
And yeah, maybe this sort of sucks as a conclusion for this issue, but the cat’s out of the bag now and generated works are likely here to stay. We’re in the wild west of them right now, but things can be done and it doesn’t need to be the complete dystopia it currently feels like it might be becoming.
The ‘theft’ part of ‘art theft’ 
Here we move on from the issues inherent with the generative models themselves - or more specifically their training sets - and to the ways they’re being used. This issue is split into two parts, the first of which is a more prominent issue in visual art and the second of which is an issue more specific to fanfic writing. 
Up until now you can sort of argue that use of a generative model is a victimless crime. The output you get may have been trained using someone else’s work, but it itself isn’t a strict copy of anything that exists. It’s created on demand and it’s unique. And yeah, you might say, maybe it sucks a bit that the training data stuff is a bit unethical but the harm there is already done and using it to generate a piece of art isn’t hurting anyone.
And to a point that’s true. If the ethical sourcing problem for data is solved, then there isn’t necessarily a lot of harm in using that software to generate something – it might not be the same process as creating art yourself, but it’s a tool that people can no doubt find some helpful uses for.
Using AI to copy someone’s style, or to finish one of their pieces of fiction, however, is something that hurts someone. It’s not a bad system causing the problem, but bad actors using that system to cause harm. In this, it is distinct from the previous theft issue we’ve discussed. Even in a world where the big AI softwares all have de facto ethical training data, this would still be an issue.
‘Style’ theft – a problem for artists
One of the big things that’s been happening with image generation AIs is people training them to mimic certain styles, so that the user can then create new works in that style. There’s a lot of pretty furious debate back and forth about it in regards to whether using a style is art theft, whether it’s infringing copyright, with the weeping techbros often hurrying to cry ‘you can’t own a style!’
And to a certain point, yeah, they have a point. If you were to train an AI on impressionist artworks and tell it ‘paint me a tree in the style of Monet’ I don’t think anyone would necessarily have a problem with it. It’s a style, yes, but no one person owns it, and most of the people who developed that style are long dead – more works being created in that style and even sold commercially isn’t going to undercut a specific artist’s market. You could up and decide to learn to paint yourself and start churning out Monet-esque works by hand and it wouldn’t be copyright infringement or art theft or any of the like.
The situation is sort of different when you’re dealing with a living artist today. Already there have been cases of artists with distinct and beloved styles having their portfolios fed into AI without their consent, so that the prompter can then create works in their style.
(Original work by samdoesart on the left, and generated work trained on samdoesart’s style on the right, posted on reddit. A model was specifically created trained on this artist in order to create art that looks like it was made by them.)
And this is harmful in two separate ways. We’ve covered above why it’s unethical and shitty to feed someone else’s work into a training set without their consent, and for a work to be generated in a specific artist’s style, a large amount of their work needs to be put in in order for the AI to learn its patterns and mimic it. The artist hasn’t consented to their inclusion, and now their hard work and the style they’ve created has been absorbed and made available for corporate use. Remember – most of these programs retain full ownership over all works they produce, regardless of the user that ‘created’ them. All generated works are owned by the corporation.
Unlike general AI generation, where you’re creating some generic piece of art or writing and it doesn’t specifically harm anyone, this is a crime that directly has a victim. A specific artist has had their work used without consent, and often with the specific goal of creating work in that artist’s style to avoid having to commission or pay the artist for otherwise producing a work in their style. By creating this workaround where they don’t have to pay the artist for their work, they’re directly contributing to the harm of the artist’s financial prospects and the devaluing of their work.
When we talk about generic AI generation as a whole, the type where thousands of different distinct inputs go in and the thing that comes out doesn’t look specifically like any single one of them, you can make a case that it’s not copying any specific person’s work. This isn’t the case here. This is use of AI for the purpose of creating art in a specific person’s style. As Hello Future Me puts it, “it is deriving a product from their data and their art without any attribution, payment, or credit.” And I don’t know about you, but that sounds a lot like theft to me. Yes, maybe a specific existing work was not explicitly copied, but you’re still making use of their style without consent in a way that undermines their work and the market for their work, and that is a pretty shitty thing to do.
This whole section has talked about visual art so far, but you can see how it could also be applied to a written work. Telling an AI ‘write something in the style of Shakespeare’ is harmless, just as the Monet example was harmless, but you can readily see how this could also be applied to say, a poet who has a particular style of writing, with their portfolio of works fed in to create more poems in their style without their consent.
The key difference here, I think, is that works like Monet or Shakespeare are part of the public domain. No one owns them and everyone is free to use them. Generating derivative works off their creations harms no one, because these creators are not alive and still producing - and selling - their work today. That cannot be said of living artists whose work is being exploited without compensation and whose livelihoods are being threatened by generated works.
Feeding an artist or writer’s work into an AI so that you can use their style, while not strictly ‘copying’ any single work of theirs, is still a harmful thing to do and should not be done. Even if you don’t intend to sell it and are just doing it for your private fun – even if you never even post the results online – you’ve still put their portfolio into the training set, making it a part of the resources a corporation can now use to turn a profit without repaying the artist or writer for their contribution.
The Fic Problem – or, completing someone else’s work
Now we moved onto the specific issue that first prompted me to write up this debate. There’s been noise going around in the fic writing community about people using AI to write fanfic and/or to finish other people’s incomplete works.
In general, using AI to write fanfic is no different than using AI to come up with any other kind of creative writing. It has the same underlying ethical issues inherent in the system – which is, currently, that all work is being generated based on training data that was obtained without consent. Leaving aside whether in general generating creative writing is a good thing to do (what’s the point in writing, after all, if you don’t enjoy the process of actually writing?) let’s talk instead about the issue of people finishing other people’s fics.
The fanfiction community – and the fanart community as well I would wager – has always had something of a problem with entitlement. And look, I sort of get it. When you find an amazing fic that isn’t finished, it can hurt to reach the end and wonder if you’ll ever get to read any more of it. But that’s just the way it is. The fanworks community is one that is built overwhelmingly upon people creating labours of love just for the joy of it and sharing them free of charge. It’s hobby work, it’s hours squeezed in around jobs and families and kids, it’s someone scraping time out of their busy day to create something and share it just for the sake of it. People move onto different projects for all manner of reasons – maybe they don’t enjoy the fandom anymore, maybe that story now has baggage, maybe their tastes have just changed and they want to write different stuff. Maybe they stopped writing fiction altogether. Maybe they died.
Some stories will never be finished, and honestly, no author owes you the end of one. They have every right to write whatever they want and work on what projects they like, even if that means leaving some unfinished.
Coming in to complain about it, or to demand new updates – or, in this case, to finish the work yourself without their consent, is a shitty thing to do. And yeah, some authors do put their abandoned works up for adoption and are happy for people to finish them, but in most cases authors don’t want this done. That’s a personal choice and varies author to author. Someone seeing an abandoned fic and choosing, with their own two hands, to write up an ending for it without asking the author is already a bit of a sketchy thing to do, even if it’s just done privately and never sees the light of day, and doing it using AI is even worse.
Because yeah, to do it, you have to feed that person’s fic into the AI. You as a person, are putting a piece of work made with love and shared freely with you as a gift into a corporate dataset, where it will be used to generate corporate profits off the back of that person’s work. And more, if you want it to write properly in their style and mimic them well, then hey, you have to probably put in even more of their work, so that the AI can copy them well, which means even more of their works taken without their consent and put into the AI.
And that fucking sucks. It’s unethical, it’s a shitty thing to do, and if it’s unacceptable for a corporation to be taking someone’s work without their consent for use in AI training, then it’s unacceptable for you as an individual to be handing someone else’s work to them of your own volition.
As a fanfiction author, I cannot imagine anything more disheartening. I have a lot of unfinished works, yes, some of which are even completely abandoned, but even when I haven’t touched them in years I don’t stop caring about those fics. I have everything that will happen in them already planned out – all the arcs, the twists, the resolutions and the ends – and someone else finishing it without my consent robs me of that chance to do it myself. You don’t know if an author still cares about a fic, whether they’re trying to work up the motivation to put out a new chapter even years later. You don’t know why they stopped writing it, or what that story might mean to them.
And I think this is what a lot of it comes down to. People will assume that because a fic is abandoned that an author doesn’t care about it anymore, or that because it’s fanfiction they don’t ‘own it’. Nothing could be further from the truth. Authors do still care, and yes, they do own their own work. A transformative work might be one based on another franchise and one that makes use of another person’s copyright, but everything you put in yourself is still yours. Any new material, new characters, new concepts, new settings – the prose you use, the way you write, your turns of phrase. Those legally belong to the fanfic author even though they’re writing fanfic.
And yeah, some people might argue that putting someone's fanfic through an AI is a transformative act in itself. People write fic of fic, after all, and is creating something with AI based on fic really any different? Yes and no. A transformative work is one that builds on a copyrighted work in a different manner or for a different purpose from the original, adding "new expression, meaning, or message". It is this transformative act that makes it fair use, rather than just theft.
Could an AI do this, creating something that is ‘transformative’? Maybe. It’s not for me to say, and AI generators are getting better every day. In the use of AI in continuing a fic, however, I would argue that its use is not transformative but instead derivative. A derivative work is based on a work that has already been copyrighted. The new work arises—or derives—from the previous work. It’s things like a sequel or an adaptation, and you could clearly argue that the continuation of a story, a ‘Part 2’, counts under this banner of derivative rather than transformative.
And okay, relying on copyright law while already in the nebulous space of fanfic feels like a bit of a weird thing, but it still counts here. Anything new that a person put into a fic belongs to them, and if you take it without their consent then yes, that is indeed stealing. Putting it through an AI or not doesn’t change that. 
And if the legal arguments here don’t sway you, then how about this - it’s a dick thing to do. Even if you don’t mean for it to hurt anyone, you are hurting the author. If you enjoy their work at all - and you must, if you’re so attached to it that you're desperate to see more of it - then respect the author. Respect the work they’ve done, the gift they’ve shared with you. Read it, enjoy it, and then move on.  
Fandom is built on a shared foundation of love and respect, a community where people create things and share them with others in good faith just for the love of it. This relationship is a two-way street. Authors and artists share their work with you and put in hours making it, but if the fan community stops respecting that then things might end up changing. An author burned by seeing their work fed into AI might stop writing, or stop sharing their works publicly. Already people are locking down their AO3 accounts so that only registered users can read and are creating workskins to prevent text copy and pasting out of fear that their work will be the one that someone chooses to ‘finish for them’. Dread over non-consenting use of AI is already having a direct impact in making fic more inaccessible. 
Is that the direction that we as a community want to head in? Because what’s currently happening is a real shame. I would prefer to share my works with non-users, to have them read by anyone who cares to read them, but I’ve had to change that, just in the hopes that that small extra step might provide a slight sliver of protection that will keep my work from being put in an AI.
And what after that? Currently sites like AO3 allow readers to download copies of the fic so that they can store them themselves. If authors are concerned about their work being taken and used in AI, will they begin petitioning for this feature to be removed, so as to better protect their work from this sort of use? Artists are already coming up with programs such as Glaze to help protect their work from use in training datasets, but what recourse do writers have save to make their work more and more inaccessible and private? No doubt some authors are already contemplating whether it’s better to just start only sharing their works privately with friends or in discord servers, in the hopes that it might better protect their works from being stolen and used.
It’s worth noting as well that any way in which an author restricts fic on AO3 in order to prevent AI - such as using a workskin to prevent copy and pasting - is one that will also impact people’s ability to do things like ficbinding and translations. If access is restricted, then it’s restricted for everyone, good and bad.
And some of you might say - well what does it matter if your work does end up in an AI? It’s a drop in the ocean, they won’t be copying your exact work so what does it even matter? And all I can say in response to that is that it’s about the principle of it. If someone is making use of another person’s work, then that person deserves to be fairly credited and compensated. If I create work for free, in rebellion against a world that is racing towards the commercialisation of all and everything, then I don’t want my work being exploited by some fucking corporation to earn profits. That’s not why I made it, that’s not why I shared it, and they can go fuck themselves.
Unethical corporations are one thing, but the other component of this problem is people within the community acting in bad faith. We might not as individuals be able to regulate mass AI data gathering, but we can choose how we behave in community spaces, and what things we choose to find acceptable in our communities.
What I’m trying to say is, if you love fanfiction, if you love fanart, if you love the works people create and want to see more, then please treat their creators with respect. We put a lot of work into creating the art you enjoy, we put blood and sweat and tears into it, and we want to keep making more. We want to share it with you. We can’t do that if you make us scared to do it, out of fear of our works being put into an AI without our consent.
It's about respect, it’s about common decency. We as a community need to decide whether we think it’s acceptable for people to put someone’s work into an AI without their consent – whether it’s acceptable to finish someone else’s work using an AI without their consent. Maybe AO3 will make some sort of ruling in the future about it, or maybe they won’t. Even if they do, it likely won’t make that big of a difference. Bad actors will act badly no matter the rules, all we can do as a community is make it clear how we feel about that and discourage these sorts of behaviours.
AI generative tools exist and like it or not are here to stay. They have a lot of potential and could become really useful and interesting tools in both writing and the visual arts, but as it stands there are serious issues with how they are used – both on a system-wide level, with the unethical gathering of data, and on a user level, with people using them in bad faith.
AI is what it is, but it doesn’t change the underlying choice here – you as a member of the community have the choice whether to behave like a dick. Don’t be a dick.
govindhtech · 11 months
MediaTek Genio’s Wonder: Snorble Children’s Play’s Future!
Tumblr media
Thanks to MediaTek Genio, Snorble, a Smart Companion for Children, Has Come to Life
What precisely is Snorble, and why does its operation need cutting-edge artificial intelligence and other technological advancements?
Snorble is a revolutionary new intelligent companion toy driven by a MediaTek Genio chip. The toy is at the leading edge because of the innovative way it interacts with children and adapts to meet their changing psychological and sleep needs as they grow older.
The stuffed animal known as Snorble was designed to assist children in establishing not just a regular pattern of restful sleep but also other emotionally wholesome routines, such as positive thinking and mindfulness. It is designed to aid children in accomplishing their goals by hearing, comprehending, and responding to the things that they say to it.
Its face has a moving expression. The ability for parents to keep their Snorbles up to date as their children continue to mature and their needs alter is made possible by the availability of an app for use on smartphones.
In addition to the significant requirements for speech processing and AI processing that this toy has, the technology that powers it must prioritize the privacy and safety of its users. Given that it satisfies all of these characteristics, MediaTek Genio is a fantastic option for accomplishing this objective.
A New Intelligent Companion that Puts Children’s Safety First in the World of Technology has Just Been Released for Young People.
So that it can interact with children and grow with them, Snorble requires sensors that can understand what they are saying and connectivity that makes product updates easier. Snorble was designed from the ground up to function without any cameras at all, and to process audio locally rather than relying on cloud or remote processing. These choices were made in response to concerns about users’ privacy and safety. Notably, a Wi-Fi connection is not required in order to use Snorble on a daily basis, unlike other apps and services.
This eliminates a possible entry point for hackers and boosts mobility (since it does not need a Wi-Fi connection, you are free to take it with you wherever you go). The fact that Snorble executes its processing locally on the device rather than in the cloud is the primary factor that sets it apart from other voice assistants. When it comes to the specifics of Snorble’s interaction with the children in their care, parents may have an increased sense of peace of mind as a result of the device’s built-in parental control features, which are included with the product.
The Challenges That Are Presented by Modern Technology
The developers of Snorble needed an ecosystem that could provide all of the following features and functions:
Platform, in both its physical and digital iterations (hardware and software).
Capacity of the central processing unit to manage voice processing for artificial intelligence
Edge AI is a kind of artificial intelligence in which all processing is performed on the device itself, as opposed to being performed on the cloud.
Audio features that are cutting edge, such as natural language processing (NLP), which can comprehend speech without the need for a dictionary stored in the cloud.
Over-the-air (OTA) updates.
The MediaTek Genio 350 Is the Solution to This Problem
The CPU that is used in this gadget is the MediaTek Genio 350. This highly integrated System on a Chip (SoC) makes use of a smart Linux application to connect the microphone sensors and run complex artificial intelligence algorithms that are able to genuinely comprehend children and respond with what it is that they need.
In addition, the MediaTek Genio 350 is capable of conducting Edge AI on the device itself. This includes audio and NLP processing, which may assist with the recognition of words and sounds and even the separation of spoken sentences from a distance, all while decreasing background noise.
Even after a family has purchased a Snorble, the manufacturer has said that it would continue to improve the device by making available for download a variety of extra updates, apps, and features that will broaden the scope of what may be accomplished with the product.
Read more on Govindhtech.com
enalytixai · 2 years
Take your workplace into a new era with Enalytix
Every employee, regardless of their official designation or their remuneration, is valued with a common metric - time. On average, an employee dedicates 6-9 hours a day, 4-6 days a week, to their work. As a business leader, are you managing their time effectively? That’s where Enalytix can help.
 Our AI-powered face recognition attendance system is a smart choice when it comes to more focused time and productivity management. In 2022, the solution any workplace needs should offer:
●        Smart biometric check-ins and checkouts.
●        Ability to connect and coordinate with staff with 100% security.
●        Ability to track store staff productivity.
●        Ability to recognise standout performers.
 With our AI-based Employee Experience solution focused on productivity and time management, your organisation can transform instantly into an employee-centric workplace that incorporates all of the above in a cohesive and streamlined manner.
 Our solution gives you details of which employees are working, where they are working from, and what they are working on. This allows you to track productivity and employee efficiency, all in real-time, like never before! It gives insights into where the maximum efficiency of your employees lies and nurturing the same to boost performance, engagement, and productivity.
 With a single intelligent AI-based solution, you get:
●        Geofenced facial attendance
●        Leave, shift and overtime management
●        Location tracking for each in/out marked
●        Work and expense reports
●        Task management and automated timesheets
●        A shared data repository
●        Company broadcasts
●        Secure peer-to-peer chats
●        Employee recognition
 You can deploy Enalytix solution with absolutely zero dependency on hardware! All you need is a smartphone and you’re good to go! This makes the solution easy-to-adopt, works for all types of employees and does not need any IT expertise to maintain in the long-term!
Take the next step to evolving your organisation into an AI-powered workplace with Enalytix! Reach out to us at http://www.enalytix.com
How You Can Use AI To Make Travel More Affordable and Accessible
Traveling can be such a hassle. Who wants to stand in airport security lines, hope their flight is on time, and worry about their schedule while on vacation? Thanks to the power of AI, these hassles might be eliminated (or at least minimized) soon. In this article, we'll look at all the ways AI solutions are changing the face of travel.
First, what exactly is AI? For our purposes, artificial intelligence (AI) can be defined as "...the simulation of human intelligence by machines, especially computer systems" [1]. Use cases for AI include natural language processing, speech recognition, and machine vision. Don't worry if these terms and phrases are a bit foreign to you; you won't need to be an expert on AI to harness its powers for yourself.  Here are some uses AI may have to make travel more accessible and affordable for those on a budget.  
Nix The Tour Guides:  Buying an expensive tour package not in your budget? Why not let AI help design your itinerary? Roam Around is a simple web-based tool powered by ChatGPT [2]. All you need to do is navigate to roamaround.io and enter the city you'd like to visit and how much time you have there [2]. ChatGPT will then generate an easy-to-read itinerary for you [2].  
For instance, the AI software suggests spending time at the Colosseum and traversing the historic neighborhood of Tivoli for a hypothetical five-day expedition to Rome. Arguably even cooler, the names of key sites are hyperlinked to pages that provide budget tours of the corresponding area. No tour guide needed!  
Time Your Purchases:   If you're one of those people constantly checking flights to grab the best prices, AI has got you covered. Hopper is a travel booking app that uses artificial intelligence to predict flight prices [3]. Hopper looks at historical price changes and predicts when prices are likely to increase or decrease based on past data [3]. The app can save you up to 40% on travel costs [3]. It works for hotels, rental cars, and flight costs, likely your biggest three expenses when you travel [3]. Hopper even plants two free trees when you travel to offset the carbon footprint from your travel [3]. Friendly for the planet and your wallet! 
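Hopper's pricing model is proprietary, so the following is only a toy sketch of the general idea described above: look at how prices have moved historically and decide whether to buy now or wait. The function name, the straight-line trend, and the sample fares are all illustrative assumptions, not Hopper's actual algorithm.

```python
# Toy "buy now or wait" sketch (NOT Hopper's algorithm).
# Idea: extrapolate the recent price trend; if fares look set to rise,
# recommend buying now, otherwise recommend waiting.

def buy_or_wait(history, days_ahead=7):
    """Given a list of daily fares (oldest first), recommend 'buy' if
    prices appear to be trending upward, else 'wait'."""
    if len(history) < 2:
        return "buy"  # not enough data to justify waiting
    # Simple linear trend: average day-over-day change.
    deltas = [b - a for a, b in zip(history, history[1:])]
    avg_change = sum(deltas) / len(deltas)
    projected = history[-1] + avg_change * days_ahead
    return "buy" if projected > history[-1] else "wait"

fares = [320, 315, 310, 318, 325, 331]  # hypothetical daily fares in USD
print(buy_or_wait(fares))  # upward trend -> "buy"
```

A real predictor would use far richer inputs (route, seasonality, airline, demand signals) and a trained model rather than straight-line extrapolation, but the buy-or-wait decision structure is the same.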
Do Your Homework with Chatbots:  Sure, you could research all the data you need to plan your trip. However, that could take hours and add stress to your vacation. Why not let AI pull the info you need for you? The Kayak app contains a chatbot that can grab information at the click of a button [4]. The bot will give you details on flights, rental cars, and itinerary options and give you some activities to try [4]. It can also help you budget, as you can specify things like "Get me a flight from New York City to London for under $150" or ask where you can go with $300 this weekend [4].  
Know Before You Go:   We've all been there: you see a luxurious-looking property with gorgeous photos, only to book something that's nothing like what you were promised. Tools like Magpie use something called sentiment analysis to scope out emerging trends and adjust their search algorithms accordingly [5]. In other words, it combs through data like reviews to determine whether they're positive, negative, or neutral and uses that data to rank reviews. Destinations with worse reviews should get knocked to the bottom, saving you time and money when it comes to searching. After all, there's no better way to get destination info than from someone who's already been there.  
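Magpie's internals aren't public, so here is only a toy sketch of the sentiment-analysis idea described above: score each review, then rank destinations by average score so poorly reviewed places sink to the bottom. The word lists, function names, and sample reviews are invented for illustration; production systems use trained language models rather than keyword counting.

```python
# Minimal sentiment-ranking sketch (NOT Magpie's actual system).
# Each review gets a score from small positive/negative word lists;
# destinations are then ranked by their mean review score.

POSITIVE = {"great", "clean", "beautiful", "friendly", "amazing"}
NEGATIVE = {"dirty", "noisy", "rude", "broken", "disappointing"}

def review_score(text):
    """Crude sentiment score: positive hits minus negative hits."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def rank_destinations(reviews_by_destination):
    """Rank destinations by mean review sentiment, best first."""
    averages = {
        dest: sum(map(review_score, revs)) / len(revs)
        for dest, revs in reviews_by_destination.items()
    }
    return sorted(averages, key=averages.get, reverse=True)

reviews = {
    "Seaside Inn": ["beautiful view and friendly staff", "clean rooms with amazing pool"],
    "Budget Lodge": ["noisy and dirty", "rude staff and broken shower"],
}
print(rank_destinations(reviews))  # ['Seaside Inn', 'Budget Lodge']
```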
Virtual Vacations?  Got more of a staycation in mind? AI can help with that too. Virtual Reality or VR is a technology that simulates an immersive 3D world all around you. AI can create "physically and emotionally immersive travel that is unique to each user [5]." After AI chooses a site for you according to your needs, you can enjoy an immersive experience through companies like First Airlines [5]. This includes first-class service with four-course meals and virtual sightseeing [5]. If you're a "try before you buy" type of person, these immersive experiences can help you learn about a destination inside and out without ever having to physically go there. You can also search YouTube for 3D travel experiences you can view inside Google Cardboard and other VR headsets if you own one.  
These are just some of the ways AI is making travel more affordable, convenient, and accessible. With the advent of the internet and modern solutions such as chatbots, ChatGPT, virtual reality headsets, and artificial intelligence apps, it's easier than ever to find stays and times that work for you and your budget. You don't have to dent your bank account (or even leave your living room) to see the world, and much of it can be organized and planned out for you. As these tools and solutions evolve, it's possible they will further democratize the travel experience, making the same cultural expeditions more readily available to everyone.  
1. Burns, Ed, and Nicole Laskowski. "What Is Artificial Intelligence (AI)?: Definition from TechTarget." Edited by Linda Tucci, Enterprise AI, TechTarget, 24 Feb. 2023, techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence.
2. Patkar, Mihir. "5 Free Travel Planning AI and ChatGPT Apps to Get an Instant Itinerary." MUO, 19 Feb. 2023, makeuseof.com/free-travel-planning-ai-chatgpt-apps/
3. Germán. "AI Travel Apps You Can Use Right Now (That Are Not ChatGPT) [2023]." AirLapse, 20 Feb. 2023, airlapse.net/blog/ai-travel-apps
4. Bulanov, Alexandr. "How Machine Learning and AI Can Improve Travel Services." Medium, Towards Data Science, 26 Feb. 2019, towardsdatascience.com/how-machine-learning-and-ai-can-improve-travel-services-3fc8a88664c4
5. "5 Ways AI Will Revolutionize Travel." CATALYST, 15 Feb. 2023, catalyst.cm/stories-new/2023/2/14/5-ways-ai-will-revolutionize-travel
This article is also available to purchase on Constant Content here: https://www.constant-content.com/MoreDetails/1912373-Ways_Ai_is_Making_Travel_More_Accessible_and_Affordable.htm. Once purchased, you can change the byline and use it for your own blog or website. 
wingedkiare · 2 years
And I'm telling Tumblr that any Blaze post that's made with AI-generated images isn't interesting or is offensive.
Because oof. In this house, we push back against AI - because the technology isn't advancing with ethics alongside it. We all know that it's just going to be used to destroy the market for artists, because why hire an artist when you can get something cheaper, right?
Ignoring that AI art wouldn't exist right now except for the fact that it's been trained on art they didn't have permission to sample.
(Oh, and unless you know for sure they aren't... most of those AI art generators are probably mining data for facial recognition. They might not be keeping your images, but they're probably flagging which AI resulted in you downloading that particular image, since that means it was probably accurate in mapping your face. And using those results to better train the AI for accuracy. So that they can then repurpose that algorithm in a solution they can sell to governments and law enforcement.)
nialabsjaipur · 2 days
Why Nialabs is the Top Choice for Face Biometric Devices in India
In today’s fast-paced business world, managing workforce attendance and ensuring security have become crucial. Traditional methods of tracking attendance are no longer efficient, especially for growing businesses. That’s where Nialabs steps in, offering innovative biometric solutions to simplify and secure your workforce management. Nialabs' face biometric devices have emerged as the leading choice for businesses across India, delivering top-notch accuracy, seamless integration, and unbeatable reliability.
Advanced Technology for Accurate Attendance
When it comes to accuracy in attendance tracking, Nialabs is second to none. Our face biometric devices utilize AI-powered technology to ensure that every employee's check-in and check-out is recorded with precision. Whether your employees are working on-site or remotely, Nialabs' face biometric systems eliminate any possibility of time fraud or human error. Traditional attendance systems like manual punching or card swiping often lead to inaccurate records. But with our facial recognition technology, the risk of "buddy punching" or clocking in for others is completely eliminated, ensuring a more honest and accountable workforce.
Seamless Integration with HR Systems
One of the major reasons businesses in India prefer Nialabs is our ability to integrate effortlessly with existing HR and payroll systems. Our face biometric devices work hand-in-hand with your current software, whether it’s for attendance tracking, payroll processing, or leave management. No more juggling between multiple systems – everything is connected for a smooth, hassle-free experience. By integrating our biometric systems into your HR platform, your management processes become more efficient, freeing up time to focus on other important tasks.
Nialabs ensures that our technology is adaptable and scalable, making it suitable for small businesses as well as large enterprises. This flexibility is what makes Nialabs stand out from other providers in the market.
Enhanced Security and Employee Accountability
At Nialabs, we understand that employee accountability goes hand-in-hand with workplace security. Our biometric devices not only track attendance but also enhance security. With facial recognition, only authorized personnel can gain access to secure areas in your office or building. This dual-purpose function of attendance tracking and security management gives businesses peace of mind, knowing that their workforce and assets are safe.
Employee accountability improves as well. Knowing that their attendance is being monitored by an advanced system, employees are less likely to commit time theft or slack off during work hours. The transparency that Nialabs' biometric systems bring helps in fostering a more responsible and productive work culture.
Trusted by Businesses Across India
With a growing list of satisfied clients, Nialabs has earned its reputation as India’s top choice for biometric face attendance systems. From startups to large-scale enterprises, businesses across industries trust Nialabs to simplify their HR and attendance management processes. Our state-of-the-art technology, coupled with user-friendly systems, ensures that clients receive reliable, efficient solutions that cater to their specific needs.
We pride ourselves on offering not just cutting-edge technology but also outstanding customer support. Our team is always ready to assist, ensuring that your system runs smoothly from installation to daily use.
Conclusion
Nialabs’ face biometric devices are revolutionizing how businesses in India manage attendance, security, and employee accountability. With accurate tracking, seamless HR integration, and enhanced security features, it’s no wonder that Nialabs is the go-to solution for companies looking to improve their workforce management. If you're looking to simplify your business operations while boosting efficiency and security, Nialabs is your answer.
Visit Our Website for More Information: Nialabs Please Call Us on This Number: 8729047305 Address: 101, Vedic Homes, Narayan Vihar, Jaipur, Rajasthan Email ID: [email protected]
appu143 · 2 days
KettyBot Reception Robots in USA
KettyBot is a sophisticated, AI-driven reception robot designed to help with front desk tasks. It is not just a static information point: it can identify faces, respond to voice commands, and navigate on its own. KettyBot can welcome people, give them directions, and even interact with them. Hotel lobbies, company offices, medical institutions, and retail spaces are just a few of the places where its stylish appearance and friendly disposition make it ideal.
Key Features of KettyBot
Interactive Display
Autonomous Navigation
Facial Recognition & Voice Interaction
Multi-Language Support
Data Collection & Analytics
Applications of KettyBot in the USA
Hotels & Hospitality
Corporate Offices
Healthcare Facilities
Retail & Shopping Malls
Benefits of Using KettyBot Reception Robots
Enhanced Customer Experience
Operational Efficiency
Cost-Effective Solution
Brand Image & Innovation
The use of reception robots, such as KettyBot, is expected to rise as technology develops. Higher levels of autonomy, connection with other intelligent systems, and greater AI capabilities are possible future developments. In terms of customer service and operational effectiveness, companies who adopt these advances early on will probably stand out.
The future of customer service in the United States lies with KettyBot reception robots, which are more than simply an a passing trend. KettyBot improves the greeting experience across multiple sectors by mixing innovative technology with useful functionality. KettyBot is going to play a major role in modern customer care, whether it is used in retail, company, or hotel settings. Our expectations for a more effective, engaging, and interactive receiving experience will grow as more companies use this technology.
To know more click here:- Fortune Robotics
Conversational AI in Healthcare Market Trends and Assessment: Industry Expected to Witness Sustainable Growth from 2024 to 2032
The global Conversational AI in Healthcare market is set to witness remarkable growth, with market size expected to surge from USD 10.80 billion in 2023 to an impressive USD 80.50 billion by 2032. This represents a compound annual growth rate (CAGR) of 25.02% during the forecast period from 2024 to 2032, underscoring the increasing adoption of artificial intelligence-driven communication tools across the healthcare industry.
Conversational AI refers to technologies such as chatbots, virtual assistants, and natural language processing (NLP) systems that enable human-like interactions with digital systems. In healthcare, conversational AI is revolutionizing patient engagement, clinical workflows, and administrative processes by providing intelligent, responsive, and scalable communication solutions.
Key Market Drivers
Growing Demand for Virtual Healthcare Services: The healthcare industry has been rapidly transitioning to digital platforms, especially after the COVID-19 pandemic. The need for virtual consultations, remote patient monitoring, and telemedicine services has driven the adoption of conversational AI technologies. AI-powered chatbots and virtual assistants allow healthcare providers to engage with patients more efficiently, offering immediate assistance and reducing the burden on clinical staff. This shift has created a favorable environment for the conversational AI market to flourish.
Enhanced Patient Engagement and Experience: Conversational AI enables personalized and 24/7 patient support, improving the overall patient experience. Patients can interact with virtual assistants to schedule appointments, access medical information, receive reminders for medications, and even receive preliminary diagnoses based on symptoms. These AI-driven interactions help improve patient engagement, reduce wait times, and enhance access to healthcare services. As the focus on patient-centered care intensifies, healthcare organizations are increasingly leveraging AI-powered solutions to ensure seamless communication and positive outcomes.
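The patient-facing interactions described above (scheduling, reminders, symptom triage) are often prototyped as intent routing before any large language model is involved. The sketch below is a minimal, hypothetical illustration of that idea, not any vendor's actual system; all intent names, patterns, and responses are invented for the example.

```python
import re

# Hypothetical rule-based router: maps a patient's message to a simple
# intent (scheduling, medication reminders, symptom check) and returns
# a canned response. Real systems replace the regexes with NLP models.
INTENTS = {
    "schedule": re.compile(r"\b(appointment|book|schedule)\b", re.I),
    "medication": re.compile(r"\b(medication|pill|dose|refill)\b", re.I),
    "symptoms": re.compile(r"\b(pain|fever|cough|symptom)\b", re.I),
}

RESPONSES = {
    "schedule": "I can help you book an appointment. What day works for you?",
    "medication": "I can set a medication reminder. Which prescription is it for?",
    "symptoms": "I can run a preliminary symptom check, but a clinician makes the diagnosis.",
    "fallback": "Let me connect you with a staff member who can help.",
}

def route(message: str) -> str:
    """Return the response for the first intent whose pattern matches."""
    for intent, pattern in INTENTS.items():
        if pattern.search(message):
            return RESPONSES[intent]
    return RESPONSES["fallback"]
```

Note the fallback branch: handing unrecognized requests to a human is what keeps an assistant like this safe to deploy in a clinical setting.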
Streamlining Clinical and Administrative Processes: Conversational AI is not only transforming patient-facing services but also streamlining back-end operations in healthcare institutions. AI-powered virtual assistants can handle routine administrative tasks such as patient data entry, appointment scheduling, and insurance claim processing. By automating these repetitive tasks, healthcare providers can free up valuable resources and focus on delivering quality care. The efficiency gained through AI-driven solutions is expected to be a significant growth driver for the market.
Rising Focus on Mental Health and Wellness: Mental health support has become a growing priority in healthcare, and conversational AI is playing a crucial role in delivering accessible and timely interventions. AI-powered chatbots are increasingly being used for mental health counseling, providing users with confidential, non-judgmental conversations and real-time support. These AI-driven tools can assess patients’ emotional states, offer coping strategies, and even refer users to human professionals when necessary. The rising focus on mental health and wellness is likely to boost the adoption of conversational AI in this segment.
Advancements in AI and NLP Technologies: Rapid advancements in artificial intelligence, machine learning, and natural language processing are enabling conversational AI systems to become more sophisticated and human-like. Improvements in understanding medical terminology, patient context, and intent recognition are driving the effectiveness of these systems. The ability of conversational AI to analyze vast amounts of data, learn from interactions, and continuously improve its performance makes it an indispensable tool for healthcare providers aiming to enhance operational efficiency and patient outcomes.
Get Free Sample Report: https://www.snsinsider.com/sample-request/4498 
Challenges and Opportunities
Despite the promising growth trajectory, the conversational AI in healthcare market faces challenges related to data privacy, security concerns, and the complexity of integrating AI into existing healthcare systems. Stringent regulatory requirements, especially concerning patient data protection and compliance with healthcare standards like HIPAA, may pose barriers to the widespread adoption of AI technologies.
However, these challenges present opportunities for further innovation. The development of secure, compliant conversational AI solutions that prioritize patient privacy will likely accelerate market growth. Additionally, collaborations between AI developers, healthcare providers, and regulatory bodies will be crucial in addressing these concerns and ensuring the responsible implementation of AI in healthcare.
Regional Insights
North America currently dominates the conversational AI in healthcare market due to the region’s advanced healthcare infrastructure, strong focus on technological innovation, and increasing investment in digital health solutions. Europe is also a significant player, with growing adoption of AI technologies in healthcare services across the region.
The Asia-Pacific region is anticipated to experience the highest growth rate during the forecast period, driven by the increasing demand for healthcare services, rising digitalization, and government initiatives to promote AI in healthcare. Countries like China, India, and Japan are expected to lead the charge in adopting conversational AI technologies, given their burgeoning healthcare industries and focus on innovation.
Future Outlook
The future of healthcare is set to be transformed by conversational AI, with applications extending far beyond simple patient interactions. As AI technologies continue to evolve and integrate with other healthcare innovations such as electronic health records (EHR) and wearables, the possibilities for personalized and predictive care are endless. By 2032, the conversational AI in healthcare market is expected to reach USD 80.50 billion, reshaping the way healthcare is delivered, experienced, and managed globally.
In conclusion, the conversational AI in healthcare market is positioned for explosive growth, driven by the increasing demand for virtual healthcare services, advancements in AI technologies, and the need for enhanced patient engagement. With a CAGR of 25.02%, the market is set to revolutionize healthcare communication and operations, offering new opportunities for innovation and improvement in patient care.
Other Trending Reports
Process Analytical Technology Market
Biodegradable Medical Plastics Market
Smart Hospitality Market
IoT Medical Devices Market
mediatechgroup · 4 days
Overcoming Challenges in AI Adoption for Small Businesses
Artificial Intelligence (AI) might seem like a goldmine of opportunities for small businesses, but adopting it often feels like trying to unlock a secret vault. Despite the promising benefits, we need to overcome some significant AI adoption challenges. In this blog post, I'll guide you through the common hurdles small businesses face and how to tackle them effectively.
Understanding the Basics of AI Adoption
Before diving into the challenges, let's understand what AI is. Artificial Intelligence simulates human intelligence in machines. This involves tasks like decision-making, speech recognition, and even visual perception. While it might sound like something from a sci-fi movie, it’s very real and practical for various business applications.
Lack of Understanding and Awareness
One of the first barriers is simply not knowing what AI is or how it can benefit your business. Many small business owners are busy with daily operations and might not have time to explore new technologies. But think about this - AI can automate mundane tasks, analyze customer data, and even enhance customer experiences.
High Initial Costs
Another big hurdle is the financial investment required upfront. Implementing AI can be expensive initially. However, don’t let this stop you. Think of it as an investment that will pay off over time through increased efficiency and smarter decision-making. Look for scalable solutions that fit your budget. Starting small and gradually expanding your AI capabilities can be a wise strategy. Areas where that investment typically pays off include:
Automated Customer Service: Lesser need for human agents
Enhanced Marketing Strategies: Precise targeting and personalization
Efficient Business Processes: Streamlined operations
Technical Knowledge and Expertise
AI is specialized and requires a certain level of technical know-how. Many small businesses lack in-house experts to manage these systems. This is where consulting firms like Media & Technology Group, LLC come in. Partnering with experts not only eases the burden on your existing team but also ensures a successful AI adoption process.
Data Quality and Availability
Good AI relies on good data. If your business’ data is incomplete, outdated, or disorganized, the AI tool might not work effectively. Improve your data collection methods. Ensure that the data you're gathering is clean and relevant. Sometimes, investing in better data management tools is crucial for a more effective AI system.
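A concrete starting point for data hygiene is simply validating and normalizing records before any AI tool sees them. The sketch below is a hedged, minimal example under assumed field names (`name`, `email`, `last_contact` are hypothetical), not a full data-management pipeline.

```python
# Minimal data-cleaning sketch: drop incomplete or malformed customer
# records and normalize the fields you keep. Field names are hypothetical.
def clean_records(records):
    cleaned = []
    for rec in records:
        email = (rec.get("email") or "").strip().lower()
        name = (rec.get("name") or "").strip()
        if not email or "@" not in email or not name:
            continue  # skip rows the AI tool could not use reliably
        cleaned.append({
            "name": name.title(),
            "email": email,
            "last_contact": rec.get("last_contact"),
        })
    return cleaned

raw = [
    {"name": "  ada lovelace ", "email": "ADA@example.com", "last_contact": "2024-01-05"},
    {"name": "", "email": "no-name@example.com"},   # missing name
    {"name": "Bob", "email": "not-an-email"},       # malformed email
]
print(clean_records(raw))
```

Even a filter this simple can meaningfully improve the output of a downstream analytics or personalization tool, because the model never has to guess at missing or garbled fields.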
Change Management
Adopting AI isn’t just a technological shift; it’s a cultural one too. Employees might resist new technologies, fearing job losses or feeling overwhelmed by new workflows. To address this, involve them in the process, offer training, and show them how these tools will make their jobs easier, not replace them. A culture of open communication and continuous learning will help ease this transition.
Security and Privacy Concerns
Last but definitely not least, small businesses need to address security and privacy issues. Customers are more aware than ever about how their data is used. Make sure your AI implementation complies with legal regulations and follows best practices for data security.
Implement robust data encryption methods.
Regularly update your security protocols.
Educate your team about cybersecurity risks.
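One concrete data-protection practice behind the checklist above: never store customer passwords in plaintext; store only a salted, slow hash. The sketch below uses Python's standard library (`hashlib.pbkdf2_hmac`, `hmac.compare_digest`) as one reasonable illustration; the iteration count is an assumption, and production systems should follow current guidance for their stack.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2 hash suitable for storage; never store plaintext."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Compare in constant time to avoid leaking information via timing."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)
```

The same principle (hash or encrypt at rest, compare safely) applies to any sensitive customer field, not just passwords.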
The Role of Media & Technology Group, LLC
Here at Media & Technology Group, LLC, we provide comprehensive solutions to these challenges. From Website Design and Software Development to specialized services in Marketing Automation and AI Implementation, our team is ready to guide you every step of the way. Our expertise in Technical Project Management helps ensure your AI adoption stays on track from planning through rollout.
mariacallous · 2 years
Five days after Russia launched its full-scale invasion of Ukraine, a year ago this week, US-based facial recognition company Clearview AI offered the Ukrainian government free access to its technology, suggesting that it could be used to reunite families, identify Russian operatives, and fight misinformation. Soon afterward, the Ukraine government revealed it was using the technology to scan the faces of dead Russian soldiers to identify their bodies and notify their families. By December 2022, Mykhailo Fedorov, Ukraine's vice prime minister and minister of digital transformation, was tweeting a picture of himself with Clearview AI’s CEO Hoan Ton-That, thanking the company for its support.
Accounting for the dead and letting families know the fate of their relatives is a human rights imperative written into international treaties, protocols, and laws like the Geneva Conventions and the International Committee of the Red Cross’ (ICRC) Guiding Principles for Dignified Management of the Dead. It is also tied to much deeper obligations. Caring for the dead is among the most ancient human practices, one that makes us human, as much as language and the capacity for self-reflection. Historian Thomas Laqueur, in his epic meditation, The Work of the Dead, writes that “as far back as people have discussed the subject, care of the dead has been regarded as foundational—of religion, of the polity, of the clan, of the tribe, of the capacity to mourn, of an understanding of the finitude of life, of civilization itself.” But identifying the dead using facial recognition technology uses the moral weight of this type of care to authorize a technology that raises grave human rights concerns.
In Ukraine, the bloodiest war in Europe since World War II, facial recognition may seem to be just another tool brought to the grim task of identifying the fallen, along with digitizing morgue records, mobile DNA labs, and exhuming mass graves.
But does it work? Ton-That says his company’s technology “works effectively regardless of facial damage that may have occurred to a deceased person.” There is little research to support this assertion, but authors of one small study found results “promising” even for faces in states of decomposition. However, forensic anthropologist Luis Fondebrider, former head of forensic services for the ICRC, who has worked in conflict zones around the world, casts doubt on these claims. “This technology lacks scientific credibility,” he says. “It is absolutely not widely accepted by the forensic community.” (DNA identification remains the gold standard.) The field of forensics “understands technology and the importance of new developments” but the rush to use facial recognition is “a combination of politics and business with very little science,” in Fondebrider’s view. “There are no magic solutions for identification,” he says.  
Using an unproven technology to identify fallen soldiers could lead to mistakes and traumatize families. But even if the forensic use of facial recognition technology were backed up by scientific evidence, it should not be used to name the dead. It is too dangerous for the living. 
Organizations including Amnesty International, the Electronic Frontier Foundation, the Surveillance Technology Oversight Project, and the Immigrant Defense Project have declared facial recognition technology a form of mass surveillance that menaces privacy, amplifies racist policing, threatens the right to protest, and can lead to wrongful arrest. Damini Satija, head of Amnesty International’s Algorithmic Accountability Lab and deputy director of Amnesty Tech, says that facial recognition technology undermines human rights by “reproducing structural discrimination at scale and automating and entrenching existing societal inequities.” In Russia, facial recognition technology is being used to quash political dissent. It fails to meet legal and ethical standards when used in law enforcement in the UK and US, and is weaponized against marginalized communities around the world. 
Clearview AI, which primarily sells its wares to police, has one of the largest known databases of facial photos, at 20 billion images, with plans to collect an additional 100 billion images—equivalent to 14 photos for every person on the planet. The company has promised investors that soon “almost everyone in the world will be identifiable.” Regulators in Italy, Australia, UK, and France have declared Clearview’s database illegal and ordered the company to delete their citizens’ photos. In the EU, Reclaim Your Face, a coalition of more than 40 civil society organizations, has called for a complete ban on facial recognition technology. 
AI ethics researcher Stephanie Hare says Ukraine is “using a tool, and promoting a company and CEO, who have not only behaved unethically but illegally.” She conjectures that it’s a case of “the end justifies the means,” but asks, “Why is it so important that Ukraine is able to identify dead Russian soldiers using Clearview AI? How is this essential to defending Ukraine or winning the war?”
I think the answer to these questions can be found in an interview with Fedorov early in the war. A journalist asked him, “What kind of use cases do you have in mind when leveraging Clearview AI?” Fedorov answered, “I would start off by saying that most of these use cases would not be public, not something that we’d be able to share publicly. But something that I can just give you a sneak peek would be … trying to identify Russian forces who have been killed or taken prisoner in Ukraine.”
This interview and others like it suggest that Clearview AI’s technology is being used for many purposes in Ukraine, but only its potentially positive use to identify the dead is being publicly discussed. In other words, we are being allowed “a sneak peek” that spotlights a humanitarian application while more controversial deployments are hidden from view. At a moment when Clearview AI is facing legal action from regulators for its use in policing, it has found a way to reinvent itself as a purveyor of humanitarian technology, all while expanding its scope from law enforcement to the military.
Because the EU recognizes facial recognition as a “dual use” technology with both military and civilian applications, its use in war for any purpose, including the identification of the dead, must be subject to strict scrutiny and oversight. And on the battlefield, facial recognition may be used to name the dead but can also be used to target the living—for example, when incorporated into drones and machine guns or used to develop lethal autonomous weapons. Fully autonomous drones are already deployed in Ukraine and a Russian manufacturer has announced plans to develop uncrewed robot combat vehicles. Fedorov recently said that fully autonomous killer drones are “a logical and inevitable next step” in weapons development.
Amnesty International’s Satija says that “killer robots are not just about a Terminator-style system. It’s about a system that can select and engage a human target without meaningful human control and built with technologies which will accelerate violence and discrimination. When we call for regulation and red lines around autonomous weapons, we're also talking about the components of those systems, like facial recognition, which are by design in violation of international human rights standards.”
Ultimately, the faces of the dead soldiers scanned by Clearview AI silence these other applications. Anthropologist Katherine Verdery, writing about the political lives of dead bodies, shows how corpses are mute and mutable—“ambiguous, protean symbols” that can be put to many political uses. In the name of the dead, facial recognition is given a humanitarian pretext that obscures its role in emerging mass surveillance and future automated violence.