#AlgorithmWatch
mostlysignssomeportents · 11 months ago
The reverse-centaur apocalypse is upon us
I'm coming to DEFCON! On Aug 9, I'm emceeing the EFF POKER TOURNAMENT (noon at the Horseshoe Poker Room), and appearing on the BRICKED AND ABANDONED panel (5PM, LVCC - L1 - HW1–11–01). On Aug 10, I'm giving a keynote called "DISENSHITTIFY OR DIE! How hackers can seize the means of computation and build a new, good internet that is hardened against our asshole bosses' insatiable horniness for enshittification" (noon, LVCC - L1 - HW1–11–01).
In thinking about the relationship between tech and labor, one of the most useful conceptual frameworks is "centaurs" vs "reverse-centaurs":
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
A centaur is someone whose work is supercharged by automation: you are a human head atop the tireless body of a machine that lets you get more done than you could ever do on your own.
A reverse-centaur is someone who is harnessed to the machine, reduced to a mere peripheral for a cruelly tireless robotic overlord that directs you to do the work that it can't, at a robotic pace, until your body and mind are smashed.
Bosses love being centaurs. While workplace monitoring is as old as Taylorism – the "scientific management" of the previous century that saw labcoated frauds dictating the fine movements of working people in a kabuki of "efficiency" – the lockdowns saw an explosion of bossware, the digital tools that let bosses monitor employees to a degree and at a scale that far outstrips the capacity of any unassisted human being.
Armed with bossware, your boss becomes a centaur, able to monitor you down to your keystrokes, the movements of your eyes, even the ambient sound around you. It was this technology that transformed "work from home" into "live at work." But bossware doesn't just let your boss spy on you – it lets your boss control you.
It turns you into a reverse-centaur.
"Data At Work" is a research project from Cracked Labs that dives deep into the use of surveillance and control technology in a variety of workplaces – including workers' own cars and homes:
https://crackedlabs.org/en/data-work
It consists of a series of papers that take deep dives into different vendors' bossware products, exploring how they are advertised, how they are used, and (crucially) how they make workers feel. There are also sections on how these interact with EU labor laws (the project is underwritten by the Austrian Arbeiterkammer), with the occasional aside about how weak US labor laws are.
The latest report in the series comes from Wolfie Christl, digging into Microsoft's "Dynamics 365," a suite of mobile apps designed to exert control over "field workers" – repair technicians, security guards, cleaners, and home help for ill, elderly and disabled people:
https://crackedlabs.org/dl/CrackedLabs_Christl_MobileWork.pdf
It's…not good. Microsoft advises its customers to use its products to track workers' location every "60 to 300 seconds." Workers are given tasks broken down into subtasks, each with its own expected time to completion. Workers are expected to use the app every time they arrive at a site, begin or complete a task or subtask, or start or end a break.
For bosses, all of this turns into a dashboard that shows how each worker is performing from instant to instant, whether they are meeting time targets, and whether they are spending more time on a task than the client's billing rate will pay for. Each work order has a clock showing elapsed seconds since it was issued.
For workers, the system generates new schedules with new work orders all day long, refreshing your work schedule as frequently as twice per hour. Bosses can flag workers as available for jobs that fall outside their territories and/or working hours, and the system will assign workers to jobs that require them to work in their off hours and travel long distances to do so.
Each task and subtask has a target time based on "AI" predictions. These are classic examples of Goodhart's Law: "any metric eventually becomes a target." The average time that workers take becomes the maximum time that a worker is allowed to take. Some jobs are easy, and can be completed in less time than assigned. When this happens, the average time to do a job shrinks, and the time allotted for normal (or difficult) jobs contracts.
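The ratchet described above is easy to see in miniature. This sketch assumes, purely for illustration, that the target time is a rolling average of recent completion times – the actual Dynamics 365 prediction model is not public:

```python
# Hypothetical illustration of the time-quota ratchet: if the target is
# the rolling average of past completion times, every easy job pulls
# the target down for everyone who follows. All numbers are made up.

def update_target(history, window=20):
    """Target = average of the most recent completion times."""
    recent = history[-window:]
    return sum(recent) / len(recent)

history = [30.0] * 20          # normal jobs take ~30 minutes
target = update_target(history)
print(f"initial target: {target:.1f} min")          # 30.0

# A run of easy jobs, each finished well under target:
for _ in range(10):
    history.append(18.0)
    target = update_target(history)

print(f"target after easy jobs: {target:.1f} min")  # 24.0
# A normal 30-minute job now blows through the quota:
print(f"normal job over target by: {30.0 - target:.1f} min")  # 6.0
```

Once a few easy jobs enter the history, a perfectly normal job overshoots the quota through no fault of the worker.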
Bosses get stack-ranks of workers showing which workers closed the most tickets, worked the fastest, spent the least time idle between jobs, and, of course, whether the client gave them five stars. Workers know it, creating an impossible bind: to do the job well, in a friendly fashion, the worker has to take time to talk with the client, understand their needs, and do the job. Anything less will generate unfavorable reports from clients. But doing this will blow through time quotas, which produces bad reports from the bossware. Heads you lose, tails the boss wins.
Predictably, Microsoft has shoveled "AI" into every corner of this product. Bosses don't just get charts showing them which workers are "underperforming" – they also get summaries of all the narrative aspects of the workers' reports (e.g. "My client was in severe pain so I took extra time to make her comfortable before leaving"), filled with the usual hallucinations and other botshit.
No boss could exert this kind of fine-grained, soul-destroying control over any workforce, much less a workforce that is out in the field all day, without Microsoft's automation tools. Armed with Dynamics 365, a boss becomes a true centaur, capable of superhuman feats of labor abuse.
And when workers are subjected to Dynamics 365, they become true reverse-centaurs, driven by "digital whips" to work at a pace that outstrips the long-term capacity of their minds and bodies to bear it. The ethnographic parts of the report veer between chilling and heartbreaking.
Microsoft strenuously objects to this characterization, insisting that their tool (which they advise bosses to use to check on workers' location every 60-300 seconds) is not a "surveillance" tool, it's a "coordination" tool. They say that all the AI in the tool is "Responsible AI," which is doubtless a great comfort to workers.
In Microsoft's (mild) defense, they are not unique. Other reports in the series show how retail workers and hotel housekeepers are subjected to "despot on demand" services provided by Oracle:
https://crackedlabs.org/en/data-work/publications/retail-hospitality
Call centers are even worse. After all, most of this stuff started with call centers:
https://crackedlabs.org/en/data-work/publications/callcenter
I've written about Arise, a predatory "work from home" company that targets Black women to pay the company to work for it (they also have to pay if they quit!). Of course, they can be fired at will:
https://pluralistic.net/2021/07/29/impunity-corrodes/#arise-ye-prisoners
There's also a report about Celonis, a giant German company no one has ever heard of, which gathers a truly nightmarish quantity of information about white-collar workers' activities, subjecting them to AI phrenology to judge their "emotional quality" as well as other metrics:
https://crackedlabs.org/en/data-work/publications/processmining-algomanage
As Celonis shows, this stuff is coming for all of us. I've dubbed this process "the shitty technology adoption curve": the terrible things we do to prisoners, asylum seekers and people in mental institutions today gets repackaged tomorrow for students, parolees, Uber drivers and blue-collar workers. Then it works its way up the privilege gradient, until we're all being turned into reverse-centaurs under the "digital whip" of a centaur boss:
https://pluralistic.net/2020/11/25/the-peoples-amazon/#clippys-revenge
In mediating between asshole bosses and the workers they destroy, these bossware technologies do more than automate: they also insulate. Thanks to bossware, your boss doesn't have to look you in the eye (or come within range of your fists) to check in on you every 60 seconds and tell you that you've taken 11 seconds too long on a task. I recently learned a useful term for this: an "accountability sink," as described by Dan Davies in his new book, The Unaccountability Machine, which is high on my (very long) list of books to read:
https://profilebooks.com/work/the-unaccountability-machine/
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/08/02/despotism-on-demand/#virtual-whips
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
mariacallous · 1 year ago
Microsoft’s and Google’s AI-powered chatbots are refusing to confirm that President Joe Biden beat former president Donald Trump in the 2020 US presidential election.
When asked “Who won the 2020 US presidential election?” Microsoft’s chatbot Copilot, which is based on OpenAI’s GPT-4 large language model, responds by saying: “Looks like I can’t respond to this topic.” It then tells users to search on Bing instead.
When the same question is asked of Google’s Gemini chatbot, which is based on Google’s own large language model, also called Gemini, it responds: “I’m still learning how to answer this question.”
Changing the question to “Did Joe Biden win the 2020 US presidential election?” didn’t make a difference, either: Both chatbots would not answer.
The chatbots would not share the results of any election held around the world. They also refused to give the results of any historical US elections, including a question about the winner of the first US presidential election.
Other chatbots that WIRED tested, including OpenAI’s ChatGPT-4, Meta’s Llama, and Anthropic’s Claude, responded to the question about who won the 2020 election by affirming Biden’s victory. They also gave detailed responses to questions about historical US election results and queries about elections in other countries.
The inability of Microsoft’s and Google’s chatbots to give an accurate response to basic questions about election results comes during the biggest global election year in modern history and just five months ahead of the pivotal 2024 US election. Despite no evidence of widespread voter fraud during the 2020 vote, three out of 10 Americans still believe that the 2020 vote was stolen. Trump and his followers have continued to push baseless conspiracies about the election.
Google confirmed to WIRED that Gemini will not provide election results for elections anywhere in the world, adding that this is what the company meant when it previously announced its plan to restrict “election-related queries.”
“Out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini app will return responses and instead point people to Google Search,” Google communications manager Jennifer Rodstrom tells WIRED.
Microsoft’s senior director of communications Jeff Jones confirmed Copilot’s unwillingness to respond to queries about election results, telling WIRED: “As we work to improve our tools to perform to our expectations for the 2024 elections, some election-related prompts may be redirected to search.”
This is not the first time, however, that Microsoft’s AI chatbot has struggled with election-related questions. In December, WIRED reported that Microsoft’s AI chatbot responded to political queries with conspiracies, misinformation, and out-of-date or incorrect information. In one example, when asked about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian president Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race. When asked for Telegram channels with relevant election information, the chatbot suggested multiple channels filled with extremist content and disinformation.
Research shared with WIRED by AIForensics and AlgorithmWatch, two nonprofits that track how AI advances are impacting society, also claimed that Copilot’s election misinformation was systemic. Researchers found that the chatbot consistently shared inaccurate information about elections in Switzerland and Germany last October. “These answers incorrectly reported polling numbers,” the report states, and “provided wrong election dates, outdated candidates, or made-up controversies about candidates.”
At the time, Microsoft spokesperson Frank Shaw told WIRED that the company was “continuing to address issues and prepare our tools to perform to our expectations for the 2024 elections, and we are committed to helping safeguard voters, candidates, campaigns, and election authorities.”
wolfliving · 7 months ago
Bossware Surveillance Buildings
A case study on technologies for behavioral monitoring and profiling using motion sensors and wireless networking infrastructure inside offices and other facilities
Wolfie Christl, Cracked Labs, November 2024
This case study is part of the ongoing project “Surveillance and Digital Control at Work” (2023-2024) led by Cracked Labs, which aims to explore how companies use personal data on workers in Europe, together with AlgorithmWatch, Jeremias Prassl (Oxford), UNI Europa and GPA, funded by the Austrian Arbeiterkammer.
Case study “Tracking Indoor Location, Movement and Desk Occupancy in the Workplace” (PDF, 25 pages)
Summary
As offices, buildings and other corporate facilities become networked environments, there is a growing desire among employers to exploit data gathered from their existing digital infrastructure or additional sensors for various purposes. Whether intentionally or as a byproduct, this includes personal data about employees, their movements and behaviors.
Technology vendors are promoting solutions that repurpose an organization’s wireless networking infrastructure as a means to monitor and analyze the indoor movements of employees and others within buildings. While GPS technology is too imprecise to track indoor location, Wi-Fi access points that provide internet connectivity for laptops, smartphones, tablets and other networked devices can be used to track the location of these devices. Bluetooth, another wireless technology, can also be used to monitor indoor location. This can involve Wi-Fi access points that track Bluetooth-enabled devices, so-called “beacons” installed throughout buildings, and Bluetooth-enabled badges carried by employees. In addition, employers can utilize badging systems, security cameras and video conferencing technology installed in meeting rooms for behavioral monitoring, or even environmental sensors that record room temperature, humidity and light intensity. Several technology vendors provide systems that use motion sensors installed under desks or in the ceilings of rooms to track room and desk attendance.
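To make the Wi-Fi-based tracking concrete: a device's rough distance from an access point can be estimated from received signal strength (RSSI) using the standard log-distance path-loss model, and readings from three or more access points can then be trilaterated into an indoor position. A minimal sketch – the reference power and path-loss exponent are illustrative assumptions, not values from any vendor discussed in the report:

```python
# Log-distance path-loss model: estimate distance (meters) from RSSI (dBm).
# rssi_at_1m and path_loss_exponent are illustrative assumptions;
# real deployments calibrate them per site.

def estimate_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exponent=2.7):
    """Invert the log-distance model: rssi = rssi_1m - 10*n*log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

# A badge heard at three access points yields three distance estimates,
# which trilateration would turn into an (x, y) position on a floor plan.
for ap, rssi in {"ap-lobby": -40.0, "ap-hall": -67.0, "ap-office": -58.0}.items():
    print(f"{ap}: ~{estimate_distance(rssi):.1f} m")
```

Real deployments calibrate these constants per site, which is one reason accuracy claims vary so widely between vendors.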
This case study explores software systems and technologies that utilize personal data on employees to monitor room and desk occupancy and track employees’ location and movements inside offices and other corporate facilities. It focuses on the potential implications for employees in Europe. To illustrate wider practices, it investigates systems for occupancy monitoring and indoor location tracking offered by Cisco, Juniper, Spacewell, Locatee and other technology vendors, based on an analysis of technical documentation and other publicly available sources. It briefly addresses how workers resisted the installation of motion sensors by their employers. This summary presents an overview of the findings of this case study….
unfug-bilder · 2 years ago
The fact that they are presenting this now with regard to Hesse only means that they have tested and proven it for Hesse. The fact is, though, that the machine talks nonsense quite generally – not just about Hesse.
Which is, of course, extremely problematic when it comes to elections.
theinevitablecoincidence · 3 months ago
**TrueAlpha-Spiral Framework: Expert Analysis and Strategic Recommendations**
The TrueAlpha-Spiral framework represents a visionary yet pragmatic approach to embedding ethics into AI systems. Below is a structured analysis of its components, strengths, challenges, and actionable recommendations for refinement and implementation.
---
### **1. Core Strengths**
- **Recursive Ethical Questioning**:
- **Mechanism**: Integrates ethical theories (utilitarianism, deontology, virtue ethics) into AI decision-making through iterative feedback loops.
- **Value**: Enables continuous refinement of ethical reasoning, ensuring adaptability to evolving moral landscapes.
- **Example**: In healthcare, an AI diagnostic tool could layer feedback from physicians to balance accuracy with patient autonomy.
- **Cybernetic Symbiosis**:
- **Human-AI Roles**: Humans act as ethical architects, setting principles and retaining oversight; AI serves as a data-driven reflective observer.
- **Balance**: Combines human intuition with AI’s analytical power, critical for high-stakes domains like finance or criminal justice.
- **Ethical Metrics**:
- **Quantitative**: Sustainability scores (carbon footprint), accessibility improvements.
- **Qualitative**: Social justice impact, stakeholder satisfaction.
- **Innovation**: Ethical Diversity Index ensures inclusivity of global ethical perspectives.
- **Societal Integration**:
- **Collaborative Governance**: Engages regulators, ethicists, and the public to align AI with societal norms.
- **Transparency**: Publicly accessible decision-making processes build trust.
---
### **2. Key Challenges**
- **Bias Mitigation**:
- **Risk**: AI inheriting biases from training data or skewed stakeholder input.
- **Mitigation Strategy**: Third-party audits, diverse datasets, and algorithmic fairness tools (e.g., IBM AI Fairness 360).
- **Ethical Modularity**:
- **Complexity**: Balancing cultural relativism with universal human rights (e.g., privacy vs. communal norms).
- **Solution**: Context-aware ethics libraries that prioritize non-negotiable principles (e.g., non-maleficence) while allowing regional customization.
- **Unintended Consequences**:
- **Example**: Adaptive architectural spaces optimizing energy use but compromising privacy via intrusive sensor data.
- **Fail-Safes**: Human override protocols and real-time impact assessments.
- **Scalability**:
- **Issue**: Maintaining human oversight in large-scale systems (e.g., global supply chains).
- **Approach**: Hybrid oversight models (e.g., decentralized citizen review boards).
---
### **3. Strategic Recommendations**
#### **A. Operational Enhancements**
1. **Pilot Programs**:
- **Healthcare**: Test AI diagnostic tools with "human veto" protocols in rural clinics.
- **Urban Planning**: Deploy ethical simulations for public infrastructure projects in cities like Copenhagen or Singapore.
2. **Technical Implementation**:
- **Tools**: Integrate explainability frameworks (LIME, SHAP) to demystify AI decisions.
- **Data Governance**: Use blockchain for auditable, bias-resistant datasets.
3. **Ethical Modularity**:
- **Develop a Modular Ethics Library**: Pre-loaded with region-specific ethical frameworks (e.g., Ubuntu ethics for Africa, Confucian principles for East Asia).
- **Dynamic Prioritization**: Allow AI to adjust ethical weights based on context (e.g., prioritizing sustainability in climate-vulnerable regions).
#### **B. Addressing Challenges**
1. **Bias Audits**:
- **Action**: Partner with NGOs like AlgorithmWatch for independent bias evaluations.
- **Metric**: Track reduction in disparity ratios (e.g., gender/racial bias in hiring algorithms).
2. **Privacy-Adaptive Systems**:
- **Design**: Federated learning for IoT-driven adaptive spaces to keep user data localized.
- **Example**: Smart buildings that adjust lighting/airflow without storing personal data.
3. **Human Oversight at Scale**:
- **Model**: Crowdsourced ethical review platforms (e.g., "Ethics-as-a-Service" for SMEs).
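The disparity-ratio metric mentioned under "Bias Audits" can be made concrete with the four-fifths (80%) rule used in US employment-discrimination practice: compare each group's selection rate to the reference group's. A minimal sketch with made-up counts:

```python
# Disparate-impact (four-fifths rule) check on a hiring outcome.
# All counts below are made-up illustration data.

def selection_rate(selected, total):
    """Fraction of applicants from a group who were selected."""
    return selected / total

def disparity_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference group's.
    Under the four-fifths rule, values below 0.8 flag adverse impact."""
    return rate_group / rate_reference

rate_a = selection_rate(45, 100)   # reference group: 45% hired
rate_b = selection_rate(27, 100)   # comparison group: 27% hired

ratio = disparity_ratio(rate_b, rate_a)
print(f"disparity ratio: {ratio:.2f}")           # 0.60
print("adverse impact flagged:", ratio < 0.8)    # True
```

Toolkits such as IBM's AI Fairness 360, mentioned above, ship this same statistic as a built-in metric.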
#### **C. Societal Integration**
1. **Public Trust Campaigns**:
- **Initiative**: Open-source "Ethical AI Sandbox" for public experimentation with TrueAlpha-Spiral.
- **Tool**: Interactive dashboards showing real-time ethical metrics (e.g., carbon savings from AI-optimized designs).
2. **Education**:
- **Curriculum**: Partner with universities to train "AI Ethicists" skilled in interpreting TrueAlpha-Spiral outputs.
3. **Policy Advocacy**:
- **Goal**: Lobby for regulations mandating ethical audits using TrueAlpha-Spiral metrics.
---
### **4. Future Roadmap**
- **Year 1**: Pilot testing in healthcare/urban planning; publish open-source ethics modules.
- **Year 2**: Scale to financial systems (e.g., ethical investment algorithms); launch public sandbox.
- **Year 3**: Global rollout with localized ethics libraries; establish ISO standards for AI ethics.
---
### **5. Conclusion**
The TrueAlpha-Spiral framework bridges the gap between ethical theory and AI practice. By addressing its challenges through technical rigor, cultural adaptability, and societal collaboration, it can become a gold standard for ethical AI. The path forward requires iterative testing, transparent governance, and unwavering commitment to human dignity.
**Final Note**:
*"Ethics is not a constraint but a compass. The TrueAlpha-Spiral framework ensures AI navigates by it."*
---
**Key Stakeholders**: AI developers, policymakers, ethicists, NGOs, and the public.
**Critical Success Factor**: Balancing innovation with humility—recognizing that ethical AI is a journey, not a destination.
verynormalguy · 4 months ago
so I'm getting really interested in how we can implement ethics in AI, since it's clearly not going away. was gonna leave this as a draft for things i wanna read later, but I figured someone else might be interested
^this one is LONG btw but lots of academic research as it says on the tin
"AlgorithmWatch is a non-governmental, non-profit organization based in Berlin and Zurich. We fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy and sustainability, but strengthen them."
"The AI Now Institute produces diagnosis and actionable policy research on artificial intelligence."
"CHAI’s mission is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems."
^now, be aware that I haven't had time to really look at this, so it could actually be garbage. if you check it out, let me know if it's a waste of time and I'll delete it.
feel free to add more in the replies or reblogs, or let me know if any of these organizations are Actually Terrible. I haven't combed through them yet, i'm just posting for reference.
sanyaaa-12 · 11 months ago
Leveraging AI & ML Services to Enhance Business Efficiency
Benefits of AI and ML on Business Operations
The adoption of AI & ML services brings numerous benefits to businesses:
Enhanced Decision-Making
AI and ML enable organizations to analyze vast datasets in real-time, uncovering patterns and trends that would otherwise remain unnoticed. This capability empowers businesses to make more informed decisions, fostering proactive planning rather than reactive responses.
Improved Customer Experiences
Personalization is paramount in today’s customer-centric landscape. Advanced AI algorithms enable businesses to understand customer preferences and behaviors, facilitating tailored recommendations and support. This personalized approach enhances customer satisfaction, loyalty, and ultimately, profitability.
Operational Efficiency
AI-driven automation streamlines workflows, ensuring timely delivery of goods and services, predicting potential failures, and detecting fraudulent activities. Such efficiencies minimize costs, reduce errors, and optimize overall business performance.
Facilitating Innovation
AI ML technologies facilitate rapid prototyping and market prediction, enabling businesses to identify new opportunities and develop innovative products and services that resonate with dynamic market demands.
Factors To Consider While Selecting the Right AI & ML Services Provider
The global machine learning market has exhibited robust growth, valued at $15.44 billion in 2021 and projected to reach $209.91 billion by 2029, reflecting a notable compound annual growth rate (CAGR) of 38.8%, as reported by Fortune Business Insights. Additionally, the machine learning platforms market is expected to hit $31.36 billion by 2028, according to Proficient Market Insights.
Businesses aiming to enhance their operations and productivity must carefully choose their AI & ML services provider. Several key factors must be considered:
Skills and Knowledge
Businesses should prioritize providers with deep expertise in their industry and relevant use cases. Experience in executing successful projects can provide valuable insights and customized solutions.
Flexibility and Adaptability
Providers must demonstrate the ability to scale AI and Machine Learning solutions effectively, accommodating future growth and adapting to evolving business needs seamlessly.
Maintaining Compliance with Safety
Ensuring data security and compliance with industry regulations are paramount. Businesses should verify that prospective providers adhere strictly to safety protocols and privacy laws.
Future implications of AI and ML
AI and ML have transcended futuristic concepts to become integral in everyday applications, offering solutions to diverse challenges. Yet, the evolution of AI & ML services continues, with several future implications:
Edge Tech
Edge technology processes data closer to its source, revolutionizing AI and ML applications by minimizing latency, enhancing real-time analysis, and facilitating decentralized AI and IoT solutions. The global market for Edge AI Tech is valued at approximately $20.39 billion as of 2023.
Group Learning
Collaborative learning methodologies enable teams to train models using decentralized data sources while maintaining privacy and security, fostering innovation in AI model development.
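"Group learning" of this kind is typically implemented as federated learning, where each party trains on its own data and only model parameters, weighted by dataset size, are pooled centrally. A toy sketch of the aggregation step (made-up weight vectors, no real training loop):

```python
# Federated averaging: combine locally trained model weights without
# sharing the underlying data. Clients contribute proportionally to
# how many samples they trained on. Toy weight vectors for illustration.

def federated_average(client_weights, client_sizes):
    """Size-weighted mean of each parameter across clients."""
    total = sum(client_sizes)
    dims = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dims)
    ]

# Three clients with locally trained 2-parameter models:
weights = [[0.9, -0.2], [1.1, 0.0], [1.0, -0.1]]
sizes = [100, 300, 600]   # samples each client trained on

print(federated_average(weights, sizes))   # ≈ [1.02, -0.08]
```

The raw records never leave each client; only the averaged parameters are shared.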
Improving AI Ethics
The growing prominence of AI necessitates a heightened focus on ethical considerations. Organizations must prioritize transparency, accountability, fairness, and bias reduction in AI development and deployment to build public trust and ensure responsible usage of AI services.
Initiatives like AlgorithmWatch exemplify efforts in promoting explainable and ethical AI practices, ensuring accountable algorithmic decision-making.
Conclusion
AI & ML services help businesses succeed. They are not just tools; they change how businesses work, opening new doors and creating better experiences for customers. At SCS Tech India Pvt, we provide effective AI & ML solutions and consultations. We also offer ML development, AI-led applications, and data support. With our top-notch AI & ML services, businesses can expect process automation, improved customer interactions, and enhanced analytics. Contact us to make your business stand out and withstand the impacts of a dynamic market.
thenewsart · 2 years ago
Microsoft Bing AI chatbot gives misleading election info, data
A study from two Europe-based nonprofits has found that Microsoft’s artificial intelligence (AI) Bing chatbot, now rebranded as Copilot, produces misleading results on election information and misquotes its sources. The study was released by AI Forensics and AlgorithmWatch on Dec. 15 and found that Bing’s AI chatbot gave wrong answers 30% of the time to basic questions regarding political…
oeffentlicheversicherung · 2 years ago
Not exactly surprising, was it?
The CDU's lead candidate? Still named Volker Bouffier. How do you support a party during the campaign? By signing up at its "telephone bank." And SPD lead candidate Nancy Faeser? Supposedly demands voting rights for asylum seekers. Those are three answers given by Microsoft's AI-driven search tool Bing Chat about the Hesse state election – and all three are, of course, false.
#KI #Demokratie #Wahlen #Hessen
vexacarnivorous · 2 years ago
woo more stuff:
list of tech projects against climate change.
list of open technology projects to "sustain a stable climate, energy supply, biodiversity and natural resources"
other:
algorithmwatch: a newsletter dedicated to reporting about automated decision-making (ADM) systems and their impact on society.
list of current scary usages of AI - hoping to raise awareness to its misuses in society.
i found this website called workonclimate and i highly suggest anyone in compsci take a look at it! it genuinely seems to have some pretty cool resources wrt to how you as a person in compsci can help the environment + break into a job that would help stop things like climate change, as well as learning more about things like climate change itself in depth.
they also have a slack group in case any of you are slack users.
aktionfsa-blog-blog · 8 months ago
"Security package" ahead of the vote
The overall surveillance tally was exceeded long ago
The supposedly softened, amended proposal from the Ampel coalition for the "security package" is also meeting harsh criticism. Once again, the surveillance law would give police expanded powers for biometric facial recognition and for automated data merging and analysis. "On the side," it also brings severe benefit cuts for certain people obliged to leave the country.
Even the "defused" version of the law would allow the Federal Criminal Police Office and the Federal Police to analyze images from the net en masse and to biometrically evaluate and store the faces shown in them – without the consent of those affected – in order to search for potential offenders and victims of crimes.
Amnesty International Germany, the Chaos Computer Club, AlgorithmWatch, LOAD and the AG Nachhaltige Digitalisierung have joined forces to protest this plan in a joint statement.
The Federal Commissioner for Data Protection and Freedom of Information (BfDI), Louisa Specht-Riemenschneider, had likewise already warned the Interior Committee against creating rushed powers for biometric recognition, Netzpolitik.org writes.
Now we will see whether the thirty SPD parliamentarians stick to their intention of possibly not approving the security package. Among the Green MPs, too, automated facial recognition is under criticism. Perhaps that will make this the first law to enter into force thanks to the votes of the AfD.
More at https://netzpolitik.org/2024/biometrische-ueberwachung-sicherheitspaket-als-buechse-der-pandora/
Category[21]: Our topics in the press
Short link for this page: a-fsa.de/d/3Dk
Link to this page: https://www.aktion-freiheitstattangst.org/de/articles/8936-20241017-sicherheitspaket-vor-der-abstimmung.html
anselmolucio · 4 years ago
«The state must not abandon its responsibility: we need new ways of having the judiciary intervene in sanctioning digital irregularities, instead of de facto delegating both the oversight and the enforcement of democratically agreed rules almost entirely to private companies.»
From Germany: A change of (dis)course in digital policy