#techpolicy
adafruit · 2 months ago
Text
Desk of Ladyada - Tariff Talk! ASK AWAY! 📦🌎💸
We're doing the show tonight or Tues, so much is going on. Lots more questions came in too! Post yours before we go live…
This week was not a big week for engineering because we're neck-deep in tariff management! We had a video on Wednesday about the first round of tariffs announced—they have since been ratcheted up to 20% + 125% = 145% minimum on products from China and Hong Kong. "Retaliatory" tariffs on products from other countries have been paused, but there's still a 10% global tariff.
Then on Friday, it looks like some HTS codes were "exempted" temporarily. Let's take a look at the CBP notice (link) and the exempt tariff codes, and see how they might apply to the kinds of stuff we use to do our electronics engineering.
The Great Search - How to Source Tariffed & Non-Tariffed Items on DigiKey
https://www.digikey.com/short/2nb7nzb3
Since this is the topic of the day, we'll be looking at how to purchase an item from DigiKey with tariffs in mind. DigiKey operates a foreign-trade zone, which means that if you're not in the USA, you will not have to pay the additional US tariffs on goods that are re-exported. However, for USA destinations, the new tariffs can add a significant cost if the component has its last manufacturing step in China.
Let's use the example of a simple I2C-controlled temperature sensor to see how our sourcing decisions may be impacted by tariffs.
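Once a part like that arrives, reading it is the easy half. Here's a quick illustrative sketch, assuming a TMP102-style 12-bit register format (two bytes, MSB first, 0.0625 °C per LSB); your actual part's datasheet is the authority on the encoding:

```python
# Toy decoder for a TMP102-style I2C temperature sensor reading.
# Assumption: the sensor returns two bytes, MSB first, holding a
# 12-bit two's-complement value in units of 0.0625 degC. This is a
# common format, but always check the datasheet for your part.

def decode_temp_c(msb: int, lsb: int) -> float:
    """Convert a raw two-byte sensor read into degrees Celsius."""
    raw = (msb << 4) | (lsb >> 4)   # assemble the 12-bit value
    if raw & 0x800:                 # sign-extend negative temperatures
        raw -= 1 << 12
    return raw * 0.0625             # one LSB = 0.0625 degC

# On real hardware the two bytes would come from an I2C read
# (e.g. via smbus2 or CircuitPython); here we feed in examples.
print(decode_temp_c(0x19, 0x00))   # 0x190 = 400 -> 25.0 degC
print(decode_temp_c(0xE7, 0x00))   # negative reading -> -25.0 degC
```

The sourcing decision changes which part number lands in the cart, but a swap between pin-compatible sensors usually only touches a conversion routine like this one.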
futuretrendstoday · 4 months ago
Text
Google Updates AI Ethics Policy Amid Growing Military AI Use:
Big Tech’s Shift Toward AI-Powered Defense
Google has quietly revised its AI ethics principles, removing a key pledge to refrain from developing artificial intelligence for weapons or harmful technologies. The change comes in the wake of U.S. President Donald Trump’s decision to revoke an executive order from former President Joe Biden, which sought to promote the safe and ethical development of AI.
The update has sparked concern among human rights advocates, as Google’s previous commitment explicitly ruled out developing AI for technologies that cause harm, facilitate injury, or violate international norms on surveillance and human rights. The shift aligns Google with a growing trend of AI militarization among major tech companies, raising questions about the future role of AI in global defense strategies.
Tech Giants Increasingly Involved in Defense
The military integration of AI has accelerated in recent months. In September 2024, top officials from the Biden administration met with leading AI firms, including OpenAI, to discuss the national security implications of AI. A month later, the administration announced an initiative to leverage AI for defense purposes, signaling a shift in policy that big tech companies quickly followed.
Meta, for instance, announced in November 2024 that its Llama AI models would be made available to U.S. defense and national security agencies—despite its own policies prohibiting military applications. Around the same time, Anthropic partnered with Palantir and Amazon Web Services to provide AI solutions to intelligence and defense sectors. OpenAI soon joined the trend, collaborating with defense startup Anduril Industries to integrate its AI models into U.S. military drone defense systems.
National Security Interests vs. Ethical Concerns
Defending these policy shifts, companies cite geopolitical competition and national security threats. Google, in a recent blog post, pointed to the global AI race, rising tensions with China, and evolving security landscapes as justification for its revised AI principles.
This development follows the ongoing U.S.-China trade war over AI technology. The U.S. imposed export controls on advanced AI chips to China in October 2022, prompting retaliatory restrictions from Beijing on key raw materials for chip manufacturing. More recently, the emergence of highly efficient AI models from China’s DeepSeek, allegedly developed using U.S.-banned Nvidia chips, has further intensified competition. The growing tensions appear to be shaping how U.S. tech firms approach AI policy, particularly regarding defense applications.
The Human Cost of AI in Warfare
AI’s role in military operations has already raised ethical alarms. In the ongoing Gaza conflict, the Israeli military has reportedly relied on advanced AI tools, developed with infrastructure support from Microsoft and Google, to identify targets. However, soldiers have acknowledged inaccuracies in AI targeting, contributing to a rising civilian death toll, which Gaza authorities estimate has surpassed 61,000.
With Google’s removal of its pledge against AI applications that cause harm, human rights advocates warn that these policies may lead to further ethical dilemmas. International law emphasizes “security of person” as a fundamental right, and critics argue that Google’s policy shift undermines these principles.
The Growing Need for AI Safeguards
Despite Google’s assurance that its AI products will align with international human rights principles, organizations such as Human Rights Watch remain skeptical. They highlight the company’s failure to provide concrete explanations on how it will uphold ethical AI use while working with defense agencies.
The revocation of Biden’s executive order has further removed regulatory guardrails on AI development, making independent oversight more critical than ever. While the order was not without flaws, it represented an effort to establish responsible AI guidelines—guidelines that now seem to be fading as tech companies deepen their involvement in defense initiatives.
As AI becomes increasingly intertwined with military operations, global leaders and advocacy groups face mounting pressure to implement safeguards that prevent the unchecked proliferation of AI-driven warfare technologies.
Text
A bill signed by President Joe Biden requires ByteDance, the parent company of TikTok, to divest its US assets within nine months to a year, or face an effective ban in the US.
What do you think 🤔 about it? Tell me in the comments 💬
Text
QUANTUM LEGAL FRONTIERS 2025:
Quantum Computers Are Almost Here. But Who's Writing the Rules?
Greetings Tumblr friends,
So here’s something that’s been on my mind lately: quantum computing.
You’ve probably seen it popping up more and more — in science news, in tech forecasts, maybe even in pop culture. It’s the idea that computers are about to leap from processing bits (0s and 1s) to something way wilder: qubits. That means insanely fast computations, smarter AI, and breakthroughs in things like drug discovery, finance, and climate modeling.
Sounds exciting, right? It is.
But also... kind of terrifying. Because our laws are not ready for this.
Here’s the problem:
Quantum computers could:
Break the encryption that keeps your data safe
Render some privacy protections useless
Create AI systems that are harder to control or explain
Challenge the way we think about ownership and responsibility
Basically, the rules that protect our digital lives were written for yesterday’s tech. But quantum computing is tomorrow — and it’s coming fast.
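To make the encryption point concrete, here's a toy sketch using textbook-sized RSA numbers (never usable in practice). The whole security argument rests on factoring the public modulus being infeasible, which is exactly the assumption Shor's algorithm on a large quantum computer would remove:

```python
# Toy RSA, illustrating why fast factoring breaks the encryption.
# Deliberately tiny numbers; real keys are 2048+ bits.

p, q = 61, 53                        # secret primes
n = p * q                            # public modulus: 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: 2753

msg = 65
cipher = pow(msg, e, n)              # anyone can encrypt with (n, e)
assert pow(cipher, d, n) == msg      # only the key holder decrypts

# An attacker who can factor n recovers the private key outright.
# Brute-force factoring works here only because n is tiny; Shor's
# algorithm would make it fast even at real key sizes.
pf = next(i for i in range(2, n) if n % i == 0)
d_cracked = pow(e, -1, (pf - 1) * (n // pf - 1))
assert pow(cipher, d_cracked, n) == msg   # encryption broken
```

Post-quantum cryptography exists precisely to replace schemes built on this factoring assumption, but migrating the world's infrastructure is a legal and logistical problem as much as a technical one.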
That’s where this event comes in:
🗓️ Quantum Legal Frontiers 2025 📍 Online 📅 Happening July 3, 2025 🌐 quantumlegalfrontiers.vercel.app
This is an international online conference created by Dr. Pavan Duggal, one of the world’s top experts in cyber law and emerging tech. He’s hosted major events on AI, the metaverse, and digital governance — and now he’s turning attention to quantum law.
Why this matters:
This event will bring together legal minds, tech innovators, ethicists, scientists, and policymakers from all over the world. They’ll be talking about:
Quantum cybersecurity and what happens after encryption breaks
Privacy laws in a world where “anonymous” might not mean much anymore
Intellectual property when quantum inventions come into play
How to make quantum-powered AI accountable
What to do about quantum-enhanced cybercrime
The need for global cooperation and legal standards
It’s serious stuff, but it’s also fascinating. This is history-in-the-making level discourse — and you can be part of it.
If you're into any of these:
Tech + ethics
Law + future
Sci-fi turning into reality
The intersection of science, society, and justice
This conference is made for you.
Check it out, share it, and let’s start thinking about how we make the quantum future fair, safe, and human-centered.
✨ quantumlegalfrontiers.vercel.app
Text
Sovereign AI is reshaping national power. Aligning data infrastructure with intelligence is crucial for autonomy, security, and innovation. Explore how nations can lead in the AI era.
fatihulusoy · 19 days ago
Text
First AI-Driven Lawyers, and Now AI JUDGES?
Greetings folks. The UAE is diving headfirst into yet another tech experiment: it is now using AI to manage court paperwork and plans to "begin using artificial intelligence to help write its laws."
Yeah... Tell me it's going to fail without telling me it's going to fail.
So basically, "The Mighty", "The Home of High Tech" and "The Hivemind of IDIOTS" at UAE recently announced plans to use AI to help draft and update laws — a move Sheikh Mohammed bin Rashid Al Maktoum called a way to “accelerate legislation by up to 70%.” On the surface, it sounds like the future of governance: fast, efficient, data-driven. But before you picture AI as the legal equivalent of Harvey Specter, drafting flawless laws with a snap of its fingers, let’s take a closer look.
Why AI in lawmaking is not just a cool tech upgrade
Laws aren’t lines of code or simple contracts. They’re living frameworks shaped by human values, ethical debates, political horse-trading, and complex societal needs. AI, despite its strengths, faces big challenges here.
Bruce Schneier laid it out clearly:
“AI-generated law is not a first, and it’s not necessarily terrible, but it’s emblematic of a wider trend. The real danger isn’t AI making mistakes—humans do that too—but that AI gives those in power new, powerful tools to entrench their interests.”
The risks under the hood
Bias baked in. AI learns from existing data. If that data carries societal biases, AI replicates and amplifies them. Schneier points out, “Algorithms are only as fair as the data we feed them.” This means AI could unknowingly draft laws that deepen inequality or marginalize vulnerable groups.
Opaque decision-making. AI’s inner workings are often a “black box.” How it arrives at a suggestion or a draft isn’t always clear. Schneier warns, “When we can’t understand how a system makes decisions, we lose accountability.” Transparency is vital in lawmaking — people need to trust how laws come to be.
Oversimplification of complexity. AI reduces messy social realities to data points and patterns. But laws impact people’s lives in unpredictable, emotional, and nuanced ways. As Schneier puts it, “Security and privacy are social as well as technical problems, and algorithms don’t always get the social context.” The same applies to law.
The accountability gap. Who’s responsible if AI-crafted laws harm citizens? Unlike a human lawyer or legislator who can be held accountable, AI is a tool—no legal personhood. Schneier stresses the need for “clear accountability mechanisms before deploying AI in critical governance roles.”
A Side Story If You Will: The AI lawyer flop
There was that infamous case where an AI was used to draft legal contracts but ended up producing flawed, inconsistent documents—missing critical clauses and creating legal landmines. It was a stark reminder: AI can assist, but it can’t replace human legal judgment anytime soon. The stakes in lawmaking are way too high for rookie mistakes.
The UAE’s AI law initiative: a double-edged sword?
Schneier’s full take highlights the UAE’s $3 billion plan to become an “AI-native” government. It’s ambitious and far-reaching. But, crucially, the UAE is a federation of monarchies with limited political rights and a history of centralized power.
Schneier notes:
“AI’s capability to write complex laws can be exploited to embed policy preferences subtly, carving out ‘microlegislation’ loopholes that favor the powerful—something political scientist Amy McKay warns about.”
In other words, AI could become a sophisticated tool for power concentration, not democratization.
What about speed?
While speeding up lawmaking sounds great, Schneier cautions:
“Drafting isn’t the bottleneck. Humans still need to debate, amend, and agree on laws. The political horse-trading doesn’t get faster just because AI drafts faster.”
The hopeful side: AI for public engagement
AI can be a force for good in lawmaking if used to enhance transparency and public participation. Schneier points to experiments worldwide—like in Kentucky, Massachusetts, France, and Taiwan—where AI-powered platforms help governments listen better to constituents and build more inclusive policies.
For the UAE, the challenge is clear:
“If you’re building an AI-native government, do it to empower people, not just machines.”
Final take
Don't get me wrong: AI is a powerful tool with enormous potential, but in lawmaking... it's just that: a tool. It's not the final arbiter. Until AI can be made transparent, fair, and accountable, human judgment, empathy, and oversight remain irreplaceable.
Think of AI like the eager associate on a legal team—great at research and support, but the partners (humans) must still make the tough calls. Skip that, and you risk creating a legal mess that no closer, even Harvey Specter or otherwise, can fix.
SOURCES:
Bruce Schneier, AI-Generated Law — Full article (2025.05.15) https://www.schneier.com/blog/archives/2025/05/ai-generated-law.html
Amy McKay on Microlegislation (cited in Schneier’s article) — Political Science perspectives on AI and law loopholes (search scholarly articles or summaries)
UAE’s announcement of AI use in lawmaking (news coverage example) https://www.thenationalnews.com/uae/government/2025/04/15/uae-launches-ai-to-help-write-laws/
Ohio AI regulatory revision success story https://www.governing.com/next/ohio-uses-ai-to-trim-unnecessary-laws.html
gedzolini · 2 months ago
Text
Hiring Algorithmic Bias: Why AI Recruiting Tools Need to Be Regulated Just Like Human Recruiters
For millions of job seekers around the world, artificial intelligence now stands between them and their next role. Ironically, despite its promise to make hiring faster and fairer, AI tends to inherit and magnify human prejudices; that promise is exactly why companies like Pymetrics, HireVue, and Amazon adopted it. If these automated hiring technologies are allowed to operate unchecked, their systematic prejudice may be harder to spot and stop than bias from human recruiters. This raises a crucial question: should automated hiring algorithms be governed by the same rules as human decision-makers? As mounting evidence suggests, the answer must be yes.
AI's Rise in Hiring
The use of AI in hiring is no longer futuristic; it is mainstream. According to Resume Genius, around 48% of hiring managers in the U.S. use AI to support HR activities, and adoption is expected to grow. These systems sort through resumes, rank applicants, analyze video interviews, and even predict a candidate's future job performance based on behavior or speech patterns. The objective is to lower expenses, reduce bias, and decrease human mistakes. But AI can only be as good as the data it is trained on, and the technology can reinforce historical injustices if the data reflects them.

Amazon's hiring tool is one of the clearest examples. In 2014, Amazon created a tool that assigned scores to applicants' résumés, aiming to automate the selection process and identify top talent more effectively. By 2015, however, its engineers had identified a serious flaw: the AI discriminated against women. Why? Because it had been trained on a decade of résumés submitted to Amazon, the majority of which came from men. The algorithm consequently began to penalize résumés that mentioned attendance at all-women's colleges or contained phrases like "women's chess club captain." Bias persisted in the system despite efforts to "neutralize" gendered words, and in 2017 Amazon quietly abandoned the project.

This is a warning about the societal repercussions of using opaque tools to automate important life opportunities, not merely a technical error. So, where does the law stand?
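Before turning to the law, it's worth seeing how easily the Amazon failure reproduces in miniature. The sketch below uses made-up synthetic data and a naive log-odds scorer (nothing like Amazon's actual system) to show how a gendered proxy token picks up a negative weight even though the model never sees gender:

```python
import math

# Synthetic training data: past hires skewed male, so a token like
# "womens_club" appears almost only on resumes that were rejected.
hired    = [["python", "sql"]] * 90 + [["python", "womens_club"]] * 2
rejected = [["python"]] * 60 + [["python", "womens_club"]] * 40

def token_weight(token, pos, neg, alpha=1.0):
    """Naive log-odds score for a token, with add-alpha smoothing."""
    p = (sum(token in r for r in pos) + alpha) / (len(pos) + 2 * alpha)
    n = (sum(token in r for r in neg) + alpha) / (len(neg) + 2 * alpha)
    return math.log(p / n)

# The gendered proxy token gets a strongly negative weight,
# replicating the historical bias baked into the labels.
print(token_weight("python", hired, rejected))       # near 0: uninformative
print(token_weight("womens_club", hired, rejected))  # clearly negative
```

No "neutralizing" of the word list fixes this in general: any correlated proxy (a college name, a sport, a club) can carry the same signal.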
Legal and Ethical Views on AI Bias
The U.S. Equal Employment Opportunity Commission (EEOC) has recognized the growing issue. To ensure that algorithmic employment practices comply with civil rights legislation, the EEOC and the Department of Justice launched a joint initiative on algorithmic fairness in May 2022. Technical guidance followed on how Title VII of the Civil Rights Act, which forbids employment discrimination, applies to algorithmic tools.
The EEOC’s plan includes:
Establishing an internal working group to coordinate efforts across the agency.
Hosting listening sessions with employers, vendors, researchers, and civil rights groups to understand the real-world impact of hiring technologies.
Gathering data on how algorithmic tools are being adopted, designed, and deployed in the workplace.
Identifying promising practices for ensuring fairness in AI systems.
Issuing technical assistance to help employers navigate the legal and ethical use of AI in hiring decisions.
But there's a problem: most laws were written with human decision-makers in mind, and regulators are still catching up with technologies that evolve faster than legislation. Some states, like Illinois and New York, have passed laws requiring bias audits or transparency in hiring tools, but these are exceptions, not the rule. The vast majority of hiring algorithms still operate in a regulatory gray zone.

This regulatory gap becomes especially troubling when AI systems replicate the very biases that human decision-makers are legally prohibited from acting on. If an HR manager refused to interview a woman simply because she led a women's tech club, it would be a clear violation of employment law. Why should an AI system that does the same get a pass? Here are some reasons AI hiring tools must face the same scrutiny as humans:
Lack of Transparency
AI systems are often “black boxes”, their decision-making logic is hidden, even from the companies that deploy them. Job applicants frequently don’t know an algorithm was involved, let alone how to contest its decisions.
Scale of Harm
A biased recruiter might discriminate against a few candidates. A biased algorithm can reject thousands in seconds. The scalability of harm is enormous and invisible unless proactively audited.
Accountability Gap
When things go wrong, who is responsible? The vendor that built the tool? The employer who used it? The engineer who trained it? Current frameworks rarely provide clear answers.
Public Trust
Surveys suggest that public confidence in AI hiring is low. A 2021 Pew Research study found that a majority of Americans oppose the use of AI in hiring decisions, citing fairness and accountability as top concerns.
Relying solely on voluntary best practices is no longer sufficient due to the size, opacity, and influence of AI hiring tools. Strong regulatory frameworks must be in place to guarantee that these technologies be created and used responsibly if they are to gain the public's trust and function within moral and legal bounds.
What Regulation Should Look Like
Significant safeguards must be implemented to guarantee that AI promotes fairness rather than undermining it. These should include:
Mandatory bias audits by independent third parties.
Algorithmic transparency, including disclosures to applicants when AI is used.
Explainability requirements to help users understand and contest decisions.
Data diversity mandates, ensuring training datasets reflect real-world demographics.
Clear legal accountability for companies deploying biased systems.
European regulators are already taking this approach. The EU's AI Act classifies hiring tools as "high-risk" and places strict constraints on their use, such as regular risk assessments and human oversight.
The answer is to improve AI, not abandon it. Promising work is underway on "fairness-aware" algorithms that balance predictive accuracy against social equality. Businesses such as Pymetrics have pledged to mitigate bias and undergo third-party audits, and developers can assess and reduce bias with open-source toolkits such as Microsoft's Fairlearn and IBM's AI Fairness 360.

Fairlearn is a Python library that helps assess and resolve fairness concerns in machine learning models; it offers mitigation algorithms and visualization dashboards that can reduce differences in predicted performance across demographic groups. AI Fairness 360 (AIF360) is a more comprehensive toolkit, with ten bias-mitigation algorithms and more than 70 fairness metrics, and its support for pre-, in-, and post-processing approaches makes it highly adaptable to real-world pipelines.

By integrating such tools into the development pipeline, businesses can proactively detect and resolve bias before it affects anyone's job prospects. These resources show that fairness is an achievable objective, not merely an ideal.
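As a minimal sketch of the core disparity metric these toolkits report, here is a hand-rolled demographic parity difference (not Fairlearn's actual API) run on synthetic screening outcomes:

```python
# Demographic parity difference: the gap in selection (positive-
# prediction) rates between demographic groups. Fairlearn and AIF360
# compute this among many other metrics; this standalone version just
# shows what a basic "bias audit" measures.

def selection_rate(preds):
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [selection_rate(ps) for ps in by_group.values()]
    return max(rates) - min(rates)

# Synthetic screening outcomes: 1 = advanced to interview.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

# Group "m" advances 4/5, group "f" only 1/5: a gap of about 0.6
# that an auditor would flag long before deployment.
print(demographic_parity_difference(preds, groups))
```

A mandated bias audit is essentially this computation, done at scale, on real pipelines, by an independent party, across many metrics and protected attributes.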
Conclusion
Fairness, accountability, and public trust are all at considerable risk from AI's unrestrained use as it continues to influence hiring practices. With the size and opacity of these tools, algorithmic systems must be held to the same norms that shield job seekers from human prejudice, if not more rigorously. The goal of regulating AI in employment is to prevent technological advancement from compromising equal opportunity, not to hinder innovation. We can create AI systems that enhance rather than undermine a just labor market if we have the appropriate regulations, audits, and resources. Whether the decision-maker is a human or a machine, fair hiring should never be left up to chance.
adafruit · 2 months ago
Text
The Great Search - How to Source Tariffed & Non-Tariffed Items on DigiKey 📦🌎💸 https://youtu.be/I-eD6FSM0bU
samalexs-stuff · 2 months ago
Text
Why Lawmakers Are Urging Action on Chinese Agricultural Drones
US lawmakers are calling for stricter regulations on Chinese agricultural drones to mitigate national security risks. As farming technology evolves, should security or innovation take precedence? Let’s dive into the potential risks and solutions.
Store link: https://samliglobal.en.alibaba.com/
Zhongshan Samli Drones Co. Ltd
CONTACT US
WhatsApp: +86 152 1870 3002
We chat: Samli 2022
mail address: [email protected]
aihanleyue · 2 months ago
Text
Why Lawmakers Are Urging Action on Chinese Agricultural Drones
US lawmakers are calling for stricter regulations on Chinese agricultural drones to mitigate national security risks. As farming technology evolves, should security or innovation take precedence? Let’s dive into the potential risks and solutions.
The rise of Chinese agricultural drones in the US has sparked debates around national security, data privacy, and environmental impacts. Lawmakers are pushing for new regulations and domestic manufacturing incentives to ensure US farmers are protected and innovation continues.
What do you think about the use of Chinese drones in US agriculture?