#NIST guidelines
Text
NIST supply chain security guidance for CI/CD environments: What you need to know

With the proposed CI/CD guidelines (NIST SP 800-204D), NIST aims to help developers build more secure software by addressing risks in the supply chain. https://jpmellojr.blogspot.com/2023/09/nist-supply-chain-security-guidance.html
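One control this class of guidance covers is verifying build artifacts against pinned digests before a pipeline consumes them. Below is a minimal sketch, assuming a hypothetical artifact and a digest pinned elsewhere (e.g., in a lockfile or provenance record); it is illustrative, not a procedure taken from SP 800-204D:

```python
import hashlib
import hmac

def verify_artifact(artifact_bytes: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact matches its pinned SHA-256 digest."""
    actual = hashlib.sha256(artifact_bytes).hexdigest()
    # Constant-time comparison, so the check itself leaks no timing information.
    return hmac.compare_digest(actual, expected_sha256)

# A real pipeline step would read the pinned digest from a lockfile or signed
# provenance record; here we derive it inline purely for illustration.
pinned = hashlib.sha256(b"release-v1.0").hexdigest()
print(verify_artifact(b"release-v1.0", pinned))    # True
print(verify_artifact(b"tampered-bytes", pinned))  # False
```

A CI job that fails when this returns False stops a tampered dependency from propagating further down the pipeline.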
#NIST #supplychainsecurity #CI/CD #DevSecOps #softwaredevelopment #frameworks #guidelines #pipelinesecurity #cloudnative #microservices #securitydebt
0 notes
Text
He should be arrested for violating our privacy. He was not vetted by Congress and has no security clearance.
Contact your state’s attorney general and request help.
Can we ask the ACLU to file a class action suit? Who’s with me?
“Let’s get into the details. Musk’s staffers have been caught plugging external hard drives into federal agency systems and reportedly locking others out of private rooms to perform—who knows what actions. This behavior violates key cybersecurity laws under FISMA and NIST guidelines, which are designed to protect sensitive federal information. Here’s why this is a serious problem.
Federal systems are strictly regulated, allowing only approved devices to connect. Unauthorized external drives can introduce viruses, ransomware, or other harmful software that may compromise entire networks and disrupt essential operations. This puts system stability and continuity of services at risk, endangering critical infrastructure.
These devices could also be used to steal or damage critical information, including personal data for millions of Americans—such as Social Security recipients and taxpayers. Unauthorized access creates significant vulnerabilities, exposing sensitive data to the risk of cyberattacks. Such attacks could cripple vital services and compromise the privacy and safety of millions of people.
Additionally, federal agencies have strict access controls to prevent unauthorized data manipulation or theft. When unauthorized devices are connected, these protections are bypassed, allowing unauthorized users to potentially alter or extract sensitive data. This undermines system integrity and opens the door to both internal and external threats.
External drives also often lack essential security features, such as encryption and antivirus scanning, making them vulnerable to cybercriminal exploitation. These security gaps further increase the risk of data breaches and system compromise, which can have far-reaching consequences.
Federal systems handle trillions of dollars in payments and manage personal data for millions of U.S. citizens. By bypassing cybersecurity laws and protocols, Musk’s staffers are putting these systems—and the public—at serious risk. This activity is illegal, reckless, and unacceptable. Immediate oversight and intervention are necessary to stop these violations!” ~ A N P S
225 notes
Photo

(via NIST proposes barring some of the most nonsensical password rules | Ars Technica)
A section devoted to passwords injects a large helping of badly needed common sense practices that challenge common policies. An example: The new rules bar the requirement that end users periodically change their passwords. This requirement came into being decades ago when password security was poorly understood, and it was common for people to choose common names, dictionary words, and other secrets that were easily guessed.
Since then, most services require the use of stronger passwords made up of randomly generated characters or phrases. When passwords are chosen properly, the requirement to periodically change them, typically every one to three months, can actually diminish security because the added burden incentivizes weaker passwords that are easier for people to set and remember.
Another requirement that often does more harm than good is the required use of certain characters, such as at least one number, one special character, and one upper- and lowercase letter. When passwords are sufficiently long and random, there’s no benefit from requiring or restricting the use of certain characters. And again, rules governing composition can actually lead to people choosing weaker passcodes.
The latest NIST guidelines now state that:
Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords and
Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
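A minimal validator following these rules checks only length and a compromised-password list, with no composition rules and no scheduled expiry. The blocklist below is a tiny stand-in; the guidelines expect screening against a real corpus of breached and commonly used passwords:

```python
# Hypothetical blocklist; a real deployment would screen candidates against a
# large corpus of breached and commonly used passwords.
BLOCKLIST = {"password", "12345678", "qwertyuiop"}

def check_password(candidate: str, min_length: int = 8) -> tuple[bool, str]:
    """Length floor plus blocklist screening; deliberately no composition
    rules and no periodic-change requirement, per the draft guidance."""
    if len(candidate) < min_length:
        return False, f"must be at least {min_length} characters"
    if candidate.lower() in BLOCKLIST:
        return False, "appears on a compromised-password list"
    return True, "ok"

print(check_password("correct horse battery staple"))  # (True, 'ok')
print(check_password("12345678"))  # fails the blocklist check
```

A forced reset would then be triggered only on evidence of compromise, not on a calendar.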
Let’s hope this becomes the new standard SOON.
#NIST #STOP making end users periodically change their passwords #STOP requiring different character types
20 notes
Text
Researchers found a vulnerability in a Kia web portal that allowed them to track millions of cars, unlock doors, honk horns, and even start engines in seconds, just by reading the car's license plate. The findings are the latest in a string of web bugs that have impacted dozens of carmakers. Meanwhile, a handful of Tesla Cybertrucks have been outfitted for war and are literally being battle-tested by Chechen forces fighting in Ukraine as part of Russia’s ongoing invasion.
As Israel escalates its attacks on Lebanon, civilians on both sides of the conflict have been receiving ominous text messages—and authorities in each country are accusing the other of psychological warfare. The US government has increasingly condemned Russia-backed media outlets like RT for working closely with Russian intelligence—and many digital platforms have removed or banned their content. But they’re still influential and trusted alternative sources of information in many parts of the world.
And there's more. Each week, we round up the privacy and security news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
New Digital Identity Guidelines Strike Back at Dreadful Password Policies
A new draft of the US National Institute of Standards and Technology's “Digital Identity Guidelines” finally takes steps to eliminate reviled password management practices that have been shown to do more harm than good. The recommendations, which will be mandatory for US federal government entities and serve as guidelines for everyone else, ban the practice of requiring users to periodically change their account passwords, often every 90 days.
The policy of regularly changing passwords evolved out of a desire to ensure that people weren't choosing easily guessable or reused passwords; but in practice, it causes people to choose simple or formulaic passwords so they will be easier to keep track of. The new recommendations also ban “composition rules,” like requiring a certain number or mix of capital letters, numbers, and punctuation marks in each password. NIST writes in the draft that the goal of the Digital Identity Guidelines is to provide “foundational risk management processes and requirements that enable the implementation of secure, private, equitable, and accessible identity systems.”
DOJ Indicts Alleged Iranian Hackers Over Trump Campaign Breach
The US Department of Justice unsealed charges on Friday against three Iranian men who allegedly compromised Donald Trump’s presidential campaign and leaked stolen data to media outlets. Microsoft and Google warned last month that an Iranian state-sponsored hacking group known as APT42 had targeted both the Joe Biden and Donald Trump presidential campaigns, and successfully breached the Trump campaign. The DOJ claims the hackers compromised a dozen people as part of its operation, including a journalist, a human rights advocate, and several former US officials. More broadly, the US government has said in recent weeks that Iran is attempting to interfere in the 2024 election.
“The defendants’ own words made clear that they were attempting to undermine former President Trump’s campaign in advance of the 2024 U.S. presidential election,” Attorney General Merrick Garland said at a press conference on Friday. "We know that Iran is continuing with its brazen efforts to stoke discord, erode confidence in the US electoral process, and advance its malign activities.”
Irish Regulator Fines Meta More Than $100 Million Over 2019 Password Lapse
The Irish Data Protection Commission fined Meta €91 million, or roughly $101 million, on Friday for a password storage lapse in 2019 that violated the European Union's General Data Protection Regulation. Following a report by Krebs on Security, the company acknowledged in March 2019 that a bug in its password management systems had caused hundreds of millions of Facebook, Facebook Lite, and Instagram passwords to be stored without protection in plaintext in an internal platform. Ireland's privacy watchdog launched its investigation into the incident in April 2019.
“It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data," Irish DPC deputy commissioner Graham Doyle said in a statement. “It must be borne in mind that the passwords, the subject of consideration in this case, are particularly sensitive, as they would enable access to users’ social media accounts.”
The Tor Project and the Tails Privacy Operating System Are Merging
The digital anonymity nonprofit the Tor Project is merging with privacy- and anonymity-focused Linux-based operating system Tails. Pavel Zoneff, the Tor Project’s communications director, wrote in a blog post on Thursday that the move will facilitate collaboration and reduce costs, while expanding both groups' reach. “Tor and Tails provide essential tools to help people around the world stay safe online,” he wrote. “By joining forces, these two privacy advocates will pool their resources to focus on what matters most: ensuring that activists, journalists, other at-risk and everyday users will have access to improved digital security tools.”
19 notes
Text
NIST Scraps Password Complexity and Mandatory Changes in New Guidelines

Source: https://www.infosecurity-magazine.com/news/nist-scraps-passwords-mandatory/
More info: https://pages.nist.gov/800-63-4/sp800-63b.html
5 notes
Text
South Korea Flow Cytometry Market Growth Forecast (2024–2033)

In South Korea, flow cytometry has become a game-changing technique thanks to developments in both industrial and research applications. By enabling high-throughput analysis of individual cells, this versatile technology supports applications ranging from clinical diagnostics to industrial biotechnology. With major contributions from organizations like Becton, Dickinson and Company, Danaher Corporation (Beckman Coulter), and Thermo Fisher Scientific Inc., South Korea's dedication to innovation and R&D spending has accelerated the adoption of flow cytometry.
Despite obstacles such as high equipment costs and technological complexity, the South Korean flow cytometry industry is expected to grow at a promising rate, driven by government programs and rising investments in biotech and healthcare.
The South Korea flow cytometry market was valued at $74.55 million in 2023 and is projected to reach $244.5 million by 2033, growing at a robust CAGR of 12.89% during the forecast period (2024-2033). Investments in flow cytometry in the country saw a significant year-on-year increase of 23.23% from $431.3 million in 2020 to $531.5 million in 2021.
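As a quick sanity check, the stated figures can be plugged into the standard CAGR formula, assuming a simple 10-year compounding span between the 2023 and 2033 values (the published 12.89% may reflect a different base year or compounding convention):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Market size endpoints (USD millions), assuming a 10-year compounding span.
implied = cagr(74.55, 244.5, 10)
print(f"implied CAGR: {implied:.2%}")

# Year-on-year investment growth, 2020 -> 2021.
growth = 531.5 / 431.3 - 1
print(f"investment growth: {growth:.2%}")  # matches the stated 23.23%
```

The same helper works for any pair of endpoint valuations, which makes it easy to compare forecasts that quote different periods.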

SWOT Analysis
Strengths
• Strong emphasis on R&D, with investments growing by 23.23% in one year.
• Versatile applications across clinical, research, and industrial sectors.
• Advanced technological offerings from global leaders like BD and Thermo Fisher Scientific.
Weaknesses
• High cost of equipment, ranging from $100,000 to $500,000 per unit, limits accessibility for smaller institutions.
• Technological complexity necessitates specialized training and expertise.
Opportunities
• Increasing healthcare investments and an aging population with rising demand for diagnostic services.
• Advancements in automation and AI integration in flow cytometry systems.
• Growing adoption in emerging applications like immunotherapy and cell-based assays.
Threats
• Economic downturns potentially impacting healthcare and research funding.
• Dependence on imported equipment increases vulnerability to supply chain disruptions.
Key Regulatory Bodies
1. Clinical and Laboratory Standards Institute (CLSI):
• Provides guidelines for assay validation, instrument monitoring, and quality control.
• Relevant guidelines include:
  o H62: Validation of Assays Performed by Flow Cytometry
  o H43-A2: Clinical Flow Cytometric Analysis of Neoplastic Hematolymphoid Cells
  o H52-A2: Red Blood Cell Diagnostic Testing Using Flow Cytometry
2. National Institute of Standards and Technology (NIST):
• Focuses on creating standards for quantitative flow cytometry measurements.
• Collaborates with South Korea’s Korea Research Institute of Standards and Science.
Who are the key players in the South Korean flow cytometry market?

Conclusion
The market for flow cytometry in South Korea is expected to grow significantly, driven by rising biotechnology and healthcare investments, technological breakthroughs, and expanding diagnostic and research applications. Despite obstacles such as high costs and dependency on imports, and with opportunities in automation, AI integration, and developing clinical applications, South Korea is positioned as a major player in the global flow cytometry landscape. Strategic partnerships and adherence to strict regulatory standards will further support the trajectory of the South Korean flow cytometry market over the next ten years.
#South Korea Flow Cytometry Market #South Korea Flow Cytometry Industry #South Korea Flow Cytometry Report #health #healthcare
0 notes
Text
ISO/IEC 42001:2023 Awareness Course – AI Management System
The ISO/IEC 42001:2023 Awareness Course is a foundational learning experience designed for professionals, decision-makers, and technical teams seeking to understand and implement a robust Artificial Intelligence Management System (AIMS). As global adoption of AI accelerates, so do concerns about its ethical use, regulatory alignment, and organizational control. The ISO/IEC 42001:2023 standard is the first of its kind, providing a globally recognized framework specifically for managing the lifecycle, risks, governance, and accountability of AI systems in enterprises.
At the heart of this awareness course is a commitment to responsible and structured AI deployment, aligned with the guidelines and controls outlined in ISO/IEC 42001:2023. Whether you’re a compliance officer, project manager, C-suite executive, AI engineer, or risk officer, understanding this new standard is critical for ensuring that your AI systems are safe, transparent, ethical, and compliant.
By participating in the course, you will gain a practical understanding of the scope and structure of the standard, how it fits into existing ISO management system frameworks (such as ISO/IEC 27001 and ISO 9001), and how to apply its principles to real-world AI development, deployment, and oversight. With a focus on risk-based thinking, transparency controls, and continuous improvement, this awareness course empowers you to begin aligning your organization’s AI capabilities with global best practices – a move that can build public trust, reduce liability, and drive innovation responsibly.
This course breaks down the clauses and annexes of ISO/IEC 42001:2023, explaining how organizations can:
• Establish an AI policy that supports their business strategy and stakeholder expectations
• Define the boundaries and context for AI systems
• Integrate lifecycle-based AI risk assessments
• Monitor, evaluate, and audit AI-related activities
• Ensure data quality, human oversight, and accountability mechanisms
The ISO/IEC 42001:2023 Awareness Course is not just theory. It includes contextualized examples, practical scenarios, and simplified guides to bring each section of the standard to life. You will also learn how ISO 42001 supports ESG goals, regulatory compliance (such as the EU AI Act or NIST AI RMF), and sustainable innovation practices.
Why take this course now? AI is no longer a futuristic concept – it is an operational reality. Failing to adopt a structured AI governance approach can result in algorithmic bias, regulatory fines, data breaches, and reputational harm. ISO/IEC 42001:2023 is your roadmap to confidently managing AI responsibly and transparently. Join our ISO/IEC 42001:2023 Awareness Course to stay ahead of regulatory requirements, reduce operational risk, and build a future-ready AI strategy.
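As a purely illustrative sketch, not something drawn from the standard's text, the lifecycle-based risk assessments described above are often backed in practice by a simple risk register; the systems, scores, and threshold below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    system: str        # the AI system the risk attaches to
    description: str
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)
    treatment: str = "untreated"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: AIRisk) -> None:
        self.risks.append(risk)

    def needs_escalation(self, threshold: int = 12) -> list:
        """Risks at or above the threshold, highest score first."""
        return sorted((r for r in self.risks if r.score >= threshold),
                      key=lambda r: r.score, reverse=True)

register = RiskRegister()
register.add(AIRisk("loan-scoring model", "unmitigated demographic bias", 4, 5))
register.add(AIRisk("support chatbot", "hallucinated policy answers", 3, 2))
print([r.system for r in register.needs_escalation()])  # ['loan-scoring model']
```

Reviewing and re-scoring such a register at each lifecycle stage is one concrete way to make monitoring and auditing of AI activities routine rather than ad hoc.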
0 notes
Text
How does Velthrad help businesses meet ADGM cybersecurity standards?

In today's digital-first era, cybersecurity has become central to regulatory regimes such as the Abu Dhabi Global Market (ADGM). Given the growing sensitivity of data and mounting compliance obligations, businesses operating within ADGM must follow stringent cybersecurity requirements. As a leading provider of cybersecurity and IT infrastructure services, Velthrad plays a key role in helping businesses meet and exceed them.
Understanding ADGM's Cybersecurity Requirements
The ADGM places a high value on information security governance, data privacy, threat detection, and risk management. Businesses operating in ADGM are expected to put robust cybersecurity frameworks in place that comply with UAE regulatory requirements as well as international best practices such as ISO/IEC 27001 and the NIST frameworks.
Meeting these criteria takes more than basic IT support; it requires strategic, tailored cybersecurity planning and execution. This is where Velthrad becomes a crucial partner.
How Velthrad Supports Compliance with ADGM Cybersecurity Guidelines
1. Advanced Detection and Prevention of Threats
Velthrad uses AI-powered threat intelligence tools, intrusion detection systems, and next-generation firewalls to proactively detect and neutralize cyber threats. This real-time protection aligns closely with ADGM's requirements for continuous monitoring and prompt incident response.
2. Assistance with Regulatory Compliance
Velthrad provides customised solutions to help companies align their cybersecurity practices with ADGM's compliance requirements. Their team of professionals ensures appropriate documentation, data security protocols, and system configurations that lower regulatory risk and keep companies audit-ready.
Zero Trust Architecture
In line with contemporary cybersecurity principles, Velthrad applies a Zero Trust strategy: every user, device, and connection is verified before access is granted. This model works particularly well in settings like ADGM, where sensitive financial and legal information is exchanged and stored.
4. Privacy Controls and Data Encryption
Velthrad uses enterprise-grade encryption to protect data both in transit and at rest. As required by ADGM, it strengthens data confidentiality through strict access restrictions and role-based security controls.
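A deny-by-default role check of the kind described here can be sketched in a few lines; the roles and actions below are hypothetical examples, not Velthrad's or ADGM's actual scheme:

```python
# Hypothetical role-permission mapping; roles and actions are illustrative.
ROLE_PERMISSIONS = {
    "analyst":  {"read"},
    "engineer": {"read", "write"},
    "admin":    {"read", "write", "manage_keys"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unknown actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read"))     # True
print(is_allowed("analyst", "write"))    # False
print(is_allowed("contractor", "read"))  # False: unknown role, denied
```

The key design choice is that absence from the mapping means denial, so a misconfigured or unrecognized role can never gain access by accident.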
Security Audits and Risk Assessments
Through frequent security audits and thorough risk assessments, Velthrad helps companies find weaknesses in their cybersecurity posture. Businesses in ADGM must undergo such assessments to remain compliant and resilient against evolving threats.
Employee Awareness and Training
Recognizing that human error is a leading factor in cyber attacks, Velthrad also runs security awareness programmes. Through these, employees learn key elements of ADGM's people-centric cybersecurity approach, such as phishing awareness, password hygiene, and safe digital habits.
In conclusion
For companies in ADGM, Velthrad is more than a cybersecurity vendor; it is a strategic security partner. By providing proactive protection, regulatory alignment, and a commitment to evolving best practices, Velthrad enables businesses to operate safely and securely within the Abu Dhabi Global Market's regulatory ecosystem. For businesses looking to satisfy ADGM's demanding cybersecurity criteria, Velthrad is a reliable, forward-looking partner.
0 notes
Link
#AICompliance#automatedauditing#breachprevention#cloudsecurity#EUCS#FedRAMP#PCIDSS#regulatorytechnology
0 notes
Text
Why Factory Verified Calibration Services Matter: Choosing the Right Calibration Company
As pioneers in the test and measurement sector, Arabcal LLC defines the standard for precision and dependability. Through our calibration solutions, you receive factory-level accuracy for your equipment. Experience the advantage of working with experts who have led the way in measurement excellence for decades.
Explore the Exceptional Benefits of Arabcal LLC Certified Calibration Solutions
Comprehensive Maintenance Checks: Our Factory Verified Calibration goes beyond standard calibrations, incorporating additional preventative maintenance checks across the full range of instrument operation, keeping your equipment in top shape and optimizing performance throughout the service plan.
Firmware and Software Upgrades: Stay ahead of the curve with regular firmware and software updates for your instruments. Experience the latest features and enhancements, ensuring your equipment operates at its full potential.
What is the difference between Factory-Verified & OEM Compliant Calibration?
In the accuracy-focused field of calibration, recognizing the differences between service types is essential to maintaining peak instrument performance. Arabcal LLC, a frontrunner in calibration solutions, provides two specialized services: Factory-Verified Calibration and OEM-Compliant Calibration.
High-Quality, Accurate, and Accredited Calibration Services
We are proud of our team of expert Calibration Specialists and Customer Support Engineers, whose exceptional knowledge ensures reliable servicing of a broad spectrum of test and measurement instruments.
You can trust that each instrument is calibrated with meticulous attention to detail, adhering to strict certification and accreditation standards set by independent, recognized bodies and aligned with NIST traceability guidelines.
With Arabcal LLC, you can have complete confidence that your critical equipment is handled by experts who are committed to delivering precision, dependability, and consistent accuracy.
Tailored Solutions for Your Specific Needs
Every engineering and manufacturing operation is unique, and we recognize that. That’s why we don’t believe in a one-size-fits-all approach. Our committed team of Calibration Experts, Material Coordinators, and Customer Support Engineers works collaboratively with you to understand your specific needs, challenges, and objectives. We then craft tailored service solutions that align seamlessly with your goals, empowering you to concentrate on innovation and optimizing production performance.
Trusted Partnership Approach
At Arabcal LLC, we view ourselves not merely as service providers but as dedicated partners in your success. Our focus on fostering long-term partnerships ensures you can depend on us for not only exceptional service but also continuous support and teamwork. Together, we’ll navigate challenges and drive your operational goals forward.
Arabcal LLC: Setting the Standard for Precision Calibration Services in Qatar
In conclusion, when it comes to precision, reliability, and customized calibration solutions, Arabcal LLC stands out among the leading calibration companies in Qatar. With a commitment to excellence, advanced technology, and a customer-first approach, Arabcal delivers factory-verified and OEM-compliant calibration services that meet the highest industry standards. Their expert team ensures your equipment performs at its best, making Arabcal LLC the trusted partner for businesses seeking accuracy, compliance, and long-term operational success in Qatar’s growing industrial landscape.
#instrument and equipment supplier in uae #calibration #iso 17025 accredited calibration company in uae #calibration lab #qatar
0 notes
Text
IAQ-Calc Indoor Air Quality Meters 7545
The 7545 IAQ Meter simultaneously measures and data logs multiple parameters, including CO, CO₂, temperature, and humidity, and calculates dew point, wet bulb temperature, and percent outside air. The Indoor Air Quality Meter 7545 is an outstanding instrument for investigating and monitoring indoor air quality (IAQ). The proprietary TSI LogDat2™ Downloading Software permits easy transfer of data to a computer. Data can be reviewed on-screen or downloaded to a computer for easy report generation. The statistics function displays average, maximum, and minimum values and the number of recorded samples.
Applications:
Verify building HVAC system performance
Examine building IAQ conditions to optimize worker productivity
Comply with regulations and guidelines
Included Items:
Instrument
Hard carrying-case
(4) AA alkaline batteries
Operation and service manual
NIST calibration certificate
LogDat2 Downloading Software with USB cable
AC adapter
Features and Benefits:
Low-drift NDIR CO₂ sensor for stable, accurate readings
Temperature and relative humidity measurements help determine thermal comfort
Directly calculates dew point and wet bulb temperatures
Electrochemical sensor measures CO
Calculates % outside air from either CO₂ or temperature
Displays up to three parameters
Logs up to 26,900 data points with the four key measured parameters enabled
Sampling function records multiple point measurements
Visit www.technovalue.in for more details
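The percent-outside-air value the meter derives from CO₂ readings follows the standard mixed-air balance used in HVAC work; a minimal sketch with illustrative readings (the ppm values below are examples, not data from the 7545):

```python
def percent_outside_air(c_return: float, c_supply: float, c_outdoor: float) -> float:
    """Percent outside air from CO2 concentrations (ppm) via the standard
    mixed-air balance: %OA = (Cr - Cs) / (Cr - Co) * 100."""
    if c_return == c_outdoor:
        raise ValueError("return and outdoor CO2 concentrations must differ")
    return (c_return - c_supply) / (c_return - c_outdoor) * 100

# Illustrative readings: 800 ppm return air, 650 ppm mixed air, 400 ppm outdoors.
print(percent_outside_air(800, 650, 400))  # 37.5
```

The same balance works with temperature in place of CO₂, which is why the instrument can compute % outside air from either parameter.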
#BuildingCommissioning #FacilityManagement #WorkplaceWellness #SmartBuildings #GreenBuildings #OccupantHealth #CO2Monitoring #HumiditySensor #TemperatureSensor #NDIRSensor #DewPoint
0 notes
Text
The Biden administration’s approach to the governance of artificial intelligence (AI) began with the Blueprint for an AI Bill of Rights, released in October 2022. This framework highlighted five key principles to guide responsible AI development, including protections against algorithmic bias, privacy considerations, and the right to human oversight.
These early efforts set the tone for more extensive action, leading to the release of the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, or the White House EO on AI, on October 30, 2023. This EO marked a critical step in defining AI regulation and accountability across multiple sectors, emphasizing a “whole-of-government” approach to address both opportunities and risks associated with AI. Last week, it reached its one-year anniversary.
The 2023 Executive Order on Artificial Intelligence represents one of the U.S. government’s most comprehensive efforts to secure the development and application of AI technology. This EO set ambitious goals aimed at establishing the U.S. as a leader in safe, ethical, and responsible AI use. Specifically, the EO directed federal agencies to address several core areas: managing dual-use AI models, implementing rigorous testing protocols for high-risk AI systems, enforcing accountability measures, safeguarding civil rights, and promoting transparency across the AI lifecycle. These initiatives are designed to mitigate potential security risks and uphold democratic values while fostering public trust in the rapidly advancing field of AI.
To recognize the one-year anniversary of the EO, the White House released a scorecard of achievements, pointing to the elevated work of various federal agencies, the voluntary agreements made with industry stakeholders, and the persistent efforts made to ensure that AI benefits the global talent market, accrues environmental benefits, and protects—not scrutinizes or dislocates—American workers.
One example is the work of the U.S. AI Safety Institute (AISI), housed in the National Institute of Standards and Technology (NIST), which has spearheaded pre-deployment testing of advanced AI models, working alongside private developers to strengthen AI safety science. The AISI has also signed agreements with leading AI companies to conduct red-team testing to identify and mitigate risks, especially for general-purpose models with potential national security implications.
In addition, NIST released Version 1.0 of its AI Risk Management Framework, which provides comprehensive guidelines for identifying, assessing, and mitigating risks across generative AI and dual-use models. This framework emphasizes core principles like safety, transparency, and accountability, establishing foundational practices for AI systems’ development and deployment. And just last week, the federal government released the first-ever National Security Memorandum on Artificial Intelligence, which will serve as the foundation for the U.S.’s safety and security efforts when it comes to AI.
The White House EO on AI marks an essential step in shaping the future of U.S. AI policy, but its path forward remains uncertain with the pending presidential election. Since much of the work is being done by and within federal agencies, its tenets may outlive any possible repeal of the EO itself, ensuring the U.S. stays relevant in the development of guidance that balances the promotion of innovation with safety, particularly in national security. However, the EO’s long-term impact will depend on the willingness of policymakers to adapt to AI’s rapid development, while maintaining a framework that supports both innovation and public trust. Regardless of who leads the next administration, navigating these challenges will be central to cementing the U.S.’s role in the AI landscape on the global stage.
In 2023, Brookings scholars weighed in following the adoption of the White House EO. Here’s what they have to say today around the one-year anniversary.
4 notes
Text
Information Security: Building a Resilient Future Through Data Protection
In an increasingly digital world, data has become one of the most valuable assets for individuals, businesses, and governments alike. Every transaction, communication, and operation involves data in some form. As the volume, complexity, and sensitivity of this data continue to grow, so does the risk associated with its misuse or compromise. This is where Information Security steps in as a critical component of modern digital infrastructure.
What is Information Security?
Information Security (InfoSec) refers to the practice of defending information—whether digital or physical—from unauthorized access, disruption, modification, or destruction. It encompasses a broad set of strategies, policies, and tools aimed at ensuring the confidentiality, integrity, and availability of data, commonly referred to as the CIA Triad:
Confidentiality ensures that sensitive data is only accessible to those who have the proper authorization.
Integrity maintains the accuracy and completeness of data by preventing unauthorized modification.
Availability guarantees that information and systems are accessible to authorized users when needed.
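The integrity leg of the CIA Triad can be made concrete with a small example: a keyed hash (HMAC) lets a receiver detect any unauthorized modification of a message. This is a minimal sketch; the key and messages are illustrative only:

```python
import hmac
import hashlib

# Integrity check: an HMAC tag over the message detects tampering.
key = b"shared-secret-key"  # illustrative; real keys come from a secure store
message = b"transfer $100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag; compare_digest avoids timing side channels.
received = b"transfer $100 to account 42"
ok = hmac.compare_digest(tag, hmac.new(key, received, hashlib.sha256).hexdigest())
print(ok)  # True: message unmodified

tampered = b"transfer $9999 to account 13"
bad = hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).hexdigest())
print(bad)  # False: modification detected
```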
The Evolving Threat Landscape
The threat landscape in information security is constantly changing, driven by technological advancements and the increasing sophistication of cybercriminals. Some of the most common threats include:
Phishing and Social Engineering Attacks: These exploit human behavior to gain unauthorized access to systems or sensitive information.
Ransomware: Malicious software that encrypts data and demands payment for its release.
Insider Threats: These occur when employees or contractors misuse their access, whether maliciously or by accident.
Advanced Persistent Threats (APTs): Long-term, targeted attacks often carried out by well-funded threat actors.
Zero-Day Exploits: Attacks that take advantage of unknown or unpatched vulnerabilities in software or hardware.
With these threats increasing in frequency and impact, proactive and adaptive information security strategies are more important than ever.
Key Domains of Information Security
Information security is a multi-layered field that includes several specialized areas:
1. Network Security
Protects internal networks from intrusions by implementing firewalls, intrusion detection systems (IDS), and virtual private networks (VPNs).
2. Application Security
Focuses on securing software applications by identifying and fixing vulnerabilities during the development process and through updates.
3. Endpoint Security
Secures devices such as computers, smartphones, and tablets that connect to the network.
4. Data Security
Involves encryption, masking, and secure storage to protect data both at rest and in transit.
5. Identity and Access Management (IAM)
Controls who has access to what resources, and ensures proper authentication and authorization.
6. Cloud Security
As businesses increasingly move to the cloud, protecting cloud-based infrastructures, platforms, and data becomes essential.
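The data security domain above mentions masking as one protection for data at rest. Here is a minimal sketch of PII masking for logs or exports; the patterns and masking policy are my own illustration, not a standard:

```python
import re

# Illustrative PII masking: hide most of an email address or card number
# before it is written to logs or shared datasets.
def mask_email(email: str) -> str:
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

def mask_card(number: str) -> str:
    digits = re.sub(r"\D", "", number)  # strip separators
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_email("alice@example.com"))   # a***@example.com
print(mask_card("4111-1111-1111-1111"))  # ************1111
```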
Information Security Frameworks and Standards
To ensure consistency and compliance, many organizations adopt established frameworks and standards such as:
ISO/IEC 27001: A globally recognized standard for managing information security.
NIST Cybersecurity Framework: A set of guidelines for improving critical infrastructure security.
GDPR and HIPAA: Regulatory standards that enforce strict data protection rules in specific sectors and regions.
Implementing these frameworks helps organizations reduce risk, maintain compliance, and create a culture of security awareness.
The Human Factor in Information Security
Despite advances in technology, humans remain one of the weakest links in the security chain. Employees may fall victim to phishing emails, use weak passwords, or accidentally leak sensitive data. That’s why training and awareness are just as important as technical solutions.
Effective information security programs include:
Regular employee training and simulated phishing exercises
Clear security policies and procedures
Encouraging a security-first mindset across departments
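Weak passwords are one of the human-factor risks named above, and security programs often pair training with automated policy checks. The sketch below shows one such check; the specific rules are illustrative assumptions, not an endorsed policy:

```python
import re

# Minimal password-policy check of the kind a security program might pair
# with user training; the rules below are illustrative only.
def password_issues(pw: str) -> list:
    issues = []
    if len(pw) < 12:
        issues.append("shorter than 12 characters")
    if not re.search(r"[A-Z]", pw):
        issues.append("no uppercase letter")
    if not re.search(r"\d", pw):
        issues.append("no digit")
    if pw.lower() in {"password", "letmein", "qwerty123"}:
        issues.append("on the common-password blocklist")
    return issues

print(password_issues("letmein"))
print(password_issues("Correct-Horse-Battery-9"))  # []
```

Note that modern guidance (e.g., NIST SP 800-63B) favors length and blocklists over strict composition rules, so a real policy would weight those checks differently.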
Future Trends in Information Security
The future of Information Security is being shaped by new technologies and shifting work environments. Some key trends include:
AI and Machine Learning in Cybersecurity: These technologies help detect anomalies, automate threat responses, and improve predictive analytics.
Zero Trust Architecture: A security model that assumes no user or device should be trusted by default.
Decentralized Identity Management: Blockchain and similar technologies are offering new ways to verify identity securely.
Quantum Computing: While still in development, quantum computers pose both opportunities and challenges for encryption and data security.
0 notes
Text
Securing AI: Navigating Risks and Compliance for the Future
Artificial Intelligence is no longer a futuristic concept; it's a fundamental driver of modern business and society. From enhancing customer experiences and optimizing supply chains to accelerating scientific discovery, AI's transformative power is undeniable. However, as AI systems become more complex and deeply integrated into our lives, a critical challenge emerges: how do we ensure AI is secure, trustworthy, and compliant with evolving regulations?
Taking AI security and compliance seriously isn't just a best practice; it's an existential necessity. The potential for catastrophic failures, data breaches, biased outcomes, and erosion of public trust is very real if these aspects are not prioritized.
The New Landscape of AI Risks
AI introduces a new set of vulnerabilities that extend beyond traditional cybersecurity concerns:
Data Vulnerabilities:
Training Data Poisoning: Malicious actors can inject flawed or biased data into a model's training set, causing it to learn incorrect or harmful behaviors.
Data Leakage/Inference Attacks: AI models, especially generative ones, might inadvertently reveal sensitive information from their training data during inference.
Data Privacy Breaches: The sheer volume and sensitivity of data used by AI heighten privacy risks if not managed meticulously (e.g., PII in training data).
Model Vulnerabilities:
Adversarial Attacks: Small, often imperceptible, alterations to input data can cause an AI model to misclassify or behave unexpectedly (e.g., making a stop sign look like a yield sign to an autonomous vehicle).
Model Inversion: Reverse-engineering a model to reconstruct its training data, potentially exposing sensitive information.
Model Stealing/Intellectual Property Theft: Unauthorized replication of a proprietary AI model, undermining competitive advantage.
Backdoors and Trojan Attacks: Malicious code inserted into a model that activates under specific, hidden conditions.
Systemic and Ethical Risks:
Bias Amplification: If not carefully managed, AI models can amplify existing biases in data, leading to discriminatory outcomes in areas like hiring, lending, or law enforcement.
"Black Box" Accountability: For complex deep learning models, understanding why a decision was made can be difficult, posing challenges for auditing, debugging, and legal accountability.
Autonomous System Failures: In critical applications (e.g., self-driving cars, industrial control), AI failures can have severe real-world consequences.
Supply Chain Risks:
Vulnerabilities can be introduced through third-party pre-trained models, open-source libraries, or data providers that lack rigorous security vetting.
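The adversarial-attack risk above can be illustrated with a toy example. Against a linear classifier, the input gradient is simply the weight vector, so an attacker can flip a prediction by stepping each feature against the weights (the fast-gradient-sign idea). Everything below — weights, input, and perturbation budget — is made up for illustration; real attacks target deep models and use an autodiff framework:

```python
# Toy FGSM-style adversarial example against a fixed linear classifier.
w = [2.0, -1.0, 0.5]   # illustrative model weights
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return 1 if score(x) > 0 else 0

x = [0.6, 0.2, 0.4]    # originally classified as 1
eps = 0.45             # per-feature perturbation budget (exaggerated for a toy)

# For a linear model the input gradient is just w, so the attack steps each
# feature by -eps * sign(w_i) to push the score toward class 0.
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # 1 0
```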
The Evolving World of AI Compliance
As AI's impact grows, so does the regulatory pressure to ensure its responsible development and deployment. Compliance is shifting from a reactive afterthought to a proactive, integrated component of the AI lifecycle.
Data Privacy Regulations (GDPR, CCPA): These existing laws directly impact AI development by governing how data is collected, stored, processed, and used for training models, especially concerning personal identifiable information.
The EU AI Act: A landmark regulation, the EU AI Act classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes stringent requirements on high-risk AI, including data governance, human oversight, robustness, accuracy, and cybersecurity. It sets a global precedent.
NIST AI Risk Management Framework: The U.S. National Institute of Standards and Technology (NIST) has developed a voluntary framework to help organizations manage risks related to AI, focusing on governance, mapping, measuring, and managing AI risks.
Industry-Specific Regulations: Sectors like healthcare, finance, and defense are developing their own AI-specific guidelines to ensure safety, fairness, and accountability.
Strategies for Securing AI and Ensuring Compliance
Navigating this complex landscape requires a comprehensive and continuous approach:
Security and Privacy by Design: Integrate security and privacy considerations from the very first stages of AI system design, not as an afterthought. This includes threat modeling, privacy-enhancing technologies (PETs), and anonymization techniques.
Robust MLOps & Governance: Implement mature MLOps practices that ensure secure development pipelines, version control for models and data, automated testing, access management, and continuous monitoring of deployed models for drift, bias, and performance degradation.
Comprehensive Data Governance: Establish clear policies for data lineage, quality, access control, and retention. Regularly audit training data for bias, representativeness, and privacy compliance.
Explainable AI (XAI) and Interpretability: Develop models whose decisions can be understood and explained to humans. This is crucial for debugging, building trust, and proving compliance in regulated industries.
Bias Detection and Mitigation: Proactively identify and address algorithmic bias throughout the AI lifecycle using fairness metrics, diverse datasets, and techniques like re-weighting or adversarial debiasing. Regular audits for discriminatory outcomes are essential.
Continuous Monitoring and Threat Intelligence: Implement systems to monitor AI models in production for adversarial attacks, data anomalies, and performance degradation. Stay informed about emerging AI-specific threats and vulnerabilities.
Cross-Functional Collaboration: AI security and compliance are not solely the responsibility of data scientists or security teams. Legal, ethics, business, and engineering teams must collaborate closely to ensure a holistic approach.
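As a concrete instance of the bias-audit step above, one widely used fairness metric is the demographic parity difference: the gap in positive-outcome rates between two groups. This is a minimal sketch with made-up data:

```python
# Demographic parity difference: gap in favorable-outcome rates between
# two groups. The decision data below is fabricated for illustration.
def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = favorable decision (e.g., loan approved), one entry per applicant
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 62.5% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25.0% approved

gap = demographic_parity_diff(group_a, group_b)
print(round(gap, 3))  # 0.375
```

A large gap does not by itself prove discrimination, but it flags the model for the deeper review and mitigation techniques (re-weighting, adversarial debiasing) the strategy list describes.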
Conclusion: Trustworthy AI is Secure AI
The promise of AI is immense, but its sustained growth and positive impact hinge on our ability to build it responsibly and securely. By proactively addressing the unique risks associated with AI and embracing a culture of security by design and continuous compliance, organizations can not only mitigate potential harm but also foster the trust necessary for AI to truly flourish. Securing AI is not a barrier to innovation; it is the foundation upon which the future of intelligent technology will be built.
0 notes