#Windows PowerShell Automation
Explore tagged Tumblr posts
virtualizationhowto · 2 years ago
Text
PowerShell Get Registry Value
PowerShell Get Registry Value @vexpert #vmwarecommunities #100daysofhomelab #homelab #PowerShellRegistry #WindowsRegistry #Get-ItemPropertyCmdlet #Get-ChildItemCommand #RegistryKeys #RegistryValues #PowerShellScript #RemoteComputerManagement
The Windows Registry and registry editor have long been tools Windows admins have used to resolve issues, apply settings, etc. However, PowerShell makes automating registry changes or programmatically getting registry keys and their values very easy. PowerShell can interact with registry keys and easily get registry values. Table of contents: What is PowerShell? What is the Windows…
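For context, here is a minimal sketch of the kind of registry read the post describes, using the Get-ItemProperty and Get-ChildItem cmdlets it tags (the path and value name below are only illustrative):
# Read a single registry value (example path and value name)
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' -Name ProductName
# List the subkeys under a key
Get-ChildItem -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'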
Tumblr media
View On WordPress
0 notes
dario48gay · 1 year ago
Text
so yea, I automate booping
maybe you saw it, prob you didn't, but I automated booping
but dario, you probably aren't asking, how can I do it too in a simple yet powerful way?
well, I'm here to answer the question nobody asked, with a little bit of bash
first, choose your victim(s)
Tumblr media
this goober will do
then boop em
Tumblr media Tumblr media
next, open inspect elements and navigate to the network tab
Tumblr media
now press the bin to clear everything
Tumblr media
now boop
Tumblr media
when you have done that various GET requests will have appeared, choose the boop one, right click->copy as->copy as cURL
Tumblr media
this will give you your cookie too, this kind of cookie must NEVER, and I repeat NEVER, be shared, it gives access to your account bypassing any kind of 2fa
then create a basic bash script, just a for loop will do but if u don't know what u are doing, copy this (if on windows copy as powershell instead and figure it out)
Tumblr media
no, I did not cum on the script, I censored my cookie
this will trigger the anti-spam, so putting a sleep 0.1s before the done might be useful
the script is (put whatever you want instead of 100)
for i in {1..100}; do
[insert the curl]
[optional]sleep 0.1s
done
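and for the windows folks told to "figure it out": a rough sketch of what the same loop might look like in PowerShell (paste the request you copied as PowerShell where the comment is — the request itself isn't shown here)
# hypothetical PowerShell version of the loop above
for ($i = 1; $i -le 100; $i++) {
    # [insert the request copied as PowerShell here]
    Start-Sleep -Milliseconds 100   # optional, helps avoid the anti-spam
}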
69 notes · View notes
cyberstudious · 9 months ago
Text
Tumblr media
Tools of the Trade for Learning Cybersecurity
I created this post for the Studyblr Masterpost Jam, check out the tag for more cool masterposts from folks in the studyblr community!
Cybersecurity professionals use a lot of different tools to get the job done. There are plenty of fancy and expensive tools that enterprise security teams use, but luckily there are also lots of brilliant people writing free and open-source software. In this post, I'm going to list some popular free tools that you can download right now to practice and learn with.
In my opinion, one of the most important tools you can learn how to use is a virtual machine. If you're not already familiar with Linux, this is a great way to learn. VMs are helpful for separating all your security tools from your everyday OS, isolating potentially malicious files, and just generally experimenting. You'll need to use something like VirtualBox or VMware Workstation (Workstation Pro is now free for personal use, but they make you jump through hoops to download it).
Below is a list of some popular cybersecurity-focused Linux distributions that come with lots of tools pre-installed:
Kali is a popular distro that comes loaded with tools for penetration testing
REMnux is a distro built for malware analysis
honorable mention for FLARE-VM, which is not a VM on its own, but a set of scripts for setting up a malware analysis workstation & installing tools on a Windows VM.
SANS maintains several different distros that are used in their courses. You'll need to create an account to download them, but they're all free:
Slingshot is built for penetration testing
SIFT Workstation is a distro that comes with lots of tools for digital forensics
These distros can be kind of overwhelming if you don't know how to use most of the pre-installed software yet, so just starting with a regular Linux distribution and installing tools as you want to learn them is another good choice for learning.
Free Software
Wireshark: sniff packets and explore network protocols
Ghidra and the free version of IDA Pro are the top picks for reverse engineering
for digital forensics, check out Eric Zimmerman's tools - there are many different ones for exploring & analyzing different forensic artifacts
pwntools is a super useful Python library for solving binary exploitation CTF challenges
CyberChef is a tool that makes it easy to manipulate data - encryption & decryption, encoding & decoding, formatting, conversions… CyberChef gives you a lot to work with (and there's a web version - no installation required!).
Burp Suite is a handy tool for web security testing that has a free community edition
Metasploit is a popular penetration testing framework, check out Metasploitable if you want a target to practice with
SANS also has a list of free tools that's worth checking out.
Programming Languages
Knowing how to write code isn't a hard requirement for learning cybersecurity, but it's incredibly useful. Any programming language will do, especially since learning one will make it easy to pick up others, but these are some common ones that security folks use:
Python is quick to write, easy to learn, and since it's so popular, there are lots of helpful libraries out there.
PowerShell is useful for automating things in the Windows world. It's built on .NET, so you can practically dip into writing C# if you need a bit more power (see the small sketch after this list).
Go is a relatively new language, but it's popular and there are some security tools written in it.
Rust is another new-ish language that's designed for memory safety and it has a wonderful community. There's a bit of a steep learning curve, but learning Rust makes you understand how memory bugs work and I think that's neat.
If you want to get into reverse engineering or malware analysis, you'll want to have a good grasp of C and C++.
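As a quick illustration of the PowerShell/.NET point above — just a sketch, and the class and strings are made up:
# Call straight into .NET from PowerShell
[System.IO.Path]::GetTempPath()
# Or compile a snippet of inline C# on the fly with Add-Type
Add-Type -TypeDefinition @"
public static class Hello {
    public static string Greet(string name) { return "hi, " + name; }
}
"@
[Hello]::Greet("studyblr")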
Other Tools for Cybersecurity
There are lots of things you'll need that aren't specific to cybersecurity, like:
a good system for taking notes, whether that's pen & paper or software-based. I recommend using something that lets you work in plain text or close to it.
general command line familiarity + basic knowledge of CLI text editors (nano is great, but what if you have to work with a system that only has vi?)
familiarity with git and docker will be helpful
There are countless scripts and programs out there, but the most important thing is understanding what your tools do and how they work. There is no magic "hack this system" or "solve this forensics case" button. Tools are great for speeding up the process, but you have to know what the process is. Definitely take some time to learn how to use them, but don't base your entire understanding of security on code that someone else wrote. That's how you end up as a "script kiddie", and your skills and knowledge will be limited.
Feel free to send me an ask if you have questions about any specific tool or something you found that I haven't listed. I have approximate knowledge of many things, and if I don't have an answer I can at least help point you in the right direction.
22 notes · View notes
Text
TUTORIAL 3
Making Gifs Purely with FFmpeg Scripts
FULL TUTORIALS | RECOLORED GIFS
Directly opposite to the last tutorial, this tutorial is for girls who LOVE command lines. You can make a goddamned motherfucking pristine, RECOLORED gif set from your command line, and feel cool as fuck while you're at it. And doing so is probably not quite as devoid of visual stimulus as you think!!
FULL SIZE EXAMPLE SET HERE | FULL CODE FOR THE SET HERE
Operating systems: Mac, Windows, and Linux
Quality potential: High
Software needed: FFmpeg
Difficulty: Advanced as far as gif-making, but this is actually a good first bash scripting project in my opinion if you've ever wanted to learn how.
Requirements: General familiarity with your computer's shell (PowerShell on Windows, Terminal on Mac) helps a lot! I will try to make it as easy as possible to follow.
Tumblr media Tumblr media Tumblr media
LIMITATIONS: 1) Frame by frame gif-making methods take up a lot of space on your drive in the interim. 2) The captioning method currently provided in the tutorial is not good for longer captions (there is a better method I plan to append eventually). 3) Recoloring options are minimal in this tutorial. Curves and many other color adjustments are available in FFmpeg but I haven't yet explored them.
Tumblr media
First, let me get this response out of the way:
"I don’t understand how I can possibly do this if I can’t see what I’m doing!"
That’s the neat part—you DO get to see what you’re doing!
The first visual is your file browser.
If you were using Photoshop or GIMP to make these gifs, you'd view all of your frames in a sidebar as small thumbnails and select and scroll through them to add, delete, group-select, and manipulate them. The FRAMES folder in our file browser will serve this function, letting us see every frame in our set, delete any frames we don't want, and group them to apply the same crop, coloring, or caption to the same selection of frames.
The second visual is provided by FFplay.
FFplay is part of FFmpeg. When you use this command in your system shell, it opens an image or gif so you can see the results of a crop, recolor, caption (or all three!) on your work before actually applying it! This is how we will actually see, at full resolution on our screen, exactly how our image is looking.
WINDOWS USERS: CHANGE ALL FORWARD SLASHES IN THIS SCRIPT TO BACK SLASHES!!!
___________________________________________
1. Installing FFmpeg
___________________________________________
I recommend you install via Homebrew if you're on Mac or Linux by pasting this into your terminal (if/once Homebrew is installed):
brew install ffmpeg
Windows users or other users who don't want Homebrew can follow directions on the FFmpeg website.
___________________________________________
2. Screencapping Frames with FFmpeg
___________________________________________
There are many ways to do this. However, since this is the Pure FFmpeg tutorial, I'm going to point you to these instructions on how to automate this process with a script template I have already written for you and an example (screencapping for this exact set).
If you follow that tutorial exactly, you should come back for step 3 with a folder on your Desktop called FRAMES.
___________________________________________
3. Organize in the FRAMES Folder
___________________________________________
Go to your FRAMES folder in your file browser. Your frames are already ordered for you in the correct time sequence.
Delete any unwanted frames.
I need to "trim" some frames from the beginning and end of my set. For example, at the beginning of my set, the first 17 screencaps are of Sam, from the first 564 milliseconds after the 8:02 time stamp. I don't want these frames in my gifset. I want my first gif to start on Dean when he appears in the 18th screencap.
Tumblr media
If I hold shift and select all of the unwanted frames, I can delete them. Similarly, my screencaps end at the 8:11 mark, and only the first 156 milliseconds are on Dean. After that, the shot switches back to the purple guy. I don't want those shots of the purple guy, so I'll delete frames 247 through 280 at the end of the set, leaving me with frame_0018.png through frame_0246.png as all the frames of my gifset.
Make "Shot Folders".
As an example of what I mean by shots, look at the three example gifs at the top of the tutorial. The first gif has just one shot (i.e., the camera stays on Dean). The second gif has two shots (the camera is on the guy painted purple, and then it's on Dean again). In the third gif, there are two shots again—one on Sam, then one on Dean. So that's 5 shots. I want to separate frames belonging to each of the 5 shots into subfolders labeled 1 through 5 (see gif below).
Tumblr media
______________________________
4. Setting up FFplay
______________________________
Now that we have all of our frames organized, we want to set up FFplay—the method we'll use to view our image manipulations before applying them.
NOTE: Write down all of the commands as you run through this whole tutorial!! In a word document, a text editor, etc. Do NOT just type your commands in the shell without keeping track of what you typed! You can also see an example of how to map out your scripts by following how I wrote them for Github here, and even download/copy paste my full script there and use it as a template for your own set.
Template for Creating Our Preliminary Gifs for FFplay
The first thing we need to do is make a preliminary test gif of every shot in our gifset. To do this, use the template below, changing only the parts in red:
cd ~/Desktop/FRAMES/ShotNumber
ffmpeg -start_number FileNumberOnFirstFrame -i frame_%04d.png -vf "fps=30" Test.gif
The first line, beginning with "cd", tells me the file path to my shot folder where my frames are located.
The line below that combines the frames of my shot into a gif called Test.gif.
ffmpeg invokes the FFmpeg utility.
-start_number FileNumberOnFirstFrame -i frame_%04d.png tells FFmpeg to order the frames in the gif by the 4-digit number at the end of their file name, starting with FileNumberOnFirstFrame, which you should replace with the 4-digit number in the file name of the first frame in your shot folder.
-vf "fps=30" tells FFmpeg I want the gif to move at 30 frames per second (if you screencapped at a rate other than 30 FPS, you can simply change the number to match your frame rate).
In my example:
Say I want to generate a gif of my first shot. Then I need ShotNumber to be 1, and FileNumberOnFirstFrame to be 0018 (that's the 4-digit number on the first frame in my first shot, after deleting the first 17 frames). So my command looks like this:
cd ~/Desktop/FRAMES/1
ffmpeg -start_number 0018 -i frame_%04d.png -vf "fps=30" Test.gif
See Step 4 in my script on Github to see the script for all 5 shots.
Playing a gif from FFplay
Now, if I want to play the gif I just made in FFplay, I just type the following:
ffplay -loop 0 Test.gif
A window should pop up and loop through my gif (edit: I have adjusted this tutorial so the FFplay gif loops infinitely, by adding the setting -loop 0).
After repeating the above commands to generate a Test.gif in every shot folder, we're ready to move on to the rest of the steps.
__________________
5. Test Crop
_________________
Now that FFplay is set up, it's time to start manipulating our shots by applying changes to Test.gif in every shot folder. The first thing we want to do is crop.
NOTE 1: Cropping is different from scaling. At the very end of the tutorial, when we export, we will scale our gifs down to 540 px. For now, we want to work in full resolution, which for Supernatural is 1080p.
NOTE 2: Throughout this tutorial, when I add to an existing command, I will bold the new portion, and mark all the parts you can/should change in red.
Cropping Template
To crop a gif in FFplay, we add -vf  "crop=w:h:x:y" to our FFplay command as follows:
cd ~/Desktop/FRAMES/ShotNumber
ffplay -vf "crop=w:h:x:y" -loop 0 Test.gif
Where
w is the width you want your cropped frames to be (in pixels)
h is the height you want your cropped frames to be (in pixels)
x and y are coordinates of the top left corner of your crop selection (in pixels)
To understand the x and y coordinates, think about how you would crop an image in a drawing program. You would generally select your crop tool, start in the top left corner, and drag your cursor down and right (the blue dot in the illustration below represents where the crop starts in an example). So FFmpeg asks you to specify the top left corner where your crop starts with an x and y coordinate (in pixels), and uses the w and h variables to determine how large the crop should be.
Tumblr media
My frames are all 1920x1080 pixels. I would like my gifset to be square—1080x1080 pixels. So I already know I want my w and h to both be 1080. Since I want to start my crop at the top of a frame, losing no height, my y-coordinate (height to start my crop from) should also be 1080. My x-coordinate is the only thing I'm not sure about. I only know it needs to be bigger than 0 (0 would start my crop on the left edge) and no bigger than 840 (1920 minus the 1080-pixel crop width), or the crop would run past the right edge.
So what am I going to do? ...I'm gonna make an educated guess of what x-coordinate would center Dean in a square frame in my FFplay command, and if I don't like how he's centered, I'll simply move the x-coordinate over a little (decrease to move left, increase to move right).
So in my Example...
In FFmpeg, for my first shot, I'm going to guess x=300.
cd ~/Desktop/FRAMES/1
ffplay -vf "crop=1080:1080:300:1080" -loop 0 Test.gif
The FFplay window shows me this:
Tumblr media
And I feel like the crop starts a little too far to the right. So I'm going to decrease my x-coordinate just a little—to 250. After replacing the 300 with a 250 and running again in FFplay, I feel good about 250, so that's the crop I'll set: -vf "crop=1080:1080:250:1080".
I need to follow this same process to determine the crop of my other shots (and keep a copy of the commands I used for later!). What is useful, however, is that 3 of my shots are all on Dean sitting (shots 1, 3, and 5). This means I can apply the same crop I made for shot 1 to shots 3 and 5.
See Step 5 in my script on Github to see all 5 shots.
___________________________________________
6. Test Coloring and Sharpening
___________________________________________
Because the lighting in this scene is pretty good, I did a very simple recoloring on this set. I may update this later with a more extensive coloring tutorial that covers more options (I'll link the post to this section if I do). Previously, we added a crop to our command: -vf "crop=1080:1080:250:1080", and now we want to test coloring options and a sharpening effect.
Coloring Template
I'm going to throw in the basics, and give you an updated FFplay template (new parts are bolded, variables you can adjust are in red and set at their default values).
cd ~/Desktop/FRAMES/ShotNumber
ffplay -vf "crop=w:h:x:y,eq=brightness=0.00:saturation=1.00:contrast=1.00,smartblur=1.5:-0.35:-3.5:0.65:0.25:2.0" -loop 0 Test.gif
NOTE: Notice that inside the quotes, a comma separates the command options crop, eq, and smartblur. Equals signs and colons distinguish between sub-options belonging to each of those three categories.
EQ:
brightness=0.00. Values can be adjusted anywhere between -1 and 1, with 0 as the default. I would adjust this setting in 0.01 unit increments.
saturation=1.0. Values between 0 and 3 are possible, with 1 as the default. I would adjust this setting in 0.1 unit increments.
contrast=1.00. Increases contrast between light and shadow in the image (values between -1000 and 1000 are possible). I recommend you change this setting in 0.1 unit increments.
There are more color and image adjustment options than these available in FFmpeg. You can see a full list of properties you can adjust from eq here.
Smartblur:
smartblur=1.5:-0.35:-3.5:0.65:0.25:2.0 sharpens the frames. I recommend you not touch this setting. If you do want to adjust it, check the FFmpeg documentation. You can also remove this whole portion (and the associated comma) if you don't want a sharpening adjustment.
NOTE: It is also possible to add a curves adjustment to brighten certain parts of an image instead of the whole image. I haven’t worked with curves enough from the command line to give you good ideas on setting your curve and this set didn’t really need it, but if I get into it more in the future, I'll include it in a supplementary tutorial and link it here.
If you feel like you are losing all sense of objectivity and just want to see the images without your coloring at any point, simply re-run your ffplay from the end of the cropping section.
ffplay -vf  "crop=w:h:x:y" Test.gif
In my example (shot 1):
Because all my characters are in the same room with similar lighting, I ended up applying the same color adjustment to all of my shots. But here's all my settings tested together on my first shot in FFplay:
cd ~/Desktop/FRAMES/1
ffplay -vf "crop=1080:1080:250:1080,eq=brightness=0.06:saturation=1.70:contrast=1.10,smartblur=1.5:-0.35:-3.5:0.65:0.25:2.0" -loop 0 Test.gif
See Step 6 in my script on Github to see all 5 shots.
___________________________________________
7. Apply crop and coloring to frames
___________________________________________
Up until this point, you have not actually applied your crop and color adjustments to your frames. You have simply tested your crop and coloring manipulations on a gif of all your frames that is still, if you go look at it in your file browser, not cropped.
So let's actually apply our adjustments to these frames!
NOTE: Captioning needs to be done after this step because auto-centering the text won't work properly without actually cropping first. This is probably all for the better, as our command is getting long and difficult to decipher!
Template to ACTUALLY Crop and Color
cd ~/Desktop/FRAMES/ShotNumber
mkdir crop
for i in *.png; do ffmpeg -y -i "$i" -vf "fps=30,crop=w:h:x:y,eq=brightness=0.00:saturation=1.00:contrast=1.00,smartblur=1.5:-0.35:-3.5:0.65:0.25:2.0" crop/${i%.png}.png; done
The first line (beginning with cd) tells us to go to our shot folder.
mkdir crop tells our computer to make a new subfolder in our shot folder called "crop".
for i in *.png; do ffmpeg -y -i "$i"; and the closing: crop/${i%.png}.png; done tells FFmpeg to do the same crop and color on every .png file in our shot folder, and save these adjusted shots into the crop subfolder.
In my example (shot 1)
cd ~/Desktop/FRAMES/1
mkdir crop
for i in *.png; do ffmpeg -y -i "$i" -vf "crop=1080:1080:250:1080,eq=brightness=0.06:saturation=1.7:contrast=1.1,smartblur=1.5:-0.35:-3.5:0.65:0.25:2.0" crop/${i%.png}.png; done
See Step 7 in my script on Github to see all 5 shots.
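(Windows users working in PowerShell, per the note at the top about slashes: here is a rough, unofficial sketch of the same loop. It assumes ffmpeg is on your PATH and reuses the shot 1 values from the example above.)
# PowerShell sketch: apply the same crop/color/sharpen to every frame, saving into .\crop
Set-Location "$HOME\Desktop\FRAMES\1"
New-Item -ItemType Directory -Force -Path crop | Out-Null
Get-ChildItem -Filter *.png | ForEach-Object {
    ffmpeg -y -i $_.Name -vf "crop=1080:1080:250:1080,eq=brightness=0.06:saturation=1.7:contrast=1.1,smartblur=1.5:-0.35:-3.5:0.65:0.25:2.0" ("crop\" + $_.Name)
}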
___________________________________________
8. Captioning
___________________________________________
Now we need to apply captions. You can also do an FFplay command here, but I'm going to show you something a little different with FFplay this time to see your caption. Instead of looking at captions over a whole gif, let's just test how our captions look on the first frame in our shot, since that's really all we need for captioning.
Captioning FFplay Template
Not all sets (and not all shots!) need captions (and if your set doesn't, you can skip down to the next section!) However, two of my shots need captions: Shot 1 and Shot 3. If you want to add captions, you can test them in FFplay on a single shot using this command:
cd ~/Desktop/FRAMES/ShotNumber/crop
ffplay -vf "drawtext=text='Your Text Goes Here':x=(w-text_w)/2:y=(h-text_h*2):fontsize=40:bordercolor=black:borderw=3:fontcolor=white" input.png
The default font is Arial. There are ways to set other fonts, but I haven't looked into them much because I'm pretty happy with Arial. This basic template with the existing numbers should give you a decent result, but if you want to change font sizing or colors or anything:
drawtext=text='Your Text Goes Here' | This is the most important bit. You place your caption in the red part. Note that your text will appear on just one line. If you have a longer statement from a character that will need two lines, you can draw your box twice. (I will add a link here for this alternative when I eventually try it).
x=(w-text_w)/2:y=(h-text_h*2) | This part centers the text on the bottom of the screen. I recommend you leave it alone, as it's intended to auto-center your text for you no matter what your crop ratio is. You will need to change the y=(h-text_h*2) argument only if you have two lines of text to caption your frames with.
fontsize=40 | Changes the font size, of course.
bordercolor=black | changes the border color.
borderw=3 | changes the border weight.
fontcolor=white | This changes the font color.
input.png is the file name of the frame you want to view your caption on.
By the way, when changing text or border colors, you can type in any of the color names recognized in the FFmpeg documentation (the default is black) or insert a HEX color code in this field. For example, I always use yellow captions for Dean, so I found a yellow HEX code (#FDDA0D) I liked on htmlcolorcodes.com.
Here's how I FFplay my caption "You missed a spot."
cd ~/Desktop/FRAMES/1/crop
ffplay -vf "drawtext=text='You missed a spot.':x=(w-text_w)/2:y=(h-text_h*2):fontsize=40:bordercolor=black:borderw=3:fontcolor=#FDDA0D" frame_0018.png
See Step 8 in my script on Github to see both captioned shots.
Template for applying your caption:
If you like the way your caption looks in FFplay, you can apply it to all the frames in your shot folder with:
cd ~/Desktop/FRAMES/ShotNumber/crop
mkdir captioned
for i in *.png; do ffmpeg -y -i "$i" -vf "drawtext=text='Your Text Goes Here.':x=(w-text_w)/2:y=(h-text_h*2):fontsize=40:bordercolor=black:borderw=3:fontcolor=white" captioned/${i%.png}.png; done
Where
mkdir captioned makes a folder in the crop folder called "captioned"
for i in *.png; do ffmpeg -y -i "$i"; and the closing: captioned/${i%.png}.png; done tells FFmpeg to place the same caption over every .png file in our shot folder, and save these adjusted shots into the captioned subfolder.
Here's the settings to apply captions to my first shot.
cd ~/Desktop/FRAMES/1/crop
mkdir captioned
for i in *.png; do ffmpeg -y -i "$i" -vf "drawtext=text='You missed a spot.':x=(w-text_w)/2:y=(h-text_h*2):fontsize=40:bordercolor=black:borderw=3:fontcolor=#FDDA0D" captioned/${i%.png}.png; done
See Step 8 in my script on Github to see both captioned shots.
___________________________________________
9. Organize shots into GIF folders
___________________________________________
We're almost done!!! It's time to go back to our file browser! Now that all of our frames in all of our shots are prepared how we want, we need to reorganize our shots into GIF folders. This is our way of group selecting all the frames that go into one gif for export.
In my case, I want three gifs, so I'm going to make three new folders in the FRAMES folder: gif_1, gif_2, and gif_3. My FRAMES folder should now look like this:
Tumblr media
Now I want to take my finished shots from my shot folders and copy their frames into the gif folder they are associated with. For example:
Gif 1 just contains the first shot. I also captioned shot 1. So I'm going to copy all the frames in FRAMES/1/crop/captioned into the gif_1 folder.
Gif 2 contains shots 2 and 3. Shot 2 has captions, so I need to take all the frames from FRAMES/2/crop/captioned and all the frames in FRAMES/3/crop and copy them in the gif_2 folder.
Gif 3 contains shots 4 and 5. I'm going to copy the contents of the crop folders for shots 4 and 5 into this folder (no captions on this gif).
___________________________________________
10. Compiling into GIF
___________________________________________
To combine all the frames from one gif folder into a gif, I'm going to use this template script:
cd ~/Desktop/FRAMES/GifFolder
ffmpeg -y -start_number FileNumberOnFirstFrame -i frame_%04d.png -vf "fps=30,scale=540:-1:flags=lanczos" ~/Desktop/FRAMES/GifName.gif
This is very similar to the script in step 4. This template opens one of our new gif folders, and takes all the frames, starting with the first frame in the folder (recall: you need to tell FFmpeg the 4-digit number on that first frame with FileNumberOnFirstFrame) and then turns it into a gif running at 30 FPS, scaled down to 540 by 540 pixels.
The parts we can/should adjust are as follows:
GifFolder is the name of our gif folder where our frames for our gif are located.
scale=540:-1:flags=lanczos is the option that scales our gif down (using what's called the lanczos method) to 540 pixels wide; the -1 keeps the height in proportion, so a square crop comes out at 540 x 540, small enough to upload to Tumblr.
fps=30 tells FFmpeg the proper FPS for our set (if you made your frames using this tutorial, 30 FPS is correct. If you took screencaps manually, it will be the frame rate of whatever you giffed. For TV, this might be 23 FPS for example).
~/Desktop/FRAMES/GifName.gif names our gif (specify with GifName) and outputs it to the FRAMES main folder.
Here's the command for Gif 1 in my set:
cd ~/Desktop/FRAMES/gif_1
ffmpeg -y -start_number 0018 -i frame_%04d.png -vf "fps=30,scale=540:-1:flags=lanczos" ~/Desktop/FRAMES/1.gif
See Step 10 in my script on Github to see the script for all three gifs.
THE END!!!!
Do this step on all your gif folders and you're done!!!
You can view my full script for this example gifset on Github here and if you'd like, simply modify that script to make your own gifset!
13 notes · View notes
dexpose2 · 2 days ago
Text
Enhancing Cybersecurity Resilience 
Discover how a modern Threat Intelligence Platform empowers organizations with actionable insights, proactive defense, and next-gen cybersecurity capabilities.
In the ever-evolving digital world, cybersecurity threats have grown in complexity, frequency, and scale. From ransomware and phishing attacks to data breaches and nation-state cyber espionage, the digital threat landscape has expanded into a vast, dynamic battlefield. For organizations aiming to remain secure, simply reacting to threats after they happen is no longer a viable option.
Tumblr media
Today, security teams must be proactive, predictive, and precise in their threat response strategies. This is where a Threat Intelligence Platform (TIP) plays a vital role. Rather than operating in silos or relying solely on isolated security tools, a modern TIP serves as a centralized hub for collecting, analyzing, and distributing threat data in real-time—empowering cybersecurity professionals to make informed decisions faster.
In this article, we delve deep into what makes a Threat Intelligence Platform essential in the modern age, how it integrates with other security protocols, and why forward-looking organizations are investing in it to build stronger cyber defenses.
Understanding the Role of a Threat Intelligence Platform
A Threat Intelligence Platform is not just another security tool—it is a strategic asset. It aggregates threat data from multiple sources, correlates information, and offers context-rich insights that help in identifying and mitigating cyber threats before they manifest into breaches.
The core functions of a TIP include:
Data Aggregation: Collects structured and unstructured threat data from external feeds, internal systems, and global cybersecurity communities.
Analysis and Correlation: Uses advanced algorithms, machine learning, and contextual linking to correlate data points across diverse threat vectors.
Threat Prioritization: Automatically ranks threats based on relevance, severity, and business impact.
Actionable Intelligence Delivery: Integrates with SIEMs, firewalls, and incident response platforms to deliver real-time alerts and automated response actions.
Ultimately, a TIP transforms raw data into actionable threat intelligence, reducing the burden on security analysts and enabling faster incident detection and response.
The Increasing Complexity of the Cyber Threat Landscape
Cybercriminals today use sophisticated methods to infiltrate networks. Techniques like spear-phishing, zero-day exploits, and AI-driven malware are growing in prevalence. Furthermore, the lines between criminal groups, hacktivists, and nation-state actors are increasingly blurred, adding layers of complexity.
Key trends include:
Multi-Stage Attacks: Attackers often use a chain of exploits to gain access and maintain persistence in a system.
Supply Chain Threats: Vendors and third-party providers are now frequent targets, becoming indirect entry points.
Living-off-the-Land Attacks: Cybercriminals use legitimate tools (e.g., PowerShell, Windows Management Instrumentation) to avoid detection.
To counter such tactics, organizations need visibility not just within their internal environments but across the global threat ecosystem. This is where the strategic advantage of a TIP comes into play.
Features of an Effective Threat Intelligence Platform
Not all TIPs are created equal. The most effective platforms provide a range of features that enhance detection, analysis, and response:
1. Automated Data Collection
A TIP should automatically collect data from various threat feeds, security logs, dark web sources, and open databases. Automation reduces manual workload and ensures real-time threat visibility.
2. Advanced Threat Correlation
The platform must be capable of correlating indicators of compromise (IOCs) such as IP addresses, file hashes, domain names, and malware signatures across different incidents to uncover patterns.
Tumblr media
3. Integration with Security Ecosystems
A TIP that integrates with SIEM, SOAR, EDR, and firewall solutions ensures that insights are not siloed but can trigger immediate security actions.
4. Contextual Threat Enrichment
Context is key. The TIP should enrich threat data with geographical, behavioral, and historical insights to aid faster and more accurate decision-making.
5. Collaborative Intelligence Sharing
Sharing threat intelligence across industry groups, ISACs, and global cyber communities strengthens collective security.
Use Case: Proactive Defense in Financial Services
Financial institutions are a prime target for cybercriminals due to the high-value data they hold. Let’s consider a mid-size bank integrating a TIP into its security framework.
Scenario:
A suspicious domain is flagged on a global threat feed. The TIP correlates this IOC with phishing attempts targeting banking customers in Asia. Upon further analysis, the platform uncovers malware strains associated with credential theft. The platform alerts the SOC (Security Operations Center) and triggers automated rules to block the domain at the firewall level.
This real-time detection and action, driven by the TIP, prevents a potential breach and demonstrates how actionable intelligence can drastically reduce incident response time and damage.
How TIPs Leverage Open-Source Intelligence (OSINT)
Open-Source Intelligence (OSINT) refers to data collected from publicly available sources such as news reports, forums, blogs, social media, code repositories, and more. When integrated with a Threat Intelligence Platform, OSINT enhances the scope and depth of intelligence gathering.
For example, attackers often discuss vulnerabilities or leak data on underground forums. A TIP that harvests OSINT can pick up on these signals early, offering preemptive insights before the threat becomes widespread.
Additionally, by analyzing chatter patterns, sentiment, and keywords, TIPs can identify emerging attack vectors or vulnerable sectors—enabling organizations to prepare in advance.
The Power of Cyber Threat Analysis in TIPs
Cyber Threat Analysis is the heart of any effective threat intelligence strategy. A TIP streamlines this process by offering analytics dashboards, pattern recognition engines, and predictive modeling tools. It breaks down raw data into:
Tactics, Techniques, and Procedures (TTPs) of attackers.
Attack timelines and kill chain mapping.
Threat actor attribution based on behavior and tools used.
This deep analysis enables security teams to distinguish between random noise and real threats, reducing alert fatigue and boosting operational efficiency.
Moreover, advanced platforms use AI and machine learning to improve over time, identifying new threat variants and behavioral patterns that may otherwise go unnoticed.
Importance of Digital Risk Protection in TIP Integration
Today’s attack surfaces extend beyond corporate firewalls. Executives, employees, vendors, and third-party platforms create a digital footprint that’s often unmonitored. This is where Digital Risk Protection (DRP) comes into play.
Tumblr media
By integrating DRP capabilities into a TIP, organizations gain visibility into:
Brand impersonation attempts.
Leaked credentials on the dark web.
Domain spoofing and phishing sites.
Social media threats and impersonation.
This holistic visibility allows organizations to safeguard their brand reputation, intellectual property, and customer trust more effectively.
Threat Intelligence Platform Deployment: Challenges and Solutions
While the advantages of TIPs are compelling, implementing them does come with challenges. Let’s explore a few and how organizations can address them:
1. Data Overload
Many TIPs collect vast amounts of data, which can become overwhelming. The solution lies in deploying machine learning filters and customizable threat scoring models to highlight what matters most.
2. False Positives
Too many false positives can waste time and resources. A TIP with behavior-based analysis and contextual enrichment significantly reduces such noise.
3. Integration Issues
Legacy systems might not integrate well with newer TIPs. Opt for platforms with strong API support and modular architecture for smoother interoperability.
4. Skilled Workforce
Threat intelligence requires expertise. While TIPs automate many functions, upskilling teams and investing in regular training ensures that organizations derive maximum value from their platforms.
Future of Threat Intelligence Platforms
With the increasing digitization of services and rise in remote work, the importance of real-time, predictive cybersecurity has never been greater. The future of TIPs lies in:
AI-Driven Threat Prediction: Using neural networks and behavioral analytics to forecast threats before they manifest.
Cloud-Native Architecture: Offering scalable, elastic intelligence solutions for hybrid and cloud-native enterprises.
Blockchain for Threat Data Integrity: Ensuring the immutability and trustworthiness of shared intelligence.
Global Threat Intelligence Sharing Consortiums: Encouraging collective intelligence sharing to counter global threats collaboratively.
Organizations that invest early in TIPs are setting themselves up for long-term resilience and competitiveness.
Conclusion
In an age where cyber threats evolve by the minute, waiting for an incident to happen is no longer a luxury any organization can afford. A modern Threat Intelligence Platform doesn’t just help organizations react—it equips them to anticipate, prepare, and act swiftly.
From aggregating data to conducting in-depth Cyber Threat Analysis, from tapping into Open-Source Intelligence (OSINT) to enhancing Digital Risk Protection, a TIP acts as a central nervous system for proactive cybersecurity defense.
At DeXpose, we believe in empowering enterprises with cutting-edge threat intelligence solutions. Our mission is to help businesses stay a step ahead of adversaries by providing tools and insights that foster cyber resilience.
Are you ready to strengthen your organization’s cyber defense posture? Reach out to DeXpose today and explore how our intelligence-driven solutions can transform your approach to cybersecurity.
0 notes
aitoolswhitehattoolbox · 16 days ago
Text
Python
Develop and maintain automated scripts using Python to streamline backend processes.
Integrate and manage APIs for seamless data exchange between systems.
Design, implement, and optimize backend automation tasks for improved efficiency.
Leverage PowerShell for Windows-based automation tasks and scripting.
Utilize Unix/Linux shell scripting to automate system administration tasks.
Monitor and…
0 notes
pteltdxelware · 26 days ago
Text
Master IT Infrastructure: Windows Server Administration Course in Australia
In today’s digitally driven world, servers are the foundation of every business’s IT operation. From hosting applications and managing user access to maintaining data security, servers do it all—and the professionals who manage them are critical to every organization's success. If you’re an IT professional in Australia looking to enhance your system administration skills, the Windows Server Administration Course in Australia offered by Xelware is your ideal next step.
This course is specially designed for both aspiring and experienced administrators, giving you practical, real-world skills to configure, manage, and maintain Windows Server environments effectively. Whether you're aiming to boost your resume, prepare for certification, or support your organization's infrastructure more confidently, this training is your gateway.
Why Learn Windows Server Administration?
Windows Server is one of the most widely used operating systems for managing enterprise-level IT infrastructure. It powers file servers, domain controllers, DNS, DHCP, and much more. With the continued growth of hybrid environments—mixing on-premise and cloud technologies—the need for skilled Windows Server administrators is only increasing.
By mastering Windows Server administration, you can:
Improve the performance, stability, and security of IT environments
Manage networks, users, and policies with precision
Ensure business continuity with proper backup and recovery practices
Reduce system downtime and boost productivity across the organization
Become a key resource in IT operations, infrastructure design, and support
What You’ll Learn in the Course
The Windows Server Administration Course in Australia from Xelware is based on Microsoft’s official curriculum and is updated to reflect the latest best practices and technologies in system administration.
Key areas covered include:
Installing and configuring Windows Server roles and features
Managing Active Directory Domain Services (AD DS)
Implementing DNS, DHCP, and IPAM
Maintaining server performance and monitoring systems
Ensuring security with group policies, file permissions, and firewall configurations
Automating tasks with PowerShell and administrative templates
Performing regular backups and disaster recovery
The course also emphasizes practical labs and case studies so you can apply what you’ve learned in realistic scenarios.
Why Choose Xelware in Australia?
Xelware is a trusted name in professional IT training, known for delivering industry-relevant content with expert instruction. Their Windows Server Administration Course is tailored for Australian learners and businesses, offering flexible schedules, local time zones, and trainers with hands-on experience in enterprise IT environments.
Here’s what sets Xelware apart:
Instructor-led sessions delivered live and online across Australia
Microsoft-certified trainers who teach from real-world experience
Interactive labs and assignments that build job-ready skills
Certification preparation and guidance to help you succeed
Post-training support and resources to reinforce your learning
Whether you're in Sydney, Brisbane, Adelaide, Melbourne, or working remotely from any corner of Australia, Xelware makes high-quality IT training accessible to you.
Who Should Take This Course?
This course is ideal for:
IT support professionals seeking to move into system administration
Network administrators wanting to broaden their expertise
System administrators responsible for Windows-based infrastructure
Students and recent graduates looking to specialize in server technology
IT managers who want to better understand their infrastructure
A basic understanding of networking concepts and the Windows operating system is recommended, but the course is structured to guide learners at all levels.
Final Thoughts: Build a Reliable IT Backbone
Every successful business relies on stable, secure, and efficient server infrastructure. As a Windows Server Administrator, you’ll play a vital role in ensuring the smooth operation of IT systems. The Windows Server Administration Course in Australia from Xelware gives you the skills and confidence to take on that responsibility with expertise.
0 notes
haplogamingchef · 26 days ago
Text
Boost Your Fortnite FPS in 2025: The Complete Optimization Guide
youtube
Unlock Maximum Fortnite FPS in 2025: Pro Settings & Hidden Tweaks Revealed
In 2025, achieving peak performance in Fortnite requires more than just powerful hardware. Even the most expensive gaming setups can struggle with inconsistent frame rates and input lag if the system isn’t properly optimized. This guide is designed for players who want to push their system to its limits — without spending more money. Whether you’re a competitive player or just want smoother gameplay, this comprehensive Fortnite optimization guide will walk you through the best tools and settings to significantly boost FPS, reduce input lag, and create a seamless experience.
From built-in Windows adjustments to game-specific software like Razer Cortex and AMD Adrenalin, we’ll break down each step in a clear, actionable format. Our goal is to help you reach 240+ FPS with ease and consistency, using only free tools and smart configuration choices.
Check System Resource Usage First
Before making any deep optimizations, it’s crucial to understand how your PC is currently handling resource allocation. Begin by opening Task Manager (Ctrl + Alt + Delete > Task Manager). Under the Processes tab, review which applications are consuming the most CPU and memory.
Close unused applications like web browsers or VPN services, which often run in the background and consume RAM.
Navigate to the Performance tab to verify that your CPU is operating at its intended base speed.
Confirm that your memory (RAM) is running at its advertised frequency. If it’s not, you may need to enable XMP in your BIOS.
Tumblr media
Avoid Complex Scripts — Use Razer Cortex Instead
While there are command-line based options like Windows 10 Debloater (DBLO), they often require technical knowledge and manual PowerShell scripts. For a user-friendly alternative, consider Razer Cortex — a free tool that automates performance tuning with just a few clicks.
Here’s how to use it:
Download and install Razer Cortex.
Open the application and go to the Booster tab.
Enable all core options such as:
Disable CPU Sleep Mode
Enable Game Power Solutions
Clear Clipboard and Clean RAM
Disable Sticky Keys, Cortana, Telemetry, and Error Reporting
Tumblr media
Use Razer Cortex Speed Optimization Features
After setting up the Booster functions, move on to the Speed Up section of Razer Cortex. This tool scans your PC for services and processes that can be safely disabled or paused to improve overall system responsiveness.
Steps to follow:
Click Optimize Now under the Speed Up tab.
Let Cortex analyze and adjust unnecessary background activities.
This process will reduce system load, freeing resources for Fortnite and other games.
You’ll also find the Booster Prime feature under the same application, allowing game-specific tweaks. For Fortnite, it lets you pick from performance-focused or quality-based settings depending on your needs.
Optimize Fortnite Graphics Settings via Booster Prime
With Booster Prime, users can apply recommended Fortnite settings without navigating the in-game menu. This simplifies the optimization process, especially for players not familiar with technical configuration.
Key settings to configure:
Resolution: Stick with native (1920x1080 for most) or drop slightly for extra performance.
Display Mode: Use Windowed Fullscreen for better compatibility with overlays and task switching.
Graphics Profile: Choose Performance Mode to prioritize FPS over visuals, or Balanced for a mix of both.
Once settings are chosen, click Optimize, and Razer Cortex will apply all changes automatically. You’ll see increased FPS and reduced stuttering almost immediately.
Track Resource Gains and Performance Impact
Once you’ve applied Razer Cortex optimizations, monitor the system changes in real-time. The software displays how much RAM is freed and which services have been stopped.
For example:
You might see 3–4 GB of RAM released, depending on how many background applications were disabled.
Services like Cortana and telemetry often consume hidden resources — disabling them can free both memory and CPU cycles.
Tumblr media
Enable AMD Adrenalin Performance Settings (For AMD Users)
If your system is powered by an AMD GPU, the Adrenalin Software Suite offers multiple settings that improve gaming performance with minimal setup.
Recommended options to enable:
Anti-Lag: Reduces input latency, making your controls feel more immediate.
Radeon Super Resolution: Upscales games to provide smoother performance at lower system loads.
Enhanced Sync: Improves frame pacing without the drawbacks of traditional V-Sync.
Image Sharpening: Adds clarity without a major hit to performance.
Radeon Boost: Dynamically lowers resolution during fast motion to maintain smooth FPS.
Be sure to enable Borderless Fullscreen in your game settings for optimal GPU performance and lower system latency.
Match Frame Rate with Monitor Refresh Rate
One of the simplest and most effective ways to improve both performance and gameplay experience is to cap your frame rate to match your monitor’s refresh rate. For instance, if you’re using a 240Hz monitor, setting Fortnite’s max FPS to 240 will reduce unnecessary GPU strain and maintain stable frame pacing.
Benefits of FPS capping:
Lower input latency
Reduced screen tearing
Better thermals and power efficiency
This adjustment ensures your system isn’t overworking when there’s no benefit, which can lead to more stable and predictable gameplay — especially during extended play sessions.
Real-World Performance Comparison
After applying Razer Cortex and configuring system settings, players often see dramatic performance improvements. In test environments using a 2K resolution on DirectX 12, systems previously capped at 50–60 FPS with 15–20 ms response times jumped to 170–180 FPS with a 3–5 ms response time.
When switching to 1080p resolution:
Frame rates typically exceed 200 FPS
Reduced frame time results in smoother aiming and lower delay
Competitive advantage improves due to lower latency and higher visual consistency
These results are reproducible on most modern gaming rigs, regardless of brand, as long as the system has adequate hardware and is properly optimized.
Switch Between Performance Modes for Different Games
One of Razer Cortex’s strongest features is its flexibility. You can easily switch between optimization profiles depending on the type of game you’re playing. For Fortnite, choose high-performance settings to prioritize responsiveness and frame rate. But for visually rich, story-driven games, you might want higher quality visuals.
Using Booster Prime:
Choose your desired game from the list.
Select a profile such as Performance, Balanced, or Quality.
Apply settings instantly by clicking Optimize, then launch the game directly.
This quick toggle capability makes it easy to adapt your system to different gaming needs without having to manually change settings every time.
Final Performance Test: Fortnite in 2K with Performance Mode
To push your system to the limit, test Fortnite under 2K resolution and Performance Mode enabled. Without any optimizations, many systems may average 140–160 FPS. However, with all the Razer Cortex and system tweaks applied:
Frame rates can spike above 400 FPS
Input delay and frame time reduce significantly
Gameplay becomes smoother and more responsive, ideal for fast-paced shooters
Tumblr media
Conclusion: Unlock Peak Fortnite Performance in 2025
Optimizing Fortnite for maximum FPS and minimal input lag doesn’t require expensive upgrades or advanced technical skills. With the help of tools like Razer Cortex and AMD Adrenalin, along with proper system tuning, you can dramatically enhance your gameplay experience.
Key takeaways:
Monitor and free system resources using Task Manager
Use Razer Cortex to automate performance boosts with one click
Apply optimized settings for Fortnite via Booster Prime
Match FPS to your monitor’s refresh rate for smoother visuals
Take advantage of GPU-specific software like AMD Adrenalin
Customize settings for performance or quality based on your gaming style
By following this fortnite optimization guide, you can achieve a consistent fortnite fps boost in 2025 while also reducing input lag and ensuring your system runs at peak performance. These steps are applicable not only to Fortnite but to nearly any competitive game you play. It’s time to make your hardware work smarter — not harder.
🎮 Level 99 Kitchen Conjurer | Crafting epic culinary quests where every dish is a legendary drop. Wielding spatulas and controllers with equal mastery, I’m here to guide you through recipes that give +10 to flavor and +5 to happiness. Join my party as we raid the kitchen and unlock achievement-worthy meals! 🍳✨ #GamingChef #CulinaryQuests
For More, Visit @https://haplogamingcook.com
0 notes
galactissolutions · 1 month ago
Text
What is PowerShell?
PowerShell consists of two parts: a command-line shell and a scripting language. It started out as a framework to automate administrative tasks in Windows. PowerShell has grown into a cross-platform tool that’s used for many kinds of tasks. A command-line shell lacks a graphical interface, where you use a mouse to interact with graphical elements. Instead, you type text commands into a computer…
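To make the two halves concrete, here is a small sketch (the function name and the number 5 are just examples):
# The shell side: a one-off interactive command that lists the five most CPU-hungry processes
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5
# The scripting side: the same idea wrapped in a small reusable function, which could live in a .ps1 file
function Get-TopCpu {
    param([int]$Count = 5)
    Get-Process | Sort-Object CPU -Descending | Select-Object -First $Count
}
Get-TopCpu -Count 3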
0 notes
souhaillaghchimdev · 1 month ago
Text
Automation Programming Basics
Tumblr media
In today’s fast-paced world, automation programming is a vital skill for developers, IT professionals, and even hobbyists. Whether it's automating file management, data scraping, or repetitive tasks, automation saves time, reduces errors, and boosts productivity. This post covers the basics to get you started in automation programming.
What is Automation Programming?
Automation programming involves writing scripts or software that perform tasks without manual intervention. It’s widely used in system administration, web testing, data processing, DevOps, and more.
Benefits of Automation
Efficiency: Complete tasks faster than doing them manually.
Accuracy: Reduce the chances of human error.
Scalability: Automate tasks at scale (e.g., managing hundreds of files or websites).
Consistency: Ensure tasks are done the same way every time.
Popular Languages for Automation
Python: Simple syntax and powerful libraries like `os`, `shutil`, `requests`, `selenium`, and `pandas`.
Bash: Great for system and server-side scripting on Linux/Unix systems.
PowerShell: Ideal for Windows system automation.
JavaScript (Node.js): Used in automating web services, browsers, or file tasks.
Common Automation Use Cases
Renaming and organizing files/folders
Automating backups
Web scraping and data collection
Email and notification automation
Testing web applications
Scheduling repetitive system tasks (cron jobs)
Basic Python Automation Example
Here's a simple script to move files from one folder to another based on file extension:
import os
import shutil

source = 'Downloads'
destination = 'Images'

for file in os.listdir(source):
    if file.endswith('.jpg') or file.endswith('.png'):
        shutil.move(os.path.join(source, file), os.path.join(destination, file))
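And since PowerShell is on the list above, here is a comparable sketch for Windows (the folder paths are just examples, and the destination folder is assumed to exist):
# Move .jpg and .png files from Downloads into an Images folder
$source = "$HOME\Downloads"
$destination = "$HOME\Pictures\Images"
Get-ChildItem -Path $source -File | Where-Object { $_.Extension -in '.jpg', '.png' } | Move-Item -Destination $destination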
Tools That Help with Automation
Task Schedulers: `cron` (Linux/macOS), Task Scheduler (Windows)
Web Automation Tools: Selenium, Puppeteer
CI/CD Tools: GitHub Actions, Jenkins
File Watchers: `watchdog` (Python) for reacting to file system changes
Best Practices
Always test your scripts in a safe environment before production.
Add logs to track script actions and errors.
Use virtual environments to manage dependencies.
Keep your scripts modular and well-documented.
Secure your scripts if they deal with sensitive data or credentials.
Conclusion
Learning automation programming can drastically enhance your workflow and problem-solving skills. Start small, automate daily tasks, and explore advanced tools as you grow. The world of automation is wide open—begin automating today!
0 notes
virtualizationhowto · 2 years ago
Text
PowerShell Kill a Process from the Command Line
PowerShell Kill a Process from the Command Line #homelab #PowerShellProcessManagement #TerminatingProcessesInWindows #UsingTaskkillCommand #PowerShellVsCommandPrompt #AutomateKillingProcesses #PowerShellForceTermination #ManagingRemoteServerProcesses
Killing processes in Windows has long been the easiest way to deal with unresponsive programs that won't close through the usual means, such as clicking the "X" in the top right-hand corner. Generally speaking, using the Windows Task Manager is the first method most use to find and close processes that are not responding. However, using the command line, we can leverage command prompt commands and…
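As a quick taste of what the full post covers, a minimal sketch (notepad and the PID are just examples):
# PowerShell: find a process by name and force-stop it
Get-Process -Name notepad | Stop-Process -Force
# Or stop it by process ID
Stop-Process -Id 1234 -Force
# Classic command prompt equivalent with taskkill
taskkill /IM notepad.exe /F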
Tumblr media
View On WordPress
0 notes
billtj · 2 months ago
Text
5 reasons I use PowerShell to automate boring Windows tasks
0 notes
jcmarchi · 3 months ago
Text
Shielding Prompts from LLM Data Leaks
New Post has been published on https://thedigitalinsider.com/shielding-prompts-from-llm-data-leaks/
Opinion: An interesting IBM NeurIPS 2024 submission from late 2024 resurfaced on Arxiv last week. It proposes a system that can automatically intervene to protect users from submitting personal or sensitive information into a message when they are having a conversation with a Large Language Model (LLM) such as ChatGPT.
Mock-up examples used in a user study to determine the ways that people would prefer to interact with a prompt-intervention service. Source: https://arxiv.org/pdf/2502.18509
The mock-ups shown above were employed by the IBM researchers in a study to test potential user friction to this kind of ‘interference’.
Though scant details are given about the GUI implementation, we can assume that such functionality could either be incorporated into a browser plugin communicating with a local ‘firewall’ LLM framework; or that an application could be created that can hook directly into (for instance) the OpenAI API, effectively recreating OpenAI’s own downloadable standalone program for ChatGPT, but with extra safeguards.
That said, ChatGPT itself automatically self-censors responses to prompts that it perceives to contain critical information, such as banking details:
ChatGPT refuses to engage with prompts that contain perceived critical security information, such as bank details (the details in the prompt above are fictional and non-functional). Source: https://chatgpt.com/
However, ChatGPT is much more tolerant in regard to different types of personal information – even if disseminating such information in any way might not be in the user’s best interests (in this case perhaps for various reasons related to work and disclosure):
The example above is fictional, but ChatGPT does not hesitate to engage the user in a conversation on a sensitive subject that constitutes a potential reputational or earnings risk.
In the above case, it might have been better to write: ‘What is the significance of a leukemia diagnosis on a person’s ability to write and on their mobility?’
The IBM project identifies and reinterprets such requests from a ‘personal’ to a ‘generic’ stance.
Schema for the IBM system, which uses local LLMs or NLP-based heuristics to identify sensitive material in potential prompts.
This assumes that material gathered by online LLMs, in this nascent stage of the public’s enthusiastic adoption of AI chat, will never feed through either to subsequent models or to later advertising frameworks that might exploit user-based search queries to provide potential targeted advertising.
Though no such system or arrangement is known to exist now, neither was such functionality yet available at the dawn of internet adoption in the early 1990s; since then, cross-domain sharing of information to feed personalized advertising has led to diverse scandals, as well as paranoia.
Therefore history suggests that it would be better to sanitize LLM prompt inputs now, before such data accrues at volume, and before our LLM-based submissions end up in permanent cyclic databases and/or models, or other information-based structures and schemas.
Remember Me?
One factor weighing against the use of ‘generic’ or sanitized LLM prompts is that, frankly, the facility to customize an expensive API-only LLM such as ChatGPT is quite compelling, at least at the current state of the art – but this can involve the long-term exposure of private information.
I frequently ask ChatGPT to help me formulate Windows PowerShell scripts and BAT files to automate processes, as well as on other technical matters. To this end, I find it useful that the system permanently memorize details about the hardware that I have available; my existing technical skill competencies (or lack thereof); and various other environmental factors and custom rules:
ChatGPT allows a user to develop a ‘cache’ of memories that will be applied when the system considers responses to future prompts.
Inevitably, this keeps information about me stored on external servers, subject to terms and conditions that may evolve over time, without any guarantee that OpenAI (though it could be any other major LLM provider) will respect the terms they set out.
In general, however, the capacity to build a cache of memories in ChatGPT is most useful because of the limited attention window of LLMs; without long-term (personalized) embeddings, the user feels, frustratingly, that they are conversing with an entity suffering from anterograde amnesia.
It is difficult to say whether newer models will eventually become adequately performant to provide useful responses without the need to cache memories, or to create custom GPTs that are stored online.
Temporary Amnesia
Though one can make ChatGPT conversations ‘temporary’, it is useful to have the Chat history as a reference that can be distilled, when time allows, into a more coherent local record, perhaps on a note-taking platform; but in any case we cannot know exactly what happens to these ‘discarded’ chats (though OpenAI states they will not be used for training, it does not state that they are destroyed), based on the ChatGPT infrastructure. All we know is that chats no longer appear in our history when ‘Temporary chats’ is turned on in ChatGPT.
Various recent controversies indicate that API-based providers such as OpenAI should not necessarily be left in charge of protecting the user’s privacy, including the discovery of emergent memorization, signifying that larger LLMs are more likely to memorize some training examples in full, and increasing the risk of disclosure of user-specific data –  among other public incidents that have persuaded a multitude of big-name companies, such as Samsung, to ban LLMs for internal company use.
Think Different
This tension between the extreme utility and the manifest potential risk of LLMs will need some inventive solutions – and the IBM proposal seems to be an interesting basic template in this line.
Three IBM-based reformulations that balance utility against data privacy. In the lowest (pink) band, we see a prompt that is beyond the system’s ability to sanitize in a meaningful way.
The IBM approach intercepts outgoing packets to an LLM at the network level, and rewrites them as necessary before the original can be submitted. The rather more elaborate GUI integrations seen at the start of the article are only illustrative of where such an approach could go, if developed.
Of course, without sufficient agency the user may not understand that they are getting a response to a slightly-altered reformulation of their original submission. This lack of transparency is equivalent to an operating system’s firewall blocking access to a website or service without informing the user, who may then erroneously seek out other causes for the problem.
Prompts as Security Liabilities
The prospect of ‘prompt intervention’ analogizes well to Windows OS security, which has evolved from a patchwork of (optionally installed) commercial products in the 1990s to a non-optional and rigidly-enforced suite of network defense tools that come as standard with a Windows installation, and which require some effort to turn off or de-intensify.
If prompt sanitization evolves as network firewalls did over the past 30 years, the IBM paper’s proposal could serve as a blueprint for the future: deploying a fully local LLM on the user’s machine to filter outgoing prompts directed at known LLM APIs. This system would naturally need to integrate GUI frameworks and notifications, giving users control – unless administrative policies override it, as often occurs in business environments.
The researchers conducted an analysis of an open-source version of the ShareGPT dataset to understand how often contextual privacy is violated in real-world scenarios.
Llama-3.1-405B-Instruct was employed as a ‘judge’ model to detect violations of contextual integrity. From a large set of conversations, a subset of single-turn conversations were analyzed based on length. The judge model then assessed the context, sensitive information, and necessity for task completion, leading to the identification of conversations containing potential contextual integrity violations.
A smaller subset of these conversations, which demonstrated definitive contextual privacy violations, were analyzed further.
The framework itself was implemented using models that are smaller than typical chat agents such as ChatGPT, to enable local deployment via Ollama.
Schema for the prompt intervention system.
The three LLMs evaluated were Mixtral-8x7B-Instruct-v0.1; Llama-3.1-8B-Instruct; and DeepSeek-R1-Distill-Llama-8B.
User prompts are processed by the framework in three stages: context identification; sensitive information classification; and reformulation.
Two approaches were implemented for sensitive information classification: dynamic and structured. Dynamic classification determines the essential details based on their use within a specific conversation; structured classification allows for the specification of a pre-defined list of sensitive attributes that are always considered non-essential. The model reformulates the prompt if it detects non-essential sensitive details, either removing or rewording them to minimize privacy risks while maintaining usability.
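As a very rough sketch of how such a pipeline could be wired together on the user's side (this is my own illustration, not IBM's code; the attribute list, regex patterns, and example prompt are all invented):

```python
import re

# Structured classification: attribute types that are always treated as non-essential.
# The regexes are deliberately simple placeholders, not production-grade detectors.
SENSITIVE_PATTERNS = {
    'email': re.compile(r'\b[\w.+-]+@[\w-]+\.[\w.]+\b'),
    'phone': re.compile(r'\b\+?\d[\d\s().-]{7,}\d\b'),
    'iban':  re.compile(r'\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b'),
}

def classify(prompt):
    """Return the sensitive attribute types detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def reformulate(prompt):
    """Strip non-essential sensitive details before the prompt leaves the machine."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub('[{} removed]'.format(name), prompt)
    # Dynamic classification would pass the remaining text to a small local model
    # (for instance one served by Ollama) and ask whether the detected details are
    # essential to the task; that call is omitted here because it depends on local setup.
    return prompt

outgoing = "My IBAN is GB29NWBK60161331926819, please draft a complaint letter to my bank."
if classify(outgoing):
    outgoing = reformulate(outgoing)
print(outgoing)  # the IBAN is replaced before anything is sent to the remote LLM
```

In a real deployment this logic would sit at the network level, as the paper describes, rewriting the outgoing prompt before it reaches the LLM API rather than relying on the user to run it by hand.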
Home Rules
Though structured classification as a concept is not well-illustrated in the IBM paper, it is most akin to the ‘Private Data Definitions’ method in the Private Prompts initiative, which provides a downloadable standalone program that can rewrite prompts – albeit without the ability to directly intervene at the network level, as the IBM approach does (instead the user must copy and paste the modified prompts).
The Private Prompts executable allows a list of alternate substitutions for user-input text.
In the above image, we can see that the Private Prompts user is able to program automated substitutions for instances of sensitive information. In both cases, for Private Prompts and the IBM method, it seems unlikely that a user with enough presence-of-mind and personal insight to curate such a list would actually need this product  – though it could be built up over time as incidents accrue.
In an administrator role, structured classification could work as an imposed firewall or censor-net for employees; and in a home network it could, with some difficult adjustments, become a domestic network filter for all network users; but ultimately, this method is arguably redundant, since a user who could set this up properly could also self-censor effectively in the first place.
ChatGPT’s Opinion
Since ChatGPT recently launched its deep research tool for paid users, I used this facility to ask ChatGPT to review related literature and give me a ‘cynical’ take on IBM’s paper. I received the most defensive and derisive response the system has ever given when asked to evaluate or parse a new publication:
ChatGPT-4o has a low opinion of the IBM project.
‘If users don’t trust OpenAI, Google, or Anthropic to handle their data responsibly,’ ChatGPT posits, ‘why would they trust a third-party software layer sitting between them and the AI? The intermediary itself becomes a new point of failure—potentially logging, mishandling, or even leaking data before it ever reaches the LLM. It solves nothing if it just creates another entity to exploit user data.’
This objection seems self-serving and non-applicable, given the storied history of open source projects that genuinely benefit end-users through the concerted long-term efforts of community developers and contributors; and given the growing potential of mobile devices to run – and even train – lightweight LLMs. Certainly in this instance, the use case is not terribly demanding.
Next, ChatGPT confidently misses the point of having a lightweight LLM provide oversight of input towards a commercial LLM that could not possibly be run on a local machine (because the LLM is too big, and allowing local access is too risky for the company that makes it):
‘Effective privacy filtering requires understanding the input before deciding what to strip out. If an intermediary is smart enough to recognize sensitive data (names, addresses, financial details), then it’s already processing and analyzing that data—meaning it has to store or examine the very thing it’s supposed to erase.
‘This introduces a paradox: a) A dumb filter (basic regex substitution) will be ineffective and miss subtle privacy risks. b) A smart filter (LLM-powered rewriter) has to process the full input, making it just as dangerous as the original LLM.
‘If you trust the intermediary enough to read and rewrite your inputs, why not just trust the LLM itself?’
The answer to the final question here is that the local LLM is intended to be entirely on the side of the user, and inevitably open source, with minimal or zero need for network access. An equivalent commercial version, however well-intentioned at the outset, would eventually be vulnerable to corporate shifts and changes to the terms of service, whereas a suitable open source license would prevent this kind of ‘inevitable corruption’.
ChatGPT further argued that the IBM proposal ‘breaks user intent’, since it could reinterpret a prompt into an alternative that affects its utility. However, this is a much broader problem in prompt sanitization, and not specific to this particular use case.
In closing (ignoring its suggestion to use local LLMs ‘instead’, which is exactly what the IBM paper actually proposes), ChatGPT opined that the IBM method represents a barrier to adoption due to the ‘user friction’ of implementing warning and editing methods into a chat.
Here, ChatGPT may be right; but if significant pressure comes to bear because of further public incidents, or if profits in one geographical zone are threatened by growing regulation (and the company refuses to just abandon the affected region entirely), the history of consumer tech suggests that safeguards will eventually no longer be optional anyway.
Conclusion
We can’t realistically expect OpenAI to ever implement safeguards of the type that are proposed in the IBM paper, and in the central concept behind it; at least not effectively.
And certainly not globally; just as Apple blocks certain iPhone features in Europe, and LinkedIn has different rules for exploiting its users’ data in different countries, it’s reasonable to suggest that any AI company will default to the most profitable terms and conditions that are tolerable to any particular nation in which it operates –  in each case, at the expense of the user’s right to data-privacy, as necessary.
First published Thursday, February 27, 2025
Updated Thursday, February 27, 2025 15:47:11 because of incorrect Apple-related link – MA
0 notes
Text
TOOL TUTORIAL 4
Screencapping Frames with FFmpeg
FRAME BY FRAME SCREENCAPPING METHODS
Tool type: Command Line tool
Operating systems: Mac, Windows, and Linux
Difficulty: Even if it's your first ever time opening your command line and trying to type anything, I think you can do this! I believe in you! :D
Input: Video files (any video file format).
This tutorial is largely based on instructions provided by u/ChemicalOle as part of their GIMP GIFS 101 tutorial for r/HighQualityGifs.
____________________________________
WINDOWS USERS: CHANGE ALL FORWARD SLASHES IN THIS SCRIPT TO BACK SLASHES!!!
____________________________________
Tutorials I've made so far have covered gif-making methods where video footage is transformed straight to .gif and there is no need to screencap frames. When making gifs in Photoshop (or if you want to make gifs using GIMP), rather than input video straight into the program, you often load a stack of screencaps into the program—one screencap representing every frame. After all, gifs are nothing more than a series of images all stacked together playing one by one in a loop. So let's learn a super fast, automated way of screencapping every frame in a video clip. Even if you've never opened your OS's command line interface in your life, I think you can do this!
1. Install FFmpeg
I recommend you install FFmpeg via Homebrew if you're on Mac or Linux by pasting this into your terminal (if/once Homebrew is installed):
brew install ffmpeg
Windows users or other users who don't want Homebrew can follow install instructions on the FFmpeg website.
2. Make a Folder
Make a folder on your desktop called FRAMES and place your video source file in that folder.
I’m going to rename my source video video.mp4 for this tutorial, but you can also just change “video.mp4” to the name of your file in the script below—this includes changing the video file extension as needed. I don’t think there’s a video file type FFmpeg cannot work with.
3. Determine when the moment you want to gif begins and ends
I’m going to gif a short moment from Season 1 Episode 7 of Supernatural as an example.
According to my video player (I'm using IINA) the exchange I want to gif starts at 8:02.565 and ends at 08:10.156. While you can trim precisely by the millisecond, I’m going to assume your video player doesn’t show milliseconds since many don't.
I'm going to keep the start of my clip at 08:02 but round up the end timestamp by 1 second to make sure I get all the milliseconds in that second that I want to include. In my case: I need to start capturing at precisely 08:02 and end capturing at 08:11, which is 9 seconds later.
4. Use this script template
You want to use the following template script u/ChemicalOle provided (replacing each # with a number you need):
cd YourFilePathGoesHere
ffmpeg -i video.mp4 -r ## -t # -ss ##:## -f image2 frame_%4d.png
video.mp4 is where your video file name goes (including changing the video extension to .mkv, .mpg, .mov, etc. if you need to).
-ss ##:## specifies when to start capturing. (I need to put 08:02 here)
-t # specifies how many seconds to spend capturing. (I need to put 9 here)
-r ## tells FFmpeg how many times to capture a .png every second (i.e. the frames per second or FPS). u/ChemicalOle recommends you set this at 30 for a 30 FPS capture.
-f image2 frame_%4d.png tells FFmpeg to output .png images with the name frame_0001.png, frame_0002.png, frame_0003.png, etc.
In my case, my script will look like this:
cd ~/Desktop/FRAMES
ffmpeg -i video.mp4 -r 30 -t 9 -ss 08:02 -f image2 frame_%4d.png
The top line starting with cd just tells my terminal to change directories to where my video file is located, and where to dump the frames to be generated (in the FRAMES folder with my video file). (Windows users: change forward slashes to back slashes and that cd command will move you to your FRAMES folder too).
When you input this command into your system shell (Terminal on Mac, PowerShell on Windows) and press Enter, you might feel like it’s stalling at first or not working because
"Press [q] to stop, [?] for help"
will be printed on the screen. It is working though! Just leave it alone for a minute and it'll start working. When the cursor prompt reappears, there will be a bunch of PNGs in your FRAMES folder, organized in sequence by number.
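(A side note that is my own addition rather than part of the original instructions: the slow start happens because, with -ss placed after -i, FFmpeg decodes the video from the very beginning and discards frames until it reaches your timestamp. If the wait bothers you, moving -ss and -t in front of -i tells FFmpeg to seek first and only decode the part you want, e.g.:

ffmpeg -ss 08:02 -t 9 -i video.mp4 -r 30 -f image2 frame_%4d.png

The captured frames should come out essentially the same; only how FFmpeg reaches the start point changes.)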
8 notes · View notes
monsterfenrick · 3 months ago
Text
MRDX on Steam: Automated Backups!
For anyone that is interested in setting up Automated Backup Scripts for MR1&2DX on Steam (Windows or Linux):
I've set up a guide page with explanation & how to set it up. Never worry about corrupted saves again!
3 notes · View notes