#fslint

How To Find Duplicate Files And Clean Them On Linux With FSlint
Delete duplicate files in Linux quickly and easily
How many times have we ended up with several backup copies of photos and videos, version 1, version 2, and so on? Especially with images, this problem is pressing. The photos from the last trip to the Philippines, the photos from last year's wedding where we had such a good time… We put them in several folders, reorganize them several times… copy and paste them to a…
Amazing Tools To Find Duplicate Files And Remove Them In Linux
Find duplicate files and remove them using amazing tools. In this article I am going to list amazing tools to find duplicate files in Linux. Manually finding and removing duplicates is a tedious job and more painful than ever. To make it easy, I have found a few amazing tools for Linux.
fdupes – finds duplicate files in a given set of directories
fdupes is a command-line utility to find duplicates and…
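For instance, here is a hedged sketch of basic fdupes usage; ~/Pictures is just a placeholder directory:

fdupes ~/Pictures       # list duplicates in one directory
fdupes -r ~/Pictures    # recurse into subdirectories
fdupes -rS ~/Pictures   # also show the size of each duplicate set
fdupes -rd ~/Pictures   # prompt interactively for which copy to keep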
fslint is a fucking lifesaver i have no fucking clue how many duplicate files i have and im very excited to be rid of exactly all of them without accidentally losing anything
Cleaning up on Linux: Czkawka as an FSlint alternative
Czkawka 1.0.0 has been released as an alternative to FSlint, written in Rust. The tool tracks down duplicate files and empty folders on Linux. Read more www.heise.de/news/…... www.digital-dynasty.net/de/teamblogs/…

http://www.digital-dynasty.net/de/teamblogs/unter-linux-aufraumen-czkawka-als-fslint-alternative
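For reference, a hedged example of how Czkawka's command-line front end is invoked; the czkawka_cli binary and its flags are assumptions based on the project's documentation, and the path is a placeholder:

czkawka_cli dup --directories /home/user/Pictures   # scan the given directory for duplicate files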
Find and Delete Duplicate Files in Linux
4 Useful Tools to Find and Delete Duplicate Files in Linux
Rdfind – finds duplicate files in Linux. Rdfind comes from "redundant data find" (see the sketch after this list). …
Fdupes – scans for duplicate files in Linux. Fdupes is another program that allows you to identify duplicate files on your system. …
dupeGuru – finds duplicate files in Linux. …
FSlint – duplicate file finder for Linux.
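As a sketch of typical rdfind usage under stated assumptions (Debian/Ubuntu package name, ~/Pictures as a placeholder directory):

sudo apt install rdfind
rdfind -dryrun true ~/Pictures             # report what would be done, change nothing
rdfind -deleteduplicates true ~/Pictures   # keep the oldest copy, delete the rest
cat results.txt                            # rdfind writes its report to results.txt in the current directory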
#FreeSoftware Desktop Personal Selection (dependencies implied): a draft list
gnu+linux-distro
xorg
xinit / lightdm
icewm+shadesofgrey-theme / openbox+tint2+phwmon+gsimplecal
wicd / nm-applet
lxterminal
xterm+alsamixer
htop
alsa/qjackctl/pulseaudio
pcmanfm
sshfs
leafpad
medit
firefox
pidgin
jami / qtox
amule / qbittorrent
viewnior
zathura / evince
scribus
abiword
gnumeric
gimp
inkscape
mplayer / smplayer / mpv
kdenlive
audacious
audacity
qsynth
qtractor / ardour
ffmpeg / handbrake
musescore
xsane / skanlite
fdupes / fslint-gui
youtube-dl
gparted
virt-manager
...
How to identify files of the same content on Linux

File copies can take up a lot of disk space and cause confusion when you want to update one of them. Here are six commands to help you identify these files.

First, use the diff command to compare files. Perhaps the easiest way to compare two files is to use the diff command. The output shows the differences between the two files; the < and > symbols indicate whether an extra line is in the first (<) or second (>) file given as a parameter. In this example, the extra line is in backup.html. If diff shows no output, the two files are identical. The only downside to diff is that it can only compare two files at a time, and you must identify the files to be compared yourself. Some of the commands covered below can find duplicate files for you.

Second, use checksums. The cksum (checksum) command computes a checksum for a file. A checksum reduces the file's contents to a very long number (e.g., 2819078353 228029). While not absolutely unique, files with different contents are extremely unlikely to produce the same checksum.

Third, use the find command. Although find has no option for locating duplicate files, it can be used to search for files by name or type and run the cksum command on each one.

Fourth, use the fslint command. The fslint command can be used to specifically find duplicate files. Note that we give it a starting location; if it has to run through a large number of files, the command may take a considerable amount of time to complete. Besides listing duplicate files, it finds other issues, such as empty directories and bad IDs. You may have to install fslint on your system, and you may also need to add it to your search path.

Fifth, use the rdfind command. The rdfind command also looks for duplicate (same content) files. The name stands for "redundant data find". It determines which files are the originals based on file dates, which is helpful if you choose to remove duplicates, because it will delete the newer copies. You can also run the command in "dryrun" mode (that is, only report the changes that would otherwise be made). rdfind also provides options such as ignoring empty files (-ignoreempty) and following symlinks (-followsymlinks); check the man page for an explanation. Note that rdfind offers the option to remove duplicate files with the -deleteduplicates true setting. You may have to install rdfind on your system; trying it out to get familiar with how it works is a good idea.

Sixth, use the fdupes command. The fdupes command also makes it easy to identify duplicate files and provides a number of useful options, like -r for recursion. In its simplest form, it groups duplicate files together. Keep in mind that many duplicate files are important (a user's .bashrc and .profile files, for example) and clearly should not be removed. Use fdupes -h or read the man page for details on its many options. fdupes is another command you will want to install and use for a while to get familiar with its options.

Linux systems provide a number of tools for locating and deleting duplicate files, along with options for where to run your search and what to do with duplicates once they are found.
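A hedged sketch of the six approaches; index.html, backup.html, and ~/Documents are placeholder names, and the fslint path assumes the package's usual install location under /usr/share/fslint:

diff index.html backup.html                    # no output means the files are identical
cksum index.html backup.html                   # matching checksums suggest identical content
find . -name "*.html" -exec cksum {} \;        # checksum every .html file under the current directory
/usr/share/fslint/fslint/findup ~/Documents    # fslint's duplicate finder; not on PATH by default
rdfind -dryrun true ~/Documents                # report duplicates without changing anything
fdupes -r ~/Documents                          # list duplicate groups, recursing into subdirectories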
Maintaining a clean *nix environment
Intro
I admit it. I'm a fan of the one-bucket-principle and a really efficient search.
That might appear messy. My emails have two folders: Inbox and Archive (and a nice search function in the UI). My own life's wiki has only one folder (and I rip/grep the shit out of it). But I am not alone. Look at the Linux Foundation's Filesystem Hierarchy Standard. This is a mess, too. Some highlights:
Directory   Description
/bin        Essential command binaries that need to be available in single-user mode; for all users, e.g., cat, ls, cp.
/sbin       Essential system binaries, e.g., fsck, init, route.
/usr/bin    Non-essential command binaries (not needed in single-user mode); for all users.
/usr/sbin   Non-essential system binaries, e.g., daemons for various network services.
What? 'Essential' binaries and 'Non-essential' binaries? How the hell do you differentiate between 'command' and 'system'?
And don't get me started on that /usr directory; Rodrigo Silva did that already:
Back in the 70's, in Unix (yes, Unix, way before Linux), floppies had little space (no HD, remember?), and at a given point the system binaries grew too much in number and size to a point they would not fit a single disk, and developers had to split them across several media and thus create new mount points for them. /bin filesystem was full, so they installed the new binaries at... /usr/bin. And /usr was, at that time, their... user directory!
Oh yeah. "Back in the 70's"...
Part 1: Your home
So. As a Linuxgirl/boy, what shall we do if we start with a mess and want to have some structure now?
First, there is the default structure one is presented with on Debian-based/Red Hat distros:
.
├── Desktop
├── Documents
├── Downloads
├── Music
├── Pictures
├── Templates
└── Videos
For me it is still like this, though I added a src folder for everything I check out from any git repo and a dotfile directory.
A few years ago I wrote drhousemeister to clean up the insides of my Downloads folder automatically, which helped a little bit. Note to self: rewrite that with respect to the local directory structure (e.g., put documents directly into the ~/Documents folder and so on); a rough sketch follows below.
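Since that note-to-self is basically an algorithm, here is a minimal shell sketch of it; the extension-to-folder mapping is illustrative and not what drhousemeister actually does:

#!/bin/sh
# Move leftover downloads into the matching standard home folders.
# -n never overwrites an existing file; a glob with no matches just
# produces a suppressed error from mv.
cd ~/Downloads || exit 1
mv -n -- *.pdf *.odt *.txt ~/Documents/ 2>/dev/null
mv -n -- *.jpg *.png ~/Pictures/ 2>/dev/null
mv -n -- *.mp3 *.flac ~/Music/ 2>/dev/null
mv -n -- *.mp4 *.mkv ~/Videos/ 2>/dev/null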
So today we tackle the one-bucket-to-rule-them-all problem a little bit. Let's give it a go and start with the biggest mess of all: pictures. There are nice solutions for this, e.g., Shotwell. So uninstall all the crap you installed as an image viewer, have just one tool for the job, and let it do the job (speaking of Gnome's Shotwell... didn't I want to give Fedora and Gnome a try recently?).
After installing Shotwell, import all the pictures and let Shotwell just do its job.
I'd recommend doing this with any other data type too. There are plenty of programs which can handle this for you.
But what about duplicates and the niches? Eliminate the trash throughout your filesystem, in the corners which aren't that visible. There are two tools that can help: BleachBit and fslint. BleachBit is basically Linux's CCleaner, while fslint identifies duplicates and helps with finding and ordering stuff by size.
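Assuming Debian/Ubuntu package names, getting both tools is a one-liner, and fslint's duplicate finder can also be run from the shell:

sudo apt install bleachbit fslint
fslint-gui                          # GUI: duplicates, empty directories, bad names, and more
/usr/share/fslint/fslint/findup ~   # CLI duplicate scan of the whole home directory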
For many users this is probably enough already. But not when you code and/or tend to be playful and try new stuff from time to time.
Part 2: Staying Experimental
To stay experimental while coding without messing up the 'main system', one solution is two machines: one that gets reinstalled occasionally, and the precious main one.
Luckily, there are solutions for individual environments, like sdkman for Java or virtualenv for Python. But what about working not just with different environments/versions of one language, but with completely different environments?
Meet Vagrant and Docker
Both do similar things: give us parts of an operating system to play with. Let's compare them:
Feature                  Docker               Vagrant
Virtualization type      Virtual environment  Virtual machine
OS support               *nix                 *nix, Windows
Startup time             usually seconds      minutes
Isolation                Partial              Full
Weight of virt. system   Light                Heavy
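To make the comparison concrete, here is roughly what "hello world" looks like on each side; the ubuntu/focal64 box name is an assumption, any box from Vagrant Cloud works:

vagrant init ubuntu/focal64   # Vagrant: writes a Vagrantfile into the current directory
vagrant up                    # downloads the box and boots a full VM (minutes)
vagrant ssh                   # shell inside the VM

docker run -it ubuntu bash    # Docker: shell inside an Ubuntu container (seconds)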
Docker has been all the rage since 2015. Let's give it a try:
Installation:
sudo apt install docker-ce
Or use Docker's fabulous documentation. Then:
User stuff:
sudo groupadd docker
sudo usermod -aG docker $USER
Logout. Login. And run:
docker run hello-world
If everything works according to plan you are greeted with:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
d1725b59e92d: Pull complete
Digest: sha256:0add3ace90ecb4adbf7777e9aacf18357296e799f81cabc9fde470971e499788
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/
So, there you go. Try the more 'ambitious' suggestion from docker's hello-world and enjoy your new Lab(s). :)
Edit: Some might even say that Kubernetes is a surprisingly affordable platform for personal projects... o_O
Sources:
https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
https://www.linux.com/learn/how-organize-your-linux-file-system-clutter-free-folders
https://askubuntu.com/questions/130186/what-is-the-rationale-for-the-usr-directory
https://stackoverflow.com/questions/16647069/should-i-use-vagrant-or-docker-for-creating-an-isolated-environment
The 6 Best CCleaner Alternatives for Ubuntu
A common category of software found on many Windows-based computers is system optimizers and cleaners. One of these programs, CCleaner, is a powerful and popular Windows cleaner that scans for and deletes unwanted files, browser cache and history, frees up disk space, and protects your privacy.
Unfortunately, CCleaner is not available for Linux systems…
fslint for sorting things out
Detect duplicate files on internal and external hard drives. Fslint does this very well, and it does plenty of other things besides. There is even fslint-gui.