Seems like their malware relies on a couple of things:
- intended target is KDE and GNOME
- privilege escalation from userland via LD_PRELOAD hooking of open, stat, readdir, and access in any other program the user executes (see below)
- persistence through display manager config for KDE
- persistence through desktop autostart files for GNOME
- fallback persistence through .bashrc, profile or profile.sh in /etc
- installs trojanized ssh client version
- installs a JSP webshell
- sideloads rootkit components as a fake libselinux.so plus a .ko kernel module; the .so is probably the userland helper for talking to the kernel part
Despite the snarky comments in here, this malware is actually quite sophisticated.
If you don't agree, I challenge you to measure how long it takes you to find all .so files on your system that are loaded right now and have been modified since your package manager installed them.
My point being that there is no EDR on Linux that catches this (apart from ours, which is WIP), because all existing tools just check for Windows malware hashes (not even symbols), as they're intended for Linux file servers.
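For what it's worth, the on-disk persistence spots in the list above are cheap to sweep by hand. A rough sketch, with the usual-suspect paths assumed rather than taken from the article:

```shell
# Rough sweep of common on-disk LD_PRELOAD persistence spots. Paths are the
# usual suspects on mainstream distros, not an exhaustive list.

# 1. System-wide preload file abused by userland rootkits:
[ -s /etc/ld.so.preload ] && echo "WARN: /etc/ld.so.preload is non-empty"

# 2. LD_PRELOAD exports smuggled into shell startup files:
grep -H 'LD_PRELOAD' \
    /etc/profile /etc/profile.d/*.sh /etc/bash.bashrc \
    "$HOME/.bashrc" "$HOME/.profile" 2>/dev/null

# 3. Desktop autostart entries (the GNOME/KDE persistence from the list):
ls -l /etc/xdg/autostart "$HOME/.config/autostart" 2>/dev/null
```

None of this helps once a kernel module is filtering your reads, of course; it only covers the userland fallback stage.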
What do you mean by "intended target is KDE and GNOME"? It seems to me more like they're trying to hide the binary by using X11, KDE and GNOME file paths, but they're not exploiting KDE or GNOME desktops.
The article doesn't mention how the rootkit ended up on the machines in question, it seems to indicate a vulnerable web application. I wish I knew which one.
Please learn the difference between persistence and privilege escalation on the one hand and "hiding a binary" on the other.
They are certainly not trying to hide, and it has nothing to do with the initial exploit surface. You are mixing things up because you don't seem to be aware of how multi-stage exploits work.
The article also mentioned that tomcat was targeted, but it didn't mention whether it was a zero day or a known vulnerability (like log4j vulnerabilities, for example).
- The initial access stage of that malware was a Tomcat exploit
- The privilege escalation stage was done via both userland and a kernel module. The userland method used glibc, hijacking the open() call of any process executed afterwards. If you then enter your sudo password at any time, for anything, in any bash shell, escalation has succeeded and the kernel module can be installed (that's what the .bashrc and profile entries were for).
- The persistence stage was done with a kernel mod that can now pretty much do whatever it wants.
Edit: After looking a little further, the initial access exploit was very likely CVE-2024-52316 [1], a Tomcat bug specific to the Jakarta Authentication system, given the geographic targets of the described malware campaign.
So when you say "intended target is KDE and Gnome" you mean target for persistence on the system?
Sorry but not all of us have formal education in this field, I'm just trying to understand if you're saying that KDE and Gnome systems are vulnerable, or Tomcat web servers.
KDE and Gnome desktop files are just used as launchers, they're not the actual thing being exploited. Persistent malware pretty much always separates the vector from the payload: the exploit just gets the door open, then downloads the actual malware from the C&C servers. Many vectors, one malware package.
> Although we lack concrete evidence regarding the initial access vector, the presence of multiple webshells (as shown in Table 1 and described in the Webshells section) and the tactics, techniques, and procedures (TTPs) used by the Gelsemium APT group in recent years, we conclude with medium confidence that the attackers exploited an unknown web application vulnerability to gain server access.
Challenge accepted. "All files loaded" is probably not what you want, however. It is much easier to just ask rpm directly which files under your library directories have been modified, and treat any files outside known library directories as suspicious.
Anyway, this is how you check which open files match ".so" and see if they are modified since installation:
lsof | grep -o "/[^ ]*\.so[^ ]*" | while read -r path ; do
  if ! pkg=$(rpm -qf "$path" 2>/dev/null) ; then
    echo "$path does not belong to a package"
  else
    rpm -V "$pkg" | grep -F "$path"
  fi
done
Here's a .deb version; only running debsums once per package name. Errors will go to stderr:
sudo lsof | grep -o '/[^ ]*\.so[^ ]*' | awk '!seen[$0]++' | while read -r path; do
if p=$(dpkg -S "$path");then
cut -f1 -d: <<<"$p"
fi
done | awk '!seen[$0]++' | xargs -n1 debsums -s
(But is there a hard rule that says a loaded library has to be named .so, or show up as .so for lsof? I'm sure there's ways to avoid the above detection.)
This didn't quite work for me - dpkg complained about "no path found matching..." for every library. I replaced the "$path" in the dpkg command with "${path##*/}" to just match the library name.
Further inspection showed my package manager installs libraries to /lib (a symlink to /usr/lib/), while lsof reports the files under /usr/lib.
Other than that it seems to work - i.e. no alarming output. :-)
> But is there a hard rule that says a loaded library has to be named .so, or show up as .so for lsof?
No, and yes(-ish).
Filenames are just a convention and not necessarily enforced.
lsof will really list every file the kernel thinks has a handle held by an active process, but depending on your threat model I think you could get around this. For example you could copy the malicious code into memory and close the file, or use your preload to modify the behaviour of lsof itself (or debsums).
Anyway debsums is a great tool. I'd have used a command similar to yours, though maybe run debsums first, use `file` to filter dynamic libraries and then check which of those have been recently accessed from disk.
Didn't check the code, but lsof seems a good approach.
How does that work with the various namespaces? From root namespace you should see everything. But in a mount namespace you could bind mount under a different name. How would that confuse things? With a SELinux module even root cannot do everything. If /proc is mounted privately does it change anything?
Not sure, just starting to think. Linux has become incredibly complex since the old days...
Edit: Orc routinely loads executable sections not belonging to any package.
Besides namespaces, if one were malicious, one could also hide loaded libraries from lsof another way: open the library file, memcpy() the contents into RAM somewhere, mprotect(...,PROT_EXEC) that contents and then close the library file. You'll have to do your own linking, but then no open file will appear except for a very brief moment.
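A rough way to hunt for that last trick is to scan /proc/&lt;pid&gt;/maps for executable regions with no backing file. JIT compilers (browsers, the JVM, Orc as mentioned above) do this legitimately, so treat hits as leads, not proof. A sketch (run as root to see other users' processes):

```shell
# List processes that have anonymous executable mappings: memory that is
# executable but not backed by any file on disk. In a maps line, field 2 is
# the permission string and field 6 (the pathname) is empty for anonymous
# mappings; named regions like [heap] or library paths fill field 6.
for maps in /proc/[0-9]*/maps; do
    pid=${maps#/proc/}; pid=${pid%/maps}
    awk -v pid="$pid" '$2 ~ /x/ && $6 == "" { print pid; exit }' \
        "$maps" 2>/dev/null
done | sort -un
```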
Securing Linux should probably not be approached the exact same way as securing Windows. Case in point, it is true that I can't find all of the .so files that have been modified since my package manager installed them: this is because it didn't install them, because I am using NixOS. They just exist in the Nix store, which is mounted read-only, and almost all modules are loaded from an absolute runpath.
NixOS with impermanence isn't exactly a security solution, but it does have some nice properties as a base for a secure system. It's also far from the only, or even the most notable, immutable Linux system, with ABRoot and rpm-ostree seeing more usage. There will probably still be some use for endpoint security, but I suspect it will be greatly diminished if immutable image-based deployments with a secure boot chain are perfected on the Linux desktop, for similar reasons to why it isn't as important on ChromeOS.
I also understand that this is not a silver bullet, as it's still possible on desktop Linux to persist malware through the bashrc/etc. like this does. Personally I blank out my home directory shell profile files and keep them root-owned to try to prevent them from being used for anything. It seems like more work is needed in the Linux ecosystem to figure out what to do about attack vectors like these, though I think a proactive approach would be preferred. (Scanning for infected files is always going to be useful, but obviously it's better if you stop it from happening in the first place.) In some ways the Linux desktop has gotten a bit more secure but many things are still not sandboxed sufficiently, which is a shame considering all of the great sandboxing technology available on Linux.
I already replied somewhat in a sibling comment [1] regarding the conceptual problem the article/malware focuses on.
In addition to that, I think it's a bad design choice to rely so much on coreutils, binutils, and glibc behavior. A lot of those tools were written in a time when you trusted the system 100%, when there wasn't even an internet or downloadable programs yet.
In reality, it's unfeasible to have a user and group for each ELF binary/program that runs on your machine. Just managing file system access to all shared objects that each binary requires is a nightmare. AppArmor and other tools often go the "as good as possible" route here, but a lack of thorough profiling of binaries is usually the culprit why they can be exploited even in non-standard infrastructure systems.
The only way forward, in my opinion, is behavioral and network profiling (and the correlation between the two) via eBPF/XDP. This way you can at least get the data to test against in those scenarios, whereas with AppArmor it's forensics that happened too late: long after you've been pwned, you realize after hours of debugging that a rule was XORing another one with an unexpected side effect.
All these things we are talking about are a maintenance burden for the maintainers of the upstream distros, which in part have hundreds of "soft forks" of upstream software lying around that change its behavior to reduce those attack surfaces. Even little things like removing the SUID flag from packaged binaries become a huge burden for the maintainers, which in my opinion should not even exist as a problem anymore.
Access to important (IAM-related) things, like a KeePassXC database file or the SSH keys somewhere in /home, should not be available to processes other than those that require it. The reality, though, is that Discord can get pwned via HTML XSS messages and can just read them, and you wouldn't even know about it.
We need a better sandboxing system that by default denies access to anything on the filesystem and requires mandatory rules/profiles for what may be accessed. And that, hopefully, without more maintenance burden. This also implies that we have to get rid of all that $PATH-related bullshit, and the /usr/local and .local shenanigans have to be completely removed to get to a state where we can also call it a rootless base system.
POSIX as a reference to build your distro against isn't enough. Distros like Alpine focus on memory offsets and on making exploitation of C-based software harder, but that's moot once you realize everything is running as root anyway; they stop at "if you get pwned, you gotta reboot the container", which makes them useless as a desktop environment.
The issue I have with all these things is that there's a survivorship bias among Linux desktop users who are not aware of how insecure their systems actually are. That's part of the reason why the recent malware campaigns of large APTs (APT3/APT28/APT29 etc.) were so successful in targeting developer environments. They simply don't know that an "lsof" can be anything, and not the program they wanted to execute in the first place.
I think most people will be served with a simple IDS that checks the entrypoints (ssh etc.), a software update routine and hardening of world accessible services to mitigate any potential damage.
Anything else is probably going to be immeasurable security theatre.
Less than a second to concatenate /var/lib/dpkg/info/*.md5sums, currently about six seconds to concatenate and filter /proc/*/maps. Actual checking time then depends on how much is currently mapped to memory and how performant the computer is, and on how well one filters out mappings to files that only appeared in non-executable regions, but possibly a minute or three.
Perhaps more interestingly than how long it takes: some of the files mapped to memory are already deleted. They should have been checked at load time, not hours or days later, when it's no longer possible.
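For the curious, the md5sums pass described above can be sketched in a few lines (note dpkg's checksum databases typically exclude conffiles, and `read` will mangle filenames containing spaces):

```shell
# Hash every file recorded in dpkg's stored checksum databases and report
# anything that no longer matches. Paths inside the .md5sums files are
# relative to /, so they are re-rooted before hashing.
cat /var/lib/dpkg/info/*.md5sums |
while read -r want path; do
    have=$(md5sum "/$path" 2>/dev/null | cut -d' ' -f1)
    [ "$have" = "$want" ] || echo "MODIFIED OR MISSING: /$path"
done
```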
I think you have figured out where I am getting at.
As long as processes can rewrite their own cmdline and process names, you have a conceptual problem that you can only solve with kernel hooks (or eBPF modules).
The persistence techniques in the article were easy to follow, but all that alias mess, path mess, and glibc-dependent mess makes everything that you execute untrustable.
The CLI commands posted in the sibling comments all rely on procfs and the faked names :) so they won't actually detect it if a process rewrote its cmdline, or has an in-memory .so file that was changed and loaded from somewhere else (e.g. via LD_PRELOAD).
LD_PRELOAD is quite easy to detect, though nobody seems to be aware of its effects. And that is a vulnerability class that has been known for 10 years and is part of every standard audit by now. None of the posted answers even check the environment files in procfs.
We're not talking about a bug in glibc here, because it is intended and documented behavior. If it was a bug, it would be much much worse.
edit: I wanted to add that the POSIX and Linux way of doing things would require a specific user for each program in order to be successful. But this is a prime example of what can go wrong when a user (and its groups) is used for multiple things. Any process that is running as the same (non-root) user can modify those procfs files. And I think that's a HUGE problem.
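As a small illustration of the procfs problem: the name a process reports is writable by the process itself, but /proc/&lt;pid&gt;/exe is maintained by the kernel. A hypothetical cross-check, which will also flag benign cases like interpreters (scripts report the script name) and binaries reached via symlinks:

```shell
# Compare each process's self-reported name (comm, rewritable via prctl)
# against the kernel-maintained exe symlink. An exe link ending in
# " (deleted)" is itself interesting: the binary was removed after launch.
for pid in /proc/[0-9]*; do
    name=$(cat "$pid/comm" 2>/dev/null) || continue
    exe=$(readlink "$pid/exe" 2>/dev/null) || continue
    # comm is truncated to 15 characters by the kernel
    [ "$(basename "$exe" | cut -c1-15)" != "$name" ] &&
        echo "${pid#/proc/}: comm='$name' exe='$exe'"
done
```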
It doesn't target KDE; it's just that the developer of the backdoor runs KDE, so a running process named kde looks innocent on their machine. Similar reasoning for the .Xl1 folder: if the rootkit hid the real .X11 folder, it would break Xorg on the developer's machine. And some server distros allow installing with a KDE interface.
If I am skimming this correctly, this is a C&C client allowing remote control over the network, and uses "a rootkit" for further compromise once it somehow gets installed?
I understand the value of in-depth security reports, but the 5th time they told me "WolfsBane is the Linux counterpart of Gelsevirine, while FireWood is connected to Project Wood." I was wondering when I'd get to the meat and potatoes.
These things always get really cool names, like "WolfsBane" and "FireWood". Makes me want to write some malware just to see what cool name security researchers give it lol
The use of LD_PRELOAD as part of the attack chain makes me think that statically linked binaries have some value here. Not a maximalist approach like some experimental distros, but there's clearly value in your standard userland utilities always performing "as you expect", which LD_PRELOAD subverts. Plenty of Linux installs around the world get on fine using BusyBox as the main (only?) userland utility package.
Unless I misread they don't state exactly how the attack escalates privileges to install the driver. Could there be two versions of the attack with varying levels of severity?
None, I would instead recommend monitoring file paths and alerting when they change. Known as a tripwire system.
In this case for example the attackers tried to hide their files by disguising them as other known file paths on the system.
If you use a tripwire setup you will get an alert when a file appears that is not supposed to be there. Of course this requires a more hands-on approach where you create excludes for all your applications.
I favor Red Hat and I know we use ossec at work. I believe you can use it under a free license but the configuration is rather complex imho.
There is also Snort, a more libre project, but it's more of a full-featured IDS whose developers sell subscriptions for rule patterns. Think of those sort of like virus definitions, but for rootkits and intrusions.
You can technically setup Snort as a tripwire.
A tripwire is very simple, some people have made them from scratch using Cronjobs and shell scripts. They simply maintain a database of all your files and their checksums, and alert you when a checksum changes.
But security is more than just an IDS. I would recommend SELinux + IDS + remote logging + MFA + granular user security and more!
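A from-scratch tripwire of the kind described above really can be just a cron job plus checksums. A minimal sketch with illustrative paths; a real deployment would also protect the checksum database itself (read-only or remote storage), since malware that can modify the watched files can usually also rewrite the database:

```shell
# Minimal tripwire: baseline the watched trees once after a trusted
# install, then re-run the check on a schedule (e.g. from cron).
# WATCH_DIRS and DB are illustrative; keep DB outside the watched dirs.
WATCH_DIRS="/usr/bin /usr/lib"
DB=/var/lib/tripwire-lite.sha256

baseline() { find $WATCH_DIRS -type f -print0 | xargs -0 sha256sum > "$DB"; }
check()    { sha256sum --quiet -c "$DB"; }  # prints only changed/missing files
```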
There is also Sysmon for Linux [1]. I often work with Windows systems, which is how I know it (it's a popular choice on Windows for analyzing Sysmon logs for suspicious events), but it's probably niche in the Linux world.
> The FireWood backdoor, in a file named dbus, is the Linux OS continuation of the Project Wood malware...
> The analyzed code suggests that the file usbdev.ko is a kernel driver module working as a rootkit to hide processes.
Where is the backdoor coming from? If there's a backdoor, something has been backdoored. An unknown exploit installing a rootkit and a modified file like usbdev.ko is not a backdoor.
Which package / OS ships with the backdoor?
Or doesn't the author of TFA know the definition of a backdoor? Or is it me? To me, the xz-utils exploit attempt was a backdoor (for example). But I see nothing here indicating the exploit they're talking about is one.
It reads like they classify anything opening ports and trying to evade detection as "backdoors".
I believe any software that, once installed on a system, gives someone else remote access to control that system is "a backdoor". So the malware itself is "the backdoor", it's not a case of "package X has a backdoor that was exploited".
Not all malware acts like a backdoor: some malware exfiltrates data, some seeks to destroy the system, some encrypts data to hold it hostage, some performs attacks on other systems using your CPU/IP/memory, etc. The malware they are describing here does act like a backdoor though, and doesn't seem to have other malicious behavior.
I agree, they're using the term backdoor in a much wider sense than what's usually meant. E.g. the NSA created the Clipper chip and intentionally built in a backdoor to allow government access; that's a backdoor. An attacker might abuse it later, but it was put there by the original developer with "good intentions". TFA uses it for the situation where an attacker broke a window, climbed in from the outside, and can now enter and leave through the hole they made.
>The article says they don’t know how the attacker gets access to install this back door in the first place.
It doesn't really matter, because it's orthogonal. Malware like this can be installed on a system through any exploit that provides sufficient access.
So there's two parts to defending against it: 1) finding and fixing any vulnerability that allows the installation of malware like this, and 2) since #1 is a never-ending task, knowing about this malware so you can look specifically for it and delete it when you find it.
Perhaps we use the term back door in computer security because it comes from the general English expression of getting someone or something in by the back door, which would cover pretty much any exploit?
So, an application started as root does a lot; started as a normal user, it does less. Sure, any first-year CS student can write something like that. Or you can, well, install an SSH server or a VNC server or whatever.
How it gets onto the system in the first place is the interesting (and dangerous) part, that sadly gets skimmed over here.
I agree. Sophisticated, to me, would be if they tried to MITM the sudo command. Instead they simply place code into profile.d that runs when the user logs in.
What's the point of these kinds of articles? Most Linux malware (including this one) is not sophisticated at all: built off pre-existing rootkit code samples from GitHub and quite sloppy about leaving files and traces (".Xl1", modifying bashrc, really?). And there's a weird fixation on China here; is it just more anti-China propaganda?
I was under the impression that persistent but SILENT access was China's goal. Dropping files in home and /tmp/ seems like the total opposite of that, and any competent sysadmin would detect these anomalies manually real quick with a simple "ls -a", possibly even by accident.
> The WolfsBane Hider rootkit hooks many basic standard C library functions such as open, stat, readdir, and access. While these hooked functions invoke the original ones, they filter out any results related to the WolfsBane malware.
I took this to mean some things like a simple “ls -a” might now leave out those suspicious results.
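One consequence: on a box like this you can't trust a dynamically linked ls itself. A quick cross-check, assuming you have a statically linked busybox available (ideally a known-good copy brought in on read-only media), since static binaries ignore LD_PRELOAD entirely; note that a kernel-level rootkit would still defeat this:

```shell
# Diff a (possibly hooked) dynamic `ls` against a statically linked busybox
# `ls`. Any entry present only in the static listing was filtered out of
# the dynamic one, e.g. by an LD_PRELOAD readdir hook.
dir=${1:-/tmp}
ls -a "$dir" | sort > /tmp/dynamic.lst
busybox ls -a "$dir" | sort > /tmp/static.lst
# Lines starting with ">" exist only in the static listing:
diff /tmp/dynamic.lst /tmp/static.lst | grep '^>'
```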