A few different articles this week about spinning up a wireguard container/jail/VM ...
But it's far, far easier to just fire up an sshd somewhere, and 'sshuttle' makes it possible to turn any ssh server that you have a login on into a VPN endpoint: https://sshuttle.readthedocs.io/en/stable/ You don't even need to be a privileged user - just any old user login over ssh, plus Python on the remote system.
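For example, something like this (the hostname is just a placeholder) routes all of your IPv4 traffic, plus DNS, through a box you only have a shell account on:

    sshuttle --dns -r user@jumphost.example.com 0.0.0.0/0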
I absolutely love sshuttle, but IMHO nothing beats WireGuard's availability and simplicity. I have it set up on my laptops, iPhone and iPad. It works transparently and I can access all the stuff in my home network.
I use sshuttle for situations where I don’t have root on a jumphost or only need the tunneling sometimes.
It's not. The current maintainer is aware of the speed issues (which I believe are CPU bottleneck issues) but I don't believe anything substantive has been done.
It's all hard work. The primitives of containerization are in the kernel, but executing and managing them, especially securely, takes a fair amount of trial and error to do it right.
Why is it so hard to read an article from someone who is developing containers?
Quote: "Again, containers were not a top level design, they are something we build from Linux primitives. Zones, Jails, and VMs are designed as top level isolation."
Not seeking a v4/v6 flame war, but it would be interesting to see the IPv6 version of this for people who want to use WireGuard to protect IPv6 flows back to the "inside". So much of this is NAT related that it's not generally applicable to that case.
You could NAT IPv6 to use the model as-is, but you could also re-do the model to not need NAT and still use WireGuard to get through the stateful ACL which protects your boundary.
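Sketching that second option with the IPv6 documentation prefix (addresses and keys are placeholders, not a tested config): give the jail's wg interface a routed /64 and have the boundary filter simply permit that prefix, plus inbound UDP to the WireGuard port, with no NAT involved. A wg-quick style interface config might look like:

    [Interface]
    Address = 2001:db8:20::1/64        # routed prefix handed to WireGuard clients, no NAT
    ListenPort = 51820
    PrivateKey = <server private key>

    [Peer]
    PublicKey = <client public key>
    AllowedIPs = 2001:db8:20::10/128   # one /128 per client out of that prefix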
The epair interfaces are part of a system that gives each jail its own separate virtualized network stack instead of sharing the host OS networking. It's enabled by default as of FreeBSD 12.0, but you can read the original paper (from 2003!) here for background and motivation information: https://papers.freebsd.org/2003/zec-vimage.files/zec-vimage-...
An epair [0] is a pair of virtual ethernet interfaces which are connected together and have network addresses: ethernet frames that go in one interface come out of the other.
A tap interface can be used by software to make ethernet traffic appear on an interface, i.e. software can write to a tap interface to simulate ethernet traffic being received on that interface, and an application can receive ethernet traffic from it. From the manpage [1]: "A write(2) call passes an Ethernet frame in to be "received" on the pseudo-interface. Each write() call supplies exactly one frame; the frame length is taken from the amount of data provided to write()."
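To make that concrete, both are cloned interfaces created through ifconfig; a quick sketch (the resulting names depend on what already exists on the system):

    ifconfig epair create              # prints the "a" end, e.g. epair0a; epair0b is its peer
    ifconfig tap create                # prints e.g. tap0; software opens /dev/tap0 to read/write frames
    ifconfig bridge create             # prints e.g. bridge0
    ifconfig bridge0 addm epair0a up   # make one end of the epair a bridge member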
Thanks. So, is it possible that wireguard writes to tap and the bridge or another interface on the host reads from it? Why would one prefer epairs over tap, if that's the case?
And, I don't really get why wg-jail also needs default-router to be pointed to bridge0 when the author adds epair-b to bridge0 on the host (which file are the following lines added to, anyway?):
> I don't really get why wg-jail also needs default-router to be pointed to bridge0 when the author adds epair-b to bridge0 on the host.
The epair0 interfaces provide the layer 2 (Ethernet) connection between the jail and the host. The jail still needs a default IPv4 (layer 3) gateway so that it can route the traffic coming from the WireGuard clients back out to the network/Internet (the same as any other "router").
(Note: With just a single jail -- such as in this case -- the bridge0 interface isn't actually necessary (and the 192.168.20.1 address would then be assigned to the epair0b interface on the host, not to bridge0). The author went ahead and created a bridge with the intention of creating additional jails in the future. This way, multiple jails can all be connected to the same internal "jail network". This is all mentioned in TFA, by the way.)
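As a rough sketch of that split (only the 192.168.20.1 address and the epair/bridge names come from the thread; the jail's own address here is made up), a typical setup looks like:

    # inside the jail's /etc/rc.conf -- layer 3 for the jail:
    ifconfig_epair0a="inet 192.168.20.2/24"
    defaultrouter="192.168.20.1"

    # in the host's /etc/rc.conf -- the bridge that the host end of the epair
    # gets added to once the jail (and its epair) exists:
    cloned_interfaces="bridge0"
    ifconfig_bridge0="inet 192.168.20.1/24 up"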
> which file are the following lines added to anyway?
A jail is lighter weight than a VM. It uses the same kernel as the rest of the system, but sandboxes the userspace to limit what impact it can have on the rest of the system. They're not quite equivalent to a container on Linux, but they're very similar in functionality.
Aren't cgroups / containers a very close equivalent? In either case, it's namespacing a tree of userspace processes to make them invisible to the rest of userspace.
FreeBSD jails are additional jailed (chrooted) userlands using the kernel of the host. They can be as fat as a complete FreeBSD userland, or as thin as just the few binaries and libraries necessary to run a particular service.
They're much more than just a chroot (which is limited to restricting a process's view of the filesystem): they can have their own TCP stack, their own firewall, their own resource usage limits, and so on. They are actually much, much closer to Linux containers than to chroot.
You are right about them being much more than chroot; the ability to limit device access and set other resource limits has been there for a long time. VNET jails, introduced fairly recently (12.0-RELEASE with the GENERIC kernel, 11.0-RELEASE with a custom-compiled kernel, if I'm not mistaken), can have their own TCP stack and firewall.
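For reference, a minimal VNET jail definition in /etc/jail.conf looks roughly like this (the name, path and interface below are made up for illustration):

    wg {
        path = "/jails/wg";
        host.hostname = "wg.jail.local";
        vnet;
        vnet.interface = "epair0a";     # the jail-side end of the epair
        exec.start = "/bin/sh /etc/rc";
        exec.stop = "/bin/sh /etc/rc.shutdown";
        mount.devfs;
    }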
I am not familiar with Linux containers, but I have read that FreeBSD jails are much, much different from them. As for chroot, it still plays a crucial role in the FreeBSD jail implementation.
> Jails do not support any other operating system than the host.
This oversimplifies. The kernel is the same in the jail and the host. But FreeBSD has a Linux syscall emulation layer, and you can definitely install a Linux userspace in a jail and run essentially Linux-but-the-kernel in the jail.
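Very roughly, and as a sketch only (the package name and the Debian release here are assumptions; the Linuxulator documentation has the real steps):

    kldload linux64                      # 64-bit Linux syscall emulation module
    sysrc linux_enable=YES               # load the Linuxulator at boot as well
    pkg install debootstrap              # Debian's tool for installing a userland into a directory
    debootstrap buster /jails/debian     # populate a jail root with a Linux userspace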
It just started passing packets (ping) last week. It would have been at this point weeks ago, had Jason not baked his e-mail address into the handshake protocol. (Harumph.)
Matt changing random algorithm parameters he didn't understand is kind of on him, sorry. I'm glad of the work he's doing, and of your funding of FreeBSD native WireGuard work, but just changing random cryptographic parameters before he had packets passing was an exercise in foot-shooting.
Conrad - although your observation is correct, this dig is a bad look when you've essentially never set foot outside of your fairly limited technical sandbox.
Server is an HP Microserver with an Intel Xeon E3-1265L V2 @ 2.50GHz running FreeBSD 12.1.
Client is a custom build with an Intel Core i7-4790K @ 4.00GHz running NixOS 20.03.
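For context, a throughput test like this is typically just iperf3 run across the tunnel in both directions (the address below is a placeholder, not the actual setup):

    iperf3 -s                          # on the server end of the tunnel
    iperf3 -c 10.10.10.1 -t 30         # on the client, against the server's WireGuard address
    iperf3 -c 10.10.10.1 -t 30 -R      # same test in the reverse direction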
I would assume you're testing that on the stock kernel settings, which aren't really prepared for the highest network throughput. There's a lot that can be done with kernel sysctl tuning to saturate the NIC, and I'd expect you to see somewhat better results when doing so.
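For a sense of what that means in practice, these are the kind of knobs such guides touch in /etc/sysctl.conf (the values here are purely illustrative, not recommendations):

    kern.ipc.maxsockbuf=16777216         # allow larger socket buffers
    net.inet.tcp.sendbuf_max=16777216    # ceiling for the auto-tuned TCP send buffer
    net.inet.tcp.recvbuf_max=16777216    # ceiling for the auto-tuned TCP receive buffer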
I would naively expect that the default kernel settings for both Linux and FreeBSD would allow me to saturate a 1Gbit link in a LAN.
Anyway, this looks like one of those things where I could go down the rabbit hole of tuning (so that I'm not just copy-pasting swathes of configuration without understanding it), but this was just a quick demo which shows that, basically, the userspace implementation isn't too slow.
At least in the case of FreeBSD, network saturation isn't an active goal of the default kernel settings, hence the link I pasted. It's especially nice, as it explains a lot of the things it proposes, so that the copy & paste wouldn't be so blind. It's really a good read.
And I do get the point of your test, and I agree with the anecdotal conclusion :)
Some quite knowledgeable people in the field of BSD networking, including Henning Brauer, maintainer of OpenBSD's PF, have little love for the instructions given on the site you are linking to:
Taking settings for FreeBSD and blindly applying them to OpenBSD isn't a great idea, yeah.
Running the defaults is a good place to start, but if you don't get the results you're seeking, the linked articles show a lot of settings that are worth looking at.
There are a lot of settings that are reasonable to tune for specific uses, which is why they're configurable. Knowing which ones to poke at first is a good thing.
It can saturate 1Gbps with the TUN driver, sure. 10Gb is harder with TUN. Linux's native driver is lower overhead, although as siblings point out, there is work in progress on a native FreeBSD kernel driver.