With the update to GNOME 3.36, gedit unfortunately removed the old open file pop-up (a screenshot of the old pop-up can be found in this how-to on howtogeek.com). In the commit message of the removal the developer cites maintenance burden as the main reason. As a developer I can relate to that, and it does make sense to avoid duplicating functionality (and code) which is already present elsewhere. However, I must also say that with the pop-up I somehow always managed to find the documents I was looking for, whereas the file open dialog’s recent history in its default configuration just does not show enough documents to find the ones I need. This post lists two improvements I found useful.
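As a starting point (and not necessarily one of the two improvements from the full post), gedit’s own recent-files handling can be tuned via gsettings; the key name below is written from memory and should be verified first:
# List the recent-file related keys gedit exposes (key names may differ between versions)
gsettings list-keys org.gnome.gedit.preferences.ui | grep -i recent
# If a max-recents key is present, raise the number of entries, e.g. to 25
gsettings set org.gnome.gedit.preferences.ui max-recents 25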
Continue reading “gedit open file pop-up”
Getting coredumps of Qemu on Fedora
Recently it happened that a virtual machine crashed reproducibly. journalctl contained messages from audit indicating the crash:
audit[88047]: ANOM_ABEND auid=4294967295 uid=107 gid=107 ses=4294967295 subj=system_u:system_r:svirt_t:s0:c422,c704 pid=88047 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=6 res=1
I was hoping to get a coredump from it; however, coredumpctl had no core file (the COREFILE column read “none”). There was another message in journalctl which also showed the reason:
systemd-coredump[90346]: Resource limits disable core dumping for process 88047 (qemu-system-x86).
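The message points at per-process resource limits. A quick way to double-check them for the already running process is /proc, using the PID from the audit message above (a generic sketch, not actual output from my machine):
# Show the core file size limit of the running qemu process
grep "core" /proc/88047/limits
If the limit really is imposed on the libvirt side, the max_core setting in /etc/libvirt/qemu.conf is presumably the relevant knob, though the full post may come to a different conclusion.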
However, ulimit -a (even as user qemu) showed that the core file size is unlimited. It seems that something (probably virsh) adjusts the limits for that particular process (its Max core file size is set to 0 for both the soft and the hard limit). Continue reading “Getting coredumps of Qemu on Fedora”
zsmalloc performance on ARM64 platform
To use zram, the Linux kernel’s zsmalloc allocator needs to be enabled. zsmalloc in turn offers two methods to access allocations spanning multiple pages: copy-based access or VM mapping. Depending on the platform one or the other is faster, and the configuration option’s help text already suggests that on ARM the VM mapping method is typically faster. Hence I was wondering whether that is also true for ARM64 platforms (running in AArch64 mode). Outcome: On a quad Cortex-A35 platform running Linux 4.14, VM mapping was ~20-50% faster.
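The choice between the two methods is made at build time via a zsmalloc Kconfig option; the exact symbol name depends on the kernel version (around 4.14 it was, as far as I remember, spelled PGTABLE_MAPPING), so the easiest way to check a given build is to grep the kernel configuration:
# Check whether zsmalloc and the VM (page table) mapping method are enabled
grep -i -e 'ZSMALLOC' -e 'PGTABLE_MAPPING' .config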
ARM Linux Kernel early startup code debugging
This post shows how to debug early (pre-decompression/pre-relocation) initialization code of an ARM (AArch32) Linux kernel. Debugging kernel code is rarely needed and is rather hard anyway due to the interaction with real hardware and the concurrency in play. However, to watch, read and learn about early ARM initialization code, debugging can be really useful. Early initialization runs without concurrency anyway, so this is not a problem in this case.
Before starting, I assume you have a working ARM cross-compile environment, a compiled kernel and Qemu at hand. Make sure to compile the kernel with debug symbols (CONFIG_DEBUG_KERNEL=y and CONFIG_DEBUG_INFO=y). I use the following arguments to start Qemu:
$ /usr/bin/qemu-system-arm -s -S -M virt -smp 1 \
    -nographic -monitor none -serial stdio \
    -kernel arch/arm/boot/zImage \
    -initrd core-image-minimal-qemuarm.cpio_.gz \
    -append "console=ttyAMA0 earlycon earlyprintk"
The arguments -s -S are especially notable here: the former makes sure Qemu’s built-in GDB server is available on port 1234, and the latter stops the machine right at startup. This now allows connecting to Qemu using gdb. I use the gdb from my ARM cross-compiler toolchain. Once I have a gdb prompt, let’s immediately enable gdb’s automatic disassembly of the next line before connecting:
$ arm-buildroot-linux-gnueabihf-gdb
...
(gdb) set disassemble-next-line on
(gdb) show disassemble-next-line
Debugger's willingness to use disassemble-next-line is on.
(gdb) target remote :1234
Remote debugging using :1234
0x40000000 in ?? ()
=> 0x40000000:  00 00 a0 e3     mov    r0, #0
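From here, plain single-stepping is already enough to follow the very first instructions before the kernel proper takes over. A minimal sketch using standard gdb commands (nothing here is specific to my setup):
(gdb) stepi
(gdb) info registers
(gdb) x/8i $pc
stepi executes a single instruction, info registers shows the current register state, and x/8i $pc disassembles the next eight instructions at the program counter.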
Continue reading “ARM Linux Kernel early startup code debugging”
iptables prevents nftables from being loaded
For a while now I have been using nftables for my firewalling needs. My nftables.conf has some prerouting settings. After playing with Docker, I had the issue that I was no longer able to reload my nftables configuration:
/etc/nftables.conf:12:9-18: Error: Could not process rule: Device or resource busy
chain prerouting {
      ^^^^^^^^^^
Disabling the Docker service did not help either. It seems that the kernel module iptable_nat needs to be removed, but it is currently in use:
# rmmod iptable_nat
rmmod: ERROR: Module iptable_nat is in use
There are some iptables rules/chains active which prevent the module from unloading. By clearing the iptables configuration, especially the nat table, it is possible to remove iptable_nat and then use nftables again.
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
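With the tables flushed, removing the module and reloading the ruleset should succeed. A short sketch of the follow-up commands (the configuration path is the one from the error message above; the service name assumes your distribution ships an nftables unit):
rmmod iptable_nat
nft -f /etc/nftables.conf
# or, when nftables is managed as a service:
systemctl restart nftables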
Ubuntu 18.04 LTS (Bionic Beaver) Server Installer differences
Ubuntu 18.04 LTS Server now comes in two flavors with different installers:
- Ubuntu Server (new Ubuntu-specific Subiquity installer, ubuntu-18.04-live-server-amd64.iso)
- Alternative Ubuntu Server installer (Debian installer, ubuntu-18.04-server-amd64.iso)
Canonical itself refers to the traditional installer for advanced networking and storage features. However, there are also other differences; this blog post looks into them.
Continue reading “Ubuntu 18.04 LTS (Bionic Beaver) Server Installer differences”
Hibernate Debian running on a Google Compute Engine preemptible VM
Google’s Compute Engine VMs which are configured as preemptible are massively cheaper than regular VMs, typically a fourth or even a fifth of the price of a regular machine. This seems quite lucrative for everything which is not mission critical.
However, it can be quite annoying when all state gets lost. Luckily, Google does not just turn off the machine but sends an ACPI G2 Soft-Off signal. On Debian 9 (stretch) the ACPI daemon (acpid) processes the ACPI signals and by default shuts down the machine. This post shows how to hibernate instead.
Note: Since Google might start the machine on different (virtual) hardware, resuming might not succeed or, even worse, lead to adverse effects. In practice, it seems to work quite well for me 🙂
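To give an idea of the direction, acpid event rules are simple key/value files. A minimal, hypothetical sketch of a rule that hibernates on the power button event (the file name and event pattern are assumptions, and the full post may solve this differently):
# /etc/acpi/events/powerbtn-hibernate (hypothetical file name)
event=button/power.*
action=systemctl hibernate
Debian’s stock power button rule would likely need to be adjusted or removed so that both handlers do not react to the same event.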
Continue reading “Hibernate Debian running on a Google Compute Engine preemptible VM”
i.MX 7 Cortex-M4 memory locations and performance
The NXP i.MX 7 SoC’s heterogeneous architecture provides a secondary CPU platform with a Cortex-M4 core. This core can be used to run firmware for custom tasks. The SoC has several options where the firmware can be located: There is a small portion of Tightly Coupled Memory (TCM) close to the Cortex-M4 core. A slightly larger amount of On-Chip SRAM (OCRAM) is available inside the SoC too. The Cortex-M4 core is also able to run from external DDR memory (through the MMDC) and from QSPI. Furthermore, the Cortex-M4 uses a Modified Harvard Architecture, which has two independent buses and caches for code (Code Bus) and data (System Bus). The memory addressing is still unified, but accesses are split between the buses using the address as discriminator (addresses in the range 0x00000000-0x1fffffff are fetched through the code bus, 0x20000000-0xdfffffff are accessed through the system (data) bus).
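As a concrete illustration of one of these locations, the firmware is typically loaded from the Cortex-A7 side in U-Boot and the Cortex-M4 is then released with the bootaux command. A rough sketch (the TCML alias address as seen by the A7 and the file name are assumptions from memory; check the i.MX 7 reference manual for the exact memory map):
=> fatload mmc 0:1 ${loadaddr} firmware.bin
=> cp.b ${loadaddr} 0x7f8000 ${filesize}
=> bootaux 0x7f8000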
Continue reading “i.MX 7 Cortex-M4 memory locations and performance”
OpenEmbedded recipes for WireGuard VPN
This weekend I finally got around to creating OpenEmbedded recipes for WireGuard. The recipes currently await review and will hopefully become part of the meta-networking layer in the meta-openembedded repository of the upstream OpenEmbedded project. There are two recipes, one for the kernel module and one for the user-space tools. The user-space tools have the kernel module as a dependency, hence it is sufficient to install the wireguard-tools package, e.g. by using IMAGE_INSTALL_append in your local.conf:
IMAGE_INSTALL_append = " wireguard-tools"
The kernel module needs kernel version 3.18 or later and has some requirements regarding the kernel configuration. The WireGuard website maintains a list of kernel requirements. If you are using the Yocto kernel, the netfilter kernel feature (features/netfilter/netfilter.scc) is enabled by default and seems to be sufficient to run WireGuard. To get started with WireGuard, refer to the excellent Quick Start guide on wireguard.io.
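Once an image with wireguard-tools is running on the target, a first tunnel can be brought up by hand along the lines of the Quick Start guide. A condensed sketch (addresses, endpoint and the peer key are placeholders, not values from a real setup):
wg genkey | tee privatekey | wg pubkey > publickey
ip link add dev wg0 type wireguard
ip addr add 10.0.0.2/24 dev wg0
wg set wg0 private-key ./privatekey
wg set wg0 peer <peer-public-key> endpoint vpn.example.com:51820 allowed-ips 10.0.0.0/24
ip link set wg0 up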
WireGuard, LEDE and some IPv6 fun
Today I upgraded my router to LEDE 17.01 and played a bit with IPv6 and WireGuard VPN tunnels. My Internet connection at home (connected via cable to the Comcast network) has decent IPv6 support, which I wanted to enjoy also when on the road, using non-IPv6 networks. The first step is to set up a WireGuard tunnel, which I already did some months ago (Dan Lüdtke, author of the LEDE/OpenWrt web interface plugin for WireGuard, has a good post on that. Update April: Dan has a new post which does not make use of the stacked approach. This is suitable for lots of regular setups. However, the IPv6 address setup with automatic network assignment described here is only supported when using stacked interfaces, hence this article keeps using that configuration). In my setup the WireGuard IPv4 network uses a network from the private range (192.168.2.0/24) to route the IPv6 traffic. For IPv6 my goal was to assign a public subnet, so I can access the IPv6 network without any NAT directly through the tunnel. In the IPv6 world, NAT is not commonly used and is considered deprecated anyway. Note that this how-to does not route IPv4 traffic to the internet through the VPN tunnel, only IPv6 traffic.
First, a large enough IPv6 prefix needs to be available on the router in order to assign two independent IPv6 networks, one to my local LAN and one to the WireGuard VPN. One has to realize that in the IPv6 world, subnets are by definition between /49 and /64. One cannot create a /72 subnet or similar, since the last 64 bits are the host portion, reserved exclusively for host addresses. By default, LEDE requests a /64 IPv6 prefix from the provider, but this can be changed in the WAN6 network interface settings:
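For reference, the same change can also be made from the command line via uci (the full post presumably uses the LuCI web interface); the interface name wan6 and the requested prefix length of /60 are assumptions for a typical default setup:
uci set network.wan6.reqprefix='60'
uci commit network
ifup wan6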