## Flashing router firmware through the serial port: CFE bootloader (usually Broadcom) based routers

Here’s a summary of learnings from dd-wrt’s serial recovery instructions:

1. Use a UART adapter that signals at 3.3V (e.g. FTDI TTL-232R-3V3) to talk to the board. Regular RS-232 serial ports require a voltage level shifter to bring the signal down to swing between 0V and 3.3V.
2. You only need 3 pins: Tx, Rx and Ground. There's voltage contention if you plug the Vcc from the TTL-232R-3V3 (it's the USB's 5V even though the signaling is 3.3V) into the router's 3.3V supply. You don't need the Vcc pin; when I connected it, it didn't harm anything, but it didn't do anything either.
3. Stick with all the serial port defaults, except set the baud rate to 115200 (default is 9600) and turn off flow control (default is XON/XOFF). I use PuTTY as the terminal.
4. The terminal serves as the monitor for the router's onboard computer, which shows a text console. Broadcom uses the CFE bootloader (others use U-Boot with BusyBox).
5. The CFE bootloader defaults to 192.168.1.1 with subnet mask 255.255.255.0 (aka /24). Set your network interface to a static IP on the same subnet to talk to the board.
6. Good habit: nvram erase
7. The flash program relies on the TFTP protocol to receive the firmware file, so get your TFTP client ready. Microsoft has included a TFTP client/server since Windows 7, but it's usually disabled (turn it on in Windows' OptionalFeatures.exe).
8. TFTP is a simple push (put) / pull (get) design. You can either "push a file" to the server or "get a file as filename" from it. You'd want to specify the -i switch (binary image transfer) with Windows' tftp.exe.
9. Type this command at the command prompt but do NOT press Enter until your router is ready to grab the file: tftp -i 192.168.1.1 put {path to whatever TRX firmware file}
10. Go back to the serial terminal and tell the router to accept a TFTP push (within a window of a few seconds before it times out) and flash the memory region flash1.trx with this command: flash -ctheader : flash1.trx
11. Immediately initiate the TFTP push from your computer (see the Windows command line example in step #9 above).
12. Wait for a couple of hours! The terminal might tell you that it has received the file completely, but it won't show anything while it's writing to the flash! It's a painfully slow process with no feedback. Just be patient!
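Condensed, the whole hand-off between the two consoles in steps 9–11 looks like this (a transcript sketch, not a runnable script; the firmware path is a made-up placeholder):

```text
REM Windows command prompt -- type this but DO NOT press Enter yet (step 9)
tftp -i 192.168.1.1 put C:\firmware\image.trx

REM Serial terminal (CFE console) -- start the flash; you now have a few
REM seconds before the TFTP listener times out (step 10)
CFE> flash -ctheader : flash1.trx

REM Windows command prompt -- now press Enter to push the file (step 11)
```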

Some observations:

• FreshTomato firmware absolutely won't tell you on screen when it has finished flashing (Merlin-WRT does). Just turn the router back on after a couple of hours.
• Merlin firmware echoes (exposes) the raw passwords to the serial port!
• FreshTomato firmware boots to a Linux prompt on the serial port.


## Asus-wrt Merlin Firmware DDNS update interval hack

The “WAN – DDNS” page only allows users to set the DDNS updater to check as frequently as every 30 minutes. My DDNS provider does not have an update frequency limit, so I'd like the update client to check every minute. The setting is called “Verify every”:

Attempting to set it to every 1 minute gives this error message:

I searched for the “WAN-DDNS” config webpage file (Advanced_ASUSDDNS_Content.asp) in the firmware source code and found that it's under the /www folder in the router's Linux root.

Since “Verify every” is such a generic phrase, and GitHub does not support exact phrase matching in search (I use the “in:file” specifier in the search box), I picked “WAN IP and hostname verification” (the closest setting, which I expected to live near the code for “Verify every”) since it has more unique keywords. The first jump:

Since it's just a dictionary file, we search for the associated internal variable name
“DDNS_verification_enable”, which points to this line in Advanced_ASUSDDNS_Content.asp:

Since this name appears nowhere else, I traced the “id” attribute above, which is “check_ddns_field”, and found a JavaScript (.js) file that processes the data from the web page forms:

The variable check_ddns_field appears in the if-else-if branches of change_ddns_settings(), so one of the next few variables after it is likely to correspond to “Verify every”.

The variable name showed up in 4 branches of the if-elseif-else switches (switching between DDNS service providers), with ddns_regular_period coming right after it.

Searching for the class member (or struct field)

Bingo. Here's the entry value range check code. I'll change the “30” minutes to “1” minute to enable checking at 1-minute intervals (which I think is reasonably responsive for testing and general use).

I'd prefer to find out whether the input range check is there for feasibility reasons (i.e. what the smallest workable increment is) or is just set to prevent people from getting banned by the DDNS provider for checking too frequently. I looked into the last occurrence of ddns_regular_period and found this:

This means the web form updates an NVRAM (environment) variable of the same name, ddns_regular_period, which appears to be referenced only in watchdog.c:

And dang! The code enforces that if ddns_regular_period (in NVRAM) is set to less than the original 30-minute minimum (an invalid condition), it gets reset to the default 60 minutes (1 hr).
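In shell terms, the trap behaves like this (my paraphrase of the C logic described above, not the actual watchdog.c code):

```shell
# Paraphrase of the watchdog.c behavior described above: a below-minimum
# ddns_regular_period (from NVRAM) is silently reset to the hard-coded
# 60-minute default rather than being clamped up to the 30-minute minimum.
ddns_regular_period=1            # my attempted 1-minute setting
if [ "$ddns_regular_period" -lt 30 ]; then
  ddns_regular_period=60         # hard-coded default, NOT the defaults.c value
fi
echo "$ddns_regular_period"      # prints: 60
```

So setting 1 minute from the web form would silently become 60 minutes unless the check itself is patched.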

It's actually sloppy coding, because the defaults are specified in struct fields in defaults.c:

yet that 60 minutes is hard-coded in watchdog.c. That means if I hadn't caught it and only changed the default in one place, the behavior would not be what I expected under the right conditions. This is an example of why software feature expansion is likely to break things: even with solid code, updates are likely to introduce bugs.

I was curious why it says (period*2)

and suspected that ddns_check_count is incremented at 30-second (half-minute) intervals. Since it's watchdog.c, my natural guess was that the watchdog checks these event hooks every 30 seconds. Turns out the notes (comments) in the code have “30 seconds periods” noted everywhere.
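That explains the ×2: a “check every N minutes” setting has to be expressed in 30-second watchdog ticks (my paraphrase of the arithmetic, not the actual code):

```shell
# The watchdog fires every 30 seconds, so an N-minute period corresponds to
# N*2 ticks of ddns_check_count. E.g. the 30-minute default:
period_minutes=30
ticks=$((period_minutes * 2))
echo "$ticks"    # prints: 60
```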

I searched a little more about Linux watchdogs and found this useful webpage which explained how they work. I didn't see /dev/watchdog in my router's rootfs (root file system), so I assumed it's a hardware watchdog (embedded Linux, so duh).

I was about to dig up the hardware manual for my router's chipset, but I searched for the string HW_RTC_WATCHDOG first and it showed up in the Linux kernel code (duh):

Note that HW_RTC_WATCHDOG is a register in this code base, not the number of seconds from Christian's Blog; i.e. they are completely different things, but it provided a good keyword lead for me to start digging.

The code seems to be the same across various kernel versions, so I picked any one of them to understand the behavior. The first occurrence is in wdt_enable():

The other places are suspend/resume, so I'll ignore those for now. Note that wdt_enable() is a static function, so I only need to search within the same file. The only active place that calls it is wdt_ping():

So there are only 2 things I’ll need to find out: heartbeat and WDOG_COUNTER_RATE:

…. unfinished

https://bitsum.com/firmware_mod_kit.htm

While this was a lot of useful learning about embedded Linux and hunting down source code, in the meantime, given that Namecheap does not care if you blindly update every minute, it's easier to just set up a cron job that runs every N minutes using curl/wget.

dd-wrt has a place in the web interface for you to enter cron scripts, but you might need to log into the router using SSH and register the cron job yourself:

The core command is called ‘cru‘; typing it alone at the command prompt will show you the very simple usage:



Cron Utility
add:    cru a <unique id> <"min hour day month week command">
delete: cru d <unique id>
list:   cru l

<unique id> is just a tag that you make up to name your task. Again, the one-liner command needs to use absolute paths. My ‘curl‘ program is located in /usr/sbin, so the command is:

cru a ncddns "* * * * * /usr/sbin/curl 'https://dynamicdns.park-your-domain.com/update?host={subdomain or @ for root}&domain={registered domain name}&password={DDNS-specific password generated by namecheap's domain administration page under Advanced DNS}'"

The “* * * * *” refers to “minute, hour, day of month, month, day of week”; in other words, run at every minute, every waking moment. The wildcard * means ALL-OF.

Cron job registration through cru is not persistent, so to make it survive reboots, add the above cru command as a line to the /jffs/scripts/services-start script. It should be executable by default; if not, make sure you set it to be executable or it won't run.
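Putting it together, /jffs/scripts/services-start would look something like this (a sketch; the placeholders in braces are yours to fill in, exactly as in the cru command above):

```shell
#!/bin/sh
# /jffs/scripts/services-start -- runs at boot, so the cron job survives reboots.
# cru only exists on the router; placeholders in {braces} are illustrative.
cru a ncddns "* * * * * /usr/sbin/curl 'https://dynamicdns.park-your-domain.com/update?host={subdomain or @ for root}&domain={registered domain name}&password={DDNS password}'"
```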


## Namecheap Dynamic DNS setup in dd-wrt

Namecheap's support page explains the process of configuring your dd-wrt router firmware to use Namecheap's REST (HTTP URL) update interface to dynamically update the IP of your (sub)domain. The instructions work, but there were a few items which didn't quite make sense to me as a programmer, so I did a few experiments, figured out which parts are bogus, and developed a few insights about what's necessary and why they do it.

Their instructions look like this:

and the specific verbal instructions are:

• DDNS Service: Custom
• DYNDNS Server: dynamicdns.park-your-domain.com – the name of the server should not be changed
• Password: Dynamic DNS password for your domain (Domain List >> click on the Manage button next to the domain >> the Advanced DNS tab >> Dynamic DNS)
• Hostname: Your subdomain (@ for yourdomain.com, www for www.yourdomain.com, etc.)

I struck out the Username and Password fields because they are not used!

If you look at the URL, namecheap's instructions ask you to re-enter the domain and the password key-value pair AGAIN, which means the Username and Password fields are not used.

My programmer instinct immediately screamed that the updater assumes a certain REST API syntax that isn't properly substituted, so the values need to be entered manually, exposing the password without the benefit of masking (forget about keeping the password top secret; router firmware guys aren't top security engineers. Just re-generate it in Namecheap's admin interface if it gets compromised).

I checked by entering bogus Username and Password fields (the web GUI/forms check that they aren't blank, so you can't get away with not entering them). It worked as expected. This means the two fields are dummies under Namecheap's instructions.

Based on the fact that Namecheap's instructions are unable to substitute the Username and Password fields, and that the host key must be put at the end for the Hostname field to substitute correctly, I can safely speculate that whoever wrote this couldn't find out what the syntax for the substitution variables is, and exploited the fact that the last parameter, hostname, gets appended at the end in the absence of substitution variables in the URL.

Apparently people are doing something stupid like this because nobody in the chain remembered to document the substitution variable names! It's not in dd-wrt's user interface (they should have printed the ‘usage’ info next to the URL box), and neither is it in INADYN's GitHub READMEs!

I decided to dig deeper and go after the dynamic DNS updater package in question. dd-wrt uses the inadyn package to do the dynamic DNS update, as the “INADYN” shown in the “DDNS status” box gives it away (confirmed by dd-wrt's docs):

The service itself is called ddns though.

I ended up reading the /examples folder on the repository and found this:

Bingo! Here it is:

The plugin generic.c also shows the above table:

Since namecheap's dynamic DNS service does not mandate how frequently you can update, nor does it charge per update, it's easiest and most reliable to just blindly update the IP every N minutes instead of checking a local cache to see if the external IP has really changed before updating at each poll interval.

This user interface does not have the option to set the updater to run every minute, and why bother, since it's just a simple program that builds a simple URL and does a curl/wget? At the end of the day, I decided to just use a cron job:

PATH=/sbin:/usr/sbin:/bin:/usr/bin
* * * * * root curl "https://dynamicdns.park-your-domain.com/update?host={subdomain or @}&domain={domain name bought}&password={generated by namecheap's account management page}"

There are 3 things that you will need to know:

• PATH starts from a clean slate. You need to define it first.
• * * * * * means every minute. Specify numbers/ranges for each time unit (minute, hour, day of month, month, day of week) if desired. The asterisk means ON EVERY.
• You need to specify the user after the time fields and before the actual command.
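To make the moving parts concrete, here's how the update URL is assembled from its three query parameters (the host/domain/password values are made-up placeholders, not real credentials):

```shell
# Assemble Namecheap's DDNS update URL from its three query parameters.
# All three values below are made-up placeholders for illustration.
host="@"                          # @ means the bare (root) domain
domain="example.com"
password="MY-DDNS-PASSWORD"
url="https://dynamicdns.park-your-domain.com/update?host=${host}&domain=${domain}&password=${password}"
echo "$url"
# the cron job then simply runs: curl "$url"
```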


## Remote Desktop INTO Linux GUI (xrdp)

To serve a Linux desktop just like any other Windows computer through Windows Remote Desktop (formerly Terminal Services), so far I have found xrdp (with xorgxrdp). VNCs and NX (NoMachine)/TeamViewer don't count because they share the screen of an existing session instead of creating a new one for you.

Xrdp does not follow the usage pattern of Microsoft's RDP. When you log in to an xrdp host (server) through an RDP client, you land in an intermediary (welcome interface) called sesman (Session Manager), which is a multi-protocol remote graphical session client (think of it as a very rudimentary Remmina).

The two session modules we are interested in here are:

• Xorg (libxup.so): xorgxrdp is the MS-RDP-like mode that starts a new X session without first attaching to a screen.
• Xvnc (libvnc.so): basically a VNC client. You start a VNC server (like x11vnc) with a display/screen (it can be started in any X session you logged into, or on the local user's screen if you set the VNC server up as a service) and connect to it through this RDP intermediary (welcome interface) without installing VNC client software.

In Windows, RDP does not distinguish between local and remote users, and sessions with the same login account take over other existing sessions. If you want each session to start fresh and leave other sessions alone, disable this in the Group Policy Object editor under Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Connections > "Restrict Remote Desktop Services users to a single Remote Desktop Services session".

I am usually fine with this arrangement as well, but I often prefer my remote sessions to work in the background, leaving the local user alone (i.e. if I want things to show up on the local monitor, I'll use VNC instead). I'd also like to resume my remote sessions when I log in from another computer instead of starting from scratch with each new RDP connection. Given a bunch of xrdp quirks, it turns out this is much easier to do than reproducing MS-RDP's default behavior.

First of all, out of the box, the same remote user cannot take over a locally logged-in desktop nor be simultaneously logged in! It's either one way or the other! I got bumped out immediately after logging in through sesman; or, if I logged in remotely first, I got bumped out when I tried to log in locally.

I found somebody suggesting that certain desktop environments might have added code to prevent a second session from opening. And this blog suggested you edit the window manager launch script:

sudo nano /etc/xrdp/startwm.sh

unset DBUS_SESSION_BUS_ADDRESS
unset XDG_RUNTIME_DIR

OR

export $(dbus-launch)

RIGHT BEFORE the last lines, which check for and call the Xsession:

test -x /etc/X11/Xsession && exec /etc/X11/Xsession
exec /bin/sh /etc/X11/Xsession

This only solves the simultaneous local & remote logon part. In the newer version as of writing, the default behavior is that locally logged-in sessions are independent of remotely logged-in sessions, yet remotely logged-in sessions resume by default (if you log in as the same user). This turns out to be what I prefer, as local sessions should be reached with VNC instead, and I'd rather my remote sessions run in the background without showing on the local screen.

Since the VNC server only has a password (it does not use the system's active directory or user management), there is no user name. I went to the [Xvnc] section of xrdp.ini and replaced username=ask with username=na. The port number -1 no longer applies since we aren't emulating RDP with VNC anymore (where sesman creates a new VNC server instance if not previously done). Given that I'm running VNC as a service on the default port 5900, I also changed it to port=5900.

The session/module “vnc-any” uses the same libvnc.so as the Xvnc entry before it; they are pretty much the same thing, except it exposes an ip:port entry so you can use it as a gateway to connect to VNC servers hosted on other machines (it can also connect to the VNC server on the machine you just RDP'ed into if you stick with 127.0.0.1:5900). It's more of a convenience feature that hosts the VNC client software that you can RDP into (so you don't need to install a VNC client where you are).

There is also an RDP client module/session called ‘neutrinordp-any’, which basically uses the Linux machine you just connected to as a gateway to reach another machine hosting RDP. It's rarely useful, and it didn't work out of the box when I tried it (it does nothing after I press OK despite entering all the info correctly), so I removed it from my xrdp.ini.

There's also a minor annoyance: if you connect remotely, an “Authentication Required…” message box shows up on start, since remote users are a little more restricted than local users. This can be solved by creating this file with nano:

sudo nano /etc/polkit-1/localauthority/50-local.d/46-allow-update-repo.pkla

and pasting these contents there and saving:

[Allow Package Management all Users]
Identity=unix-user:*
Action=org.freedesktop.packagekit.system-sources-refresh
ResultAny=yes
ResultInactive=yes
ResultActive=yes

## Run VNC server before logging in to the Linux GUI

I installed x11vnc and, to my dismay, there isn't an easy option that automatically configures VNC as a service like most Windows VNC software does (so you can VNC into a computer before you log in as a user graphically and launch the x11vnc executable). I had to manually create a service, and I ran into a few problems, as the instructions on StackExchange and other forums are missing critical pieces. Here I will use the x11vnc server on Ubuntu Cinnamon (systemd) as an example.

Instead of blindly pasting code here without context, I'll sketch out the ideas first:

1. Establish a password in a password file stored in a common area such as /etc/x11vnc.pwd instead of the user-specific home folder default ~/.vnc/passwd
2. Create a service (such as a systemd unit) pointing to the x11vnc program with the right parameters, which include the path to the password file stored in the common area
3. Start the service

It's worth noting that the x11vnc connection is unencrypted. I tried the -ssl options but my RealVNC clients complained about their version.

First of all, x11vnc -storepasswd creates the encrypted password file in the current home folder where you run the command. You are going to point to the said password file with the x11vnc -rfbauth {path to password file} parameter when launching the x11vnc server program.
One way to do it is to copy the created password file to a system-wide configuration location instead of the user's home folder:

sudo cp ~/.vnc/passwd /etc/x11vnc.pwd

The alternative (which I do not recommend) is to specify the password AND the password-file path directly with the optional specifiers of the -storepasswd parameter:

# Directly create the password file without a prompt that hides the password entry
x11vnc -storepasswd my_pASSword /etc/x11vnc.pwd
# Clean up your terminal command history since you've exposed the password visually
history -c

Unfortunately, if you want to specify the path to the password file, you have to type the plain-text password on the command line, which you should only do when nobody's watching, clearing the history immediately afterwards. If you are in a public place, just do it the old way and copy the password file over.

The core part of registering a service (the data entry) is figuring out the command line parameters for executing the x11vnc program. At minimum, you'll need:

• -rfbauth: specifies where the password file is (or you can directly specify the password with -passwd, which I do not recommend)
• -auth: authentication means (I prefer -auth guess, but you can specify where your .Xauthority file is)
• -display :0 connects to the X11 server display, which is usually 0
• -create is the missing link! You must absolutely use this to tell the VNC server to make an Xvfb (X virtual framebuffer) session if no display session is found (which is the case when you are running x11vnc as a service before logging in to a desktop environment like Cinnamon)

You'll typically want this for a constant-on VNC server:

• -forever: x11vnc instances are by default (-once) killed after the client disconnects. The -forever option keeps them there.

My personal preferences:

• -shared: I might have a few computers VNC'ing into the Linux computer and I don't want to have to remember to close the ones I'm not using.
• -noxdamage: XDamage is a system that only updates the changed parts of the screen. You don't need it when bandwidth isn't super tight.
• -repeat: allow held, repeating keystrokes just like what we are used to. By default it's set to -norepeat to avoid stuck-key scenarios.

For debugging (useful! that's how I figured out the missing piece, namely that I have to use -create to make a dummy screen when using x11vnc as a service):

• -o {output log file}: typically -o /var/log/x11vnc.log
• -loop: if the program crashes for any reason, it'll try to auto-restart for robustness. Might not be needed if you use -forever.

So the core command needed is:

x11vnc -repeat -noxdamage -create -display :0 -auth guess -rfbauth /etc/x11vnc.pwd -shared -forever

Now that we've decided the exact launch command, we have to create the service entry. In systemd Linux, that's done by writing a service configuration file, a text format very much like Windows INI files, under /etc/systemd/system, and the filename MUST end with the suffix “.service”. In short, create /etc/systemd/system/x11vnc.service.
A basic file without logging is like this:

[Unit]
Description=VNC Server for X11
Requires=display-manager.service
# The two below are for performance (make sure everything is ready before launching)
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/x11vnc -repeat -noxdamage -create -display :0 -auth guess -rfbauth /etc/x11vnc.pwd -shared -forever
# The 3 lines below are optional, but add robustness
ExecStop=/usr/bin/x11vnc -R stop
Restart=on-failure
RestartSec=2

# For automatically creating symlinks with "sudo systemctl enable" only
[Install]
# Start at the standard multi-user level
WantedBy=multi-user.target

This is the minimum skeleton that does the same with less robustness against the unexpected:

[Unit]
Description=VNC Server for X11
Requires=display-manager.service

[Service]
ExecStart=/usr/bin/x11vnc -repeat -noxdamage -create -display :0 -auth guess -rfbauth /etc/x11vnc.pwd -shared -forever

[Install]
WantedBy=multi-user.target

The default permissions 644 (everybody reads but only root can write, the standard for services; 640, denying unknown people read access, is also acceptable if you are paranoid) should be correct if you used sudo to create the file in the /etc/systemd/system folder. Some older tutorials use the (/usr)/lib/systemd/system folder, which is now reserved for automatic entries created by programs rather than manual service entries like what we are doing now. Technically either way works, but follow the convention so people know where to find the entries.

After that, enable the service file you've created (the “.service” suffix is optional when calling), and preferably do a daemon-reload to make sure edits in the service file are reflected. If you don't want to wait until the next boot, you can start it with systemctl:

sudo systemctl enable x11vnc
sudo systemctl daemon-reload
sudo systemctl start x11vnc

This kind of stuff in Linux is bitchworthy. It's 2021 now.
How come users need to mess with defining their own custom services for such a common VNC use case (starting before logging in graphically)? I never had to deal with this kind of shit in Windows with VNCs: they always expect the user to have the computer to themselves and always offer the option to set up the service automatically!

## rsync/Deltacopy gotchas (especially Windows transfers)

Deltacopy is a GUI wrapper around rsync, a feature-packed tool to copy files locally AND remotely, AND differentially (it automatically figures out the parts that are different and resends them; excellent for repairs) through hash comparisons. (For non-programmers: a hash is a unique ID computed for a chunk of data that is expected to change wildly at even the slightest data/file change/corruption.)

Deltacopy is very useful if you just want to do the basic stuff and don't know the rsync syntax and switch combinations off the top of your head. It also provides a Windows port of rsync based on Cygwin (a tiny Linux runtime environment for Windows). This is the only free alternative to cwRsync, a paid Windows port of rsync. rsync is a Swiss Army knife that can also work from one local path to another; Deltacopy is intended for remote file transfer.

Deltacopy server is basically this:

rsync --daemon

However, in Windows, since it's Cygwin, it looks for Linux's /etc/rsyncd.conf by default if you do not specify the config file through the --config switch.

The Deltacopy client basically helps you generate the command to transfer files. Most of the features are accessed through the right-click (context) menu, not the toolbar or pull-down menus, which might confuse some people. You set up your tasks as Profiles, which can be scheduled (the bottom panel) or executed immediately by right-clicking on the profile: Run pushes files to the server, Restore pulls files from the server. Run Now and Restore both execute the command (aka task) immediately.
You can peek into what it generated by right-clicking on the profile and choosing “Display Run/Restore Command”. First-time users might not be able to find it, since the only place to access it is through the context menu.

There are some tricky parts (gotchas) in specifying the files/folders to copy. First of all, even though you use the Add Folder/Add Files buttons for entries, you can make any (source, destination) pair by modifying the selection and target path; it's just passed on to the rsync command verbatim. The target path is relative to the virtual directory set on the server (see Deltacopy Server's directory).

The destination path is endowed with the branch folder name (one level). In other words, if your source is C:/foo/bar, Deltacopy by default sets the destination to /bar instead of /. This is probably to avoid the temptation of lumping all contents into the same remote destination root. If you just want to simply lay the files at the root virtual folder at the destination (my most common use case), you'll have to edit and clear out the (relative) destination path.

As for the source, the author of rsync chose to do it the logical (more conservative) but unintuitive way: by default it reconstructs the source folder's FULL path structure at the destination! For example, if you intend to copy everything under C:\foo over, the destination will create {destination root}\foo in the process and put everything under it instead of directly at {destination root}. The design choice was supposed to prevent accidental overwrites as multiple source subfolders try to write over each other with the same names at the destination.

Luckily, there's a way around it! See the man page for -R / --relative: put a dot (.) at the place where the relative path starts! For example, say the source is C:\foo\bar\baz and you do not want /foo to be created at the destination, wanting it to start with /bar instead. You should enter C:\foo\.\bar\baz as the source.
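The effect of the dot can be mimicked in plain shell (no rsync needed; the paths are made up for illustration): everything up to and including the /./ marker is dropped from the path rsync recreates at the destination.

```shell
# Mimic rsync -R/--relative "dot" semantics with shell parameter expansion.
# The /./ marker splits the path: only what's to the right of it is
# recreated under the destination root.
src="C:/foo/./bar/baz"          # source as you would enter it in Deltacopy
dest_rel="${src##*/./}"         # strip everything through the /./ marker
echo "$dest_rel"                # prints: bar/baz
```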
Everything to the left of the dot (which refers to the self-folder) is stripped from the destination path structure.

ACL support for Windows sucks because rsync lives on Cygwin, which has POSIX (Unix/Linux) style permissions/ACLs:

https://unix.stackexchange.com/questions/547275/how-do-i-use-rsync-to-reliably-transfer-permissions-acls-when-copying-from-ntf

In my opinion, the best way to go about it is to not transfer ACLs from the source and follow the preexisting ACLs at the destination. I'd also leave the groups and owners alone (inherit at the destination), as I might not be on the same active directory (or workgroup user management) as the destination computer, so accounts with the same name might not actually be the same accounts:

--no-p --no-g --no-o

--no-{option} is the complement prefix that does the opposite of -{option}, so the above means skipping -p (perms/permissions), -g (group), and -o (owner), and making sure it has full permissions for everybody.

Sometimes a remote path can be mistaken for a relative local path with the hostname/IP address as the folder name if there's no username. Start it with rsync:// as the URL scheme; the syntax is like ftp:// as far as the username is concerned.

Deltacopy protects the source and destination paths with double quotes (“). That's a good practice we should follow even with direct rsync calls.

## Not missing Windows after trying Ubuntu Cinnamon Remix

Given that I grew up as a power DOS/Windows user, I often have gripes about how frustrating Linux is: it was almost never ready for people who just want to get common things done by intuitively guessing where the feature is (therefore having to RTFM or search the web for answers). I deal with HP/Agilent/Keysight instruments a lot and appreciate the effort they put into user experience (UX) design.
It's not the user who's stupid if they have to dig through 5+ levels of menu buttons to measure a Vpp (peak-to-peak voltage) because the software isn't smart enough to default to the only channel in use. That's what Tektronix did to their nasty user interface, raising a generation of Stockholm syndrome patients who keep buying Tek because they are traumatized by the steep learning curve and would rather walk on broken glass than learn a new interface from another vendor (that's called vendor lock-in).

I certainly appreciate that the Cinnamon desktop environment (which came with Linux Mint) designers were willing to not insist on the ‘right way of doing things’ and followed the path that's most intuitive for users coming from a Windows background.

The last time I used Linux Mint was 19. There were still quite a lot of rough edges. Some services got stuck (time-outs) right out of the box and systemd went through slowly. It was just not fast and responsive. When I tried it again when Mint 20.1 was released, my old i3 computer booted to the GUI in 5 seconds and I was hell of impressed. The icons and menus are also now sized in balanced proportions like Windows (I can't stand the big, thick default menu-item fonts in Ubuntu).

However, there was one big impeding factor keeping me from making Linux Mint my primary computer: the package repositories are one generation behind Ubuntu (the most widely supported distro)! Software often has bugs that the developers have already solved; living with old, ‘proven’ software slows down the iterative process. I've been through hell trying to access a Bitlocker volume with Linux Mint 20.1: not only does it not work right out of the box like Windows, I was stuck with a command-line dislocker that doesn't integrate with the file manager (like Nemo). The zuluCrypt available with Mint 20.1 is too old to support Bitlocker properly, and trying to upgrade it to 6.0 hits Qt dependencies which are unsolvable.
I was able to download the unsanctioned old revision as a Debian package, but there were more unsolvable dependencies. The alternative option of compiling from source was met with more dependency fuckery, and the restrictive Mint repository might not have the exact compiler version required by the source package. Aargh!

I was about to give up on Linux Mint and install Ubuntu, holding my nose while changing the desktop to Cinnamon. Luckily I found somebody who read my mind: there's Ubuntu Cinnamon Remix! Not only does Ubuntu Cinnamon Remix support Bitlocker right out of the box (no need to fuck with zuluCrypt, which doesn't integrate with the file explorer anyway)! Most of the defaults make sense, and buttons are often where I expect them to be. Even the Win+P key works identically! The names/lingo are close to Windows whenever possible, and honestly the default Yaru theme is visually slightly more pleasing than Windows, as it makes very good use of the visual space!

Here are a few transition tips. I use Winsplit-Revolution in Windows (the old version is freeware), which uses the numeric keypad to lock the window to a 9-square grid using Ctrl+Alt+{Numpad 1-9}. Save the keyboard shortcuts in case you want to install them again on another computer:

dconf dump /org/cinnamon/desktop/keybindings/ > dconf-settings.conf
dconf load /org/cinnamon/desktop/keybindings/ < dconf-settings.conf

There's no Ctrl+Shift+Esc key, which I often use to call the Task Manager (called System Monitor here). I had to make that shortcut as well to feel at home.

## Qemu for Windows Host Quirks

I'm trying to cross-compile my router's firmware, as I made a few edits to override the DDNS update frequency. Turns out the build doesn't work on the latest Linux, so I'd need to run an older Ubuntu just to keep it happy. RANT: package servers pulling the rug on outdated Linux is frustrating.
Very often developers don’t make a whole installer, so we are wedged between downloading a package at the mercy of its availability on package managers and their servers, or compiling the damn source code!

With the promise that Qemu might have less overhead than Hyper-V or VirtualBox (indeed it observably does), I tried installing Qemu on a Windows host and it turned out to be a frustrating nightmare.

RANT: Linux is not free. The geniuses did the most sophisticated work for free, but users pay time and energy cleaning up after them (aka a support network dealing with daily frustrations) to make these inventions usable. There’s a company that does the cleanup to make BSD (same umbrella as Linux/Unix) usable and made a lot of money: it’s called Apple Computer since Steve Jobs’ return.

qemu is just the core components. System integration (simplifying common use cases) is practically non-existent. Think of them as the ones who produced an ASIC (chip), and the end user happens to be the application engineer. There are a few tutorials for qemu on Linux hosts covering moderately complex scenarios, but you are pretty much on your own trying to piece it all together on Windows because there are some conceptual and terminology differences. The --help text for qemu’s Windows host VM engine was blindly copied from the Linux counterpart, so it tells you about qemu-bridge-helper, which is missing on Windows. I stupidly went down the rabbit hole and drained my time on qemu. So I documented the quirks to help the next poor sap who has to get qemu running efficiently on a Windows 10 host with Bridged-Adapter (VirtualBox lingo) networking mode.

• Preparation work to get the HAXM accelerator set up
  • Release VT-x (hardware-assisted virtualization) so HAXM can acquire it
  • You’ll need to remove Hyper-V completely, as it will hoard control of VT-x
  • Windows Sandbox and Windows Subsystem for Linux (WSL2) use Hyper-V.
If you just uncheck Hyper-V in Windows Optional Features while leaving either of those two on, Hyper-V is still active (unchecking only removes the icons)
  • HAXM v7.6.6 is not recognized by qemu on a clean install. Install v7.6.5 first, then remove it and install v7.6.6. They likely forgot a step in v7.6.6’s installer
  • Turn on acceleration with: -accel hax
• Command-line qemu engine
  • qemu-system-{architecture name}.exe is what runs the show
  • qemu-system-{architecture name}w.exe is the silent version of the same engine. It won’t give you a clue if something fails (like invalid parameters)
  • Create a disk image: qemu-img create -f {format such as vhd/qcow2} {hard drive image name} {size such as 10G}
• QtEmu sucks, and there aren’t any better GUIs out there!
  • It’s basically a rudimentary GUI wrapper over the command line
  • It only has user-mode (SLIRP) networking (the default)
  • It’s not actively maintained, so it doesn’t keep up with parameter syntax changes (i.e. it can generate invalid combinations)
  • Since it uses the silent (w-suffixed) engine, likely to avoid a lingering command window, it also won’t tell you shit about why something fails. It just ignores you when you press the start button unless all the stars align (you got everything right)
• Basic command-line parameters
  • Set aside 10G of RAM for the VM: -m 10G
  • 1 core if unspecified. The number of available threads (on a hyper-threaded system) shows up as the # of processors; it refers to logical processors, not physical cores.
    • Windows: -smp %NUMBER_OF_PROCESSORS%
    • Linux: -smp $(nproc)
• Attach virtual hard drive: -hda {virtual hard drive file name}
• Attach optical drive (iso): -cdrom {iso file}
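Putting the flags above together, a typical first boot on a Windows host might look like this. This is a sketch: the disk image name, the 20G size, and the ISO filename are placeholders I made up, not from any particular setup.

```shell
:: Create a 20G qcow2 disk image for the guest (name is a placeholder)
qemu-img create -f qcow2 mint.qcow2 20G

:: Boot the installer ISO with HAXM acceleration, 10G RAM, all logical processors
qemu-system-x86_64.exe ^
  -accel hax ^
  -m 10G ^
  -smp %NUMBER_OF_PROCESSORS% ^
  -hda mint.qcow2 ^
  -cdrom linuxmint-20.1-cinnamon-64bit.iso
```

Use the non-w engine as noted above so error messages actually show up in the console.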

I typically want the Bridged-Adapter option from VirtualBox, which means the virtual NIC plugs into the same router as the host and just appears as another computer on the host’s network. This is broken into a few components in qemu and you have to manage them separately. Great for learning how Bridged-Adapter really works, but it draws a lot of swear words from people who just want to get basic things done.

Networking in QEMU is another can of worms if you deviate from the default SLIRP (user mode). I figured out how to make it work, but the network bridge is faulty and keeps crashing my Windows with a BSOD in bridge.sys with varying error tags. I had short glimpses of it working if I moved very fast. It looks like the TAP driver is corrupting memory: the bridge became so erratic that I saw error messages when deleting it, and I got a persistent BSOD whenever the bridge started after the VM hung at the TAP bridge on boot.

I listed the steps below to show what would get you the Bridged-Adapter (VirtualBox) equivalent if there were no bugs in the software, but hell, I’m throwing qemu for Windows in the trash as it’s half-baked.

First of all, you need to install OpenVPN to steal its TAP-Win32 virtual network card. Unlike VMware or VirtualBox, the driver is not part of the package; Qemu didn’t care to tightly integrate or test it properly.

Then you’ll need to bridge the “TAP-Windows Adapter (V#) for OpenVPN” with the network interface you want it to piggyback on.

The name of the TAP adapter is what you enter as the ifname= parameter of the tap interface on the qemu command line. You have to tell qemu specifically which interface you want it to engage. I named the virtual network card ‘TAP’ above. After bridging it looks like this:

You are not done yet! The bridged network (seen as one logical interface) gets confused and won’t be able to configure itself through your physical network card’s DHCP client. You’ll have to go to the properties of the Network Bridge and configure IPv4 with a static IP.

You can use ipconfig /all to find out the DHCP settings the relevant adapter acquired and enter them as the static IP. Coordinate with the network administrator (which can be yourself) to make sure you own that IP address, so you won’t run into an IP conflict if you reboot and somebody else has taken your IP.
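If you’d rather script the static IP than click through the properties dialog, netsh can set it on the bridge. The addresses below are hypothetical examples; substitute whatever ipconfig /all reported for your network.

```shell
:: Assign a static IPv4 address to the bridge (example values, not yours)
netsh interface ip set address name="Network Bridge" static 192.168.1.50 255.255.255.0 192.168.1.1

:: Point DNS at the router as well (again, an assumed address)
netsh interface ip set dns name="Network Bridge" static 192.168.1.1
```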

After these are all set up, the parameter to add to the qemu call is:

-nic tap,ifname=TAP

There are more complicated switches like -net nic and -netdev/-device. These are the old ways of doing it, with bloated abstractions; the -nic switch combines them into one.
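For comparison, here is the same TAP attachment spelled both ways. This is a sketch using the ‘TAP’ adapter name from above; the id and the e1000 NIC model are illustrative choices, not requirements.

```shell
:: Old style: declare the host back-end (-netdev) and guest NIC (-device) separately
qemu-system-x86_64.exe -netdev tap,id=net0,ifname=TAP -device e1000,netdev=net0

:: New style: one -nic switch fuses back-end and front-end
qemu-system-x86_64.exe -nic tap,ifname=TAP
```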

Then welcome to the world of Windows 10’s bridge.sys crashing frequently; you might get a short window of opportunity where the VM boots and ifconfig acquires the IP address settings from the DHCP server of your router (or whatever network the physical adapter is on).

It’s like a damn research project: you find out something is technically feasible but definitely not ready for production. Welcome to the FOSS jungle!

Postscript: I put Hyper-V back and realized it’s insanely slow with Linux Mint because it does not support hardware graphics acceleration. It’s a night-and-day difference. Qemu is fast, but it crashes Windows 10 if I bridge the adapters!


## Aria2 WebUI Notes

Aria2 is a convenient command-line downloader that works like curl/wget for http/ftp, but it also supports many other protocols, and aria2 natively does multipart downloads!
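For example, a multipart download splits one file across several connections. The URL below is a placeholder; -x caps connections per server and -s sets how many pieces to split into.

```shell
# Download one file over up to 8 parallel connections (placeholder URL)
aria2c -x 8 -s 8 https://example.com/big-file.iso
```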

Instructions for Aria2 on Entware hosted by Lighttpd (defaults to Port 81): https://www.snbforums.com/threads/aria2-webui-on-asuswrt-merlin.63290/

Instructions for Nginx on Entware (defaults to Port 82): https://hqt.ro/nginx-web-server-with-php-support-through-entware/

There are some minor details that changed.

# Install the base (core) software first
# This example is for entware
opkg install aria2

wget -c -O /opt/tmp/webui-aria2.zip https://github.com/ziahamza/webui-aria2/archive/master.zip --no-check-certificate

# Make sure you have some web server installed (nginx, httpd, apache, etc.)
# Nginx HTTP server instructions
# https://hqt.ro/nginx-web-server-with-php-support-through-entware/
# Make sure you know what {Webroot} is
# for Nginx, {Webroot} is /opt/share/nginx/html

# Unpack the zip file into /opt/tmp and clean up the zip
unzip /opt/tmp/webui-aria2.zip -d /opt/tmp/ && rm /opt/tmp/webui-aria2.zip
# Move/rename to desired location
mv /opt/tmp/webui-aria2-master {Webroot}/aria2

Nginx defaults to port 82 (change it to wherever you set up your web server). The WebUI can be accessed at http://your_server_here:82/aria2/docs.

/docs is inconvenient to type, so I created a redirection by placing this index.html under aria2’s root folder:

<meta http-equiv="Refresh" content="0; url='./docs'" />
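To drop that one-liner into place in one step (assuming the Nginx webroot path from above; adjust if your {Webroot} differs):

```shell
# Write the redirect page under the aria2 folder in the Nginx webroot
cat > /opt/share/nginx/html/aria2/index.html <<'EOF'
<meta http-equiv="Refresh" content="0; url='./docs'" />
EOF
```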

The RPC host breaks out of the box: you’ll need to make a few adjustments to /opt/etc/aria2.conf before you can start the service without crashing it (and the WebUI will of course complain with a lot of cryptic error messages):

# Basic Options
dir={change to a viable folder with enough space if /opt/var/aria2/downloads is not big enough}

# RPC Options
# Unless you want to get a certificate, you'll need to use insecure mode:
rpc-secure=false
# Change your rpc-secret to be matched in "Connection Settings" in the WebUI
rpc-secret=whatever_passphrase_you_like

After you get the config file correct, start the service:

# Start the installed aria2 service
# (the package already has a service wrapper over aria2c)
# aria2 seems to assume port 81, hence the "S81" prefix on the init.d script,
# but aria2 does not control the port where you put the WebUI over http.
# It's just a cosmetic file naming convention.
/opt/etc/init.d/S81aria2 start

If the service won’t start (some bad configs have the service reported as “done”, and when you check again a second later with “S81aria2 check” it reports “dead”), you can debug by looking at what went wrong in /opt/var/log/aria2.log. That’s how I figured out I needed to turn off the “rpc-secure” parameter.
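My debugging loop, condensed (paths per the Entware defaults above):

```shell
# Start, wait a moment, then verify it actually survived the config
/opt/etc/init.d/S81aria2 start
sleep 1
/opt/etc/init.d/S81aria2 check   # "dead" here means it crashed right after "done"

# The reason for the crash lands in the log
tail -n 20 /opt/var/log/aria2.log
```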


## Mailpile Installation Notes

There’s a powerful Gmail-style web interface replacement for your regular mail hosted anywhere, called Mailpile. Think of it as Thunderbird but hosted like a web page. There are a few things I liked in the process of freeing myself from Gmail:

• Free to use any email (storage) service without tying your client to it
• View multiple accounts at the same time (NextCloud won’t do that)
• Very clean, concise interface that makes sense (Gmail users will be comfortable with it)
• Very security- and privacy-conscious with attention to detail! It even encrypts your local email cache and search index if you want (at a performance penalty)
• Excellent email setting autodetection. Just type your email account and everything’s set up for you!

The only downside is that the documentation is a little lacking. There are a few unexplained concepts that would confuse and scare users away. That’s why I’m explaining them here.

• This is a headless service where the interface is a webpage you access in a web browser.
• It’s originally designed to install and run as a local web server where you access Mailpile.
• Call mailpile (you can create a shortcut) and it’ll try to launch the correct page for the Mailpile client.
• Mailpile does not maintain a separate user registry: it uses the hosting computer’s native user manager
• Log into your Mailpile using the user account name of the computer where Mailpile is installed!

• Install multipile instead if you want other computers to access the headless service
• You’ll need to log into the computer that hosts the headless Mailpile and run mailpile AS the user you want to set up, once, to establish the account before use.

# Currently there are only packages for Debian-like distributions (because it uses apt-get)
# These instructions do not assume a direct root account; use sudo instead

# Install pre-requisite packages: curl apt-transport-https gnupg
sudo apt-get update && sudo apt-get install curl apt-transport-https gnupg

# apt-key add {contents of the package signing key provided by mailpile.is}
curl -s https://packages.mailpile.is/deb/key.asc |sudo apt-key add -

# Register mailpile.is's package server with Debian apt package manager
echo "deb https://packages.mailpile.is/deb release main" |sudo tee /etc/apt/sources.list.d/000-mailp.list
# NOTE: The official instructions say 000-mailp instead of 000-mailp.list
#       You need some file extension there because apt-get checks for it

# Multipile (mailpile-apache2) = Mailpile + (allowing access from other computers through apache)
sudo apt-get update && sudo apt-get install mailpile-apache2

# You'll need to run mailpile as the user once to establish your account with mailpile before use
mailpile
# If you are on the terminal interface instead of the web interface,
# enter 'setup' at the mailpile prompt:
> setup
# If you don't use the terminal mailpile client, follow the setup
# instructions on the web interface instead
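So for a hypothetical user “alice” on the host, the once-per-user account setup described above would look like this (run on the machine hosting multipile; the username is an example):

```shell
# Run mailpile once AS the target user to establish her account,
# then she can log into the multipile web UI with her system credentials
sudo -u alice mailpile
```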