Remote Desktop INTO Linux GUI (xrdp)

To serve a Linux desktop just like any other Windows computer through Windows Remote Desktop (formerly Terminal Services), the only option I have found so far is xrdp (with xorgxrdp). VNC, NX (NoMachine) and TeamViewer don’t count because they share the screen of an existing session instead of creating a new one for you.

Xrdp does not follow the same use pattern as Microsoft’s RDP. When you log in to an xrdp host (server) through an RDP client, you land in an intermediary (welcome interface) called sesman (Session Manager), which is a multi-protocol remote graphical session client (think of it as a very rudimentary Remmina).

“Session” here roughly means protocol; formerly (and still internally) it’s called a “Module”, as each client program is implemented as a lib*.so object module file.

The two session modules we are interested in here are:

  • Xorg (libxup.so): xorgxrdp is the MS-RDP-like mode that starts a new X session without first attaching to a screen.
  • Xvnc (libvnc.so): basically a VNC client. You start a VNC server (like x11vnc) with a display/screen (it can be started in any X session you are logged in to, or on the local user’s screen if you run the VNC server as a service) and connect to it through this RDP intermediary (welcome interface) without installing VNC client software.

In Windows, RDP does not distinguish between local and remote users, and a session with the same login account will take over the existing session. If you want each session to start fresh and leave other sessions alone, disable this in the Group Policy Object editor under Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Connections > “Restrict Remote Desktop Services users to a single Remote Desktop Services session”.

I am usually fine with that arrangement as well, but I often prefer my remote sessions to keep working in the background, leaving the local user alone (i.e., if I want things to show up on the local monitor, I’ll use VNC instead). I’d also like to resume my remote sessions when I log in from another computer, instead of starting from scratch with each new RDP connection. Given a bunch of xrdp quirks, it turns out this is much easier to do than reproducing MS-RDP’s default behavior.


First of all, out of the box, the same remote user can neither take over a locally logged-in desktop nor be logged in simultaneously! It’s either one or the other! I got bumped out immediately after logging in through sesman, or, if I logged in remotely first, I got bumped out when I tried to log in locally.

I found somebody suggesting that certain desktop environments might have added code to prevent a second session from opening, and one blog suggested editing the window manager launch script

sudo nano /etc/xrdp/startwm.sh

to add EITHER

unset DBUS_SESSION_BUS_ADDRESS
unset XDG_RUNTIME_DIR

OR

export $(dbus-launch)

RIGHT BEFORE the last lines, which check for and call the Xsession:

test -x /etc/X11/Xsession && exec /etc/X11/Xsession
exec /bin/sh /etc/X11/Xsession
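For clarity, with the unset variant the tail of /etc/xrdp/startwm.sh ends up looking like this:

unset DBUS_SESSION_BUS_ADDRESS
unset XDG_RUNTIME_DIR

test -x /etc/X11/Xsession && exec /etc/X11/Xsession
exec /bin/sh /etc/X11/Xsession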

This only solves the simultaneous local and remote logon part.


In the newer version as of this writing, the default behavior is that locally logged-in sessions are independent of remotely logged-in sessions, yet remote sessions resume by default (if you log in as the same user). This turns out to be exactly what I prefer: local sessions should be reached with VNC instead, and my remote sessions happen in the background without showing on the local screen.

Since the VNC server only has a password (it does not use the system’s Active Directory or user management), there is no user name. I went to the [Xvnc] section of xrdp.ini and replaced username=ask with username=na. The port number -1 no longer applies, since we aren’t emulating RDP with VNC anymore (where sesman creates a new VNC server instance if one isn’t already running). Given that I’m running VNC as a service on the default port 5900, I also changed it to port=5900.
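For reference, a minimal sketch of how the edited [Xvnc] section might look (key names vary slightly between xrdp versions, so treat this as illustrative rather than canonical):

[Xvnc]
name=Xvnc
lib=libvnc.so
username=na
password=ask
ip=127.0.0.1
port=5900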


Session/Module “vnc-any” uses the same libvnc.so as Xvnc above, and they are pretty much the same thing, except it exposes ip:port entry fields so you can use the host as a gateway to VNC servers hosted on other machines (it can also reach the VNC server on the very machine you just RDP’ed into if you stick with 127.0.0.1:5900). It’s more of a convenience: the host carries the VNC client software for you, so you don’t need to install a VNC client where you are.


There is also an RDP client module/session called ‘neutrinordp-any’, which basically uses the Linux machine you just connected to as a gateway to reach another machine hosting RDP. It’s rarely useful, and it didn’t work out of the box when I tried it (it does nothing after I press OK despite entering all the info correctly), so I removed it from my xrdp.ini.


There’s also a minor annoyance: if you connect remotely, an “Authentication Required…” message box shows up on start, since remote users are a little more restricted than local users. This can be solved by creating this file with nano

sudo nano /etc/polkit-1/localauthority/50-local.d/46-allow-update-repo.pkla

and paste the contents there and save it:

[Allow Package Management all Users]
Identity=unix-user:*
Action=org.freedesktop.packagekit.system-sources-refresh
ResultAny=yes
ResultInactive=yes
ResultActive=yes


Run VNC server before logging in to Linux GUI

I installed x11vnc and, to my dismay, there isn’t an easy option that automatically configures VNC as a service like most Windows VNC software does (so that you can VNC into a computer before logging in as a user graphically and launching the x11vnc executable).

I had to create the service manually and ran into a few problems, as the instructions on StackExchange and other forums are missing critical pieces.

Here I will use the x11vnc server on Ubuntu Cinnamon (systemd) as an example. Instead of blindly pasting code without context, I’ll sketch out the ideas first:

  1. Establish a password in a password file stored in a common area such as /etc/x11vnc.pwd instead of the user-specific default ~/.vnc/passwd
  2. Create a service (such as a systemd unit) pointing to the x11vnc program with the right parameters, including the path to that password file
  3. Start the service

It’s worth noting that the x11vnc connection is unencrypted. I tried the -ssl options, but my RealVNC clients complained about the version.


First of all, x11vnc -storepasswd creates the encrypted password file in the home folder of whoever runs it. You then point the server at that file with the -rfbauth {path to password file} parameter when launching x11vnc.

One way to do it is to copy the created password file to a system-wide configuration location instead of the user’s home folder:

sudo cp ~/.vnc/passwd /etc/x11vnc.pwd

An alternative (which I do not recommend) is to specify the password AND the password-file path directly as optional arguments of the -storepasswd parameter.

# Directly create the password file without a prompt that hides the password entry
x11vnc -storepasswd my_pASSword /etc/x11vnc.pwd
# Clean up your terminal command history since you've exposed the password visually
history -c

Unfortunately, if you want to specify the path to the password file, you have to type the plain-text password on the command line, which you should only do when nobody’s watching, clearing the history immediately afterwards. If you are in a public place, just do it the old way and copy the password file over.


The core part of registering the service (the data entry, if you will) is figuring out the command-line parameters for the x11vnc program. At minimum, you’ll need:

  • -rfbauth specifies where the password file is (or you can specify the password directly with -passwd, which I do not recommend)
  • -auth: authentication means (I prefer -auth guess, but you can specify where your .Xauthority file is)
  • -display :0 connects to the X11 server display, which is usually :0
  • -create is the missing link! You absolutely must use this to tell the VNC server to create an Xvfb (X virtual framebuffer) session if no display is found (which is the case when you run x11vnc as a service before logging in to a desktop environment like Cinnamon)

You’ll typically want this for an always-on VNC server:

  • -forever: x11vnc instances are by default (-once) killed after the client disconnects. The -forever option keeps the server running

My personal preferences:

  • -shared: I might have a few computers VNC’ing into the Linux box, and I don’t want to have to remember to close the ones I’m not using.
  • -noxdamage: XDamage is a system that only updates the changed parts of the screen. You don’t need it when bandwidth isn’t super tight.
  • -repeat: allow held keys to repeat, just like we are used to. The default is -norepeat, to avoid stuck-key scenarios.

For debugging (useful! That’s how I figured out the missing piece, namely that I have to use -create to make a dummy screen when running x11vnc as a service):

  • -o {output log file}: typically -o /var/log/x11vnc.log
  • -loop: if the program crashes for any reason, it’ll try to auto-restart for robustness. You might not need it if you use -forever

So the core command needed is:

x11vnc -repeat -noxdamage -create -display :0 -auth guess -rfbauth /etc/x11vnc.pwd -shared -forever

Now that we’ve decided on the exact launch command, we have to create the service entry. On systemd Linux, that’s done by writing a service configuration file (a text format very much like Windows INI files) under /etc/systemd/system, and the filename MUST end with the suffix “.service”.

In short, create /etc/systemd/system/x11vnc.service. A basic file without logging looks like this:

[Unit]
Description=VNC Server for X11
Requires=display-manager.service
# The two lines below make sure the network is up before launching
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/x11vnc -repeat -noxdamage -create -display :0 -auth guess -rfbauth /etc/x11vnc.pwd -shared -forever
# The 3 lines below are optional, but add robustness
ExecStop=/usr/bin/x11vnc -R stop
Restart=on-failure
RestartSec=2

# The [Install] section is used only by "sudo systemctl enable" to create symlinks
[Install]
# Start in multi-user mode (the systemd equivalent of the old runlevel 3)
WantedBy=multi-user.target

This is the minimal skeleton that does the same, with less robustness against the unexpected:

[Unit]
Description=VNC Server for X11
Requires=display-manager.service

[Service]
ExecStart=/usr/bin/x11vnc -repeat -noxdamage -create -display :0 -auth guess -rfbauth /etc/x11vnc.pwd -shared -forever

[Install]
WantedBy=multi-user.target

The default permissions of 644 (everybody can read but only root can write) are standard for services; 640, denying unknown people read access, is also acceptable if you are paranoid. The permissions should come out correct if you use sudo to create the file in the /etc/systemd/system folder.
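If you created the file some other way and the permissions came out differently, a trivial fix-up sketch:

sudo chmod 644 /etc/systemd/system/x11vnc.service
ls -l /etc/systemd/system/x11vnc.service   # verify: -rw-r--r-- root root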

Some older tutorials use the (/usr)/lib/systemd/system folder, which is now reserved for entries installed automatically by packages rather than manual service entries like ours. Technically either way works, but follow the convention so people know where to find the entries.


After that, enable the service file you’ve created (the “.service” suffix is optional when calling systemctl), and preferably do a daemon-reload to make sure edits to the service file are picked up. If you don’t want to wait until the next boot, you can start it right away with systemctl:

sudo systemctl enable x11vnc
sudo systemctl daemon-reload
sudo systemctl start x11vnc
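To verify it actually came up (and stayed up), a quick check; the log file only exists if you launched x11vnc with -o /var/log/x11vnc.log as mentioned above:

sudo systemctl status x11vnc
tail -f /var/log/x11vnc.log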

This kind of stuff in Linux is bitchworthy. It’s 2021 now. How come users need to mess with defining their own custom services for such a common VNC use case (starting before logging in graphically)? I never had to deal with this kind of shit in Windows with VNCs: they always expect the user to have the computer to themselves and always offer the option to set up the service automatically!


XFX TS Series 550W Power Supply – Made In China – Bulging Capacitor because it was installed backwards

I opened up my ATX power supply. I’ve had it for quite a few years, but it was stowed away and used only intermittently until I started using it a lot more in my office computer in recent years. I just don’t trust any power supply made in China, even from a reputable brand: a couple of decades of working with computers tells me they are bound to break after a few years, and very often it is a capacitor that rotted, with the rest being collateral damage. Lo and behold, there is one:

After I took the capacitor out, I noticed something odd: the polarity marker on the circuit board is the reverse of how the capacitor was installed! Holy smokes! I wanted to verify whether the PCB marking or the installer was right, so I soldered wires to the capacitor to lift it up, letting me connect the multimeter leads across it and measure the voltage polarity. This picture also shows the PCB’s capacitor orientation marking:

And the multimeter reads -5V following the original orientation of the capacitor before I took it out. This means the polarity was reversed!! No wonder the capacitor bulged. I was lucky that it didn’t blow up after a few years of use! Probably it was rated 16V yet only -5V was put across it, so the electrolytic capacitor rotted slowly instead.

To give XFX credit, they didn’t slap the power supply together with the cheapest white-label components from the gutter. It uses proper Nichicon and Hitachi capacitors, which might be the reason that reversed capacitor lasted so many years.

It’s the workmanship in China. If you go with a Red Chinese (Yellow-Soviet) brand, they might use junk components, but don’t think you are safe with foreign companies that have a solid process and design. Cheap labor in China that doesn’t give a crap can still manage to fuck it up. So trust nothing.

ElectroBOOM!


Image Remote Disks with Norton Ghost

Symantec Ghost has been my favorite tool since high school, as the user interface is minimalistic (runs fast) yet intuitive. It has pretty much every feature (use case) you can imagine, organized in a sensible way (unlike the fucking Linux man pages that drown you with four dozen command switches, not logically organized, so you have to skim through the entire thing to find out what is relevant).

The software is well made in general, so we can get a lot of mileage out of old versions. I recently had to clone a drive over the network, yet I didn’t want to share the image file. My initial plan was to have the remote computer (the one with the disk I plan to image) run as slave (Master-Slave mode, Peer-to-Peer over TCP), but there were a few hurdles:

  • The documentation doesn’t say which port is used. I had to use TCPview to figure it out: it’s port 6668.
  • Turns out slave mode does not support restoring from an image file located on the (puppet-)master. In other words, when you connect to the slave session, the “From Image” file dialog box only shows the files on the slave side! WTF!

It’s strange that you can clone a raw drive/partition from the master session to the slave session, but you cannot choose an image file as a source in place of the source drive. I tried the command line as well, to no avail. After some web searching I realized that I’m not insane; it was just the way Ghost is:

The rules inferred from this table are:

  • image files ALWAYS stay at the slave session
  • direct drive/partition copies are always master pushing data to slave
  • slave drives are never cloned (read)
  • master cannot read its own files to find image files
  • master can only select remote (slave) image files

First of all, direct drive-to-drive copies are bidirectional, so the above list is not entirely accurate; I struck through the conclusions derived from the incorrect assumptions above.

The rules for image files do not make much sense to me; I just can’t come up with a good excuse for them. The session has full access to storage on both sides, and the ghost command line’s logic makes image files fungible with direct drives/partitions. The restriction doesn’t discourage accidental overwrites or prevent one side’s data from being siphoned. All it does is tease users by not letting them read files/images from the master computer, where the user interaction is.


My first instinct was to restore the GHO image I want to push onto a local disk and then do the direct clone to the server. This is logically fungible with creating a VHD, mounting it, restoring the GHO image to the mounted drive, then using a direct ‘virtual disk’-to-disk clone to restore the remote (slave) disk. Luckily, newer Ghost has tools to simplify these steps. We need these 3 clues to figure it out:

  1. Virtual machine disk image files such as VHD can be used as a source or destination
  2. There’s a command switch to mount virtual machine disk image files internally WITHIN the Ghost session (no side effects: Windows won’t see it, and it won’t persist between Ghost sessions)
  3. GHO files are not directly mountable as a virtual disk, even internally within a Ghost session

So the complicated process can be shortened to: convert the GHO to a VHD, then internally mount the VHD as a direct drive through a command switch when launching Ghost. Using DEMO.gho as an example:

REM Convert DEMO.gho to DEMO.vhd
ghost -clone,mode=restore,src=DEMO.gho,dst=DEMO.vhd

REM Launch Ghost with DEMO.vhd internally mapped as a (direct) logical drive
ghost -ad=DEMO.vhd

I ran into some obscure error messages like “ABORT: 11030, Invalid destination drive” when trying to specify the full absolute path. So instead of fussing with the syntax that breaks the code, I added Ghost to my Windows %PATH% environment variable and ran ghost directly in the folder where the files are. I suspect it can be fixed with the /translate command switch to make sure the drive letter is not ambiguous between local and remote, but that’s something for later if I have a project that requires scripting this reliably.


My cliff notes here.

Run Ghost in slave mode

ghost -tcps

Do this at the Ghost master computer

REM Convert DEMO.gho to DEMO.vhd
ghost -clone,mode=restore,src=DEMO.gho,dst=DEMO.vhd

REM Launch Ghost with DEMO.vhd internally mapped as a (direct) logical drive
ghost -ad=DEMO.vhd -tcpm:{IP address of the slave computer}

Remember to open port 6668 at the Ghost slave computer.
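If the slave happens to be a Windows box with the built-in firewall, a sketch of the rule (the rule name is arbitrary):

REM Allow Ghost's peer-to-peer port (6668, as found with TCPview above)
netsh advfirewall firewall add rule name="Ghost slave" dir=in action=allow protocol=TCP localport=6668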


Appendix

Technically, it’s possible to restore from an image file located AT THE SLAVE side, but it’d be a stupid idea. Initially I thought Ghost would be smart enough to use the image file directly on the slave side to clone the drive locally. However, given the speed and my observations with TCPview, this is not the case. It does the stupid thing of crawling the contents of the image file from the slave machine in chunks and sending them back to the slave!


rsync/Deltacopy gotchas (especially Windows transfers)

Deltacopy is a GUI wrapper around rsync, a feature-packed tool that copies files locally AND remotely, AND differentially (it automatically figures out which parts are different and resends only those: excellent for repairs) through hash comparisons. For non-programmers, a hash is a short ID computed from a chunk of data that is expected to change wildly at even the slightest data/file change/corruption.

Deltacopy is very useful if you just want to do the basic stuff without knowing rsync syntax and switch combinations off the top of your head. It also provides a Windows port of rsync based on Cygwin (a tiny Linux runtime environment for Windows). It is the only free alternative to cwRsync, a paid Windows port of rsync.

rsync is a Swiss Army knife that can also work from one local path to another; Deltacopy is intended for remote file transfers.

Deltacopy server is basically this:

rsync --daemon 

However, on Windows, since it’s Cygwin, it looks for Linux’s /etc/rsyncd.conf by default if you do not specify the config file through the --config switch.
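So on Windows you typically point the daemon at the config explicitly; a sketch (the path is a placeholder for wherever your rsyncd.conf actually lives, in Cygwin terms):

rsync --daemon --config=/cygdrive/c/deltacopy/rsyncd.conf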

The Deltacopy client basically helps you generate the command to transfer files. Most of the features are reached through the right-click (context) menu rather than the toolbar or pull-down menus, which might confuse some people. You set up your tasks as Profiles, which can be scheduled (the bottom panel) or executed immediately by right-clicking on the profile:

Run pushes files to the server; Restore pulls files from the server. Run Now and Restore execute the command (aka task) immediately. You can peek at what it generated by right-clicking on the profile and choosing “Display Run/Restore Command”. First-time users might not find it, since the only place to access it is the context menu.

There are some tricky parts (gotchas) when specifying the files/folders to copy. First of all, even though you use the Add Folder/Add Files buttons to create entries, you can turn each entry into a (source, destination) pair by editing the selection and target path; they are passed to the rsync command verbatim. The target path is relative to the virtual directory set on the server (see Deltacopy Server’s directory).

By default, the destination path is endowed with the branch folder name (one level deep). In other words, if your source is C:/foo/bar, Deltacopy sets the destination to /bar instead of /. This is probably to avoid the temptation of lumping all contents into the same remote destination root. If you simply want to lay the files at the destination’s root virtual folder (my most common use case), you’ll have to edit the entry and clear out the (relative) destination path.

As for the source, the author of rsync chose the logical (more conservative) but unintuitive way: by default it reconstructs the source folder’s FULL path structure at the destination! For example, if you intend to copy everything under C:\foo over, the destination will create {destination root}\foo in the process and put everything under it instead of directly at {destination root}. The design choice was supposed to prevent accidental overwrites when multiple source subfolders try to write over each other with the same names at the destination.

Luckily, there’s a way around it! See the man page for -R/--relative: put a dot (.) at the place where the relative path starts! For example, say the source is C:\foo\bar\baz and you do not want /foo to be created at the destination, wanting it to start at /bar instead: enter C:\foo\.\bar\baz as the source. Everything to the left of the dot (which refers to the folder itself) is stripped from the destination path structure.
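A sketch of the same trick in a raw rsync call (the paths and module name are illustrative; note the Cygwin-style path on Windows):

# /foo is stripped at the destination; the tree starts at /bar
rsync -avR "/cygdrive/c/foo/./bar/baz" "server::module/"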


ACL support for Windows sucks because rsync lives on Cygwin, which has POSIX (Unix/Linux) style permissions/ACLs.

https://unix.stackexchange.com/questions/547275/how-do-i-use-rsync-to-reliably-transfer-permissions-acls-when-copying-from-ntf

In my opinion, the best way to go about it is to not transfer ACLs from the source and let the preexisting ACLs at the destination apply. I’d also leave the groups and owners alone (inherit at the destination), since I might not be on the same Active Directory (or workgroup user management) as the destination computer, so accounts with the same name might not actually be the same accounts.

--no-p --no-g --no-o

--no-{option} is the complementary prefix that does the opposite of -{option}, so the above means skipping -p (perms/permissions), -g (group) and -o (owner), letting the destination decide them by its own defaults and inheritance.
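Put together, a hedged sketch (server and module names are hypothetical):

# Copy everything, but let the destination's own defaults/inheritance
# decide permissions, group and owner
rsync -av --no-p --no-g --no-o "/cygdrive/c/data/." "server::backup/"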


Sometimes a remote path can be mistaken for a relative local path (with the hostname/IP address taken as a folder name) if there’s no username. Start it with rsync:// as the URL scheme; the syntax is like ftp:// as far as the username is concerned.
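A sketch of the unambiguous form (user, address and module are placeholders):

rsync -av ./files "rsync://user@192.168.1.5/backup/"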

Deltacopy protects the source and destination paths with double quotes (“). It’s a good practice we should follow even in direct rsync calls.


Tomato OpenVPN client assigned for specific computers

Setting “Redirect Internet traffic” to “Policy Rules” opens a table where you can specify which computers go through the VPN and which use the direct connection. Leave the destination IP unspecified and it’ll pick 0.0.0.0 as intended.

However, there’s a logical trap when you blindly follow instructions that set “Accept DNS configuration” to “Exclusive”, as most guides do, assuming all computers on the network go through the VPN. Setting it to “Exclusive” means even the computers not intending to use the VPN still go through your VPN provider’s DNS! With a slow VPN connection, this will be painfully slow for ALL computers! Set it to “Relaxed” instead.


Not missing Windows after trying Ubuntu Cinnamon Remix

Given that I grew up as a power DOS/Windows user, I often have gripes about how frustrating Linux is: it was almost never ready for people who just want to get common things done by intuitively guessing where a feature is (instead, you have to RTFM or search the web for answers).

I deal with HP/Agilent/Keysight instruments a lot and appreciate the effort they put into user experience (UX) design. It’s not the user who’s stupid if they have to dig through 5+ levels of menu buttons to measure a Vpp (peak-to-peak voltage) because the software isn’t smart enough to default to the only channel in use. That’s what Tektronix did with their nasty user interface, raising a generation of Stockholm-syndrome patients who keep buying Tek because they were traumatized by the steep learning curve and would rather walk on broken glass than learn a new interface from another vendor (that’s called vendor lock-in).

I certainly appreciate that the designers of the Cinnamon desktop environment (which came with Linux Mint) were willing to not insist on the ‘right way of doing things’ and instead followed the path most intuitive for users coming from a Windows background.

The last time I used Linux Mint was version 19, and there were still quite a lot of rough edges. Some services got stuck (timed out) right out of the box and systemd ground through the boot slowly; it just wasn’t fast and responsive. When I tried again after Mint 20.1 was released, my old i3 computer booted to the GUI in 5 seconds and I was hell of impressed. The icons and menus are also now sized in balanced proportions like Windows (I can’t stand the big, thick default menu-item fonts in Ubuntu).

However, there’s one big impediment to making Linux Mint my primary OS: its package repositories are one generation behind Ubuntu’s (the most widely supported distro)! Software often has bugs that the developers have already fixed, and living with old, ‘proven’ software slows down the iterative process.

I went through hell trying to access a Bitlocker volume with Linux Mint 20.1: not only does it not work right out of the box like Windows, I was stuck with a command-line dislocker that doesn’t integrate with the file manager (Nemo). The zuluCrypt available with Mint 20.1 is too old to support Bitlocker properly, and trying to upgrade it to 6.0 ran into unsolvable Qt dependencies. I was able to download the unsanctioned old revision as a Debian package, but that brought more unsolvable dependencies.

The alternative of compiling from source was met with more dependency fuckery: the restrictive Mint repository might not even have the exact compiler version required by the source package. Aargh!

I was about to give up on Linux Mint, install Ubuntu, and hold my nose while changing the desktop to Cinnamon. Luckily I found somebody who read my mind: there’s Ubuntu Cinnamon Remix!

Not only does Ubuntu Cinnamon Remix support Bitlocker right out of the box (no need to fuck with zuluCrypt, which doesn’t integrate with the file explorer anyway), but most of the defaults also make sense: buttons are often where I expect them to be, and even the Win+P key works identically! The names/lingo stay close to Windows whenever possible, and honestly the default Yaru theme is visually slightly more pleasing than Windows, as it makes very good use of the visual space!

Here are a few transition tips.

Lingo:

Windows → Ubuntu/Cinnamon
Wallpaper → Background
Device Manager → (no equivalent; install hardinfo for System Information)
Task Manager → System Monitor
Windows Key → Super Key
Shortcut → Launcher

Apps and their near equivalents:

Windows → Linux
Foobar2000 → deadbeef
Notepad++ → notepadqq
Greenshot → ksnip

I use WinSplit Revolution in Windows (the old version is freeware), which uses the numeric keypad to lock windows to a 9-square grid via Ctrl+Alt+{Numpad 1-9}. Save the keyboard shortcuts in case you want to set them up again on another computer:

# Back up Cinnamon's custom keybindings
dconf dump /org/cinnamon/desktop/keybindings/ > dconf-settings.conf
# Restore them (e.g., on another computer)
dconf load /org/cinnamon/desktop/keybindings/ < dconf-settings.conf

There’s no Ctrl+Shift+Esc key, which I often use to call Task Manager (here called System Monitor). I had to create that shortcut as well to feel at home.

Windows → Linux
(Explorer) Alt+D for the address bar → (Nemo) Ctrl+L


HP 54502A Datasheet typo about AC coupling

The cutoff frequency of 10 Hz on the datasheet is a typo. Better scopes at the time claimed 90 Hz; 10 Hz is just too good to be true.

Found the specs from the service manual:

Don’t be fooled by the -3 dB cutoff while ignoring how wide the transition band can be (it depends on the filter type and order). It turns out this model has a very primitive filter: AC-coupled mode still messes up square waves below 3 kHz despite the spec saying -3 dB at 90 Hz. You’d better allow a 30+ fold guard band on old scopes!

Remember that a square-wave pulse train in the time domain is basically a sinc shape centered at every impulse of the impulse train in the frequency domain, superimposed. Unless you have a tiny duty cycle (not the case for uniform square waves, which are 50%), the left-hand side of the sinc around a 1 kHz fundamental still has sub-1 kHz components that can be truncated by the AC coupling (a high-pass filter).
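For reference, a minimal sketch of the math: an ideal 50% square wave of amplitude A at fundamental f_0 contains only odd harmonics,

x(t) = \frac{4A}{\pi} \sum_{k=1,3,5,\dots} \frac{1}{k} \sin(2\pi k f_0 t)

and since the k=1 term carries the bulk of the energy, a high-pass filter whose transition band reaches anywhere near f_0 attenuates and phase-shifts that fundamental, which is exactly the tilt/droop you see on the flat tops. Hence the 30+ fold guard band.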


Qemu for Windows Host Quirks

I’m trying to cross-compile my router’s firmware, as I made a few edits to override the DDNS update frequency. Turns out the build doesn’t work on the latest Linux, so I need to run an older Ubuntu just to keep it happy.

RANT: Package servers keep pulling the rug out from under outdated Linux, which is frustrating. Very often developers don’t make a self-contained installer, so we are wedged between downloading a package at the mercy of its availability on package managers and their servers, or compiling the damn source code!

With the promise that Qemu might have less overhead than Hyper-V or VirtualBox (it observably does), I tried installing Qemu on a Windows host, and it turned out to be a frustrating nightmare.

RANT: Linux is not free. The geniuses did the most sophisticated work for free, but users pay in time and energy cleaning up after them (i.e., a support network dealing with daily frustrations) to make these inventions usable. There’s a company that did the cleanup to make BSD (same Unix umbrella as Linux) usable and made a lot of money: it’s called Apple Computer, after Steve Jobs’ return.

qemu is just the core components; system integration (simplifying common use cases) is practically nonexistent. Think of the developers as the ones who produced an ASIC (chip), with the end user as the application engineer. There are a few tutorials for moderately complex scenarios on qemu Linux hosts, but you are pretty much on your own trying to piece it together for Windows, because there are conceptual and terminology differences. The --help text for qemu’s Windows host VM engine was blindly copied from the Linux counterpart, so it tells you about qemu-bridge-helper, which is missing.

I stupidly went down the rabbit hole and drained my time on qemu, so I documented the quirks to help the next poor sap who has to get qemu running on a Windows 10 host efficiently in Bridged-Adapter (VirtualBox lingo) networking mode.

  • Preparation work to get the HAXM accelerator set up
    • Release VT-x (hardware-assisted virtualization) so HAXM can acquire it
      • You’ll need to remove Hyper-V completely, as it hoards control of VT-x
        • Windows Sandbox and Windows Subsystem for Linux (WSL2) use Hyper-V. If you just uncheck Hyper-V in Windows Optional Features and leave either of these two on, Hyper-V is still active (unchecking only removes the icons)
    • HAXM v7.6.6 is not recognized by qemu on a clean install. Install v7.6.5 first, then remove it and install v7.6.6. Likely they forgot a step in v7.6.6’s installer
    • Turn on acceleration with: -accel hax
  • Command line qemu engine
    • qemu-system-{architecture name}.exe is what runs the show
    • qemu-system-{architecture name}w.exe is the silent version of the same engine. It won’t give you a clue if something fails (like invalid parameters)
    • qemu-img create -f {format such as vpc (vhd)/qcow2} {hard drive image name} {size like 10G}
  • QtEmu sucks, and there aren’t any better GUIs out there!
    • It’s basically a rudimentary GUI wrapper for the command line
    • It only has user mode (SLIRP) networking (the default)
    • It’s not actively maintained, so it doesn’t keep up with parameter syntax changes (i.e., it can generate invalid combinations)
    • Since it uses the silent engine (the one with the w suffix), likely to avoid a lingering command window, it also won’t tell you shit about what failed and why. It just ignores you when you press the start button unless all the stars align (you got everything right)
  • Basic command line parameters (assembled into a full example after this list)
    • Set aside 10G of RAM for the VM: -m 10G
    • 1 core if unspecified. The number of available threads (on a hyper-threaded system) shows up as the # of processors; it refers to logical processors, not physical cores.
      • Windows: -smp %NUMBER_OF_PROCESSORS%
      • Linux: -smp $(nproc)
    • Attach virtual hard drive: -hda {virtual hard drive file name}
    • Attach optical drive (iso): -cdrom {iso file}
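Putting the basics together, a minimal sketch of a full invocation (file names and sizes are placeholders):

REM Create a 20G virtual disk, then boot an installer ISO with HAXM acceleration
qemu-img create -f qcow2 ubuntu.qcow2 20G
qemu-system-x86_64.exe -accel hax -m 10G -smp %NUMBER_OF_PROCESSORS% -hda ubuntu.qcow2 -cdrom ubuntu.iso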

I typically want VirtualBox’s Bridged-Adapter option, where the virtual NIC plugs into the same router as the host and just appears as another computer on the host’s network. This is broken into a few components in qemu that you have to manage separately. Great for learning how Bridged-Adapter really works, but it draws a lot of swear words from people who just want to get basic things done.

Networking in QEMU is another can of worms if you deviate from the default SLIRP (user mode). I figured out how to make it work, but the network bridge is faulty and kept crashing my Windows with a BSOD in bridge.sys with varying error tags. I had short glimpses of it working if I moved very fast. It looks like the TAP driver corrupts memory: the bridge became very erratic, I saw error messages when deleting it, and I got persistent BSODs when the bridge started after the VM hung at the TAP bridge on boot.

I list the steps below to show what should produce the Bridged-Adapter (VirtualBox) equivalent if there were no bugs in the software, but hell, I’m throwing qemu for Windows in the trash as it’s half-baked.

First of all, you need to install OpenVPN to steal its TAP-Win32 virtual network card. Unlike VMware or VirtualBox, the driver isn’t part of the package, and Qemu didn’t care to tightly integrate or test it properly.

Then you’ll need to bridge the “TAP-Windows Adapter (V#) for OpenVPN” with the network interface you want it to piggyback on.

The name of the TAP adapter is what you enter as the ifname= parameter of the tap interface on the qemu command line. You have to tell qemu specifically which interface you want to engage. I named the virtual network card ‘TAP’ above. After bridging, it looks like this:

You are not done yet! The bridged network (seen as one logical interface) gets confused and won’t configure itself through your physical network card’s DHCP client. You’ll have to go to the properties of the Network Bridge and configure IPv4 with a static IP.

You can use ipconfig /all to find the relevant adapter’s DHCP-acquired settings and enter them as the static IP. Coordinate with the network administrator (which can be yourself) to make sure you own that IP address, so you won’t run into an IP conflict if you reboot and somebody takes your IP.

After these are all set up the parameter to add to qemu call is:

-nic tap,ifname=TAP

There are more complicated switches like -net nic and -netdev with -device. Those are the old ways of doing it, with bloated abstractions; the -nic switch combines them into one.
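For comparison, a sketch of the same TAP attachment in both styles (model= is optional; e1000 here is just an example NIC model):

REM Old style: two switches tied together through id=
REM   -netdev tap,id=n1,ifname=TAP -device e1000,netdev=n1
REM New style: one -nic switch does both
REM   -nic tap,ifname=TAP,model=e1000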

Then welcome to the world of Windows 10’s bridge.sys crashing frequently; you might get a short window of opportunity in which the VM boots and ifconfig acquires IP settings from the DHCP server of your router (or whatever network the physical adapter is on).

It’s like a damn research project: finding out something is technically feasible but definitely not ready for production. Welcome to the FOSS jungle!

Postscript: I put Hyper-V back and realized it’s insanely slow with Linux Mint, as it does not support hardware graphics acceleration. It’s a night-and-day difference. Qemu is fast, but it crashes Windows 10 if I bridge the adapters!


Aria2 WebUI Notes

Aria2 is a convenient command-line downloader that works like curl/wget for http/ftp, but it also supports many other protocols, and aria2 does multipart downloads natively!
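For example, a basic segmented download looks like this (-x caps connections per server, -s sets the number of splits; the URL is a placeholder):

aria2c -x 8 -s 8 "https://example.com/big.iso"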

Instructions for Aria2 on Entware hosted by Lighttpd (defaults to Port 81): https://www.snbforums.com/threads/aria2-webui-on-asuswrt-merlin.63290/

Instructions for Nginx on Entware (defaults to Port 82): https://hqt.ro/nginx-web-server-with-php-support-through-entware/

Instructions for Aria2 on Entware: https://hqt.ro/aria2-download-manager-through-entware/

There are some minor details that changed.

# Install the base (core) software first
# This example is for entware
opkg install aria2

# Download the package from Github zip to /opt/tmp
wget -c -O /opt/tmp/webui-aria2.zip https://github.com/ziahamza/webui-aria2/archive/master.zip --no-check-certificate

# Make sure you have some web server installed (nginx, httpd, apache, etc.)
# Nginx HTTP server instructions
# https://hqt.ro/nginx-web-server-with-php-support-through-entware/
# Make sure you know what {Webroot} is
# for Nginx, {Webroot} is /opt/share/nginx/html

# Unpack the zip file at /opt/tmp and clean up the zip
unzip /opt/tmp/webui-aria2.zip -d /opt/tmp/ && rm /opt/tmp/webui-aria2.zip
# Move/rename to desired location
mv /opt/tmp/webui-aria2-master {Webroot}/aria2

Nginx defaults to port 82 (adjust to wherever your web server listens). The WebUI can be accessed at http://your_server_here:82/aria2/docs.

/docs is inconvenient, so I created a redirect by placing this index.html under aria2’s root folder:

<meta http-equiv="Refresh" content="0; url='./docs'" />

The RPC host breaks out of the box: you’ll need to make a few adjustments to /opt/etc/aria2.conf before the service can start without crashing (and until then the WebUI will of course complain with a lot of cryptic error messages):

# Basic Options
dir={change it to a viable folder that has enough space if /opt/var/aria2/downloads is not big enough}

# RPC Options
# Unless you want to get a certificate, you'll need to use unsecure mode:
rpc-secure=false
# Change your rpc-secret to match "Connection Settings" in the WebUI
rpc-secret=whatever_passphrase_you_like

After you get the config file correct:

# Start the installed aria2 service
# (the package already includes a service wrapper over aria2c).
# aria2 seems to assume port 81, hence the "S81" prefix of the init.d script,
# but aria2 does not control the port where you host the WebUI,
# so it's just a cosmetic filename convention.
/opt/etc/init.d/S81aria2 start

If the service won’t start (some bad configs leave the service reported as “done”, and when you check again a second later with “S81aria2 check” it reports “dead”), you can debug by looking at what went wrong in /opt/var/log/aria2.log. That’s how I figured out I needed to turn off the “rpc-secure” parameter.
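The check-then-log routine boils down to:

# Check whether the service is alive, and inspect the log if it died
/opt/etc/init.d/S81aria2 check
tail /opt/var/log/aria2.log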
