
This weekend I finally updated my LG X130 netbook to Slackware 13.37. This is one of my “stable” machines that I use all the time for day-to-day tasks, so I do not run -current on it, as I depend on it too much. Slackware-current is very interesting for research and testing all the new stuff, but it can break when you least expect it. And then you need a “stable” installation to at least access the internet, pay the bills, read e-mail, etc. So I left this netbook as one of the last machines to update to Slackware 13.37.
And it went quite smoothly. Except for one thing: the wireless adapter was not working…
Old problem returning?
I remembered that when I first installed Slackware on this netbook about a year ago, I also had a problem with this rt3090 card. The kernel confused it with the rt2800 and tried to load the rt2800pci module – which did not work with this adapter.
At the time, I found a simple solution by “Googling”: putting the rt2800pci module on the blacklist by creating a simple text file in /etc/modprobe.d/ with a single line:
blacklist rt2800pci
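For reference, creating that file comes down to a single command (the file name is my own choice – any file in /etc/modprobe.d/ will do):
# echo "blacklist rt2800pci" > /etc/modprobe.d/rt2800pci.conf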
So I checked if that file was still there, and it was…
I did a “lsmod | grep rt2” and the result was:
rt2860sta              483303  1
crc_ccitt                1087  1 rt2860sta
So the correct module was loaded!
More investigation was needed
“ifconfig” did not show the wlan0 interface, but “ifconfig -a” did.
I tried “ifconfig wlan0 up” but it returned with:
SIOCSIFFLAGS: Operation not permitted
So what do we do when a kernel module has problems? Check the dmesg log…
I did a “dmesg | grep -i rt” and found this interesting line:
rt2860 0000:02:00.0: firmware file rt3090.bin request failed (-2)
So, the rt2860 module is looking for the rt3090.bin firmware and not finding it!
But isn’t the rt2860 driver used for the rt3090? I remembered reading (don’t know where, probably when I first installed Slackware 13.1 on this netbook) that the rt3090 adapter was handled 100% by the rt2860 driver.
I decided to check the /lib/firmware directory, where all the firmware files are installed in Slackware. There were several rt2xxx.bin files, but no rt3090.bin.
This is where I decided to get bold
I thought: It’s not working anyway, so what can I lose?
And I created a symlink rt3090.bin pointing to rt2860.bin. They are the same in the kernel anyway, right?
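In commands, that was simply (as root):
# cd /lib/firmware
# ln -s rt2860.bin rt3090.bin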
I rebooted, and… It worked!
My wireless adapter was working again and my netbook was fully operational as before the update.
But was this really the correct solution?
I decided to go straight to the source and browse around on kernel.org. And in their git repository I found this commit:
rt2860sta: Use latest firmware for RT3090
author     Ben Hutchings   Sat, 30 Apr 2011 18:31:32 +0000 (19:31 +0100)
committer  Ben Hutchings   Tue, 17 May 2011 04:22:12 +0000 (05:22 +0100)

Ralink’s original drivers for RT2800P/RT2800E-family chips used
multiple different versions of the firmware for different chips. The
rt2860sta driver in staging was briefly converted to load different
files for different chips. However, the current rt2860.bin is
supposed to work on all of them, so replace rt3090.bin with a symlink.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Well, well! So the official solution was to create a symlink as well!
I checked the tree in their git repository and there you can see they have the symlink “rt3090.bin” pointing to the rt2860.bin file, just as I did.
I also noticed a second symlink: rt3070.bin is pointing to rt2870.bin.
Report it back to “The Man”
Now that I was reassured that my solution was the correct one, I sent an e-mail to Pat Volkerding about my findings.
It would be nice to see the rt2860-firmware-26-fw-1 package create the symlink on installation, to prevent some headaches around the world.
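For illustration only, a few lines like these in the package’s doinst.sh could take care of it (this is my suggestion, not something the official package does; doinst.sh scripts use paths relative to the installation root):
( cd lib/firmware
  if [ ! -e rt3090.bin ]; then
    ln -sf rt2860.bin rt3090.bin
  fi
)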
I needed to access a remote Windows XP desktop, but I don’t run Windows here on my computer. I do have a netbook that came with Windows pre-installed, so I first set it up there, but I wanted access from the computer I use every day, which runs Slackware.
Enter my quest for a Linux-based solution…
1. FreeRDP
First I found FreeRDP, a fork of the original rdesktop project that is in active development. There was no SlackBuild for it, but it was straightforward enough to write one and I had it up and running in no time.
FreeRDP is completely command-line based – something I actually like – and after a quick look at the man-page I was able to connect to my Windows desktop by typing:
xfreerdp -g <width>x<height> -u <user> -p <password> <ip-of-desktop>
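For example, assuming a remote desktop at 192.168.1.50 and a 1280×800 window (the address, user and password here are made up):
xfreerdp -g 1280x800 -u niels -p mypassword 192.168.1.50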
And I was connected:
2. Remmina
Having a working solution was nice, but I wanted something to connect quickly, without typing all the parameters every time. I could write a simple script – or look for a GUI.
I found a SlackBuild for Remmina, which seemed to work with FreeRDP and give a nice menu to configure all the parameters and desktops I want to connect to.
I soon found out that I needed updated versions of Remmina and Remmina-plugins, though. The newest FreeRDP version (0.8.2) is no longer compatible with the versions the Remmina SlackBuilds were originally written for. Compilation of the plugins crashed completely, complaining about missing struct members and other errors, like 'RD_PEN' has no member named 'colour', 'RD_BRUSHDATA' has no member named 'colour_code', expected declaration specifiers or '...' before 'RD_HCOLOURMAP', etc., etc.
A simple version bump to Remmina-0.8.3 and Remmina-plugins-0.8.4 solved all problems though.
After building and installing it, simply start Remmina from the “Network” menu and configure your remote desktop:
Fill in the parameters and your new remote desktop will show up in the list:
Now click on the connect button and we’re there:
Remmina has some nice features, like full-screen with a floating bar, just like the Windows connection has, tabs for several concurrent remote connections, etc. What I’m missing though are some more flexible screen resolutions, so that I can mimic the exact resolution I have on the real desktop. Well, I’ll send this suggestion to the authors and who knows, maybe in the next release…
3. Installing it on your own system
Remmina needs libssh and libunique. You can download the SlackBuilds for these and for remmina and remmina-plugins from SlackBuilds.org and build the packages.
For freerdp, which is still in the pending queue for approval on SlackBuilds.org, you can get the SlackBuild script from my site, where you can also download pre-built packages for all these.
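If you have never built a package from SlackBuilds.org before, the process is roughly the same for each of them. Shown here for remmina, assuming you already downloaded the SlackBuild tarball and the source tarball listed in its .info file (file names may differ slightly):
# tar xvf remmina.tar.gz
# cd remmina
# ./remmina.SlackBuild
# installpkg /tmp/remmina-*.tgz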
The last few days I was struggling with the following situation:
- I wanted to examine some packets passing through my firewall
- my firewall is a “headless” Slackware server (that means no monitor, accessible only via ssh)
- it has no X installed (and therefore no window manager, etc.), so it won’t run wireshark
Now wireshark can read packets captured by tcpdump and written to a file, so that’s what I did at first:
- capture with tcpdump
- write the packets to a file on a network share
- open the file with wireshark
The problem is that I wanted to examine the packets “on-line”, as they pass through the firewall.
Linux has a solution for this (inherited from Unix), called “pipes”. These are a special kind of file, to which one program writes while another reads from it, getting the contents in the right order.
In other words: the packet that went into the pipe first will come out first at the other side of the pipe. Imagine it as a “virtual tube”.
It was a bit of a struggle to get all the parameters right, but in the end, I got it working like this (note: all commands are entered in a terminal session on the desktop):
- Create the pipe
- Start tcpdump remotely with ssh from the desktop where you have wireshark installed:
- While tcpdump is capturing packets and sending them to the pipe, open another terminal, start wireshark and use the pipe as the input
niels@desktop:~$ mkfifo /tmp/pipes/cap_fw
“/tmp/pipes/” is where I create my pipes, feel free to use whatever directory you prefer.
“cap_fw” is the name of the pipe I selected.
niels@desktop:~$ ssh root@<firewall> "tcpdump -s 0 -U -n -w - -i eth1 not port 22" > /tmp/pipes/cap_fw
Replace <firewall> with the name or ip address of your remote server.
The options I used are:
- -s 0 : use the required length to catch whole packets
- -U : packet-buffering – write each packet to the pipe as soon as it is captured (as opposed to waiting for the buffer to fill)
- -n : no address-to-name conversion (you can let wireshark do this if you want)
- -w - : write the output to standard output
- -i eth1 : capture from interface eth1 – change to match your setup
- not port 22 : leave out any packets from / to port 22. This is needed because we use ssh to connect to our firewall, so that we don’t capture the captured packets again… If you need to examine port 22 on your server, run ssh over an alternative port.
- > /tmp/pipes/cap_fw : redirect the output to our pipe.
niels@desktop:~$ wireshark -k -i /tmp/pipes/cap_fw
Here the options mean:
- -k : start immediately
- -i /tmp/pipes/cap_fw : use our pipe as the “interface”
And you’re up and running!
You can use all the normal functions of wireshark, like filtering, etc., as if you were capturing from a local interface.
By special request from BP{k}, here is a diagram of the setup, showing how ssh gets the data captured by tcpdump on the server and sends it through the pipe to wireshark (drawn with a little help from LeoCAD, l3p and POV-Ray):
I might write a nice bash script to make things simpler now that I figured it all out.
If it is good enough in the end, I’ll publish it here on my blog.
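As a first idea, such a script would be little more than the three commands above glued together. A rough, untested sketch (host, interface and pipe location are just defaults to override):

#!/bin/bash
# remote_capture - capture on a remote server with tcpdump, watch locally with wireshark
# Usage: remote_capture [host] [interface]

HOST=${1-firewall}          # remote server to capture on
IFACE=${2-eth1}             # interface on the remote server
PIPE=/tmp/pipes/cap_$HOST   # named pipe that feeds wireshark

# Create the pipe if it does not exist yet:
mkdir -p /tmp/pipes
[ -p "$PIPE" ] || mkfifo "$PIPE"

# Start the remote capture in the background, writing into the pipe:
ssh root@$HOST "tcpdump -s 0 -U -n -w - -i $IFACE not port 22" > "$PIPE" &
SSH_PID=$!

# Read from the pipe with wireshark; when wireshark exits, stop the capture and clean up:
wireshark -k -i "$PIPE"
kill $SSH_PID 2>/dev/null
rm -f "$PIPE"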
Just as a small reminder: I am a long-time Slackware user (since 1996) and I only test my configurations on this distribution. I have used other Linux ‘flavors’ in the past, but I know much less about them.
Most things I post here will work on other systems though, but don’t shoot me if they do not.
I started using cbq for traffic shaping on my local network because of the following situation:
I use rsync to copy some files I cannot afford to lose from my desktop to my wife’s and vice versa. I use crontab to do this automatically at certain hours.
Rsync is a wonderful protocol that only copies files that have changed, saving time and bandwidth.
But sometimes many files are changed or added, and then the whole bandwidth of my local network is used, slowing down other traffic.
At these times even browsing the internet can become very slow, just because I am backing up some folders of new digital pictures.
Rsync has its own '--bwlimit' option, but I wanted a better, more structured solution. And this solution is cbq.
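Just for comparison, the rsync-only workaround would be a crontab line along these lines (hypothetical paths and hostname; --bwlimit takes kilobytes per second):
rsync -av --bwlimit=500 /home/niels/pictures/ wife-pc:/backup/pictures/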
Basically, configuring cbq is done in three steps:
1) Setting up cbq
cbq is actually a script that can be found in the documentation of iproute2 in Slackware. We have to copy it to /sbin and make it executable:
# cp /usr/doc/iproute2-2.6.16-060323/examples/cbq.init-v0.7.3 /sbin/cbq
# chmod +x /sbin/cbq
cbq expects its configuration files in /etc/sysconfig/cbq
If this directory does not exist, create it:
# mkdir /etc/sysconfig/cbq
2) Creating the rules-file
cbq reads files in /etc/sysconfig/cbq with the following names:
cbq-nnnn.yyy where:
- nnnn: is a hexadecimal number from 0002 to ffff
- yyy: is the name of your network interface, like eth0, eth1, etc
In my case, the network interface for my local network is eth1, so I created “cbq-0002.eth1”
Here is the contents of my file:
DEVICE=eth1,100Mbit,10Mbit
RATE=5000Kbit
WEIGHT=500Kbit
PRIO=5
RULE=192.168.1.110:873,192.168.1.0/24
BOUNDED=no
ISOLATED=no
Some explanations:
- DEVICE: the interface you want to limit, with its real speed and its weight (1/10 of the max. speed)
- RATE: the bandwidth you want to offer for this particular application / port / address
- WEIGHT: 1/10 of the RATE
- PRIO: Priority setting. 5 is default
- RULE: source,destination –> in my case 192.168.1.110 is my desktop, 873 is the port rsync uses
- BOUNDED: Default no, used if you have other filters
- ISOLATED: ‘no’ means that the rate can be used by other traffic if not in use
3) Starting the bandwidth limiting
Use cbq compile to prepare the filters after you create or alter your cbq-nnnn.yyy files.
Then use cbq start to start your traffic-shaping!
To always start cbq, include it in your rc.local script.
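In practice that boils down to:
# cbq compile
# cbq start
And in /etc/rc.d/rc.local, something like:
if [ -x /sbin/cbq ]; then
  /sbin/cbq start
fi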
You can monitor your results with iptraf or wireshark.
More information can be found using “man tc-cbq”.
Today I finally managed to use my two ISPs together on my desktop, combining both bandwidths into one big (almost 3Mbit!) pipe.
My setup:
- ADSL modem 1Mb down, 320Kb up
- GSM modem 2Mb down, 512Kb up
Configuring both at the same time is simple, but then we have two default gateways and our packets always go out through the first one found (or the one with the lowest cost, as defined by the ‘metric’ parameter).
So how can we divide our packets over both links?
Googling around I found several suggestions to use iptables.
The general idea is:
- use -m statistic in a chain to choose which of the two links a packet will use (either with the ‘nth’ method or the ‘average’ method)
- set a mark on the packet
- use an ‘ip rule’ to select a routing table for mark 1, mark 2, etc.
That sounded like a perfect solution. This way I could really balance my two links like 40%/60% or whatever.
But it didn’t work…
My desktop is not a router, so I have to treat the packets in the OUTPUT chain, where routing has already taken place. The method described above works on Linux routers, which treat the packets in the PREROUTING chain of iptables, where we can mark a packet before routing.
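Roughly, on a router the idea would look something like this (just an illustration of the marking method, reusing the table names from the script further down – not what I ended up using on my desktop). First mark every other packet in PREROUTING, then route by mark:
iptables -t mangle -A PREROUTING -m statistic --mode nth --every 2 --packet 0 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -m mark --mark 0 -j MARK --set-mark 2
ip rule add fwmark 1 table rt_dev1
ip rule add fwmark 2 table rt_dev2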
So I studied IP ROUTE and IP RULES a bit more, browsing through the fantastic Linux Advanced Routing & Traffic Control site.
I discovered that we can use ‘nexthop’ to ‘hop’ between several routes.
After experimenting a bit I wrote the following script:
#!/bin/bash
#
# bal_local	Load-balance internet connection over two local links
#
# Version:	1.0.0 - Fri, Sep 26, 2008
#
# Author:	Niels Horn
#

# Set devices:
DEV1=${1-eth0}   # default eth0
DEV2=${2-ppp0}   # default ppp0

# Get IP addresses of our devices:
ip1=`ifconfig $DEV1 | grep inet | awk '{ print $2 }' | awk -F: '{ print $2 }'`
ip2=`ifconfig $DEV2 | grep inet | awk '{ print $2 }' | awk -F: '{ print $2 }'`

# Get default gateway for our devices:
gw1=`route -n | grep $DEV1 | grep '^0.0.0.0' | awk '{ print $2 }'`
gw2=`route -n | grep $DEV2 | grep '^0.0.0.0' | awk '{ print $2 }'`

echo "$DEV1: IP=$ip1 GW=$gw1"
echo "$DEV2: IP=$ip2 GW=$gw2"

### Definition of routes ###

# Check if tables exists, if not -> create them:
if [ -z "`cat /etc/iproute2/rt_tables | grep '^251'`" ] ; then
  echo "251 rt_dev1" >> /etc/iproute2/rt_tables
fi
if [ -z "`cat /etc/iproute2/rt_tables | grep '^252'`" ] ; then
  echo "252 rt_dev2" >> /etc/iproute2/rt_tables
fi

# Define routing tables:
ip route add default via $gw1 table rt_dev1
ip route add default via $gw2 table rt_dev2

# Create rules:
ip rule add from $ip1 table rt_dev1
ip rule add from $ip2 table rt_dev2

# If we already have a 'nexthop' route, delete it:
if [ ! -z "`ip route show table main | grep 'nexthop'`" ] ; then
  ip route del default scope global
fi

# Balance links based on routes:
ip route add default scope global nexthop via $gw1 dev $DEV1 weight 1 \
  nexthop via $gw2 dev $DEV2 weight 1

# Flush cache table:
ip route flush cache

# All done...
You can download the script here from my homepage.
This is not the perfect solution, as routes are cached: once you have connected to an external site, it will continue to use the link that was originally selected.
So an FTP download won’t benefit from this solution, but torrent downloads will, as they use several parallel connections.
I tested the result and managed to download using BitTorrent with the incredible speed of 250KBytes/sec:
Update: This script is now also part of an article on slackwiki.