#!/bin/sh # That sort of manual or document or whatever it is exist because I lost a tons of time # digging around the net and readed tons of documents before i can start to formulate # in my head how shaping basicly works and how it is implemented in linux. # It's being done using iproute2 which is written by Alexey Kuznetsov. # So big thanks to him. All that i've writen may be completely irrelevant or wrong # My explanation is of my point of view, and my understanding about shaping. # I may be wrong. Do not rely only on that for more comprehensive explanations read # "Linux Advanced Routing & Traffic Control" located on http://lartc.org # Shaping is much complicated than i thought in the beginning. # And as i know the implementation of it in linux gives linux possibilities similar to # the shiny ciscos which cost a tons of money. Anyway shaping is not intuitive # and i'll provide little explanations to try to clear the picture in my and your heads :]. # Shaping in linux is being done using the "tc" command which stands for "traffic control". # If you want to shape traffic going to an adsl line most of the documents will advice you to # try the "Wonder shaper. # http://freshmeat.net/projects/wshaper/ # It's pretty easy to setup and works well. But it is not rich in # functuanality you can just shape your UPLOAD and DOWNLOAD speed and can't give # whatever speed you want to each client on your network. # Another tools you can use to shape particular clients, # if your clients are not behind masquerade is cbq.init." # http://sourceforge.net/projects/cbqinit/ # I wasn't able to find out how it works with clients behind a masq and that's why i wrote that # sort of traffic shaping introduction. And tried to get the conceptions behind traffic # shaping in linux. Getting the basics can help you much # in adding your custom rules and doing with your traffic really whatever you want ;]. # Probably you will want to take a look at htb.init it does the same as cbq.init. # But it's newer and probably more accurate in traffic shaping compared to cbq.init. # http://sourceforge.net/projects/htbinit # For documentation about cbq.init and htb.init look directly into the script # Explanation of how it works and few examples are embedded there. # Probably there are out quite big count of other available scripts and software # that does the same but that's not the goal so i'll begin with my explanation. # Let's start now. # Enjoy. # hip0 # -=-= #/sbin/iptables -t mangle -F #/sbin/iptables -A PREROUTING -t mangle -s 10.10.10.17 -d 0/0 -j MARK --set-mark 0x01 tc qdisc del dev eth0 root >/dev/null 2>&1 # hint: the qdiscs are a sort of virtual buckets that are bounded to interface # their main goal is to store our flowing traffic and then retransmits it # over the link in some specific order so we can prioritize some qdiscs before others. # hint: qdiscs can be ingress if we manipulate traffic that comes in the router # ( that's a very rare case ), and egress if we manipulate going out of the router. # hint: qdiscs have queuing disciplines in them. # A different types of qdiscs exists, some are classless # ( a.k.a. can't have classes and are used to specify the maximum speed # we can transmit data over the interface ). # Example for such classless qdisc is for example "tbf". # "tbf" = [ token bucket field ]. # that's being done by specifying an interface can send maximum at specified speed. # If we have a 100m/bit networking interface and use tbf we can # tell it to not send over the link at 100m/bit but send at for example 512k/bit. # That's very useless feature for example if your internet comes from an ADSL device. # The more intereting qdiscs are usually classful because with them # we can easily add classes to a root qdisc and then classify traffic # by some criteria to go to some custom selected by our rule class. # Classful structure looks very much like a tree. where our qdisc is the roots # of the tree and the classes are a sort of branches to that tree # ( some of them often called "leafs" ). # The most famous classful qdiscs are "cbq" and "htb" # "cbq" is much older and less accurate, so it probably won't be much of an interest to the reader # "cbq" stands for Classful Bucket Filter.". Usually to start shaping the traffic we need # to choose a network interface that we'll shape on. # Interesting queing disciple that most of the readers will need is the # "sfq" = [ stochastic fairness queueing ]. # We can use thing called "policer" to shape the # incoming traffic which we are not fowarding. # Usually shaping is doing done by queuing the traffic that comes to a specific # qdisc ( a virtual traffic bucket ) # and by dropping packets if they exceed our hard coded speed limit. # We shape the traffic for a client using "qdisc class" which # instructing the kernel to queue some of the client sent packets and send # them in a rate so the speed we are giving him is not overlimited. # For traffic usually coming to our router shaping is being done by dropping packets. # Note: Be aware that we can't limit how much traffic we receive, normally # we can just drop packets "artificially" lowering down the amount of traffic that comes to our # clients and thus shaping them. # The most easy and wide way of classifying packets by ip or port are by marking # our in/out packets using iptables. Then use that marks in "tc" filters # and redirect packets marked with some "marker" to some classful or classless. # Ofcourse it's obvious we need to have already created our classes before using # tc filters to jump to some of the classes. # If we want to shape clients behind a masquerade, marking the packets # is the only way to go, because all of the clients are already coming to our router # out interface ( for example eth0 ) with masqueraded ip addresses and cause # of that all clients are coming out with same ip address :] after the masquerading rules. # Marking of a packet is very simple with iptables and is being done like that. # IP_ADDR='10.10.10.15'; # MARK='0x1'; # /sbin/iptables -A PREROUTING -t mangle -s ${IP_ADDR} -d 0/0 -j MARK --set-mark ${MARK}. # In those example we will mark our packets in the PREROUTING chain in the table # called mangle destined for ip address "${IP_ADDR} with the mark of ${MARK}. # Now we will show a little example using i found here # http://www.kabelverhau.ch/elwms/en_shaping.php # with instantly added comments above every rule to add some extra explanations, # to make it more clear what each "tc" rule does and partly modified to suit needs # of someone that want to shape clients behind NAT. # By the way "tc" is userspace level program which we use to instruct the kernel. # All the qdiscs queing discipline classes and stuff are things being handled by our kernel. # It takes arguments and by way of usage in my opinion is very similar to iptables. # So think of it like a program that creates a sort of the iptables chains but destined # to do traffic interception. # Here i'll explain what we are going to do adding the lines below. # Firstly to begin using our qdisc. We need already to have # some features compiled into the kernel or on kernel modules. # Currently i'm using them on modules and my lsmod output looks like this. # [root@|root:]# /sbin/lsmod # Module Size Used by # sch_tbf 4800 0 # ipt_ipp2p 7072 2 # cls_fw 4064 48 # sch_htb 15008 2 # police 8544 0 # sch_sfq 4960 48 # sch_cbq 14336 0 # cls_u32 7012 0 # sch_ingress 2912 0 # eni 23940 0 [permanent] # suni 5056 1 eni # atm 36460 2 eni,suni # [root@|root:]# # most of the modules that you see are used by tc to instruct the kernel for some # advanced QoS ( Quality of Service ). # I'm not sure do you need eni suni and atm modules so anywayz. # ipt_ipp2p is iptables extensions which gives you an easy way to filter peer to peer # networks. The sch_tbf, police, sch_sfq, sch_htb, cls_fw, cls_u32, sch_ingress are # modules that can be used by setting different disciplines like police, sfq, cbq, htb # dropping packets using the ingress etc. # If all is fine we can start adding rules. # Firstly we delete the root qdisc ( that's being done in a case we have already # installed qdisc ) tc qdisc del dev eth0 root >/dev/null 2>&1 # add qdisc to the eth0 iface and set that traffic that not matches with our # filter rules goes "by default" to class identified by minor number 22 # hint: classes usually are identified by "major:minor" number # example such class identificator is "1:10" # each qdisc has a "handle" specified like "1:" in the below example # "handle 1:" means that our handle is identified by 1:0 tc qdisc add dev eth0 root handle 1: htb default 22 # add our root qdisc a main class # we specify "rate" option telling how much guaranteed speed our class will have. # With the "ceil" option we specify how much speed our class can get at maximum. # The "burst" option specifies how much data can pass the shaper without limit. tc class add dev eth0 parent 1: classid 1:1 htb rate 500kbit ceil 500kbit burst 6k # add our classes and specify the bandwidth and priority each will be identified by # prio gives a priority to the class each class can have priority # prio 1 is the highest priority and prio 3 gives a bulky priority. # it's nice download sesssions or peer to peer networks traffic to go to a class with prio 3. # the "parent" parameter specifies to which qdisc "handle" we connect the class. # Here we connect the parent to "handle 1:1". # and specify each class to have identier by "classid 1:11", "classid 1:12" etc. # The two lines below with identical "parent 1:13" all go in one class # means "classid 1:22" and "classid 1:21" share the same parent. tc class add dev eth0 parent 1:1 classid 1:11 htb rate 5kbit ceil 128kbit burst 6k prio 3 tc class add dev eth0 parent 1:1 classid 1:12 htb rate 100kbit ceil 100kbit burst 6k prio 3 tc class add dev eth0 parent 1:1 classid 1:13 htb rate 130kbit ceil 235kbit burst 6k prio 1 tc class add dev eth0 parent 1:13 classid 1:21 htb rate 30kbit ceil 30kbit burst 6k prio 1 tc class add dev eth0 parent 1:13 classid 1:22 htb rate 100kbit ceil 235kbit burst 6k prio 1 # add stochastic fair queuing to all classes # stochastic fairness does control the flows and do a sort of load balancing # to the traffic not allowing a single host to eat all our bandwidth # anyway it uses some funky algorithm with which it is possible # in some situations it makes mistakes sorting the traffic flows and # the role of "perturb" is periodically to change our "sqf" hashing table # in this case each 10 seconds. tc qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10 tc qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10 #tc qdisc add dev eth0 parent 1:13 handle 13: sfq perturb 10 tc qdisc add dev eth0 parent 1:21 handle 21: sfq perturb 10 tc qdisc add dev eth0 parent 1:22 handle 22: sfq perturb 10 # that filter instructs all the packet that comes marked by "0x1" # to go to classid 1:11 (see how we defined the class ) above. # as you can see the word "filter" says we are matching for packets # we specify the protocol type by writing "protocol ip". # we also say in which parent is located our class wich "classid 1:11". # i guess the "handle" is a standard and should be typed there # the "1 fw" after handle do says that match is being made # for packets with mark "0x1". tc filter add dev eth0 protocol ip parent 1: \ handle 1 fw classid 1:12 # In that way our client ( behind a nat ). Should have guaranteed speed of 100kbit # and maximum possible speed also of 100kbits. a.k.a. shaping him to use 100kbit # of our traffic line. # For example if we need to shape some other client # on our local net and we have set a MARK of 2 to the packets that flow to him # and want to allow him to take maximum a 235 kbits of our internet connection, # and want to guarantee him connection speed of 100 kbits, and give him # highest level of priority, we do that by redirecting him to class with classid 1:22 # like that. tc filter add dev eth0 protcol ip parent 1: \ handle 2 fw classid 1:22 tc filter add dev eth0 protocl ip parent 1: \ handle 3 fw clsasid 1:21 # and so on # And so on # We've already instructed the kernel what to do on the match of the filter above # so the thing that's left is to set a mark of "1" to the packets coming from a # specified ip address behind our previously established NAT. # First we flush "-F" the table mangle in our iptables chains # and then we add mark of "1" to all packets destined and coming from our local ip # addresses 10.10.10.17 /sbin/iptables -t mangle -F /sbin/iptables -A PREROUTING -t mangle -s 10.10.10.17 -d 0/0 -j MARK --set-mark 0x01 /sbin/iptables -A PREROUTING -t mangle -s 10.10.10.16 -d 0/0 -j MARK --set-mark 0x02 /sbin/iptables -A PREROUTING -t mangle -s 10.10.10.15 -d 0/0 -j MARK --set-mark 0x03 # Guess you've already got the basic idea of how shaping with packet MARKing works # Another nice thing we can do to make us feel confortable is prioritize # our ssh traffic to escape the horrible lag and nerves that some # stupid mother with his fucking non interactive download could cause us # This is taken from the larc.org howto full NAT solution with shaping. # So credits go to the author. # prioritize ssh packets /sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --sport 22 -j MARK --set-mark 0x1 /sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --sport 22 -j RETURN /sbin/iptables -t mangle -A PREROUTING -j MARK --set-mark 0x6 # now we should have our rules added. tc supports some diagnostics # again analogy with iptables it's something like "iptables -L" but concerning # our qdiscs and classes. # For example viewing all our qdiscs is possible issueing command like this. tc -s -d qdisc show dev eth0 # -s stands for statistics and -d for details # To see our traffic being sorted at classes we issue command like this. tc -s -d class show dev eth0 # stuff that comes in mind should go here. # ideas for stuff to go here are welcome # This document is written by hip0 # with the help of various sources on the net. # all stuff is copylefted respectively by their author. # This document/howto whatever it is is under GPL. # See http://www.gnu.org/licenses/gpl.txt # for details # I started to write that on # [root@|root:]# date # Thu Jul 14 19:50:21 EEST 2005