[Question] Regarding Linux QoS configuration

  1. #1
     Member
     Registered: 2008-04-05
     Location: 1m
     Forum posts: 6

    [Question] Regarding Linux QoS configuration

    Could someone explain what the settings below mean?

    filter parent 1: protocol ip pref 3 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:10
      match 00010000/00ff0000 at 8
    filter parent 1: protocol ip pref 3 u32 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:10
      match 00060000/00ff0000 at 8
      match 05000000/0f00ffc0 at 0
      match 00100000/00ff0000 at 32
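A note for anyone decoding these by hand: each u32 `match VALUE/MASK at OFFSET` line ANDs the 32-bit big-endian word at byte OFFSET (counted from the start of the IP header) with MASK and compares the result to VALUE. A minimal shell sketch (the helper name `u32_byte` is my own) that shows which packet byte each match selects:

```shell
# u32_byte VALUE MASK OFF: for every non-zero byte of MASK, print the
# packet byte offset it covers and the corresponding byte of VALUE.
u32_byte() {
  local value=$1 mask=$2 off=$3 i m s
  for i in 0 1 2 3; do
    s=$(( (3 - i) * 8 ))
    m=$(( (mask >> s) & 0xff ))
    if [ "$m" -ne 0 ]; then
      printf '%d=0x%02x\n' $(( off + i )) $(( (value >> s) & 0xff ))
    fi
  done
}

u32_byte 0x00010000 0x00ff0000 8    # byte 9 = IP protocol; 0x01 = ICMP
u32_byte 0x00060000 0x00ff0000 8    # byte 9 = IP protocol; 0x06 = TCP
u32_byte 0x05000000 0x0f00ffc0 0    # low nibble of byte 0 is IHL = 5 (20-byte
                                    # header); bytes 2-3 & 0xffc0 == 0 means
                                    # total length < 64
u32_byte 0x00100000 0x00ff0000 32   # byte 33 = TCP flags (20 + 13); 0x10 = ACK
```

So the two filters send ICMP, plus small TCP packets with the ACK bit set, to class 1:10 -- the classic "prioritize ACKs" rule.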

    class htb 1:40 parent 1:2 leaf 40: prio 6 rate 1000bit ceil 52000bit burst 6Kb cburst 1664b
      Sent 7628415 bytes 52953 pkts (dropped 0, overlimits 0)
      rate 496bit
      lended: 11780 borrowed: 41173 giants: 0
      tokens: 63238626 ctokens: 473843

    In iptables:
    target prot opt source destination
    CONNMARK all -- anywhere anywhere CONNMARK restore
    RETURN all -- anywhere anywhere MARK match !0x0
    CONNMARK all -- anywhere anywhere CONNMARK save
    RETURN all -- anywhere anywhere
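For what it's worth, a mark chain like that is usually built with commands along these lines. This is only a sketch: the chain name `QOSMARK` is my own invention, and the rules dd-wrt actually generates will differ in detail.

```shell
# Hypothetical reconstruction of a CONNMARK-based QoS marking chain.
iptables -t mangle -N QOSMARK
# 1. Copy the mark previously saved on the connection back onto this packet.
iptables -t mangle -A QOSMARK -j CONNMARK --restore-mark
# 2. If the packet now carries a non-zero mark, the connection was already
#    classified, so skip the (expensive) classification rules below.
iptables -t mangle -A QOSMARK -m mark ! --mark 0 -j RETURN
#    ... L7-filter / port-based rules that set the mark would go here ...
# 3. Save the (possibly new) mark back onto the connection for later packets.
iptables -t mangle -A QOSMARK -j CONNMARK --save-mark
iptables -t mangle -A QOSMARK -j RETURN
```

The point of the pattern is that expensive matching (such as L7-filter) runs only once per connection; every later packet just inherits the mark saved on the connection.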

    What does all of this mean?

    Also, is there any place that discusses tc, iptables, and L7-filter?



  2. #2
     linux_xp
     Registered: 2002-01-19
     Forum posts: 2,655

    Re: [Question] Regarding Linux QoS configuration

    A software-router forum:
    http://bbs.routerclub.com/

    It is dedicated to software routers: the various Unix flavors, commercial soft routers, and the Linux-based ones.

    It is a mainland-China site in Simplified Chinese, though; the Firefox TongWenTang extension can convert pages to Traditional Chinese automatically, and it is also a good idea to convert your post to Simplified before posting there.

    ---------------------------------------------

    About the configuration output you posted:

    The first two blocks look to me like FreeBSD syntax, not Linux syntax
    (I am not sure it is FreeBSD, but it is definitely not Linux).

    u32 belongs to the CBQ style of setup: very flexible, but logically harder to follow.
    CBQ tutorials are scarce, while HTB material is plentiful.
    The two perform about the same, though some say HTB, being leaner, performs better.

    But then HTB appears further down...
    so this is clearly not from one complete article; it has been pieced together from several sources.


    iptables exists only on Linux.
    The last block, however, is not iptables command syntax.

    It looks like an iptables save file (an exported rule set).
    That format is meant for programs to read; its purpose is to be imported back.
    People generally do not read it directly, because:
    1. it is harder to read, and
    2. it can only be loaded by a program; you cannot type it in as commands.

    Loading iptables settings from a saved file does work,
    but most people prefer to write a script and issue the commands directly,
    because an imported dump cannot be programmed,
    whereas a script can use conditionals, loops, variables and so on, which is a big advantage.
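To make the script-versus-import point concrete, here is a tiny sketch. Everything in it is illustrative (the interface `br0`, the ports, and mark value 3 are assumptions); the helper only prints the iptables commands the loop would generate instead of executing them:

```shell
# mark_bulk_ports IFACE PORT...: print one MARK rule per TCP destination
# port -- the kind of repetition a loop handles and a static dump cannot.
mark_bulk_ports() {
  iface=$1; shift
  for port in "$@"; do
    printf 'iptables -t mangle -A PREROUTING -i %s -p tcp --dport %s -j MARK --set-mark 3\n' \
           "$iface" "$port"
  done
}

mark_bulk_ports br0 873 6881 6882
```

Adding or removing a port is a one-word change to the list, which is exactly the programmability a saved dump lacks.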

  3. #3
     Member

    Re: [Question] Regarding Linux QoS configuration

    Thanks for the reply, linux_xp!

    All of the above comes from the L7-filter QoS on dd-wrt (Linux).
    It only exposes a graphical interface, and the QoS at home is not running smoothly:
    my cousin uses BT and Foxy while my wife is on VoIP.
    So I wanted to see what iptables and tc rules the GUI settings actually turn into,
    but I could not make much sense of them, so I am posting here to ask.
    It does look like dd-wrt by default lets ACK packets go first, which speeds up BT/Foxy.

    I also found some references myself; sharing them for everyone:
    Device eth0:
    filter parent 1: protocol ip pref 10 u32
    filter parent 1: protocol ip pref 10 u32 fh 3: ht divisor 1 <========= Start of table 3. parses TCP header

    filter parent 1: protocol ip pref 10 u32 fh 3::800 order 2048 key ht 3 bkt 0 flowid 1:130 (rule hit 102 success 0)
    match 03690000/ffff0000 at nexthdr+0 (success 0 ) <========= SOURCE PORT 873 goes to class 1:130

    filter parent 1: protocol ip pref 10 u32 fh 2: ht divisor 1 <========= Start of table 2. parses ICMP header

    filter parent 1: protocol ip pref 10 u32 fh 2::800 order 2048 key ht 2 bkt 0 flowid 1:110 (rule hit 0 success 0)
    match 08000000/ff000000 at nexthdr+0 (success 0 ) <========= ICMP Type 8 goes to class 1:110

    filter parent 1: protocol ip pref 10 u32 fh 2::801 order 2049 key ht 2 bkt 0 flowid 1:110 (rule hit 0 success 0)
    match 00000000/ff000000 at nexthdr+0 (success 0 ) <========= ICMP Type 0 goes to class 1:110

    filter parent 1: protocol ip pref 10 u32 fh 1: ht divisor 1 <========= Start of table 1. parses TCP header

    filter parent 1: protocol ip pref 10 u32 fh 1::800 order 2048 key ht 1 bkt 0 flowid 1:130 (rule hit 0 success 0)
    match c1210000/ffff0000 at nexthdr+0 (success 0 ) <========= SOURCE PORT 49441 goes to class 1:130

    filter parent 1: protocol ip pref 10 u32 fh 1::801 order 2049 key ht 1 bkt 0 flowid 1:130 (rule hit 0 success 0)
    match c1220000/ffff0000 at nexthdr+0 (success 0 ) <========= SOURCE PORT 49442 goes to class 1:130

    filter parent 1: protocol ip pref 10 u32 fh 800: ht divisor 1 <========= Start of Table 800. Packets start here!

    =============== The following 2 rules are generated by the class definition in /etc/shorewall/classes ==================

    filter parent 1: protocol ip pref 10 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:110 (rule hit 2204 success 138)
    match 00060000/00ff0000 at 8 (success 396 ) <========= TCP
    match 05000000/0f00ffc0 at 0 (success 250 ) <========= Header length 20 and Packet Length < 64
    match 00100000/00ff0000 at 32 (success 138 ) <========= ACK

    filter parent 1: protocol ip pref 10 u32 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:110 (rule hit 2066 success 0)
    match 00100000/00100000 at 0 (success 0 ) <========= Minimize-delay goes to class 1:110

    =============== Jump to Table 1 if the matches are met ==================

    filter parent 1: protocol ip pref 10 u32 fh 800::802 order 2050 key ht 800 bkt 0 link 1: (rule hit 2066 success 0)
    match ce7c92b2/ffffffff at 12 (success 1039 ) <========= SOURCE 206.124.146.178
    match 00060000/00ff0000 at 8 (success 0 ) <========= PROTO TCP
    offset 0f00>>6 at 0 eat

    filter parent 1: protocol ip pref 10 u32 fh 800::803 order 2051 key ht 800 bkt 0 flowid 1:110 (rule hit 2066 success 1039)
    match ce7c92b2/ffffffff at 12 (success 1039 ) <========= SOURCE 206.124.146.178 goes to class 1:110

    filter parent 1: protocol ip pref 10 u32 fh 800::804 order 2052 key ht 800 bkt 0 flowid 1:110 (rule hit 1027 success 132)
    match ce7c92b3/ffffffff at 12 (success 132 ) <========= SOURCE 206.124.146.179 goes to class 1:110

    filter parent 1: protocol ip pref 10 u32 fh 800::805 order 2053 key ht 800 bkt 0 flowid 1:110 (rule hit 895 success 603)
    match ce7c92b4/ffffffff at 12 (success 603 ) <========= SOURCE 206.124.146.180 goes to class 1:110

    =============== Jump to Table 2 if the matches are met ==================

    filter parent 1: protocol ip pref 10 u32 fh 800::806 order 2054 key ht 800 bkt 0 link 2: (rule hit 292 success 0)
    match 00010000/00ff0000 at 8 (success 0 ) <========= PROTO ICMP
    offset 0f00>>6 at 0 eat

    =============== Jump to Table 3 if the matches are met ==================

    filter parent 1: protocol ip pref 10 u32 fh 800::807 order 2055 key ht 800 bkt 0 link 3: (rule hit 292 success 0)
    match ce7c92b1/ffffffff at 12 (success 265 ) <========= SOURCE 206.124.146.177
    match 00060000/00ff0000 at 8 (success 102 ) <========= PROTO TCP
    offset 0f00>>6 at 0 eat
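The annotations in this dump are easy to verify by hand: a source-port match carries the port in the top 16 bits of the value, and an address match is just the IPv4 address written as one 32-bit hex word. Two throwaway helpers (the names are mine):

```shell
# port_of VALUE: the port encoded in a "match VALUE/ffff0000 at nexthdr+0" rule.
port_of() { printf '%d\n' $(( $1 >> 16 )); }
# ip_of VALUE: dotted-quad form of a "match VALUE/ffffffff at 12" address rule.
ip_of() {
  printf '%d.%d.%d.%d\n' \
    $(( $1 >> 24 & 255 )) $(( $1 >> 16 & 255 )) $(( $1 >> 8 & 255 )) $(( $1 & 255 ))
}

port_of 0x03690000   # 873, the rsync port, as annotated
port_of 0xc1210000   # 49441
ip_of   0xce7c92b2   # 206.124.146.178
```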
    Understanding the output of 'shorewall show tc'
    The shorewall show tc (shorewall-lite show tc) command displays information about the current state of traffic shaping. For each device, it executes the following commands:

    echo Device $device:
    tc -s -d qdisc show dev $device
    echo
    tc -s -d class show dev $device
    echo
    So, the traffic-shaping output is generated entirely by the tc utility.

    Here's the output for eth0. The configuration is the one shown in the preceding section (the output was obtained almost 24 hours later than the shorewall show filters output shown above).

    Device eth0:

    ============== The primary queuing discipline is HTB (Hierarchical Token Bucket) ====================

    qdisc htb 1: r2q 10 default 120 direct_packets_stat 9 ver 3.17
    Sent 2133336743 bytes 4484781 pkt (dropped 198, overlimits 4911403 requeues 21) <=========== Note the overlimits and dropped counts
    rate 0bit 0pps backlog 0b 8p requeues 21

    ============== The ingress filter. If you specify IN-BANDWIDTH, you can see the 'dropped' count here. =========

    In this case, the packets are being sent to the IFB for shaping

    qdisc ingress ffff: ----------------
    Sent 4069015112 bytes 4997252 pkt (dropped 0, overlimits 0 requeues 0)
    rate 0bit 0pps backlog 0b 0p requeues 0

    ============ Each of the leaf HTB classes has an SFQ qdisc to ensure that each flow gets its turn ============

    qdisc sfq 110: parent 1:110 limit 128p quantum 1514b flows 128/1024 perturb 10sec
    Sent 613372519 bytes 2870225 pkt (dropped 0, overlimits 0 requeues 6)
    rate 0bit 0pps backlog 0b 0p requeues 6
    qdisc sfq 120: parent 1:120 limit 128p quantum 1514b flows 128/1024 perturb 10sec
    Sent 18434920 bytes 60961 pkt (dropped 0, overlimits 0 requeues 0)
    rate 0bit 0pps backlog 0b 0p requeues 0
    qdisc sfq 130: parent 1:130 limit 128p quantum 1514b flows 128/1024 perturb 10sec
    Sent 1501528722 bytes 1553586 pkt (dropped 198, overlimits 0 requeues 15)
    rate 0bit 0pps backlog 11706b 8p requeues 15

    ============= Class 1:110 -- the high-priority class ===========


    Note the rate and ceiling calculated from 'full'

    class htb 1:110 parent 1:1 leaf 110: prio 1 quantum 4800 rate 192000bit ceil 384000bit burst 1695b/8 mpu 0b overhead 0b cburst 1791b/8 mpu 0b overhead 0b level 0
    Sent 613372519 bytes 2870225 pkt (dropped 0, overlimits 0 requeues 0)
    rate 195672bit 28pps backlog 0b 0p requeues 0 <=========== Note the current rate of traffic. There is no queuing (no packet backlog)
    lended: 2758458 borrowed: 111773 giants:
    tokens: 46263 ctokens: 35157

    ============== The root class ============

    class htb 1:1 root rate 384000bit ceil 384000bit burst 1791b/8 mpu 0b overhead 0b cburst 1791b/8 mpu 0b overhead 0b level 7
    Sent 2133276316 bytes 4484785 pkt (dropped 0, overlimits 0 requeues 0) <==== Total output traffic since last 'restart'
    rate 363240bit 45pps backlog 0b 0p requeues 0
    lended: 1081936 borrowed: 0 giants: 0
    tokens: -52226 ctokens: -52226

    ============= Bulk Class (outgoing rsync, email and bittorrent) ============

    class htb 1:130 parent 1:1 leaf 130: prio 3 quantum 1900 rate 76000bit ceil 230000bit burst 1637b/8 mpu 0b overhead 0b cburst 1714b/8 mpu 0b overhead 0b level 0
    Sent 1501528722 bytes 1553586 pkt (dropped 198, overlimits 0 requeues 0)
    rate 162528bit 14pps backlog 0b 8p requeues 0 <======== Queuing is occurring (8 packet backlog). The rate is still below the ceiling.
    lended: 587134 borrowed: 966459 giants: 0 During peak activity, the rate tops out at around 231000 (just above ceiling).
    tokens: -30919 ctokens: -97657

    ================= Default class (mostly serving web pages) ===============

    class htb 1:120 parent 1:1 leaf 120: prio 2 quantum 1900 rate 76000bit ceil 230000bit burst 1637b/8 mpu 0b overhead 0b cburst 1714b/8 mpu 0b overhead 0b level 0
    Sent 18434920 bytes 60961 pkt (dropped 0, overlimits 0 requeues 0)
    rate 2240bit 2pps backlog 0b 0p requeues 0
    lended: 57257 borrowed: 3704 giants: 0
    tokens: 156045 ctokens: 54178

    Reference: http://www.shorewall.net/traffic_shaping.htm
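For anyone who wants the same shape without shorewall: a sketch of plain tc commands that would build an HTB tree like the one in the dump above. The rates are copied from the output; burst and quantum details are omitted, so this is not verbatim what shorewall generates.

```shell
# Root HTB qdisc; unclassified traffic falls into class 1:120.
tc qdisc add dev eth0 root handle 1: htb default 120 r2q 10
tc class add dev eth0 parent 1:  classid 1:1   htb rate 384kbit ceil 384kbit
# Leaves: high-priority, default, and bulk, each able to borrow up to its ceiling.
tc class add dev eth0 parent 1:1 classid 1:110 htb rate 192kbit ceil 384kbit prio 1
tc class add dev eth0 parent 1:1 classid 1:120 htb rate 76kbit  ceil 230kbit prio 2
tc class add dev eth0 parent 1:1 classid 1:130 htb rate 76kbit  ceil 230kbit prio 3
# An SFQ qdisc on each leaf so individual flows share the class fairly.
tc qdisc add dev eth0 parent 1:110 handle 110: sfq perturb 10
tc qdisc add dev eth0 parent 1:120 handle 120: sfq perturb 10
tc qdisc add dev eth0 parent 1:130 handle 130: sfq perturb 10
```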

  4. #4
     Member

    Re: [Question] Regarding Linux QoS configuration

    Why did this thread suddenly disappear?


