In a recent post, I mentioned to you that I was in Washington last week. Little did I know that my efforts would cause a Defcon 5 call-to-action by the P2P Defense League.
I met with officials at the Federal Communications Commission (FCC) to discuss the issue of network management. One aspect of that discussion included the impact today's peer-to-peer applications have on network resources. After reading the ex parte letter that was filed by the National Cable and Telecommunications Association (NCTA) on my behalf, Karl at Broadband Reports took issue with some of the comments I made during those meetings.
If you want to read the whole thing, you can download a copy of the ex parte letter.
You can read what Karl wrote in its entirety for yourself, but first, here's a longish summary of what the NCTA reported that I said...

Mr. Willner described how, in the absence of network management, the usage of P2P services by a very small number of a cable system's high-speed Internet customers can cause substantial (and sometimes complete) congestion of the system's upload capacity. As a result, service for the system's high-speed Internet customers using the Internet for other purposes (such as e-mail, web browsing, e-shopping, streaming music and video, etc.) would be degraded. As Mr. Willner pointed out, network upgrades that are intended to enhance the speed and quality of Internet access would, in the absence of network management, only exacerbate this problem because P2P users around the world seek to retrieve files from computers on systems with the fastest upload speeds.
To avoid such a result, network management tools and technology are necessary to regulate the extent to which P2P uploads consume and congest capacity. This enables customers to continue to use P2P services without degrading the quality of Internet access service for the vast majority of customers using the Internet for other purposes. In the fiercely competitive environment facing cable operators today, maximizing the attractiveness, quality and value of the Internet experience is the foremost imperative. Competition drives operators to find the network management techniques that best serve this objective - and the high level of consumer satisfaction with the high-speed Internet service provided by Insight, and by the cable industry generally, demonstrates that cable's network management techniques have made cable's high speed service the most attractive in the market.
Mr. Willner expressed concerns that regulatory efforts to restrict or pre-determine the tools and technology to be used for network management would thwart investment in network upgrades, slow the rate of growth of high-speed Internet customers, and seriously disrupt and degrade the value of the Internet for all users. He urged that the Commission adhere to its successful policy of "vigilant restraint" and not embark on such a misguided regulatory path.
For those of you who are regular readers of this blog, these statements aren't anything new. In my two-part post Confessions of a Network Manager, I wrote:

Network management is not your enemy -- it is your friend, even if you're a P2P enthusiast. Without network management, everyone's online experience would melt down to a completely useless exercise. It would reduce the Internet to a chaotic free-for-all as if you built a 10-lane superhighway and didn't have any traffic laws in place to keep the traffic moving.
This is not the message that the net neutrality crowd wants to hear. Karl's post claims that network management is really about a money grab by ISPs trying to justify consumer-unfriendly pricing, excuse their failure to keep up their network infrastructure, or relieve themselves of costly government regulation. Furthermore, Karl writes that my statements about network congestion resulting from unmanaged P2P usage are a bunch of hogwash.

The problem is, any claim of "complete congestion" is lobbyist hyperbole, again highlighting the chasm between lobbyists and real technicians. Networking and protocol specialist Robb Topolski should know -- he first discovered Comcast's use of Sandvine to throttle upstream capacity in May of 2007. It was his findings in our forums that led to the FCC's investigation of the cable company.
"Complete congestion is a technical fantasy which only exists in the minds of people who do not understand TCP congestion control and how Additive Increase/Multiplicative Decrease (AIMD) works in TCP Congestion avoidance works, he says. "AIMD allows a linear growth of bandwidth utilization until loss occurs, at which time an exponential reduction takes place. This slow-start, fast-fallback ensures congestion cannot cause gridlock."
Whew. That's some heavy-duty technical jargon, so let's talk real life. Insight customers will remember the real technical problems we had back in 2006 when we migrated our network off of AT&T's backbone. Out of necessity, I learned more about the inner workings of the Internet than any CEO ever thought he or she would need to know. And as a result, I know the definition of complete congestion, and it's not simply an intellectual theory -- it's a real-life customer experience, and not a good one.
OK, now let's get a little technical, even though I'm no technical expert. Basically, Karl and Robb seem to believe that TCP, one of the underlying protocols of the Internet, is robust enough to prevent P2P applications from eating up disproportionate amounts of bandwidth and negatively impacting other users' online experiences. While the AIMD algorithm was originally designed to prevent network congestion, the implementation is now more than a decade old, and P2P application programmers long ago learned how to circumvent this built-in TCP traffic cop. That's the problem.
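To make that concrete, here is a simplified, back-of-the-envelope sketch of what AIMD does for a single connection. The numbers are made up purely for illustration; real TCP works in packets and round-trip times.

# A simplified sketch of AIMD for a single TCP flow (illustrative numbers only).
link_capacity = 20      # how much the bottleneck link can carry
rate = 1.0              # the flow starts small
samples = []
for round_trip in range(60):
    samples.append(rate)
    if rate >= link_capacity:
        rate = rate / 2.0    # loss detected: multiplicative decrease (back off by half)
    else:
        rate = rate + 1.0    # no loss yet: additive increase (probe for more bandwidth)
print(samples)          # the classic sawtooth: climb, halve, climb again

One well-behaved flow backs off whenever the link fills up. The trouble starts when a single application opens many such flows at once.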
Don't believe me? Have a look at this recent article from ZDNet. George Ou, the former Technical Director at ZDNet, explains on his Real World IT blog that all current P2P applications do an end-around on AIMD and grab 10 to 100 times the bandwidth of other Internet applications.

By 1999, the first P2P (peer-to-peer) application called Swarmcast began to blatantly exploit Jacobson's TCP congestion control mechanism. Using a technique called "parallel incremental downloading", Swarmcast could grab a much larger share of the pie at the expense of others by exploiting the multi-stream and persistence loophole. These two loopholes would be used by every P2P application since.
The part of Ou's article that I think is most illustrative of the problems in Karl and Robb's position is the set of graphs from tests conducted on network congestion due to P2P applications in Japan. Japan is the country with the most per-user broadband capacity in the world - a country where 100 Mbps broadband connections are routine in homes. But, as Ou demonstrates with his charts, 75% of broadband traffic in Japan is consumed by a small percentage of P2P users. So, even with the much higher per capita network capacity, P2P manages to fill up the network. It really doesn't matter how much network capacity is built - P2P applications are inherently designed to find a way to fill it. And they do so from New Zealand to New Brunswick.
Without responsible network management, P2P applications would find a way to fill the available network bandwidth until other users' connections are gridlocked.
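A toy calculation shows why (the figures here are hypothetical). TCP's congestion control divides a congested link roughly evenly per flow, not per user, so whoever opens the most flows collects the most fair shares.

# A toy example of per-flow fairness (hypothetical numbers).
link_capacity_mbps = 100.0
web_user_flows = 1
p2p_user_flows = 30
per_flow_share = link_capacity_mbps / (web_user_flows + p2p_user_flows)
print("Web user:", round(web_user_flows * per_flow_share, 1), "Mbps")   # about 3.2 Mbps
print("P2P user:", round(p2p_user_flows * per_flow_share, 1), "Mbps")   # about 96.8 Mbps
# Every individual flow "behaves," yet the multi-flow user walks away with nearly the whole pipe.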
Ou's article, while focused on the engineering failure of AIMD, also comments on the fervent persistence of those net neutrality advocates who continue to claim that AIMD works and have politicized the issue.

Despite the undeniable truth that Jacobson’s TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists along with their lawyers cling to it as if it were somehow holy and sacred. Groups like the Free Press and Vuze (a company that relies on P2P) files FCC complaints against ISPs (Internet Service Providers) like Comcast that try to mitigate the damage caused by bandwidth hogging P2P applications by throttling P2P. They wag their fingers that P2P throttling is “protocol discrimination” and that it’s a violation of the TCP standards. They tell us that if anyone slows down a P2P application, then they are somehow violating someone’s right to free speech and impinging on their civil rights.
So, that means they scold us for fortifying the defenses against P2P applications that, according to Ou, are specifically designed to pierce a hole through the first line of network congestion defenses.
With all due respect to BBR, I take exception to Karl's quoted network "expert" Robb Topolski, who, without a shred of evidence, even accused the NCTA of intentionally attempting to confuse policy makers by "conflating" the issues of network management and behavioral advertising systems. Nothing could be further from the truth. Let's hope they both get an opportunity to read my post and reconsider their reaction to my comments to the FCC.
Topolski claims to be open-minded toward "responsible network management." In Karl's post and in comments made during his testimony to the FCC earlier this year, he references the idea of "responsible network management" and says that "[m]ost techs don't oppose reasonable network management." Pardon me if I harbor some skepticism about his definition of reasonable -- reasonable to consumers, or reasonable to the commercial interests of P2P application providers?
As for me, I'm going to continue to work to educate policy makers, Insight's customers and the public at large (through this blog) on the benefits of a responsibly managed network. I also intend to post my concerns about some of the suggestions for different ways we should manage the network.
So the next time you read about my "epic distortions" on another blog, surf over here to get the straight scoop.
Regarding bofkentucky's comments that George Ou is merely the "reporter," here's my full bio.
George Ou is an Information Technology and CISSP Security Consultant based out of Silicon Valley who founded ForMortals.com. Most recently, he joined the Washington, DC-based think tank ITIF.org as a Senior Analyst, though he continues to work out of Silicon Valley.
George Ou recently served two years as Technical Director and Editor at Large for TechRepublic and ZDNet (both property of CNET Networks) doing in-depth coverage of IT and technology topics. Before journalism, he worked as an Information Technology Consultant who designed and built wired network, wireless network, Internet, storage, security, and server infrastructure for various Fortune 100 companies.
Now as for this MYTH that P2P only uses 3-4 TCP flows, that is a SOFT limit configured in most BitTorrent clients PER UPLOAD. So if the pipe is not saturated or the user changes that per-upload limit, it will go beyond 4 flows. If there are two upstream sessions, then there are 8 flows. If there are 3 uploads, there are 12 flows.
On the downstream path, the limit is often set to around 50 and whenever I download something from BitTorrent, there are often 20 to 40 downstream TCP flows and that is multiplied by the number of simultaneous downloads. The total number of TCP flows on my BitTorrent client is set for 200 simultaneous TCP flows.
So for anyone to suggest there are only 3-4 TCP flows used by P2P users, they either do not know what they're talking about or they're being intentionally deceptive.
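To put rough numbers on it (these settings are examples, not the defaults of any particular client):

# Rough flow arithmetic for a busy BitTorrent client (example settings only).
per_upload_flows = 4        # the per-torrent soft limit discussed above
active_uploads = 3
per_download_flows = 30     # 20 to 40 is what I typically observe per download
active_downloads = 2
total_flows = active_uploads * per_upload_flows + active_downloads * per_download_flows
print("Roughly", total_flows, "active TCP flows")    # 12 + 60 = 72
# Compare that to a web browser or e-mail client using a handful of flows at a time.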
Posted by: George Ou | Thursday, July 24, 2008 at 05:14 PM
Richard,
Again, reality does not match your version of the facts. Most home gateways won't support 5,000 NAT entries. Even mine, the D-Link DGL-4300, which was built with extra NAT space specifically because of the demands of P2P, handles 2,000.
That said, I have no facts to analyze as to why "we've" seen as many as 5,000. Remember that TCP entries generally don't expire very fast, and if your gateway device isn't keeping track of state, they'll stay open practically forever even though your computer no longer has them open. UDP entries are stateless, but most devices expire them within a few minutes.
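To illustrate the aging behavior (the timeout values below are made up, not taken from any particular gateway):

# A sketch of NAT-table aging (illustrative timeout values, not from a real gateway).
import time

IDLE_TIMEOUT = {"udp": 120, "tcp": None}   # None = never expires when TCP state isn't tracked
nat_table = {}                             # (protocol, source, destination) -> last-seen time

def note_traffic(protocol, source, destination):
    nat_table[(protocol, source, destination)] = time.time()

def expire_idle_entries():
    now = time.time()
    for key, last_seen in list(nat_table.items()):
        timeout = IDLE_TIMEOUT[key[0]]
        if timeout is not None and now - last_seen > timeout:
            del nat_table[key]             # UDP entries age out after a few minutes
    # TCP entries left with no timeout simply pile up until something clears them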
All, Richard Bennett is a shill and takes every opportunity he can to question my competence. Fortunately, this is computer science, and you can reproduce the results. To see how many ports your Windows device has open, where they are going, and which program has them open, use these commands:
tasklist
Locate the PID number for the program you're interested in, for example utorrent.exe, then run:
netstat -ano | find "1234"
(where 1234 is the PID number for utorrent.exe shown by tasklist). To get a quick count, add the /c switch:
netstat -ano | find /c "1234"
If you can find any program that has over 5000 listings as a result of the above command, please let me know. By default, none of today's P2P programs will do this. Users generally would have to grossly misconfigure their applications to make this happen.
Robb Topolski
Posted by: Robb Topolski | Sunday, July 20, 2008 at 09:40 AM
Let me address one of Topolski's errors of fact in his comment above. He says: "Had Mr. Briscoe as a technical guy or George Ou as a (ahem) “reporter” asked any developers, they would have learned that these programs do NOT use hundreds of connections, all actively uploading. They do open dozens of idle connections, but they use 3-4 uploading streams for a typical US broadband connection – sometimes twice that if two swarms are running."
Dr. Briscoe - British Telecom's Chief Scientist - is correct in his analysis of P2P connections. In home gateways with Network Address Translation, we've seen as many as 5000 TCP connections in our translation table at one time, most of them created by P2P. Things become interesting when we consider what happens when a P2P application opens so many TCP connections, most of them test connections, that the table reaches its design limit and overflows: the NAT sends TCP Reset packets to the end points of the overflowing connection.
Oops.
This is an easily observable behavior that a competent software tester would see; the ones my company has certainly can. The design response is to make the connection table size dynamic, but not all gateway vendors have done that yet, since it's a fairly recent (last year or so) problem.
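In rough pseudocode terms, the failure mode looks something like this (a sketch of the behavior described above, not any vendor's actual firmware):

# A sketch of the overflow behavior described above (not real firmware).
MAX_ENTRIES = 2048                  # a fixed design limit, as in many home gateways

nat_table = []                      # connection entries: (source, destination) pairs

def send_tcp_reset(connection):
    print("RST sent to both ends of", connection)

def open_connection(connection):
    if len(nat_table) >= MAX_ENTRIES:
        send_tcp_reset(connection)  # no room left: the new connection gets reset
        return False
    nat_table.append(connection)
    return True
# A dynamically sized table removes the hard limit, which is the design response mentioned above.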
Posted by: Richard Bennett | Friday, July 18, 2008 at 06:32 PM
PS: And for the record, I don't use P2P much these days -- there's just not much on those networks that interests me. I did try it out several months ago, out of curiosity (I had left my job and needed a new mental exercise as I was pretty ill and somewhat bedridden). Eventually, I ran into this Comcast mess.
Most of my P2P use since then has been to do testing in support of this investigation. I haven't done any P2P for a couple of months now as I've been focused elsewhere (NebuAd). I think the last time I tried it was to try something out that, ironically, George Ou suggested (as an original thought)!
P2P Defense League, sure, okay. But really, this effort is about Internet freedom. If I were in it for the P2P, you'd think I'd use it more.
Posted by: Robb Topolski | Friday, July 18, 2008 at 12:20 AM
Dear Mr. Willner,
FIRST
There is not one shred of evidence that P2P application programmers are circumventing TCP congestion control. Bob Briscoe thought that idea up without sufficient evidence to support it (he saw idle connections, but not active ones), and George Ou, as he is apt to do, repeated someone else's facts in his blog. That’s known as truthiness. It's not the truth.
As for George Ou’s Swarmcast evidence – every web surfer does essentially the same thing by having more than one web browser window open at a time while downloading a largish file. It’s not an exploit of any sort – it’s kinda the way that the ‘net works, has always worked, and works just fine.
The fact is that P2P application programmers do not cause congestion because it is not in their best interests to do so. They avoid congestion like the plague. Congestion slows them down. (Think about that -- why WOULD P2P developers want to do something that would cause congestion?!?!)
Had Mr. Briscoe as a technical guy or George Ou as a (ahem) “reporter” asked any developers, they would have learned that these programs do NOT use hundreds of connections, all actively uploading. They do open dozens of idle connections, but they use 3-4 uploading streams for a typical US broadband connection – sometimes twice that if two swarms are running. But no matter how many streams they do run, they're still limited by the individual settings of the broadband modem THAT YOU, THE ISP, CONTROLS. They can't use up any more bandwidth than you give to them!!
Please give the kind folks at Vuze or BitTorrent a call and have a sit down – you’re in for a shock. The reason that there is so much P2P traffic is because P2P is popular. And denying your customers P2P access is only denying them what they want (and paid for, as it happens to be).
SECOND
How in the world did we get from about 250 or so Internet hosts to 500 million without any wide-scale wire-speed Deep Packet Inspection to give us Network Management?
Yes, I approve of Network Management – the old fashioned kind – the kind that says you stay ahead of demand by upgrading your networks on time and you don’t oversell your bandwidth beyond all reasonableness.
I submit that you can, should, and hopefully shall again operate your networks without using this relatively new DPI, which intrudes into managing the very APPLICATIONS (not the networks) your users are allowed to use.
THIRD
Thanks for bringing up Japan, which recently announced that it was going to institute a bandwidth “cap” of 30 GIGABYTES A DAY. George overblows their problems in order to support his political goal.
FOURTH
I have no interest in the commercial aspirations of P2P application publishers, except that they be given a fair and level playing field to sink or swim just like everybody else. But when big multimedia companies block little multimedia companies like Vuze and BitTorrent, then someone has to say something – and it might as well be me.
AND FINALLY
Mostly, I’m probably a network tester and a protocol freak because I have a natural aptitude for it. I was reading music before I learned the alphabet. I could decode Baudot by ear (sort of). I have a keen sense of rhythm and patterns. I can visualize and assemble these things in my head and, at slow enough speeds, hear them by ear.
George Ou and company make light of the fact that “I’m only a tester,” and they’re missing the point. I’m the guy who finds the bug in the logic, the lines of code that never get executed, or the protocol gaffe that error-checks too little or too much. It’s easy to write code and tell a computer what to do; it’s harder to tell a programmer why it’s really not as easy as it seems.
I’m glad you admit that you’re not technical. Will you now go back to the Commissioners and staffers that you met and explain it to them? Give me a call, I’ll be happy to go with you.
Posted by: Robb Topolski | Friday, July 18, 2008 at 12:12 AM
There are multiple issues here, and the industry seems to want to focus on the top talkers.
1) Top talkers: if they truly are abusive, cut them off. Have established, published caps in the TOS and the penalties associated with them. Adaptive throttling can be done at the modem and/or the CMTS. The P2P community realizes that when you sell a 20/2 Internet connection they're not going to be able to get that 24/7/365, but no one knows when the line is going to be crossed. P2P isn't the only utilization driver out there, either; actively throttling or disabling the virus- and spyware-infected PCs of customers could go a long way toward freeing up bandwidth.
2) Deep packet inspection: using it for network maintenance is a good and legitimate use. Selling that data to marketing companies is shady in most users' opinions. You have to volunteer to give your TV-watching habits to Nielsen for TV ratings; why should the data on what sites you are hitting be any different? The same goes for DNS redirection systems or negatively impacting non-blessed VoIP.
3) The war between P2P and network management: the industry keeps taking ham-fisted measures against specific protocols instead of stopping the true abusers. The P2P software makers and users will outflank network management software and hardware every time. By the time a proposed network management change has been built by the vendor, tested, run through change management and implemented, the P2P side has been working on a counter-patch since the day the vendor released it. This is similar to the anti-spyware/anti-virus race: simply detecting and removing infections is going to lose out to properly securing the machines in the first place.
The customer, in the best case, has two options for Internet service: one blessed by the feds 130 years ago to be a monopoly, and the other blessed locally over the last 50 years to be a local monopoly. Along with those monopoly rights comes a responsibility to listen to your paying customers and to treat them fairly. Your average Joe user isn't ever going to get to testify before Congress or the FCC; someone has to speak to their wants and needs from the network.
If utilized properly, P2P can save network operators tons of bandwidth. For example, World of Warcraft updates are distributed via BitTorrent, meaning that in an optimal case only one Insight user would have to download a given file through the Internet POP and could then seed it to the rest of the Insight network. Imagine if iTunes, anti-virus patches or Windows updates were distributed in a similar fashion.
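Back-of-the-envelope, with made-up numbers:

# Back-of-the-envelope transit savings from local seeding (made-up numbers).
patch_size_gb = 1.0
insight_players = 10000
without_local_seeding = patch_size_gb * insight_players   # every copy crosses the Internet POP
with_local_seeding = patch_size_gb * 1                    # ideal case: one copy in, the rest seeded internally
print("Transit without local seeding:", without_local_seeding, "GB")
print("Transit in the ideal P2P case:", with_local_seeding, "GB")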
There is a middle ground between the NCTA, AT&T/Verizon and the top talkers; it's all about finding a package and a price point.
Posted by: bofkentucky | Thursday, July 17, 2008 at 07:20 PM
Let's clarify something here: Robb Topolski was a software quality assurance tester. He was not a network engineer or architect, and he has consistently shown that he does not understand how network congestion works.
Posted by: Anonymous | Thursday, July 17, 2008 at 03:45 PM