
Thursday, July 17, 2008

Comments


Adnan

Get a USB cable, plug it in between the printer and the laptop, and you will be able to print. If you want to go wireless in your motorhome, get a wireless router plus wireless adapters for your laptop and printer; you can go wireless as long as you can get 110-volt AC to the router. Most motorhomes now have a DC-to-AC converter that you can run computers, TVs, routers, DVD players, and microwaves from; it converts DC power to AC. Plug your router into that, then set up your laptop to connect wirelessly, and you can use that to print. And if your router finds a strong enough signal, which there are a lot of nowadays, you can go online as well.

penny

So perfect! And hey! I love that meaning too, for sure. Probably I'll send you my resume, because I wanna work with Figabyte! Ok, stop laughing. I'm reading about your Joomseed component; it is so interesting. So, good job, really good job, guys. Thank you so much! I foresee a smashing success for your company, here in Italy (and we know the reason why). Good luck, Sexdrum, see you!

Tom

Dear Admin, this is a major problem: if I reply to OR forward a message which contains any attachment (including signature pictures), it will not go out, and a red cross appears immediately (without even connecting to the internet): "Send failed, please resend: null". My phone is a 9800 Torch. PLEASE FIX THIS ASAP. I also asked how I can pay via PayPal; please answer.

George Ou

[Deleted]

Regarding bofkentucky's comment that George Ou is merely the "reporter", here's my full bio.

George Ou is an Information Technology and CISSP Security Consultant based out of the Silicon Valley who founded ForMortals.com. Most recently, George Ou joined Washington DC based Think Tank ITIF.org as Senior Analyst though he continues to work out of Silicon Valley.

George Ou recently served two years as Technical Director and Editor at Large for TechRepublic and ZDNet (both property of CNET Networks) doing in-depth coverage of IT and technology topics. Before journalism, he worked as an Information Technology Consultant who designed and built wired network, wireless network, Internet, storage, security, and server infrastructure for various Fortune 100 companies.

Now as for this MYTH that P2P only uses 3-4 TCP flows: that is a SOFT limit configured in most BitTorrent clients PER UPLOAD. So if the pipe is not saturated, or the user changes that per-upload limit, it will go beyond 4 flows. If there are two upstream sessions, then there are 8 flows. If there are 3 uploads, there are 12 flows.

On the downstream path, the limit is often set to around 50 and whenever I download something from BitTorrent, there are often 20 to 40 downstream TCP flows and that is multiplied by the number of simultaneous downloads. The total number of TCP flows on my BitTorrent client is set for 200 simultaneous TCP flows.
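The arithmetic in the two paragraphs above can be sketched as a quick estimate. All of the limits below are illustrative values in the ranges the comment mentions, not measurements from any particular client:

```python
# Rough estimate of concurrent TCP flows for a BitTorrent client,
# given per-upload and per-download soft limits and a global cap.
# Illustrative values only.

def estimate_flows(uploads, per_upload_limit, downloads, per_download_flows, client_cap):
    """Upstream flows scale with active uploads; downstream flows scale
    with simultaneous downloads; the client's global connection limit
    caps the total."""
    upstream = uploads * per_upload_limit
    downstream = downloads * per_download_flows
    return min(upstream + downstream, client_cap)

# Three uploads at the common soft limit of 4 flows each -> 12 upstream flows.
print(estimate_flows(uploads=3, per_upload_limit=4,
                     downloads=0, per_download_flows=0, client_cap=200))  # 12

# Two simultaneous downloads at ~30 flows each, plus one upload.
print(estimate_flows(uploads=1, per_upload_limit=4,
                     downloads=2, per_download_flows=30, client_cap=200))  # 64
```

The point of the sketch is that the flow count is multiplicative in the number of sessions, which is why a single "3-4 flows" figure understates real usage.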

So for anyone to suggest there are only 3-4 TCP flows used by P2P users, they either do not know what they're talking about or they're being intentionally deceptive.

Robb Topolski

Richard,

Again, reality does not match your version of the facts. Most home gateways won't support 5,000 NAT entries. Even mine, the D-Link DGL-4300, which was built with extra NAT space specifically because of the demands of P2P, handles 2,000.

That said, I have no data to analyze as to why "we've" seen as many as 5,000. Remember that TCP entries generally don't expire very fast, and if your gateway device isn't keeping track of state, they'll stay open practically forever, even after your computer no longer has them open. UDP entries are stateless, but most devices expire them within a few minutes.

All, Richard Bennett is a shill who takes every opportunity he can to question my competence. Fortunately, this is computer science and you can reproduce the results. To see how many ports your Windows device has open, where they are going, and which program has them open, use these commands:

tasklist

locate the PID number for the program that you're interested in, for example, utorrent.exe

netstat -ano | find "1234"

(where 1234 is the PID number for utorrent.exe shown by tasklist). To get a quick count, use the /c switch, as follows:

netstat -ano | find /c "1234"

If you can find any program that has over 5000 listings as a result of the above command, please let me know. By default, none of today's P2P programs will do this. Users generally would have to grossly misconfigure their applications to make this happen.
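The manual steps above can be wrapped in a short script. This is a sketch assuming a Windows host, where `netstat -ano` prints the owning PID in the last column of each line; the parsing is deliberately simple:

```python
# Count the netstat entries attributed to a given PID, mirroring
#   netstat -ano | find /c "1234"
# but matching only the PID column, so a port or address that
# happens to contain "1234" is not miscounted.

def count_entries_for_pid(netstat_output, pid):
    """Return the number of lines whose final column equals the PID."""
    count = 0
    for line in netstat_output.splitlines():
        cols = line.split()
        if cols and cols[-1] == str(pid):
            count += 1
    return count

# To run against a live Windows machine (not executed here):
#   import subprocess
#   output = subprocess.run(["netstat", "-ano"],
#                           capture_output=True, text=True).stdout
#   print(count_entries_for_pid(output, 1234))
```

Matching the last column avoids a false positive that the plain `find "1234"` filter can produce when 1234 appears in an address or port number.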

Robb Topolski

Richard Bennett

Let me address one of Topolski's errors of fact in his comment above. He says: "Had Mr. Briscoe as a technical guy or George Ou as a (ahem) “reporter” asked any developers, they would have learned that these programs do NOT use hundreds of connections, all actively uploading. They do open dozens of idle connections, but they use 3-4 uploading streams for a typical US broadband connection – sometimes twice that if two swarms are running."

Dr. Briscoe - British Telecom's Chief Scientist - is correct in his analysis of P2P connections. In home gateways with Network Address Translation, we've seen as many as 5000 TCP connections in our translation table at one time, most of them created by P2P. Things become interesting when we consider what happens when a P2P application opens so many TCP connections, most of them test connections, that the table reaches its design limit and overflows: the NAT sends TCP Reset packets to the end points of the overflowing connection.

Oops.

This is an easily observable behavior that a competent software tester would see; the ones at my company certainly can. The design response is to make the connection table size dynamic, but not all gateway vendors have done that yet, since it's a fairly recent (last year or so) problem.
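The overflow behavior described above can be sketched with a toy model. The capacity and connection counts are taken from the figures in this thread; real NAT implementations differ in which entry they evict and when they send resets:

```python
# Toy model of a fixed-size NAT translation table. When the table is
# full, a new connection forces an existing mapping out, and the
# gateway resets the evicted flow's endpoints. Real devices vary in
# eviction policy; this sketch simply evicts the oldest entry.

class NatTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []        # oldest mapping first
        self.resets_sent = 0

    def add_connection(self, conn):
        if len(self.entries) >= self.capacity:
            self.entries.pop(0)      # evict the oldest mapping
            self.resets_sent += 1    # TCP RST to the evicted flow's endpoints
        self.entries.append(conn)

nat = NatTable(capacity=2000)        # a DGL-4300-class gateway
for i in range(5000):                # a P2P client opening 5,000 flows
    nat.add_connection(("10.0.0.2", 50000 + i))

print(len(nat.entries), nat.resets_sent)   # 2000 3000
```

The model makes the failure mode concrete: once flow count exceeds table capacity, every new connection kills an older one, so the user sees seemingly random resets across all applications, not just the P2P client.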


Robb Topolski

PS: And for the record, I don't use P2P much these days -- there's just not much on those networks that interests me. I did try it out several months ago, out of curiosity (I had left my job and needed a new mental exercise as I was pretty ill and somewhat bedridden). Eventually, I ran into this Comcast mess.

Most of my P2P use since then has been to do testing in support of this investigation. I haven't done any P2P for a couple of months now as I've been focused elsewhere (NebuAd). I think the last time I tried it was to try something out that, ironically, George Ou suggested (as an original thought)!

P2P Defense League, sure, okay. But, really this effort is about Internet freedom. If I was in it for the P2P, you'd think I'd use it more.

Robb Topolski

Dear Mr. Willner,

FIRST

There is not one shred of evidence that P2P application programmers are circumventing TCP congestion control. Bob Briscoe thought that idea up without sufficient evidence to support it (he saw idle connections, but not active ones), and George Ou, as he is apt to do, repeated someone else's "facts" on his blog. That's known as truthiness. It's not the truth.

As for George Ou's Swarmcast "evidence": every web surfer does essentially the same thing by having more than one web browser window open at a time while downloading a largish file. It's not an exploit of any sort; it's simply the way the 'net works, has always worked, and works just fine.

The fact is that P2P application programmers do not cause congestion because it is not in their best interests to do so. They avoid congestion like the plague. Congestion slows them down. (Think about that -- why WOULD P2P developers want to do something that would cause congestion?!?!)

Had Mr. Briscoe as a technical guy or George Ou as a (ahem) “reporter” asked any developers, they would have learned that these programs do NOT use hundreds of connections, all actively uploading. They do open dozens of idle connections, but they use 3-4 uploading streams for a typical US broadband connection – sometimes twice that if two swarms are running. But no matter how many streams they do run, they're still limited by the individual settings of the broadband modem THAT YOU, THE ISP, CONTROLS. They can't use up any more bandwidth than you give to them!!

Please give the kind folks at Vuze or BitTorrent a call and have a sit down – you’re in for a shock. The reason that there is so much P2P traffic is because P2P is popular. And denying your customers P2P access is only denying them what they want (and paid for, as it happens to be).

SECOND

How in the world did we get from about 250 or so Internet hosts to 500 million without any wide-scale wire-speed Deep Packet Inspection to give us Network Management?

Yes, I approve of Network Management – the old fashioned kind – the kind that says you stay ahead of demand by upgrading your networks on time and you don’t oversell your bandwidth beyond all reasonableness.

I contend that you can, and should, and hopefully shall again, operate your networks without using this relatively new DPI, which intrudes into managing the very APPLICATIONS (not the networks) your users are allowed to use.

THIRD

Thanks for bringing up Japan, which recently announced that it was going to institute a bandwidth “cap” of 30 GIGABYTES A DAY. George overblows their problems in order to support his political goal.

FOURTH

I have no interest in the commercial aspirations of P2P application publishers, except that they be given a fair and level playing field to sink or swim just like everybody else. But when big multimedia companies block little multimedia companies like Vuze and BitTorrent, then someone has to say something – and it might as well be me.

AND FINALLY

Mostly, I'm probably a network tester and a protocol freak because I have a natural aptitude for it. I was reading music before I learned the alphabet. I could decode Baudot by ear (sort of). I have a keen sense of rhythm and patterns. I can visualize and assemble these things in my head and, at slow enough speeds, hear them by ear.

George Ou and company make light of the fact that "I'm only a tester," and they're missing the point. I'm the guy who finds the bug in the logic, the lines of code that never get executed, or the protocol gaffe that error-checks too little or too much. It's easy to write code and tell a computer what to do; it's harder to tell a programmer why it's really not as easy as it would seem.

I’m glad you admit that you’re not technical. Will you now go back to the Commissioners and staffers that you met and explain it to them? Give me a call, I’ll be happy to go with you.

bofkentucky

There's multiple issues here and the industry seems to want to focus on the top talkers.

1) Top talkers: if they truly are abusive, cut them off. Have established, published caps in the TOS and the penalties associated with them. Adaptive throttling can be done at the modem and/or the CMTS. The P2P community realizes that when you sell a 20/2 internet connection they're not going to be able to get that 24/7/365, but no one knows when the line is going to be crossed. P2P isn't the only utilization driver out there, either; actively throttling or disabling customers' virus- and spyware-infected PCs could go a long way toward freeing up bandwidth.

2) Deep packet inspection: using it for network maintenance is a good and legitimate use. Selling that data to marketing companies is shady in most users' opinions. You have to volunteer to give your TV-watching habits to Nielsen for TV ratings; why should the data on what sites you are visiting be any different? The same goes for DNS redirection systems or negatively impacting non-blessed VoIP.

3) The war between P2P and network management: the industry keeps taking ham-fisted measures against specific protocols instead of stopping the true abusers. The P2P software makers and users will outflank network management software and hardware every time. By the time a proposed network management change has been built by the vendor, tested, gone through change management, and been implemented, the clock has been ticking on the users' side since the P2P vendor released its counter-patch. This is similar to the anti-spyware/anti-virus race: simply detecting and removing infections is going to lose out to properly securing the machines in the first place.


The customer, in the best case, has two options for internet service: one blessed by the feds 130 years ago to be a monopoly, and the other blessed over the last 50 years locally to be a local monopoly. Along with those monopoly rights comes a responsibility to listen to your paying customers and treat them fairly. Your average Joe user isn't ever going to get to testify about his position before Congress or the FCC; someone has to speak to their wants and needs from the network.

If utilized properly, P2P can save network operators tons of bandwidth. For example, World of Warcraft updates are distributed via BitTorrent, meaning that in an optimal case only one Insight user would have to download a given file through the internet POP and would then seed it to the remainder of the Insight network. Imagine if iTunes, anti-virus patches, or Windows updates were distributed in a similar fashion.
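The savings bofkentucky describes can be put in rough numbers. The subscriber count and file size below are purely hypothetical, and a real swarm would never hit the perfect one-external-download optimum:

```python
# Back-of-envelope comparison of external (POP) traffic for an update,
# with and without in-network P2P seeding. All figures are
# illustrative assumptions, not measurements.

def pop_traffic_gb(subscribers, file_gb, local_seeding):
    """External traffic crossing the ISP's internet POP.
    With perfect local seeding, only one copy crosses the POP;
    without it, every subscriber downloads externally."""
    if local_seeding:
        return file_gb
    return subscribers * file_gb

subscribers, patch_gb = 10_000, 0.5    # hypothetical: 10k users, 500 MB patch
print(pop_traffic_gb(subscribers, patch_gb, local_seeding=False))  # 5000.0
print(pop_traffic_gb(subscribers, patch_gb, local_seeding=True))   # 0.5
```

Even if local seeding only reaches a fraction of that ideal, the external traffic saved scales with the number of subscribers, which is the core of the argument.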

There is a middle ground between the NCTA, AT&T/Verizon, and the top talkers; it's all about finding a package and a price point.

Anonymous

Let's clarify something here: Robb Topolski was a software quality assurance tester. He was not a network engineer or architect, and he has consistently shown that he does not understand how network congestion works.
