On the worrying state of networking standards
This entry was triggered by a column by David Cartwright regarding the Acid2 test for web browsers. The test shows how well a browser implements CSS by feeding it a complex bit of partly invalid CSS. Only a few browsers actually pass the test: Internet Explorer fails horribly, and Mozilla/Firefox fares slightly better but also does not render the image correctly.
David Cartwright goes on to say that if so many browsers, both commercial and open source, fail to render this test correctly, it can only mean one or more of the following:
- The browsers are full of bugs
- The authors have chosen to implement only a subset of the specification
- The specification is ambiguous or incomplete
All of these, he notes, are worrying.
David is right. From experience I can tell that the problem usually starts at reason 3 and then works its way via reason 2 to reason 1. And when the networking standard is not a rendering system like CSS but a complete protocol, the problems multiply. Networking standards (both protocols and data structures) are in a sorry state. I spot a number of worrying trends: (1) inexpert protocol creation, (2) incomplete specification and (3) overly complex designs.
Inexpert protocol creation
Those who create a protocol are often not experts with actual experience in building this kind of thing, getting it to interwork with many other independent implementations, and scaling it to millions of global users. Unfortunately, ever more often our next generation of networking standards is created by quite the opposite: students (both undergraduate and PhD) and employees of various companies alike, claiming that knowing nothing about the subject makes them far better qualified to create a standard for it than experts with their “preconceptions”.
… drawbacks are its substantial (and continually growing) complexity, and the wealth of different ways to accomplish the same function, both of which have led to interoperability problems. …Primitives are provided for admission and bandwidth control. These are not useful outside of LAN-based IP telephony, since …. cannot determine capacity for media calls in a general purpose IP network. … extensibility relies on protocol versioning and vendor specific attributes scattered throughout the protocol, which is very limited.
Most importantly,… has difficulty delivering new services. …. A specification is required for every feature. This limits the speed at which features can be added, and limits vendor creativity….
Jonathan Rosenberg wrote this about H.323, the then dominant VoIP protocol, and used it as the justification for creating SIP. Unfortunately, today this can be said doubly for SIP. The SIP specification is no longer 23 pages but 269, and that is only the base spec. Any implementation needs to implement a whole set of additional standards and internet drafts to get to a working product.
Incomplete specification
This leads to the second category of problems: incomplete specification. When Jonathan Rosenberg attacked H.323 he stated that it was too complex. Yet compared to today’s Internet standards, H.323 was an example of tight specification. True, it was complex, but how its elements locked together was clear to implementers. (BTW: what got university types and people from small companies worked up was that H.323 chose to encode its protocol, which was defined in ASN.1, with the method that required an expensive toolset rather than the kind for which many toolsets were freely available: a small error of judgment that led to huge effects.)
SIP, on the other hand, is defined as a text protocol in a very freeform way. This is very common in networking specifications today, and some go as far as using XML. While a text-based protocol is very convenient to debate, either in a meeting room or on a mailing list, because humans can read it, the freeform way in which such messages may be written is not very useful for computer programs.
Why is freeform text so bad? An example: if the name of the person you want to send a message to may appear several times in that message, to whom should you send it? The last one named? All of them? If it is the last one, you will need to scan the entire message to see whether a final recipient is lurking at the end. Today this unfortunately happens for all kinds of essential information: it is specified that the information must be present, but not how many times, where in the message, or how to handle error cases.
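The ambiguity can be sketched in a few lines of Python. The message format and the "To" field name here are invented for illustration (this is not real SIP syntax); the point is that two perfectly reasonable parsers disagree on the same message:

```python
# Two parsers for a freeform text protocol where a field may repeat.
# The line-based "Field: value" format is an illustrative invention.

def parse_first(message, field):
    """Take the first occurrence of the field."""
    for line in message.splitlines():
        if line.startswith(field + ":"):
            return line.split(":", 1)[1].strip()
    return None

def parse_last(message, field):
    """Take the last occurrence: requires scanning the whole message."""
    value = None
    for line in message.splitlines():
        if line.startswith(field + ":"):
            value = line.split(":", 1)[1].strip()
    return value

msg = "To: alice\nSubject: hello\nTo: bob\n"
print(parse_first(msg, "To"))  # alice
print(parse_last(msg, "To"))   # bob
```

Unless the specification says which behaviour is correct, two independent implementations will each pick one, interoperate fine in testing, and deliver the message to different people in the field.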
A complete specification defines, for every error case, how to handle it. This is what makes specifications so large and so complex to write. All too often the specifiers get stuck at the sunny-day scenario where everything goes as planned, and trust the common sense of the implementer to handle the error cases in the way that is obviously correct. This is where things go wrong: what is obvious to one person is unclear to another. And of course a short spec looks like a good one; which document would you rather get past management, a 25-page one or a 300-page one?
There is an alternative. I was once involved in a project where we were on our way to creating standards that allowed automatic testing of the implementation, something that would have helped a great deal in getting a technology deployed quickly and correctly. However, this was killed because it was not seen as progressing fast enough. To my satisfaction, the project that came after it has still not delivered anything usable and has bogged down in the quagmire that we managed to avoid by going the long way.
Overly complex designs
On top of all of this, standards suffer from feature creep. Partly this is because of the enthusiasm of their creators. Partly it is because the creators usually cannot agree on everything: person/company/country A wants one thing, B wants another, they cannot agree, and hence both get in. Some standards do a better job of separating all of these options than others. So we get overly complex standards with too many features for any implementer to build, where one has a usable product with only a part of the features implemented. This of course culminates in the number of bugs: the more complex anything is, the more chances one has to make a mistake implementing it.
Coming back to the original column, David Cartwright ends by posing the question “how many packages we rely on day to day are hideously broken but we simply don’t know it”. Well David, I can tell you, judging from the base material of the networking standards: a whole lot. I singled out SIP because I find it annoying that a technology that was so hyped is so broken (one wishes that as much attention went into its technology as into its marketing), but SIP is just the tip of the iceberg, an example of the state of networking standards. We have a long way to go before Internet standards are as good as we’d wish, which may then lead to products we can rely on.