IETF 83 (Paris), HTTP/2.0, and Encryption

I attended IETF 83 in Paris this past week with my fellow Mozilla networking team member Patrick McManus. This was my first IETF meeting, and it won’t be my last given how productive and enjoyable it was.

Our primary goal was to participate in the HTTP(bis) working group, where we hope to standardize SPDY, possibly under the label HTTP/2.0. I learned quite a bit about how the IETF standards processes work and greatly enjoyed spending time with many of the people involved.

We’re excited about what SPDY has to offer in terms of security and performance. The HTTP/2.0 proposals based on alternatives to SPDY (or that deviated significantly) were interesting, but I’m still convinced that the solution we end up with should be based largely on SPDY. I can hardly imagine a proposal that better exemplifies “rough consensus and running code,” with plenty of data confirming its benefits.

I’m also more convinced than ever that encryption (e.g. TLS) should be a requirement in HTTP/2.0. This is the right thing to do for our users and there is now plenty of data available to debunk myths about unacceptable deployment costs. Mozilla has a strong history of standing up for user security and privacy and hopefully we’ll continue with that tradition by strongly opposing any solution that does not require encryption. Perhaps we should go so far as to decline to implement any non-encrypted solution that might be specified.

9 thoughts on “IETF 83 (Paris), HTTP/2.0, and Encryption”

  1. Agreed that encryption should be a key part of such a solution, but if so, it needs to be much easier to set up. Right now, I get a lot of value from being able to set up simple web servers – e.g. embedded into IDEs, or as simple services on a local network. It’d be a huge step backward if that kind of thing became impractical due to the hassle of setting up certificates and the like.

  2. @Simon: I think that kind of simplicity will come with technology adoption. The parallel I draw here is H.264. Years ago when it first came around, everyone was worrying about how complex it was and how difficult it was to edit. Now that it’s become widely adopted there are plenty of tools around to work with it, and I can play around with it as easily as MPEG-2.

    I think that as secure connections become more widespread, implementations will evolve and improve to the point where the technology is as transparent as many of the other critical building blocks of the web have become.

    Either way, if Adam Langley’s blog post is anything to go by, we can expect a lot of speed improvements as secure connections move away from being the exclusive preserve of banks and e-merchants.

  3. Encrypting connections is very important to our freedom and privacy, and I hope that one day all of our connections will be encrypted by default. To get there, though, there are some limitations in the current implementation, and I think you should spend some time with the IETF working group figuring out how to address them.

    Certificate costs – it currently costs money for every site owner to attach a certificate to their domain. There are some workarounds, such as self-signed certificates and a few free certificate authorities, but these are not trusted by the mainstream browsers by default. To make sure everyone can enjoy encryption, I wish basic certificates were free, and possibly that self-signed certificates would not trigger the broken-certificate error message in browsers.

    Multiple certificates – most small and medium websites are hosted in a shared hosting environment. This reduces the cost of keeping a site up and running, but because several websites share the same IP address, it is currently impossible to attach multiple certificates to the same web server. Because of this, people who want to secure their connections have to pay extra for a private IP address and certificate hosting, which makes hosting considerably more expensive.
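    For reference, the shared-IP limitation described above is exactly what TLS Server Name Indication (SNI, RFC 6066) was designed to remove: the client names the site it wants inside the TLS handshake, so one IP address can serve many certificates. A minimal sketch using Python’s standard ssl module (the hostname is purely illustrative):

```python
import ssl

# Most modern TLS stacks support SNI; Python exposes this as a flag.
print(ssl.HAS_SNI)

# A client opts in by passing server_hostname when wrapping a socket;
# that value travels in the ClientHello, letting the server pick the
# right certificate even when many sites share one IP address.
ctx = ssl.create_default_context()
# sock = socket.create_connection(("example.org", 443))
# tls = ctx.wrap_socket(sock, server_hostname="example.org")
```

    With SNI support in both clients and servers, a shared-hosting provider no longer needs a dedicated IP per certificate.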

    Because of these limitations, I find myself using unencrypted connections, which makes me afraid of being tracked and of malicious people stealing the passwords and cookies for the important websites I manage and for my own blog. Because of these fears I never publish blog posts from public locations, and I find that it censors my free speech.

    • Things like DANE (DNS-Based Authentication of Named Entities) and Convergence can help in this regard: they offload the question of “Is this a valid self-signed cert or a MITM attack?” to third parties outside your network.

      In the case of DANE, this works by putting a hash of the TLS certificate into the DNS records (which are then signed via DNSSEC); if the DNS records are valid and match the certificate presented, you know the certificate is valid for the domain.

      Convergence works by having third parties request the certificate and check whether it matches what you’ve got: if they match, it’s valid; if they don’t match, it’s invalid.

      I’m oversimplifying a fair bit, but techniques like these will go a long way toward making self-signed certificates user friendly, and as a result help more and more sites use encryption.
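      To make the DANE side concrete, here is a sketch of how the value of a TLSA record could be derived per RFC 6698, assuming certificate usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo), and matching type 1 (SHA-256); the key bytes below are a hypothetical stand-in for real DER data:

```python
import hashlib

# Hypothetical stand-in for the DER-encoded SubjectPublicKeyInfo taken
# from the server's certificate; real code would extract these bytes
# from the certificate itself.
spki_der = b"hypothetical-der-encoded-public-key"

# TLSA rdata "3 1 1 <hash>": usage 3 (DANE-EE, pin the end-entity key),
# selector 1 (hash only the public key, not the whole certificate),
# matching type 1 (SHA-256).
association = hashlib.sha256(spki_der).hexdigest()
tlsa_rdata = f"3 1 1 {association}"
print(tlsa_rdata)
```

      A validating client would fetch this record over DNSSEC and compare the hash against the key presented in the TLS handshake, so no commercial CA is needed for the match to be meaningful.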

  4. I think we need to make a distinction between “encrypted” and “signed”. Signing is really a way for a domain to certify its identity, to confirm that it is who it says it is via a well-maintained web of trust. Encryption is a way of preventing snoopers from poking into your connection. The two obviously go hand in hand – for a lot of the “original” use cases of HTTPS you need both. But in the modern web, I think there is something to be said for splitting the two concepts, so that every tiny website and embedded webserver is secure by default without imposing on every little blog and ad-hoc webapp the technical, financial, and temporal burden of opting into the web of trust.

    • No, encryption is useless without certificates, because it allows a MITM at the start of the connection; after that, it does not matter whether the traffic is encrypted or not.
      You need to understand that to see why it was designed this way: you need a trust model (even though I heavily dislike the current trust model for HTTPS/SSL/TLS).

      • I’ve always liked the ssh trust model. I create new Linux boxes all the time and ssh into them. I’m prompted to verify the fingerprint the first time I connect to a new box from a new client, but after that it simply checks whether the fingerprint matches. Maybe fingerprints could be embedded into hyperlinks so that when site A links to site B, it can tell you what fingerprint to expect. If you trust site A, then you can trust its fingerprint for site B.
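        The ssh-style model described above is usually called trust-on-first-use (TOFU). A minimal sketch of the idea, with a hypothetical known-hosts file and fingerprint scheme (both are illustrative assumptions, not ssh’s actual formats):

```python
import hashlib
import json
import os

# Hypothetical pin store, in the spirit of ssh's known_hosts file.
KNOWN_HOSTS = "known_hosts.json"

def fingerprint(cert_der: bytes) -> str:
    # SHA-256 over the certificate bytes stands in for ssh's key fingerprint.
    return hashlib.sha256(cert_der).hexdigest()

def check_host(host: str, cert_der: bytes) -> bool:
    pins = {}
    if os.path.exists(KNOWN_HOSTS):
        with open(KNOWN_HOSTS) as f:
            pins = json.load(f)
    fp = fingerprint(cert_der)
    if host not in pins:
        # First contact: remember (pin) this host's fingerprint.
        pins[host] = fp
        with open(KNOWN_HOSTS, "w") as f:
            json.dump(pins, f)
        return True
    # Later contacts: accept only a matching fingerprint.
    return pins[host] == fp
```

        The weakness, as with ssh, is that the very first connection is unauthenticated; the hyperlink-carried-fingerprint idea above is one way a trusted third party could close that gap.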

  5. If you want encryption, solve the certificate issues.
    Not solving them will not help freedom and privacy all that much, since:

    - only paying customers will have proper certificates
    - even when they pay, certificate authorities can be compromised or become untrustworthy (as has already happened several times)

    Both of those points make encryption for things such as HTTP not very useful to people, especially if it’s mandatory.

    PGP-like trust models are probably the way to go (like Moxie’s solution, and others).
