Summary: Has enough positions to pass.
I'll quote Stephen here, since he makes a point that I championed during the discussion, and I would like to discuss it. (3) The proxy injecting this header field means that the user cannot get any signal that this has been done, and Appendix C even says that the site should not allow the UA to unset the proxy's preference. This also encourages the use of plaintext. Other than saying "yeah, that's what's done", I don't believe that this problem was explored at all, never mind addressed. Injection of this value by proxies gets to the very heart of the question of consent between the two parties, the requester and the sender, and of the agency of the requester. Encouraging transparent middleboxes to mess with the contents of flows is, IMHO, an irresponsible act on the part of the IETF. I could definitely have held my nose and passed this without comment were it to very strongly discourage the acceptance of such a hint over non-confidential, non-tamper-resistant channels.
I agree with Joel's DISCUSS. My concern is with the ability to insert this flag to enable censorship. I also have concerns about using a binary flag for "safe". For instance, is a web page whose comments are filled with graphic sexual threats "safe"? Sad to say, such pages are common. Does sexual material (except for things like sex ed, breastfeeding help, etc.) get grouped with violence, offensive political content, etc.? Would it be better to have 32 semantics-free flags so that nuance can be better supported? I agree with Kathleen that an effort to improve the draft is worthwhile.
I'm looking at this text:

   Origin servers that utilize the "safe" preference SHOULD document that they do so, along with the criteria that they use to denote objectionable content.

and wondering:

- is this an RFC 2119 SHOULD? I'm guessing that documenting that you utilize "safe" has no impact on interoperation, so maybe this is more like "ought to"?
- is there any guidance that could be given about how this documentation might be made available to users, so that it would be easier for users to find?

I won't be a bit surprised if the answer is "no", but I wanted to ask ...
Thanks for this simple document. A fine idea to document it. I found:

   Note that this specification does not precisely define what "safe" is; rather, it is interpreted within the scope of each Web site that chooses to act upon this information (or not).

That is good, but perhaps not painted red enough for some folk, notwithstanding the discussion in the Security Considerations section. How about:

   Note that this specification does not precisely define what "safe" is; rather, it is interpreted within the scope of each Web site that chooses to act upon this information. Furthermore, requesting "safe" does not guarantee that the Web site will apply any filters.

---

I looked for (and found!) discussion of the insertion of "safe" into a stream. It's a fair discussion, but a worry for me. Having created this tool, is there a way to ensure that it is not used to filter my access to Web sites without my knowing? Of course, an intermediary that can insert "safe" can also modify the content, but it is much simpler to rely on the server to do the filtering, so it would be nice to have a way to prevent or detect insertion of "safe". Similarly, an intermediary that can insert "safe" in a request can remove "safe-supplied" from a response. Perhaps there is nothing to be done?
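The insertion worry above can be made concrete with a small sketch. The "safe" hint rides as a bare token in the RFC 7240 Prefer request header field; the function names below are mine, not from the draft, and this is only a rough illustration of how a server might detect the token (and why a server has no way to tell who put it there):

```python
def prefer_tokens(value):
    """Split an RFC 7240 Prefer field value into its preference token
    names, lower-cased (tokens are case-insensitive per the RFC),
    e.g. 'respond-async, safe' -> {'respond-async', 'safe'}."""
    return {part.split("=", 1)[0].strip().lower()
            for part in value.split(",") if part.strip()}

def wants_safe(headers):
    """True if any Prefer field in the request carries the 'safe' token.
    Header-name lookup is case-insensitive, per HTTP."""
    return any(name.lower() == "prefer" and "safe" in prefer_tokens(value)
               for name, value in headers.items())
```

Nothing in the request distinguishes a user-set token from an injected one, which is exactly the detection problem raised here.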
I am balloting No Objection, not because I approve of the document itself, but because I believe in the ISE's right to publish what they see fit. If I were balloting on the document *itself* I'd probably ballot Discuss, but this has now morphed into balloting on the inclusion of the IESG Note. While I would *like* the ISE and authors to address the questions raised in the Conflict Review IESG Note, I respect their independence. As I do not see this actively breaking any IETF technology, I feel that the only response I can give is No Objection. I think that it is not very well described and that there are risks... but this isn't my call to make.
Thanks for your work in this area. I do think it's important to experiment with ways to improve options for getting safe content without needing a proxy service to filter it out for you. I do think there is more to work through before this goes forward, but it is worth trying to figure out if there is something we can do here.

I agree with the concerns of other ADs about this potentially being used for censorship, although I think experimenting to see if this or something similar works would be worthwhile. Since this just sets a technical option, and you can already do this with cookies, I'd be happier to see it strictly between a server and client. If this can be altered by middleboxes, there is the potential for censorship with MITM approaches if a cleartext session is used.

Assuming this is between a server and client, with the onus on the server to provide "safe" content (whatever that means to the server), there could be regional restrictions put in place as to what that means. However, I don't think there is anything to stop that from happening now, and we have already had offshore/out-of-country web servers to get around taxes and other local/regional requirements. I don't think this flag will cause censorship issues to be forced on servers in a region where they wouldn't happen anyway.

I do think there is an opportunity to reduce the number of middleboxes (proxy web servers filtering by DNS or URLs) used by organizations that can afford to run these services to protect their users from objectionable content. We are just talking about a technical option, with no policy definition or requirements for it provided. The current methods require the ability to deploy a box and pay for personnel to administer it.

Although there are a few implementations listed, how has this been working in experiments? I see this draft is listed as standards track. I'd prefer the option to be strictly between client and server, and not with middleboxes requiring cleartext.
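The client-to-server-only flow preferred here is straightforward to sketch. Assuming the Prefer/Preference-Applied header pair from RFC 7240, a server that sees the preference applies its own, entirely site-defined filter and confirms it did so; `filter_fn` below is a hypothetical placeholder for that site policy, not anything defined by the draft:

```python
def respond(request_headers, content, filter_fn):
    """Server-side sketch: if the request carries a Prefer field with the
    'safe' token, apply the site's own filtering policy and confirm via
    Preference-Applied; otherwise return the content untouched."""
    prefer = request_headers.get("Prefer", "")
    tokens = {p.split("=", 1)[0].strip().lower()
              for p in prefer.split(",") if p.strip()}
    response_headers = {}
    if "safe" in tokens:
        content = filter_fn(content)  # site-defined notion of "safe"
        response_headers["Preference-Applied"] = "safe"
    return response_headers, content
```

Note that nothing here requires a middlebox or cleartext: over TLS, only the client and server touch the header.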
It would help to have the text clearly state that this might be used at organizations to prevent objectionable content in order to meet HR requirements, removing the focus from children so that this is seen as a broader solution. If this is in a school or corporate setting, sure, it's easier to make this setting with a proxy, but realistically most use (or should use) standard images running on the computers that get overwritten on a regular basis to wipe away any malware automatically (kiosks at schools; corporate machines may not get overwritten as often, or at all). With this approach, it's easy enough to have this setting maintained in the browser/computer. Perhaps removing or changing the example on schools would help? Thanks for removing the emphasis on middleboxes. I think it would help to emphasize that the setting should be in the browser, and could be on default images at schools/organizations, for client-to-server communications. Grade schools might not be using images, but universities do, as they have learned from experience with malware.

I'm also wondering how this might interact with ads. I know there are different technologies at play for ad insertion, but I don't know the details of how they all work. Would ads be "safe" as well? Ideally, this would happen at the web server, as we should be striving for encrypted sessions (as opposed to ad insertion by a middlebox, which is likely how this is done), which would change how this flag works from the current proposal that requires cleartext. Do ad servers recognize this, or might ads be racy on a "safe" page? If ads are not safe, then this is really meaningless for preventing the issues I've had to deal with as a former CISO at a few organizations, where we have had to investigate and fire people who viewed porn at work. This could be really helpful to others in the position I was in, preventing such access at work.
I'm not as worried about the cleartext, as I think the push for encryption will lead to a change in how this flag is set and where it can be changed, rather than preventing people from turning on encryption. From the EFF statistics, 30% of web sites are encrypted (it may be higher now), and from our corporate observations as of last June, 78% of EMC employee web traffic is encrypted (I hope that's about the same for other organizations, but I don't have access to their stats).
I wish I had had time to engage on this one during earlier discussions or IETF LC, but I did not. My apologies for that.

I see no reason to standardize this bit. Although folks have argued that it should be standardized because there are multiple independent implementations of it, I think that is a red herring, given that there is no standardized semantic associated with it. "Objectionable" and "safe" are characteristics that are defined differently by different sites, users, and cultures; their meanings can change over time; and, as evidenced by existing sites and applications that provide their own content-filtering preference settings, those preferences are often not binary. (This is in fact one way in which the "safe" preference is distinct from DNT: in the DNT case they actually tried to define the semantic. That was incredibly challenging, and in the case of "objectionable" content I do not believe it is possible.)

The same is true for the "safer" concept described in Appendix C. Which is safer, filtering violent content or filtering nudity (or both)? Different users would answer that question differently. For a site that offers both of those choices independently, advising sites to associate the "safe" preference with the "safer" of those three options is meaningless to the user -- the user will still have to rely on the site's concept of which one is "safer" or "safest" if they want to experience the benefit of this preference. Given the lack of a standardized semantic, I also think proliferation of this header could incentivize increased censorship.
Since this header is designed to be sent on all requests, dramatically increasing the number of requests in which a preference for "safe" content is signaled, sites looking for an excuse to take down content altogether, or legal authorities looking for data to back up claims that the web should be rid of particular kinds of content, or that the preference should be required to be on by default, will potentially have lots more data to back up their claims. Furthermore, having a country-level proxy insert this could dramatically change content availability for a large user population with very little effort on the censor's side. I don't see the need to wait for any of these things to happen and then try to put the genie back in the bottle, because I doubt that will be possible.

I also don't see how the various arguments made about proxies inserting this preference can be properly reconciled. On the one hand, proponents of the header have argued that its presence does not necessarily indicate that the user is a minor or otherwise vulnerable, because a proxy could insert the preference on behalf of many users. On the other hand, the idea of a proxy inserting this against a particular adult user's wishes as a means to censor his Internet connection is clearly anathema to most folks, and there is discussion about removing the proxy text. I don't see a solution where we could have it both ways: have the preference indicate nothing in particular about the user, while discouraging proxies from inserting it.

I have sympathy for parents for whom the landscape of sites and apps offering parental controls is complex. But I think the risks for the Internet, users, and the IETF of standardizing this preference far outweigh the benefits to parents. As long as "objectionable" content is in the eye of the beholder, setting these preferences site by site provides a useful safeguard.
--- OLD DISCUSS

Before I abstain on this, I would like to briefly discuss the evaluation of rough consensus and check whether a few points raised were addressed. I'll put those as discuss points, but plan to abstain once they are briefly covered, as I think this is something the IETF should not specify, never mind "endorse" as a proposed standard. I think it is something we would regret publishing, much as we would have regretted producing an RFC for the (IMO quite similarly broken and damaging) do-not-track (DNT) flag.

(1) While I am definitely in the camp who would prefer that we not specify this at all, and hence am a biased judge, I can't see that there was rough consensus for this, having just re-read the IETF LC mails. The write-up does clearly acknowledge that any consensus was very rough in the view of the sponsoring AD. Note that I'm not at all questioning Barry's intentions here, just his conclusion. In any case, my reading is that there were arguments not addressed (see below); it is just not credible that all this LC discussion results in no change at all to the draft; and it is basically not at all clear to me that what seems like a more or less 50:50 split of folks (I didn't count, though), with both "camps" making reasonable arguments for and against, can in this case constitute even a very rough consensus. So I'd like to chat about that with the IESG, in case my biased opinion turns out to map to the mail archive better than Barry's AD evaluation of the last call did.

(2) I also don't believe the point I raised about the scope of this was ever addressed. Does emitting this apply to just the response to that request, or to the origin, or to whatever the server thinks is correct, or what? Having undefined semantics and an undefined scope seems broken to me at least, but the point was never addressed that I can see.
(3) The proxy injecting this header field means that the user cannot get any signal that this has been done, and Appendix C even says that the site should not allow the UA to unset the proxy's preference. This also encourages the use of plaintext. Other than saying "yeah, that's what's done", I don't believe that this problem was explored at all, never mind addressed.

(4) The point raised by Joe Hall of CDT, that emitting this signals a higher probability that the site is dealing with a minor (and hence perhaps with a user more easily socially exploited), is I think valid and is not reflected in the draft, nor much in the discussion. While the author offered to add text, no change occurred.

(5) I don't see where the point raised by Christian Huitema was dealt with: that the IETF standardising this will likely lead to (in particular) governments who wish to censor content requiring conformance to RFC7xxx. I'm not sure that we have a good BCP telling us not to collude with such, but I don't believe that point was addressed in the LC.

(Note: it's quite possible that I missed some things that were dealt with, or that there's scope for disagreement as to whether or not things were addressed.)

--- OLD COMMENT

- I didn't raise this during LC, so I'll just make it a comment, but I also find it objectionable that Appendix C says that even if the user stops sending this preference, servers should continue to behave as if it is being sent. That just seems like broken protocol behaviour to me, especially with no defined semantics.

- "become much simpler" is IMO utterly clearly not correct, yet not even that obvious change was made after IETF LC.
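Point (3) is easy to illustrate: on a cleartext path, a transparent proxy needs only a few lines to inject the preference into every forwarded request, and the origin server cannot distinguish the result from a preference the user set themselves. A hypothetical sketch (real proxies would also match the header name case-insensitively):

```python
def inject_safe(request_headers):
    """What a transparent proxy on a plaintext path can do to every
    forwarded request: add the 'safe' token to the Prefer field, creating
    the field if absent. Illustrative only; not code from the draft."""
    existing = request_headers.get("Prefer")
    if existing is None:
        request_headers["Prefer"] = "safe"
    else:
        tokens = {p.split("=", 1)[0].strip().lower()
                  for p in existing.split(",") if p.strip()}
        if "safe" not in tokens:
            request_headers["Prefer"] = existing + ", safe"
    return request_headers
```

Over TLS this rewrite is not possible without breaking the connection, which is why the draft's tolerance of plaintext deployment is central to the objection.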
I wholeheartedly agree with Stephen and Joel on this. This is an unprotected field indicating a desire to receive content deemed safe by someone else's subjective view. Combining that with the encouraged use of plaintext and proxies is not good. Even the Mozilla developers recognize that the issues here are not network/protocol issues (just like DNT).
This looks like a liability nightmare. I understand why you want to do it, and to some degree sympathize, but this looks to me like a baseball bat you are handing to law enforcement that will be used against unsuspecting web operators who publish things they think are unobjectionable but that wind up being considered objectionable in some jurisdiction. I can easily see it being used to suppress LGBT content, for example, and any sort of useful sex ed content for teens. The IETF should not be associated with this specification.
I am abstaining, because I have an issue with what the safe-hint header will mean in reality, and I do see issues arising from censorship (i.e., when proxies insert this header without letting the user know). It might even be an illegal action in some countries to inject such a header into the communication between browser and server, as this modifies the communication; take Germany as one example. The document does not and cannot specify what objectionable content is, as this depends on too many factors, such as cultural background. In short, I share what Adrian, Alissa, Kathleen, and Stephen have already said.
Still not too sure how to ballot this document, so No Record for now. So basically the proxy for my company/provider/country will decide what is "safe" versus "objectionable" content. BTW, is http://charliehebdo.fr/ "objectionable" content or not? It depends, right... So basically a proxy is required, right? We can't expect web servers to flag themselves what could be "objectionable" content, like advertisement, porn, or charliehebdo (that would remind me of the evil-bit April 1st RFC: if you're evil, say it). That could only work if the laws were changed, the laws in all countries, and obviously only if there were an international agreement on what "safe" means. Let's face it, that will not happen. Note: moving a server to a different country because different rules apply is no big deal.

Therefore, I'm kind of hesitant between:
- publishing this document doesn't matter, because it will not be widely implemented, so it won't matter much;
- this specification might be enforced in the wrong way, so it's evil and we should not publish it.

I want to hear the different arguments before balloting.