7th Social WG F2F Summary
This post is my own opinion, and does not necessarily represent the opinion of the Social Web WG!
It's crunch time, but we're on track. We have a selection of specifications which essentially cover creating content and then pinging people about it in various different ways. Many people in the group are using one or more of these specs to power their own personal online social stuff. There are so many more Social Web problems to solve, and we're just trying to dig some foundations and plant some posts in the ground to get everyone else started.
The first time the Social WG gathered last week was for an hour on Wednesday, to demo the products of all of our specs to the broader audience of TPAC attendees. We had a full house. Everything went smoothly, and our demos were fairly coherent. Aaron showed Micropub and Webmention working together in the context of posting a reply to his site (with Micropub), which automatically sent a Webmention to the post he was replying to, and after a few seconds his reply showed up there (on the site of someone who isn't a WG member, and had implemented Webmention receiving independently).
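For anyone curious what that demo involves under the hood, here's a minimal sketch of the sending side in Python. The function names are mine, and a real sender should use a proper Link-header parser and also handle endpoint discovery via `<link>`/`<a>` elements in the HTML body, which this skips:

```python
import re
from urllib import request, parse

def parse_webmention_endpoint(link_header):
    """Find the first URL in an HTTP Link header whose rel includes 'webmention'."""
    for part in link_header.split(","):
        m = re.search(r'<([^>]+)>\s*;\s*rel="?([^";]+)"?', part.strip())
        if m and "webmention" in m.group(2).split():
            return m.group(1)
    return None

def send_webmention(endpoint, source, target):
    """Notify the endpoint that 'source' links to 'target' (form-encoded POST)."""
    data = parse.urlencode({"source": source, "target": target}).encode()
    return request.urlopen(request.Request(endpoint, data=data))
```

The receiver then verifies for itself that the source really does link to the target before displaying anything.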
My site returns ActivityStreams 2.0 for everything when you request
application/ld+json, so I showed collections and individual objects at the command line. Dumping a blob of JSON on the screen isn't a particularly compelling demo, but Chris was able to furnish me with a screenshot of his emacs-based AS2 consumer rendering one of my posts. Maybe you want to make an AS2 reader so you can design your own reading experience for my posts?
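Server-side, that content negotiation is essentially a check on the request's Accept header. A rough sketch (a real implementation should weigh q-values and profile parameters properly):

```python
def wants_as2(accept_header):
    """Crude check: did the client ask for a JSON-LD / AS2 representation?
    Ignores q-values and profile parameters for brevity."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    return "application/ld+json" in accepted or "application/activity+json" in accepted
```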
Chris showed how he is able to create notes on his site using an emacs-based posting client, which prompts his server to send the message on to the inbox of anyone addressed, all using ActivityPub.
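The shape of the activity such a posting client sends looks roughly like this sketch (field values invented; a real server also assigns `id`, `published`, and so on):

```python
def make_create_note(actor, content, to):
    """Wrap a Note in a Create activity; an ActivityPub server receiving this
    on its outbox delivers a copy to the inbox of each addressee in 'to'."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "to": to,
        "object": {"type": "Note", "content": content, "to": to},
    }
```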
Sarven showed dokieli; sharing an article with someone (automatically pulled from the contacts list in his profile) which sends an ActivityStreams
Announce to their Inbox using LDN. Also annotating an article, whereby he could save the annotation in his own personal datastore (via LDP) and optionally send a copy to the AnnotationService provided by the publisher (using the Annotations protocol), plus send a notification to the publisher with LDN.
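The notification itself is, roughly, an AS2 Announce posted as JSON-LD to the recipient's discovered inbox. A sketch of building that request (URLs invented; the shape is illustrative rather than normative, since LDN deliberately doesn't constrain payload vocabulary):

```python
import json

def ldn_announce(actor, article, recipient):
    """Build headers and body for an LDN POST announcing 'article' to 'recipient'."""
    body = json.dumps({
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Announce",
        "actor": actor,
        "object": article,
        "to": recipient,
    })
    return {"Content-Type": "application/ld+json"}, body
```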
What followed was some discussion of how we're planning to tackle spam and abuse. ActivityPub includes a mechanism for blocking another user; LDN encourages imposing constraints on one's inbox to filter what ends up there; and self-hosted or known/trusted hosts for your content can give you more control over what shows up where, but there's way more to be done here, and we're not close to tackling that yet.
Guidance was recently issued that specifications should ask for horizontal review (review by other W3C groups) three months before going to Candidate Recommendation. We're obviously far too late for that, so we're doing the best we can with the time remaining. Horizontal review is part of 'wide review', which specifications must also demonstrate before they can transition to CR. Wide review involves having input and feedback from a variety of different sources outside of the Working Group, and even W3C. We can demonstrate this through issues raised on github, implementations reported to us, and feedback on public mailing lists. As such, if you're going to send any of us a message about any of our specs, please do so somewhere that is public! The
public-socialweb list is archived by W3C, so that's a good one to CC if you don't want to raise an issue on github directly. Pointing out typos, requests for clarification, and pledges to implement are welcome too.
The Security & Privacy groups provide a questionnaire to guide spec authors through common issues that might be flagged during review by those groups. We collectively decided to work through this questionnaire and add the results to the 'Security and Privacy Considerations' section of all of our specs. The Internationalisation (i18n) WG also provides a checklist to run through, which helps them perform their review more quickly.
Specs in CR
ActivityStreams 2.0 (core and vocab), Webmention, and Micropub are currently in Candidate Recommendation (CR), which means we're actively soliciting implementations and accepting reports. They all have test suites in some stage of readiness, and the implementation report templates are provided in the respective repos, and designed largely around reporting which tests your implementation passes. Issues raised during CR which require substantive spec changes and may affect existing implementations require restarting the CR period (pushing back CR exit for a minimum of four weeks), so the earlier we hear about those the better. We went through issues raised on our CR specs to determine whether they result in normative changes, and for the most part they were editorial.
Webmention dropped returning a human-readable message in response to a sender's
POST request, because anything involving human-readable text opens internationalisation challenges. This message was primarily for developer debugging, as it's unlikely to ever be seen by an end user, and it was optional anyway, really only mentioned as a possibility, so this was always at the implementations' discretion. Servers are expected to 'do HTTP properly' and send and honour
Accept headers for language if they are sending human-readable responses, which doesn't need to be explicitly called out in the Webmention spec.
There was discussion of requiring a "backoff" strategy when trying to discover a Webmention endpoint. This also applies to LDN, and any other specs which require probing webpages to determine if they accept whatever the discover-er is trying to send. The issue is that a sender may be triggered to attempt discovery multiple times on the same URL, or for different URLs on the same domain. There's a point at which they should stop trying if they fail, or risk the server being probed blocking them completely for an unreasonable number of
HEAD requests. Webmention has added a suggestion that servers include a
Retry-After header.
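A sender's retry logic might look something like this sketch (the thresholds are entirely made up; neither spec mandates specific numbers):

```python
def next_retry_delay(retry_after, attempt):
    """Return seconds to wait before retrying discovery, or None to give up.
    Honours an explicit Retry-After value; otherwise backs off exponentially."""
    if retry_after and retry_after.isdigit():
        return float(retry_after)              # the server told us when to come back
    if attempt >= 5:
        return None                            # stop hammering a failing endpoint
    return min(60.0 * 2 ** attempt, 3600.0)    # 60s, 120s, ... capped at an hour
```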
The biggest Micropub issue concerned the media type of the (very specific) JSON structure returned in the event of an error. The conclusion was not to do anything special now, but if there are future versions of Micropub which modify the structure, they will use a profile to indicate which version they are, so implementations encountering that error know how to interpret it.
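For context, Micropub's error bodies are small OAuth-style JSON objects with an `error` member (and optionally `error_description`). A defensive client-side sketch, which would keep working even if a future profiled version changed the structure:

```python
import json

def micropub_error_code(body):
    """Extract the machine-readable error code from a Micropub error response,
    falling back gracefully if the body isn't the expected JSON shape."""
    try:
        return json.loads(body).get("error", "unknown_error")
    except (ValueError, AttributeError):
        return "unknown_error"
```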
During our crossover meeting with the i18n WG we noticed that Micropub's dependency on Microformats2 means that it has no way to indicate content language. This is being worked on in mf2 though, and Micropub will get a note to indicate this, and any improvements to mf2 will be incorporated in Micropub by reference. This seems at first glance a bit fishy since we're only supposed to reference 'stable' external specs, but this was resolved earlier in the year by the various mf2 sub-specs documenting their change policy, and committing to clearly indicate which parts are 'stable' and how they are updated.
ActivityStreams 2.0 has acquired a bunch of editorial comments since going to CR, and these have been addressed. There's a bigger question of how to manage (append-only) extensions after the close of the WG, and we settled on Proposal 2 in issue 370. I'll be writing this up in the (pending) human-readable namespace document shortly.
Then there was an even bigger question of whether or not
name is a required property for all objects and activities. At the time of the meeting it was required, but given some implementor experience - in particular with people adding dummy meaningless names to activities just to pass the validator - we revised this. The point of requiring a
name for every entity is to give consumers something to display even if they don't understand any other properties. A drawback of developers adding auto-generated or dummy names is that a consumer can't tell the difference between these and an actually meaningful name, so it can't make a sensible decision on whether it can overwrite it with something more useful (eg. in a different language, or something particular to the application at the time). We decided to shift the burden of assigning names to unnamed entities from publishers to consumers, as consumers are more aware of the context in which the entity is being used or displayed. Publishers can of course assign a
name, which should be consistently respected by consumers as they'll now have confidence the name was assigned deliberately. We still think there should be a name for
Article type objects. So we resolved to switch to having two properties:
1. one which is required, expected to be machine-generated, and can be replaced by consumers if they think they can do it better.
2. one which is optional, expected to be human-generated and meaningful, and consumers should respect it.
We haven't decided exactly what the names for these two properties are yet. Probably 2 is
name, and I like
fallbackName for 1, but others up for discussion include
label... but watch this space... (and issue 312).
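To illustrate the intended consumer behaviour, a sketch using `fallbackName` purely as a placeholder (the property names genuinely weren't settled):

```python
def display_name(obj):
    """What should a consumer show for an AS2 object under the proposed split?
    A deliberate, human-assigned 'name' is respected; the machine-generated
    fallback ('fallbackName' is a placeholder name here) may be replaced."""
    if "name" in obj:
        return obj["name"]                          # assigned deliberately: respect it
    return obj.get("fallbackName", "a " + obj.get("type", "thing"))
```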
We really need implementation reports for AS2 by the way... in case that was news. Let us know what you're working on!
Specs imminently entering CR
The most substantive issue we discussed on ActivityPub was adding a
source property, to indicate the original input the user made in case it is converted from something else to HTML for the
content field. The obvious use case for this is letting users author their content in markdown. Storing the original markdown alongside the converted HTML lets the client present the markdown back again for the user to edit in future. Current pump.io implementations have problems with letting users create content at first with markdown, but presenting HTML back to them later, which can be confusing. Personally I think a client that offers markdown editing and can do the conversion to HTML to send to the server should also be able to take that HTML content and convert it back to markdown for the user in the case of editing an existing post. Apparently going back the other way is more challenging though. And Chris's use case was for having a representation of the content in emacs org mode, not markdown. I was worried that the
content fields could get out of sync if the same content is edited by multiple different clients. That's (probably?) a fairly edge case though (though I'm already using multiple APish clients on the same content). We compromised by saying that clients which find
source and don't understand it must prompt the user to let them know the original source will be lost if they proceed with editing, as a feature "at risk". I'm a bit wary about specifying a UI thing in the spec, but we'll try it out and see how it goes. But maybe all it really amounts to is a requirement that clients MUST understand this particular property; so far there aren't any properties a client can't ignore.
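A client's decision procedure might look like this sketch (I'm assuming source carries content and mediaType members, per the discussion; the exact behaviour here is mine, not the spec's):

```python
def text_to_edit(obj, known_types=("text/markdown",)):
    """Decide what to load into the editor for an object carrying the proposed
    'source' property. Returns (text, warn): warn=True means the client doesn't
    understand the source format and must tell the user it may be lost on edit."""
    src = obj.get("source")
    if src and src.get("mediaType") in known_types:
        return src["content"], False          # edit the original markdown directly
    if src:
        return obj.get("content", ""), True   # unknown source format: warn first
    return obj.get("content", ""), False      # plain HTML content, nothing to lose
```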
We got agreement to add ActivityPub terms which are extra to AS2 into the AS2 namespace and JSON-LD context, which will simplify things for implementors. This also stands as an initial experiment with how AS2 extensions can be done in the future.
Chris also proposed to define the media endpoint of AP as "at risk" as, even though they need it in MediaGoblin, it's not worth potentially holding the rest of AP up for, and they can easily develop it as an extension afterwards. There may also be a possibility of convergence between the Micropub and AP media endpoints.
We began our LDN discussion by learning that Kjetil, who had attended the first day of our meetings and then had to leave, implemented a portion of an LDN sender/consumer on his flight back to Norway.
LDN is addressing the "backoff" issue previously mentioned for Webmention by referring to the new section in Social Web Protocols, and stressing that implementations should anticipate and respect relevant HTTP status codes. After all, it's only polite.
We briefly discussed LDN's need to add
inbox to the LDP namespace, but we had a session on the Wednesday plenary day about extensibility of W3C namespaces in general, including a bunch of LDP people in the room, none of whom objected. We also had support in our github issue about this. The WG agreed to go ahead with whatever the relevant W3C and Semantic Web communities consent to. Hindsight addendum: I added
inbox (at risk pending PR) to the LDP namespace a week or so later. Nobody has complained and the Web still seems to be up.
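On the consuming side, inbox discovery is the same Link-header pattern as Webmention, just with the full IRI as the rel value. A sketch (LDN also allows finding the inbox as a triple in the resource body, which this skips):

```python
import re

LDP_INBOX = "http://www.w3.org/ns/ldp#inbox"

def find_inbox(link_header):
    """Return the inbox URL advertised in an HTTP Link header, if any."""
    for part in link_header.split(","):
        m = re.search(r'<([^>]+)>\s*;\s*rel="([^"]+)"', part.strip())
        if m and LDP_INBOX in m.group(2).split():
            return m.group(1)
    return None
```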
In LDN we had a section ("Security, Privacy and Content Considerations") which mixed normative and non-normative content, and I was getting confused about which kinds of things are supposed to be which. The experts in the room helped us untangle it: we decided to move all normative content into the appropriate parts of the spec; to move informative content that pertains to something specific into green 'note' boxes; and to leave behind only what applies to the spec in general, so the remaining section can be marked entirely non-normative.
Both LDN and AP will have complete or partial test suites, or a detailed plan for a test suite, before they can enter CR.
Specs in earlier stages
Post Type Discovery right now defines a 'mapping' from combinations of properties a post might have (based on mf2 properties, though many of them are explicitly unstable so we can't actually reference them normatively) to a 'type', currently presented as an English-language string. We discussed at length the purpose of Post Type Discovery and came to the conclusion that it ultimately needs to be possible to generate ActivityStreams 2.0 types (rather than arbitrary strings, which are essentially yet another vocabulary). Many of us thought it was supposed to be doing this anyway, and were surprised that it isn't yet. So the spec should see some significant updates in that direction in the near future.
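To make that direction concrete, a hypothetical sketch of mapping mf2-style properties straight to AS2 types. This is entirely illustrative: the actual algorithm, property set, and precedence order were all still to be worked out:

```python
def as2_type(props):
    """Guess an AS2 type from mf2-style post properties. Hypothetical mapping;
    the real Post Type Discovery mapping to AS2 hadn't been written yet."""
    if "like-of" in props:
        return "Like"
    if "repost-of" in props:
        return "Announce"
    if "in-reply-to" in props:
        return "Note"        # a reply is still a Note, with inReplyTo set
    if props.get("name"):
        return "Article"     # an explicit title suggests a titled Article
    return "Note"
```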
PubSubHubbub is raring to go as a FPWD, thanks to Julien's hard work in updating it to W3C spec format. We just have to jump some process hurdles, and that'll be published soon. Now is a great time to raise issues on it if you have or intend to implement it! We spent some time discussing a new name which is friendlier to non-English language speakers, and we're probably going to go with PubSub. We also decided to close the PuSH community group, as any further discussion around the spec should take place in the WG.
There's a general consensus that we've barely scratched the surface, but we formally resolved not to try to take on any more recommendation track documents henceforth. We need to focus on getting what we have to rec! However, to continue work after the group closes at the end of the year, including for managing errata and extensions, we'll open a Community Group. We'll do this towards the end of our charter, but if you're interested in being a part of it, drop an email to the public list.
We're planning another face-to-face
sometime in November in either San Francisco or Boston, at which point we'll hopefully be wrapping things up!
There's another writeup here by Chris.