Added 35 photos to album London, July 2022.
Some days in London for the TAG face-to-face. Great food at Mildred's, Temple of Seitan, Luminary Bakery.
4 nights' shelter in London
£596.49 (expensed)
Post created with https://apps.rhiaro.co.uk/latinum
Post created with https://apps.rhiaro.co.uk/no-ceremonies-are-necessary
In recent weeks I'm getting slightly more interested in... stuff... tech stuff. And feeling more able to take things on than I have in a long time. I think this means I might actually be recovering from the Great Burnout of 2017?
But hard to test, because I'm also taking a bunch of holiday from my day job this month, and I haven't had to think about travelling/travel planning since August, so maybe that has freed up some areas of my brain that were busier than I realised before.
Post created with https://apps.rhiaro.co.uk/no-ceremonies-are-necessary
So many RFCs, so little time.
Post created with https://apps.rhiaro.co.uk/no-ceremonies-are-necessary
The OpenActive Modelling Opportunity Data spec is so nice. It's well written, well structured, and all the Linked Data stuff is done properly. I need to refer to it for ODSCo-op work at the moment, and it's a real delight. Similarly the Realtime Paged Data Exchange one is also nice.
ActivityPub and WebSub went to REC this week and the work of the Social Web WG in general has been getting a lot of attention, including AP making it to the top of hackernews. There are lots of good comments, but of course it's the negative ones that stick around when you release your babies into the harsh wilds of the Web for the last time.
There was a lot of conflict inside the SocialWG, and a lot of compromise. The comments that irk me the most are the ones that suggest we made decisions on a whim without thinking about things at length, or ignored prior art.
Sure we standardised multiple ways of doing similar things, but the decision to do that came only after much wailing and gnashing of teeth* and faced with the prospect of not standardising anything at all in this space. Or alternatively one- to two-thirds of the group meeting with suspicious accidents in lieu of consensus. It wasn't for funsies. We weren't trolling implementors. We were just trying to cope.
Anyway, what reassures me in the end is that, as we could never make everyone happy, at least we've somehow succeeded in making nobody at all happy.
The art of consensus.
* and over a year of work, dozens of telecons and several face-to-face meetings around the world often at participants' own expense, and not a little yelling.
Linked Data Notifications is a protocol to facilitate sharing and reuse of notifications between different Web applications. It's a W3C Recommendation from the Social Web Working Group, and part of a push to help people own their data and re-decentralise the Web, particularly the Social Web. You can read more about why you might want to care about this here.
For this post, I'm going to jump straight into implementation. I've chosen PHP, without any frameworks, because if you already have a (local or remote) server it should be quick for you to get going, without needing to set up or configure anything. The "Linked Data" in the name implies involvement of RDF; in fact LDN uses JSON-LD, but I don't presume any existing understanding of these things for this post; I'll just try to introduce the minimum that you need as we go along. (For a nice intro to JSON-LD see Manu's YouTube video JSON-LD Basics.) I am assuming, though, that you have a basic understanding of JSON, and of what HTTP headers are.
LDN is a three-part protocol. We expect front-end applications as well as servers to play the roles of senders and consumers of notifications. The third part is receiving. As the human in the mix, you need to tell the applications you use where to send notifications that are meant for you (or your software to pick up), as well as where applications can read them from (in order to display them back to you, or to process them and trigger other tasks to run). This 'where' is your Inbox. Applications might, for example, discover it from your homepage or a social media profile. You should host your Inbox somewhere you trust. Just like with email, some people might want to rent space from a provider, or maybe your workplace or school supplies one to you. At the moment the market for this is... pretty small. This Web-data-ownership thing is in its early days.
So for the pioneering developers among us, we can write our own, using around 50 lines of quick and dirty PHP.
For convenience, we're going to set some variables for URL paths we will use regularly:
$base = "https://".$_SERVER['HTTP_HOST']; // Your domain
$inboxpath = "inbox"; // The directory where your notification files are stored
First, your script needs to accept HTTP POST requests containing JSON-LD blobs. We get the data from the php://input path. We also get the request headers. LDN receivers need to support as a bare minimum application/ld+json payloads, so we'll send a 415 if the Content-Type header doesn't match this. We're also going to check the payload parses as JSON, since that's an easy way to throw out (with a 400 Bad Request) invalid JSON-LD. If you have a JSON-LD parser handy, you can validate it against that too. I haven't included one here because... quick and dirty.

Aside: If you do have an RDF parser around, you can accept other RDF serialisations like text/turtle. If you do, you should advertise this with an Accept-Post HTTP header on your Inbox. I use EasyRdf for all of my RDF stuff. If you don't want to include a library, there are a few services with APIs you can call, like rdf-translator.
$input = file_get_contents('php://input');
$headers = apache_request_headers();
$data = json_decode($input, true);
if(strpos($headers["Content-Type"], "application/ld+json") === false){
    header("HTTP/1.1 415 Unsupported Media Type");
}elseif(!$data){
    header("HTTP/1.1 400 Bad Request");
    echo "Invalid payload.";
}else{
    // Write notification contents to a file
}
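If you want to poke at this before wiring up a real sender, here's a minimal sender sketch; the inbox URL and the notification body are made up for illustration:

// Minimal LDN sender sketch; assumes the receiver above is deployed
// at https://example.org/inbox.php. The notification body is illustrative.
$notification = json_encode(array(
    "@context" => "https://www.w3.org/ns/activitystreams",
    "@type" => "Announce",
    "actor" => "https://example.org/profile",
    "object" => "https://example.org/posts/hello"
), JSON_UNESCAPED_SLASHES);
$context = stream_context_create(array("http" => array(
    "method" => "POST",
    "header" => "Content-Type: application/ld+json",
    "content" => $notification,
    "ignore_errors" => true // so we can inspect non-2xx responses too
)));
file_get_contents("https://example.org/inbox.php", false, $context);
// $http_response_header is populated by the call above; expect
// "HTTP/1.1 201 Created" plus a Location header on success.
print_r($http_response_header);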
The LDN specification says that even if you only accept JSON-LD serialized notifications, you should set the Accept-Post header anyway. You can do this in PHP with header("Accept-Post: application/ld+json"); or in an .htaccess file with Header set Accept-Post "application/ld+json".
Once we've determined the payload contents are valid, we should store the notification. This is where you might want to do further processing, like filtering notifications by their @type, or other specific property-value (predicate-object) combinations. But for now, all we're going to do is dump the contents into a file, update the notification's @id to point to the location we're storing it at, and set the HTTP response headers:
// Write notification contents to a file
$filename = $inboxpath."/".date("ymd-His")."_".uniqid().".json";
$data["@id"] = $base."/".$filename;
$json = json_encode($data, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES);
$h = fopen("../".$filename, 'w');
fwrite($h, $json);
fclose($h);
header("HTTP/1.1 201 Created");
header("Location: ".$base."/".$filename);
Aside: This implementation is super simplistic. The notification may come with an @id already set, or even contain several distinct subjects, pointing to resources somewhere else on the Web. Checking that referenced resources make the same statements as the notification you received could be good practice for verifying the truth of the notification contents. The @id may also be set to "@id": "", which is relative to the request; it basically means 'this'. You don't need to add your own absolute @id if it's already set; you can consider the URL at which you store the data as a graph URI, which contains statements about other things, but not about itself. Alternatively, you could wrap the notification data in @graph and apply your own @id on the top level.
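For example, a minimal sketch of that last option, dropped in where we set the @id above (my illustration, not part of the spec):

// Sketch: preserve a sender-supplied @id by wrapping the payload
// in @graph and applying our own @id at the top level instead.
if(isset($data["@id"]) && $data["@id"] !== ""){
    $data = array(
        "@id" => $base."/".$filename,
        "@graph" => array($data)
    );
}else{
    $data["@id"] = $base."/".$filename;
}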
Since we're storing the notifications as JSON files, you probably want to tell your server to return JSON files with Content-Type: application/ld+json. You can do this by putting the following in a .htaccess file: AddType application/ld+json .json.
In order to make your notifications reusable by other applications, you need to expose them to GET requests. Specifically, your Inbox needs to return a blob of JSON-LD which points to a list of the URLs from which the individual notifications can be retrieved. You probably want to put this behind some kind of access control, so that only applications with which you have authenticated can read your notifications. I use IndieAuth as a service.
In this case, the URLs in the list are the files we stored the notification data in. The JSON-LD for an Inbox listing should look like:
{ "@context": "http://www.w3.org/ns/ldp#", "@id": "", "@type": "ldp:Container", "contains": [ { "@id": "https://example.org/notification1" }, { "@id": "https://example.org/notification2" } ] }
The listing doesn't need to look identical to this, but it needs to be an equivalent JSON-LD representation. Since there are several ways of presenting the same thing in JSON-LD, you might find you use a serializer that outputs something slightly different. For example, you might see the contains part shortened to: "contains": ["https://example.org/notification1", "https://example.org/notification2"]. You're also likely to see the @context appear differently, and prefixes for the properties (keys) might be used. The JSON-LD Playground is a good place to look at different possibilities.
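As a consumer, that variability just means being a little defensive when you read the listing. A minimal consumer sketch (the Inbox URL is an assumption; no access control shown):

// Minimal consumer sketch: fetch the Inbox listing, then each
// notification in it.
$inboxurl = "https://example.org/inbox.php";
$listing = json_decode(file_get_contents($inboxurl), true);
foreach($listing["contains"] as $item){
    // Entries may be objects with an @id, or plain URL strings.
    $url = is_array($item) ? $item["@id"] : $item;
    $notification = json_decode(file_get_contents($url), true);
    print_r($notification);
}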
Aside: The "@type": "ldp:Container"
is optional for LDN, but it helps other LDP clients understand that they might be able to use your data too.
You could store the Inbox listing in a flat file, and update it every time you receive (or delete) a notification. However, for this implementation we're going to generate it dynamically from the JSON files in our "inbox" directory. (You can take either approach if your notifications are stored in a database, too).
$files = scandir("../".$inboxpath);
$notifications = array();
foreach($files as $file){
    if(!is_dir($file) && substr($file, -5) == ".json"){
        $notifications[] = array("@id" => $base."/".$inboxpath."/".$file);
    }
}
$inbox = array(
    "@context" => "http://www.w3.org/ns/ldp#"
   ,"@id" => ""
   ,"@type" => "ldp:Container"
   ,"contains" => $notifications
);
$inboxjson = json_encode($inbox, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES);
header("Content-Type: application/ld+json");
echo $inboxjson;
If you want to restrict access to your notifications, this is a good place to check the request against the authentication method of your choice (eg. a token in the Authorization header, or a signature of some kind).
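For instance, a bare-bones bearer token check might look like the following; the token value is a placeholder, and in practice I delegate this to IndieAuth rather than comparing a fixed string:

// Bare-bones bearer token check; $token is a placeholder for
// whatever secret or token verification you actually use.
$token = "your-secret-token";
$reqheaders = apache_request_headers();
$auth = isset($reqheaders["Authorization"]) ? $reqheaders["Authorization"] : "";
if($auth !== "Bearer ".$token){
    header("HTTP/1.1 401 Unauthorized");
    exit;
}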
Now that's all done, you can put your script on a server and check it works with the LDN Receiver test suite. If it does, submit an implementation report!
In order to be useful, you need to make your Inbox discoverable by sender and consumer applications. You can do this by modifying any resource on the Web which you control (like a blog post or your website homepage) to link to the Inbox with the ldp:inbox relation. This can be with an HTTP Link header:
Link: <https://example.org/inbox.php>; rel="http://www.w3.org/ns/ldp#inbox"
or an RDF link, eg. in JSON-LD:

{
  "@context": "http://www.w3.org/ns/ldp",
  "@id": "https://example.org/profile",
  "inbox": "https://example.org/inbox.php"
}

or eg. in RDFa:

<link href="https://example.org/inbox.php" rel="http://www.w3.org/ns/ldp#inbox" />
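If the page you're adding discovery to is itself PHP, the header variant is a one-liner (the inbox URL here is an example):

// Advertise your Inbox with an HTTP Link header from any PHP page.
header('Link: <https://example.org/inbox.php>; rel="http://www.w3.org/ns/ldp#inbox"');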
And that's all there is to it! The complete script is available here, for your copy-pasting pleasure (Apache 2.0 licensed).
If you don't fancy writing your own script to handle LDN receiving, there are a few existing implementations you could self-host on your own server. Plus Linked Data Platform servers work out of the box as LDN receivers, so maybe you want to set one of those up.
This post is my own opinion, and does not necessarily represent the opinion of the Social Web WG!
We met at MIT in Cambridge, MA, on the 17th and 18th of November. See also day 1 minutes and day 2 minutes.
Everything is progressing. Naming things is hard. We need your implementations please, or features may be dropped from some specs. We hope to extend by 6 months, not so that we can make more work for ourselves, but so that the newest spec has time to make it to REC given process timing requirements. We're transitioning any other work into a CG.
The Social Web Incubator Community Group should be pronounced swi-kig, but by the end of the meeting everyone had taken to calling it swish. I think it was Evan's fault. Anyway, it's live, and is where we'll continue to develop on top of the things we started in the WG, as well as think about how to tackle things we haven't got to yet. Aaron and Chris are chairing it, and plan for discussion to take place primarily on GitHub and IRC, with mailing lists for broadcast only. You should join.
At the last face-to-face we renamed PubSubHubbub to PubSub. We subsequently realised this is too generic a term for quite a specific spec, and as a result is hard to search the Web for, and hard to find/name libraries and packages for. Renaming it again took the better part of a month. Heh. A few weeks ago we developed a fairly long shortlist on the wiki, listing pros and cons, and a few people voted and left their rationale. On day one of this face-to-face, we ruled out every single one of those suggestions, and came up with three new ones (WebSub, WebFollow and WebSubscribe).
We slept on it, and just before lunch of day 2, voted between these three. WebSub won. I like it for its closeness to PubSub; WebFollow is a good name for a user-facing feature that implements the WebSub protocol. Then we proceeded to brainstorm more names in the google doc, progressively making the font smaller and introducing columns so we could see them all at once.
In less important news, we added Aaron as a coeditor of the WebSub spec, resolved a bunch of issues, and there's an updated working draft up.
We decided to go ahead with a new CR for ActivityStreams 2.0. Though it's frustrating to increase the time to exit, it's also not infeasible that getting implementation reports which sufficiently cover all features will take another month anyway. Plus, this extra time ensures that the ActivityPub implementations will make it into AS2 implementation reports.
So we have a bunch of changes to AS2 since we entered CR, although none of them affect implementations or are technically normative changes, which is why we could get away without restarting CR if necessary. But we decided updating the spec with these changes (mostly editorial, clarifications, etc., which do not change the intent of the spec) is important enough not to save them all for the PR publication. Personally I think we should publish a version with the new wording around name and summary (a plaintext summary for all objects is required in the absence of name) as soon as possible.
Another useful clarification is explicitly stating that the value of the @context key may be a plaintext string, an array, or an object. We added examples of each of these, so it's clear for consumers what to look for. This is particularly important for making sure implementations which include extensions - for which the @context is necessarily an array or an object - are not completely dropped on the floor by consumers. Consumers can of course ignore extension properties they don't understand, but they should not ignore standard AS2 properties just because there are extensions alongside them.
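By way of illustration (not copied from the spec; the ext URL is a made-up extension), the three shapes look roughly like this:

"@context": "https://www.w3.org/ns/activitystreams"

"@context": [
  "https://www.w3.org/ns/activitystreams",
  { "ext": "https://example.org/ns#" }
]

"@context": {
  "@vocab": "https://www.w3.org/ns/activitystreams",
  "ext": "https://example.org/ns#"
}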
This also means that it's possible to use the JSON-LD @language construct properly (inside the @context object) to set the base language for a whole AS2 object. As there are other ways to set the language, for individual objects or for specific values, setting the @language is not required. Further, you should not set a language if you don't actually know what it is. And we haven't dumped language tags into all of the examples in the spec, in order to avoid people copying and pasting the examples without updating the language tags we use. Apparently this phenomenon is seen all over the Web, with EN language markers alongside text that is most certainly not EN.
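When you do know the language, a sketch of setting it for a whole object could look like this (the values are made up):

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    { "@language": "en" }
  ],
  "type": "Note",
  "content": "A note that really is in English"
}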
We skimmed through a few issues for each of Micropub, LDN and ActivityPub, and checked in on how test suites and implementation reports are doing. The editors (Aaron, Sarven, and Chris respectively) are working exceptionally hard on building the test suites and chasing implementors. They are all at various stages of progress, and we know we have at least some implementations of some features of each.
The Working Group's charter expires at the end of this year. Due to minimum time constraints on various parts of the publication process, as WebSub was late to join the WG we need until at least April to take it through to a recommendation, and that's with absolutely nothing going wrong. We were aiming, obviously, for all of our other specs to be wrapped up before the various December holidays, but it'd be tight. Adding buffer time for unexpected issues, and editors-not-having-to-make-themselves-ill-with-allnighters time, we figured they'll be exiting CR in January or early February at the latest. So we expect to get an extension of 6 months, and reduce our telecon time to monthly after January. The extra time on top of April means we won't need to freak out if for any reason WebSub has to have a second CR. This also overlaps with the opening of the Community Group, so it should help with the transition.
An extra shoutout to anyone who is thinking of or starting to implement any part of any of our specs! Please let us know, either by filing implementation reports (even partial ones are helpful) or pinging us on IRC (#social) or the mailing list so we know to chase you at some point in the future. If you don't want a feature of a spec to be dropped, ie. because you want to use it, we have to prove it has been implemented. If possible, don't wait around for us to exit CR, because we need your implementations to make it that far.
Post created with https://rhiaro.co.uk/sloph
I am officially part of the W3C Team and co-staff contact for the Social Web WG. Look, there I am!
This post is my own opinion, and does not necessarily represent the opinion of the Social Web WG!
See also day 1 minutes and day 2 minutes.
We met in Portland on 6th and 7th June. What follows is more detail on my perspective of the main conversations we had over the two days. Clarifications and corrections welcome. This doesn't cover everything we talked about in detail; as well as the following, we resolved (or at least discussed) issues on all of the specs, and took a few to new Working Draft status.
I demoed my ActivityPub implementations; the clients Burrow for checkins, Obtainium for consumption/purchase logging, Replicator for food logging and Seeulator for journeys and events. These all do create only, by sending appropriate activities (including some extensions) to my ActivityPub endpoint (aka outbox, but not discoverable as such yet).
Seeulator creates the right kind of activity based on which attributes are filled in or left blank, essentially doing post-type discovery from the user input to generate the right as:Activity - albeit with my own algorithm rather than Tantek's spec.
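A hypothetical sketch of that kind of attribute-based dispatch (not Seeulator's actual algorithm; the field names are made up, and the types are from the AS2 vocabulary):

// Hypothetical: pick an AS2 activity type from which form fields
// the user filled in. Not Seeulator's actual algorithm.
function activity_type($input){
    if(!empty($input["rsvp"])) return "Accept"; // or Reject/TentativeAccept
    if(!empty($input["origin"]) && !empty($input["target"])) return "Travel";
    if(!empty($input["startTime"])) return "Create"; // wrapping an Event object
    return "Create"; // default: wrapping a Note
}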
The newer thing I worked on was a client that only does updates of existing AS2 data. I wanted this so I could add captions to all my photos at img.amy.gy, so Morph does just that. This also means it has to be able to read/consume the AS2 data published at img.amy.gy about as:Collection
s of photos.
Aaron demoed the Webmention test suite at webmention.rocks and noted that there are Webmention Rocks stickers available for people submitting implementation reports.
Aaron also demoed a new feature in the Micropub spec, which is the media endpoint. After some discussion recently it was established that all mainstream social APIs seem to post media (like images) that have been embedded in a post to a separate endpoint, then embed the returned URL in the post content, and MediaGoblin does this too. Aaron's implementation in Quill is really swish looking, uploading the file to the discovered media endpoint whilst you're typing the rest of the blog post, then embedding it back in the UI so you can see it straight away. I should probably implement something along these lines, and sync it up with what ActivityPub is doing (which is going to be basically the same). It's especially useful as I host my images on a completely different domain and stack from my blog posts, and right now I have a by-hand process of uploading images to one server, then copying the URL into a blog post to embed it.
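A minimal sketch of the client side of that flow - uploading a file to a discovered media endpoint and pulling the URL out of the response - where the endpoint URL, token and filename are all assumptions for illustration:

// Upload a file to a (discovered) Micropub media endpoint, then use
// the URL from the Location header in the post content.
$endpoint = "https://example.org/micropub/media"; // discovered, assumed here
$token = "your-access-token"; // placeholder
$ch = curl_init($endpoint);
curl_setopt_array($ch, array(
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => array("Authorization: Bearer ".$token),
    CURLOPT_POSTFIELDS => array("file" => new CURLFile("photo.jpg", "image/jpeg")),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER => true
));
$response = curl_exec($ch);
curl_close($ch);
// The URL of the uploaded file comes back in the Location header.
preg_match('/^Location: (.*)$/mi', $response, $m);
$url = isset($m[1]) ? trim($m[1]) : null;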
Evan showed the Wordpress plugin implementation of AS2 by pfefferle, demonstrated on his fuzzy.ai blog. We all noticed a few bugs and AS1-isms in the implementation, but all correctable and all good flags for when we give advice for people switching existing implementations from AS1 to AS2.
For a while I've been pushing to break ActivityPub up into several separate specs for each part, modularised by functionality, reasoning that this will lower the barrier to both CR and conforming implementations. I feel strongly that distinct functionalities should not be dependent upon one another to conform to the spec; ie. if I only want to implement subscribing/reading in my application, I shouldn't be required to implement creating new content as well. I've been back and forth on this with Chris and Jessica for at least a year and we're all getting closer to understanding one another. The WG resolved at this meeting to split into ActivityPub (reading, creating, updating, deleting content) and ActivitySub (name pending; subscription and delivery of content). It took me a little longer than it should have to really grok how closely tied 'delivery' and 'notifications' are, but now I realise that regardless of what triggers 'delivery' of an activity, the process of 'delivery' to someone's inbox is the same. The triggering part can be a subscription (a special side effect of receiving a Follow activity) or a notification (an activity or object is created which is addressed to or links to a user or other activity/object). Thus I anticipate ActivitySub describing how the triggers work, then how delivery works upon a trigger. I'd still like to be able to conform to the 'delivery' part without worrying about the 'trigger' part (maybe I want to implement an entirely different subscription trigger mechanism) but this can be achieved with conformance classes if splitting the spec up further is too much.
The working group wraps up at the end of 2016. There's still time for us to work on new specs, but the ideal is that anything new being presented to the group will have been incubated (worked on, tested, implemented) outside of the group beforehand, either in a CG or other community or organisation. Coming soon to an editor's draft near you: PubSubHubbub!
We confirmed we'll meet on Thursday and Friday at TPAC in Lisbon in September. We'll also run a social web breakout session on the plenary day (Wednesday) like we did last year.
Co-editing a spec with a W3C Rec pedant who expertly dismantles other specs based on tiny discrepancies/ambiguities... takes about four times as long, but ultimately should be bulletproof.
This post is my own opinion, and does not necessarily represent the opinion of the Social Web WG!
See also day 1 minutes and day 2 minutes.
We met in Boston on 16 and 17 March. What follows is more detail on my perspective of the main conversations we had over the two days. Clarifications and corrections welcome.
AS2 is inching closer to CR. Evan has made a validator at as2.rocks and done a lot of work on conformance criteria which we went through as a group and updated a little; mostly changing SHOULDs to MUSTs.
Discussed and not necessarily resolved a few new open issues, including: considering dropping the Relationship object and reviving it as an extension if necessary; a proposal for a new property to say when something was deleted; weakening the SHOULD requirement on name; and clarifying scope and context.
Stay tuned for a new working draft sometime soon.
The most exciting thing, I thought, was agreeing on the potential for convergence between the create, update and delete parts of ActivityPub and Micropub.
Micropub started life as a super small and simple way for clients and servers to agree how to create content on a website, by POSTing form-encoded parameters to an endpoint. As a result of this simplicity, there are dozens of client and server implementations, allowing people to use each other's posting clients to add posts to their sites, from simple text-only posts to photos, events, RSVPs, likes, bookmarks and reposts. When Micropub needed update and delete, it grew beyond what form-encoded parameters could sensibly handle, and added in a JSON syntax which I think to date only the editor has implemented.
ActivityPub uses a JSON syntax (ActivityStreams 2.0) for create, update and delete from the outset, and when you compare this with the Micropub JSON they look remarkably similar.
My posting endpoint implements create the AP way, and endpoint discovery the MP way. It also catches Micropub form-encoded requests and translates them to AS2 JSON before proceeding, so I can still use simple Micropub clients. My posting clients Burrow (checkins), Obtainium (purchases), Replicator (food) and Seeulator (events, RSVPs, travel plans) all post AS2 JSON... after discovering the endpoint via rel=micropub. Next on my list, and well overdue at this point, is adding update and delete to both server and clients.
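The translation step is less work than it sounds. A hedged sketch of the idea (not my actual endpoint code; the property mapping is illustrative):

// Sketch: translate a simple Micropub form-encoded create into an
// AS2 Create activity, then proceed as for a native AS2 request.
if(isset($_POST["h"]) && $_POST["h"] == "entry"){
    $activity = array(
        "@context" => "https://www.w3.org/ns/activitystreams",
        "type" => "Create",
        "object" => array(
            "type" => "Note",
            "content" => isset($_POST["content"]) ? $_POST["content"] : "",
            "tag" => isset($_POST["category"]) ? (array)$_POST["category"] : array()
        )
    );
    // ...hand $activity to the same code path that handles AS2 JSON input.
}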
So I proposed we write a document that unifies the common parts of AP and MP, iron out the smaller differences, and hope this coalesces into a small create/update/delete spec which both AP and MP can reference rather than duplicate. Because modularity is good, and common modules are better! I dubbed this temporarily (or is it?) SocialPub.
So what's left in Micropub? I hear you cry. The super simple form-encoded create - which is what made Micropub do so well in the first place - is really what makes Micropub micro, so I'd like to see this be the bulk of the Micropub spec, with just a pointer to SocialPub for people who want to level up to JSON.
There are still more than a few issues to be dealt with, though we handled a few during the meeting (such as media uploads). I'll be writing SocialPub up into the Social Web Protocols doc next week, stay tuned.
Jessica and Chris demo'd Media Goblin federating with pump.io! Which is cool. Which brings them a huge step closer to implementing things with AS2/AP and federating that way. They discussed how one of their main impediments had been database schema migration.
Aaron demo'd his Micropub editing UI, which allows partial edits on the post, only for data he is most likely to want to edit (tags, syndication URLs and date).
Aaron also demonstrated a new event posting interface in Quill which uses Micropub, and showed how RSVPs from Woodwind (a feed reader) work via Webmention. Tantek and Ben also demo'd RSVPs from their sites. And Ben demo'd how he can post reactjis as replies, exemplified with the poop emoticon, and there is no question that the future of the social web is in safe hands.
Frank demonstrated federation between OwnCloud servers, which uses WebDAV and CalDAV, and talked through their access control.
We also had a couple of admin/process related discussions. The first included agreeing to meet at TPAC in Lisbon in September as it already looks like there'll be critical mass to make it worthwhile.
Sandro has made a list of issue labels for GitHub, which we painstakingly went through to make sure everyone understands them and editors are willing to use them on specs. This should help people to figure out at a glance what the current state of a spec is from the issues, as well as help passers-by to jump in if they want to get involved.
S3E13, Deja Q...
Q: "Simple: Change the gravitational constant of the universe."
Geordi: "What?"
Q: "Change the gravitational constant of the universe, thereby altering the mass of the asteroid."
Geordi: "Redefine gravity. And how the hell am I supposed to do that?"
Q: "You just DO it. GAHH! Where's that doctor, anyway?"
Data: "Geordi is trying to say that changing the gravitational constant of the universe is beyond our capabilities."
Q: "Well, then... never mind.
+ http://www.w3.org/2015/Process-20150901/#transition-reqs