LDN is based on Linked Data principles, and enables decentralisation and persistence of data.
csarven (16 out of 46)
- Micropub: Tests, Implementation reports
- LDN: Tests, Implementation reports
- ActivityPub: Tests, Implementation reports
csarven presents Linked Data Notifications at ESWC2017... Starting with some motivational use cases in the social web space to which this W3C Recommendation has been applied so far.
At 12, csarven will present Linked Data Notifications. Come to the talk for an overview, then if you want to integrate a part of the protocol into your existing applications, or build a receiver from scratch, we're both here all week and ready to help!
Pioneering the Linked Open Research cloud
This year, the Linked Data on the Web workshop at WWW2017 held an open discussion session about academic publishing. In particular, about what is getting in the way of us getting more research outputs into the LOD Cloud. The discussion was led by Sarven Capadisli, the loudest voice behind the Linked Research initiative, who did his best to remain a neutral facilitator, and let others in the room argue about the things he is normally arguing about.
There was a general consensus that we should have research from the Linked Data on the Web community in the form of Linked Data, and on the Web. Sounds like kind of a no-brainer, but you'd be surprised. Or maybe you wouldn't. The group, around 30 people, discussed the reasons why we, as a community, are lagging.
One of the reasons is tooling. Nobody wants to hand-author HTML, especially not with RDFa in it. Nobody really wants to turn their research articles into datasets in any other syntax either. There is (apparently) still a large portion of researchers, even in CS, who use MS Word to generate their final PDFs, so they need something that works at least as well as that to generate Web-friendly submissions.
There is also a matter of incentives, on several different fronts. Most researchers are a-researchin' and publishing their work to advance their career, and to get funding so they can do new cool stuff. So how are individual researchers incentivised to put in the extra effort it takes to generate HTML? On another level, how do we incentivise researchers to improve the state of tooling and resources? The first can be tackled along multiple fronts, one of which is petitioning publishers to demand HTML (just like today they demand LaTeX). What are the incentives for publishers to do this? There were a few ideas thrown around, including how they can improve their SEO, access, discoverability, and creating more pleasurable reading experiences than PDFs can deliver.
But there's a large faction ill-at-ease with depending so much on publishers to drive this change, even if it didn't look like it would be an agonisingly slow process. Some of us, myself included, would like to shift the whole scholarly communications process to be more self-sufficient and less dependent on centralised third parties. We do not want to do this at the expense of quality of work, of course! (Which some people immediately assume is the case.) Beyond publishing, we want to open up the review process, so it's both more transparent, and so researchers get the credit they deserve for this work. Conversations can continue well beyond the submission process if the reviews are open and public. But again, we need to work on the tooling and incentives to enable this.
Whilst I agree that we are woefully underdeveloped on the tooling front, I object to the weight this was given in the context of getting just Linked Data researchers to adapt to the Web. Personally I think if Linked Data researchers are going to cry about being required to submit their contributions as Linked Data, I will have trouble taking them seriously. Writing HTML is not a high bar. If you're comfortable with LaTeX, you can switch off a few neurons and write HTML instead. Or you can use Pandoc. As Jens Lehmann succinctly put it, the LDOW workshop is about advancing the state of Linked Data on the Web. If this community is not prepared to drive the state of this forward, even (especially?) if it includes working outside of the current system and taking some risks, who is going to?
So overall there was a vague feeling of consensus that we need to do something to take better advantage of Web technologies, and that LDOW as an established and respected venue is a good grounds for an experiment. I don't know if LDOW will manage to require HTML submissions next year, but the organising committee seem like they'll be inclined to strongly incentivise it. Stay tuned (watch the public-lod@w3.org mailing list). And speak up, if you care. Notes from the session are here.
We'll be continuing this discussion in the Web Observatories workshop this afternoon, around 4pm.
PS. the Enabling Decentralised Scholarly Communication workshop at ESWC this year requires Web-based submissions and is all about building and connecting the tooling to advance the state of academic publishing on the Web. Deadline is 17th of April. See you there.
Troublemaker csarven just won't lie down and sign whatever publishers ask him to.
Full Article, Immediate, Permanent, Discoverable, and Accessible
🔁 https://twitter.com/csarven/status/810838130228072448
Amy shared https://twitter.com/csarven/status/810838130228072448
Coauthoring papers with people in vastly different timezones has the advantage that there's someone working on it 24/7. And also that you're not distracted by debating changes in realtime; you can just get on with it and deal with the consequences later.
Post created with https://rhiaro.co.uk/sloph
8th Social WG F2F Summary
This post is my own opinion, and does not necessarily represent the opinion of the Social Web WG!
We met at MIT in Cambridge, MA, on the 17th and 18th of November. See also day 1 minutes and day 2 minutes.
tl;dr
Everything is progressing. Naming things is hard. We need your implementations please, or features may be dropped from some specs. We hope to extend by 6 months, not to make more work for ourselves, but so that the newest spec has time to make it to Rec given process timing requirements. We're transitioning any other work into a CG.
Community Group
The Social Web Incubator Community Group, should be pronounced swi-kig, but by the end of the meeting everyone had taken to calling it swish. I think it was Evan's fault. Anyway it's live, and is where we'll continue to develop on top of the things we started in the WG, as well as think about how to tackle things we haven't got to yet. Aaron and Chris are chairing it, and plan for discussion to take place primarily on github and IRC, with mailing lists for broadcast only. You should join.
The spec formerly known as PubSubHubbub
At the last face-to-face we renamed PubSubHubbub to PubSub. We subsequently realised this is too generic a term for quite a specific spec, and as a result is hard to search the Web for, and hard to find/name libraries and packages for. Renaming it again took the better part of a month. Heh. A few weeks ago we developed a fairly long shortlist on the wiki, listing pros and cons, and a few people voted and left their rationale. On day one of this face-to-face, we ruled out every single one of those suggestions, and came up with three new ones (WebSub, WebFollow and WebSubscribe).
We slept on it, and just before lunch of day 2, voted between these three. WebSub won. I like it for its closeness to PubSub; WebFollow is a good name for a user-facing feature that implements the WebSub protocol. Then we proceeded to brainstorm more names in the google doc, progressively making the font smaller and introducing columns so we could see them all at once.
In less important news, we added Aaron as a coeditor of the WebSub spec, resolved a bunch of issues, and there's an updated working draft up.
ActivityStreams 2.0
We decided to go ahead with a new CR for ActivityStreams 2.0. Though it's frustrating to increase the time to exit, it's also not infeasible that getting implementation reports which sufficiently cover all features will take another month anyway. Plus, this extra time ensures that the ActivityPub implementations will make it into AS2 implementation reports.
So we have a bunch of changes to AS2 since we entered CR, although none of them affect implementations or are technically normative changes, which is why we could get away without restarting CR if necessary. But we decided updating the spec with these changes (mostly editorial clarifications which do not change the intent of the spec) is important enough not to save them all for the PR publication. Personally I think we should publish a version with the new wording around name and summary (a plaintext summary for all objects is required in the absence of name) as soon as possible.
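To make the name/summary fallback concrete, here's a sketch of how a consumer might pick a display label. The object below is hypothetical (the field values are invented, not from the spec), but it follows the AS2 vocabulary: it has no name, so the plaintext summary is what a consumer falls back to.

```python
# Hypothetical AS2 Note: it has no "name", so the plaintext "summary"
# gives consumers something to display.
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "summary": "A note about the Social Web WG face-to-face",
    "content": "<p>We met at MIT on the 17th and 18th of November.</p>",
}

# A consumer choosing a display label: prefer "name", fall back to "summary".
label = note.get("name") or note.get("summary")
print(label)  # A note about the Social Web WG face-to-face
```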
Another useful clarification is explicitly stating that the value of the @context key may be a plaintext string, an array, or an object. We added examples of each of these, so it's clear for consumers what to look for. This is particularly important for making sure implementations which include extensions - for which the @context is necessarily an array or an object - are not completely dropped on the floor by consumers. Consumers can of course ignore extension properties they don't understand, but they should not ignore standard AS2 properties just because there are extensions alongside them.
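As a rough sketch of what those three @context shapes look like in practice, and how a consumer might recognise the AS2 context in any of them (the extension namespace and the helper function here are invented for illustration, not from the spec):

```python
AS2 = "https://www.w3.org/ns/activitystreams"

# The three shapes a consumer may encounter for @context:
ctx_string = AS2                                        # plain string
ctx_array = [AS2, {"ext": "http://example.org/ns#"}]    # array (hypothetical extension)
ctx_object = {"@vocab": AS2}                            # object

def mentions_as2(context):
    """Check for the AS2 context in any of the three forms,
    rather than dropping the object on the floor."""
    if isinstance(context, str):
        return context == AS2
    if isinstance(context, list):
        return any(mentions_as2(item) for item in context)
    if isinstance(context, dict):
        return AS2 in context.values()
    return False

print(all(mentions_as2(c) for c in (ctx_string, ctx_array, ctx_object)))  # True
```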
This also means that it's possible to use the JSON-LD @language construct properly (inside the @context object) to set the base language for a whole AS2 object. As there are other ways to set the language, for individual objects or for specific values, setting @language is not required. Further, you should not set a language if you don't actually know what it is. And we haven't dumped language tags in all of the examples in the spec, in order to avoid people copying and pasting the examples without updating the language tags we use. Apparently this phenomenon is seen all over the Web, with EN language markers alongside text that is most certainly not EN.
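For illustration, here's what setting a base language inside the @context object might look like. The object and its values are hypothetical, but the @language-inside-@context pattern is standard JSON-LD; and again, only set it when you actually know the language.

```python
# Hypothetical AS2 object setting a base language for all of its
# natural-language values via @language inside the @context object.
activity = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"@language": "en"},
    ],
    "type": "Note",
    "summary": "Minutes from day two of the face-to-face",
}

# A consumer reading the base language back out of the @context array.
base_lang = next(
    (part["@language"] for part in activity["@context"]
     if isinstance(part, dict) and "@language" in part),
    None,
)
print(base_lang)  # en
```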
Other CRs
We skimmed through a few issues for each of Micropub, LDN and ActivityPub, and checked in on how test suites and implementation reports are doing. The editors (Aaron, Sarven, and Chris respectively) are working exceptionally hard on building the test suites and chasing implementors. They are all at various stages of progress, and we know we have at least some implementations of some features of each.
Extension
The Working Group's charter expires at the end of this year. Due to minimum time constraints on various parts of the publication process, as WebSub was late to join the WG we need until at least April to take it through to a recommendation, and that's with absolutely nothing going wrong. We were aiming, obviously, for all of our other specs to be wrapped up before the various December holidays, but it'd be tight. Adding buffer time for unexpected issues, and editors-not-having-to-make-themselves-ill-with-allnighters time, we figured they'll be exiting CR in January or early February at the latest. So we expect to get an extension of 6 months, and reduce our telecon time to monthly after January. The extra time on top of April means we won't need to freak out if for any reason WebSub has to have a second CR. This also overlaps with the opening of the Community Group, so it should help with the transition.
Implementations
An extra shoutout to anyone who is thinking of or starting to implement any part of any of our specs! Please let us know, either by filing implementation reports (even partial ones are helpful) or pinging us on IRC (#social) or the mailing list, so we know to chase you at some point in the future. If you don't want a feature of a spec to be dropped (i.e. because you want to use it), we have to prove it has been implemented. If possible, don't wait around for us to exit CR, because we need your implementations to make it that far.
Post created with https://rhiaro.co.uk/sloph
Amy shared https://pandelisperakakis.wordpress.com/2015/09/09/how-to-negotiate-with-publishers-an-example-of-immediate-self-archiving-despite-publishers-embargo-policy/
the problem of restricted access can easily be solved using existing infrastructures and with a small additional effort on behalf of the authors or their librarians - Pandelis
If you are Web savvy, it is a 'small effort' to self-archive your work in a space you control. But not everyone can manage that. And then, feedback, reviews and collaboration also in a space you control is no 'small effort'. Linking to and from specific parts of other research is not trivial when reports and results are missing fine-grained open identifiers. Maintaining your reputation and tracking the effect of your work (so that other researchers and institutions take you seriously) is no 'small effort'. Searchability and guaranteeing long-term persistence is no 'small effort'. There's still a way to go on both the infrastructure and cultural fronts here.
The (Social) Web has most of the pieces. They just need putting together.
That's what we're working towards with #LinkedResearch.
2 coffees (Mariposa)
Brunch for two (ZuZu)
$20 (€17.92 / $20.00 / £15.43)
Pizza for two (All Star)
$32 (€28.67 / $32.00 / £24.69)
Two milkshakes (Life Alive)
$13.25 (€11.81 / $13.25 / £10.17)
🔁 https://twitter.com/csarven/status/777488215020306432
Amy shared https://twitter.com/csarven/status/777488215020306432
Heading out to W3C TPAC2016 to fix and break things for the future #WebWeWant . If anything breaks, the person next to me did it.
Travel food (Clover)
$20 (€17.78 / $20.00 / £15.07)
Supplies (Harvest Market)
$11.27 (€10.02 / $11.27 / £8.49)