Do any of the Python JSON-LD libraries make it easy to serialize a @context
rather than just the data? All examples I can see which don't use external contexts just write it in as a string.
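For what it's worth, no library needs to be involved for this part — a @context is itself just JSON, so it can live as a plain dict alongside the data and be serialized together with it. A minimal stdlib-only sketch (the term names here are made up for illustration):

```python
import json

# A @context is itself just JSON data: keep it as a Python dict next to
# the rest of the document and serialize both together.
context = {
    "as": "https://www.w3.org/ns/activitystreams#",
    "name": "as:name",
}

doc = {
    "@context": context,
    "@type": "as:Note",
    "name": "A note with its context inlined",
}

serialized = json.dumps(doc, indent=2)
print(serialized)
```

This obviously doesn't do any JSON-LD processing (expansion, compaction); it just answers the serialization half of the question.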
+ Recogito annotation platform
Amy added http://recogito.pelagios.org/rhiaro to https://rhiaro.co.uk/bookmarks/
🔁 https://twitter.com/philarcher1/status/778515246109646848
Amy shared https://twitter.com/philarcher1/status/778515246109646848
Vocabulary development and maintenance, W3C namespace control. 15:30 today, room 1.04 at TPAC 2016 - Phil
Oops on behalf of JSON-LD I apologise to github users @context @graph @value @id @type etc...
+ http://bblfish.net/tmp/2011/05/09/
Amy added http://bblfish.net/tmp/2011/05/09/ to https://rhiaro.co.uk/bookmarks/
+ http://kidehen.blogspot.cz/2015/09/what-happened-to-semantic-web.html
Amy added http://kidehen.blogspot.cz/2015/09/what-happened-to-semantic-web.html to https://rhiaro.co.uk/bookmarks/
Amy added http://rdfa.info/ to https://rhiaro.co.uk/bookmarks/
+ http://dig.csail.mit.edu/2009/presbrey/UAP.pdf
Amy added http://dig.csail.mit.edu/2009/presbrey/UAP.pdf to https://rhiaro.co.uk/bookmarks/
+ http://www.lsrn.org/semweb/rdfpost.html
Amy added http://www.lsrn.org/semweb/rdfpost.html to https://rhiaro.co.uk/bookmarks/
+ https://linkeddata.github.io/SoLiD/
Amy added https://linkeddata.github.io/SoLiD/ to https://rhiaro.co.uk/bookmarks/
Finally you can get my blog post content in a number of different formats. You'll notice alternative URLs (noted currently as 'Permalink') in the metadata section of a normal view of a post. Hit this up with no Accept header, and you'll get neat markdown (which is what I authored it in; the 'source', as it were).
Browsers automatically send Accept: text/html, in which case they are redirected to the rendered HTML version of the post you're probably reading now. But here are some alternatives you can try:
curl https://rhiaro.co.uk/llog/micropub-test
-> plain markdown.
curl -H "Accept: application/json" https://rhiaro.co.uk/llog/micropub-test
because there's obviously not enough JSON in the world. Note this isn't correct JSON-LD yet. I'll sort that out another time.
curl -H "Accept: text/turtle" https://rhiaro.co.uk/llog/micropub-test
Ohemgee! RDF! It's what you've all been waiting for, I know.
For those of you who like angle brackets, you could try:
curl -H "Accept: application/rdf+xml" https://rhiaro.co.uk/llog/micropub-test
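Server-side, the dispatch those curl examples exercise could look something like this — a simplified sketch (it ignores q-values and wildcards, and the internal format names are invented):

```python
# Map the media types conneg'd above to internal format names.
KNOWN = {
    "text/html": "html",
    "application/json": "json",
    "text/turtle": "turtle",
    "application/rdf+xml": "rdfxml",
}

def pick_format(accept_header):
    """Pick a representation from an Accept header; no header (or no
    recognised type) falls through to the raw markdown source."""
    for part in (accept_header or "").split(","):
        mime = part.split(";")[0].strip()
        if mime in KNOWN:
            return KNOWN[mime]
    return "markdown"
```

So `pick_format("text/turtle")` gives `"turtle"`, and a bare curl with no Accept header falls through to `"markdown"`.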
Briefly discussed with Tantek, Aaron and Bret at IWC Cambridge and on IRC how a micropub client could fetch post content for editing. Many people aren't editing raw HTML as it's presented on the page, but markdown or some other syntax, so a client needs to be able to discover this (the 'source') to present it for editing. Now if someone makes their client check a post for a rel="source" link, or shoots off a request with Accept: text/plain (less likely, since static sites can't do conneg), then they'll get my markdown directly (uh, when I actually put rel="source" in my HTML, which I haven't yet).
As a related aside, I also return source="markdown" if a micropub client asks my endpoint q=source.
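A client asking for the source might build its request along these lines — the endpoint URL is hypothetical (a real client would discover it from the page), and this only sketches the request side of the q=source behaviour described above:

```python
from urllib.parse import urlencode

# Hypothetical micropub endpoint; a real client discovers this from
# the page rather than hardcoding it.
endpoint = "https://example.com/micropub"
post = "https://rhiaro.co.uk/llog/micropub-test"

# GET <endpoint>?q=source&url=<post> should come back with the raw
# markdown and source="markdown".
query_url = endpoint + "?" + urlencode({"q": "source", "url": post})
print(query_url)
```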
I'm serializing the microformats2 vocabulary into RDF; repo on github.
Disclaimer: This post is (not necessarily coherent) notes as I go along.
Useful notes by Tom Morris about mf2 to RDF mapping on the microformats wiki. I updated the namespace to use the one suggested there (http://microformats.org/profile/).
Everything starting with h- looks like it's a class. Typically RDF classes are capitalised so this makes me cringe, but I'll cope.
Woah, there are a lot of properties. Seventy unique properties. I didn't realise. On the wiki most don't have a description, many are duplicated in the list, and none of them seem to have their own description page (and thus no URIs to refer to them with). Pasting them into a spreadsheet was the quickest way to de-dup and alphabetise them. Now I'm working through one at a time, adding descriptions and mapping them to existing properties in FOAF (because many are obvious), VCard (because many (all?) are derived from this), ActivityStreams 2.0 (because SocialWebWG) and hesitantly Dublin Core and SIOC. Where they map to something well-described in RDF, I don't bother with rdfs:comment and domain and range of their own, but I'll add them for those with no good sameAs mappings.
dt-reviewed same as dt-published? If not, why do reviews get special treatment with regards to differentiation between when they're written and published, and nothing else does?

e-content? Couldn't find one on the wiki, so made one up that is compatible with AS2.0 content, with an additional caveat about including markup, which is explicit on the mf2 wiki: "The contents of something like a post, including markup and embedded elements."

e-description and e-instructions sub-properties of e-content?

I keep getting caught out by AS2.0 terms that are in the JSON-LD document but are actually deprecated (the JSON-LD contains no information other than property and class names and sometimes types; I have to remember to check the written docs). Eg. as2:author -> as2:attributedTo.

category. Am I missing something? Oh wait, tag will do it.

p-description from p-summary?

p-education should contain a nested h-card (of school and location). Doesn't that mean it should be an e-, or have I misunderstood that completely?

p-education is "an education h-calendar event" but h-calendar doesn't exist, it's h-event. Probably just a typo? Ditto p-experience.

name property for Actors (only displayName). Interesting.

label 'new in vCard4' but vCard4 says it's deprecated, and in any case was for address labels, so not sure what use this is in mf2.

p-note for specifically?

p-reviewer not just p-author?

sex and gender-identity are part of it, but all I can see is hasGender.

AS2.0 has rating but it's explicitly defined as "a non-negative decimal number between 0.0 and 5.0 (inclusive) with one decimal place of precision". MF2 says it's a number between 1 and 5. I think even less specification is better, as ratings come in many forms. Anyway, because I'm not venturing into SKOS, I'm owl:sameAs'ing them for the time being.
Update: Dropped all domains and ranges because all properties in microformats2 are actually global.
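The approach above, sketched in code — the two mappings are illustrative examples of the pattern, not claims about the finished vocabulary:

```python
# Namespace from the microformats wiki (see above).
MF = "http://microformats.org/profile/"

# Illustrative (mf2 term, existing RDF property) pairs.
MAPPINGS = [
    ("name", "http://xmlns.com/foaf/0.1/name"),
    ("tel", "http://www.w3.org/2006/vcard/ns#tel"),
]

def mappings_to_turtle(pairs):
    """Emit owl:sameAs links for terms that map to well-described
    properties; terms without a good mapping would instead get their
    own rdfs:comment (and no domain/range -- mf2 properties are global)."""
    lines = ["@prefix owl: <http://www.w3.org/2002/07/owl#> ."]
    for term, target in pairs:
        lines.append("<%s%s> owl:sameAs <%s> ." % (MF, term, target))
    return "\n".join(lines)

turtle = mappings_to_turtle(MAPPINGS)
print(turtle)
```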
Last modified:
Same as last year, but with twice as many students. Tirelessly answered student emails, made a few supplementary materials, mostly got the feedback sent on time; was nominated for a Best Teaching Award ^^ Oh, also organised a hands-on workshop this year, because I generally disagree with lectures.
Worked with an amazing team to boost the profits of Mr. Falafel in Shepherd's Bush and on the side helped with modelling the world as the BBC sees it, and learnt all of the corners to cut and ideologies to give up in order to develop linked data applications to improve the lives of people who don't know/care about linked data.
Key achievements include:
owl:sameAs David Bowie.

So Slog'd got stuck for a little while because the fast, nice-looking, somewhat magical SPARQL endpoint provided by ARC2 stopped working for no discernible reason.
I thought I'd try leaving it alone for a few weeks to see if it started working again by itself, but alas, it has not.
Everything is fine until I try to query for a specific predicate. (Specific objects or subjects are fine). The query runs, it just returns no results. I know the data is in there, because I can get it out with less specific queries. Also because I can see it all in the MySQL database on which it is based. When I left it, it was working fine.
I'm going to kill the database and set it up again.
I did this by - and oh, it was joyous - going into the database settings and appending '2' to the name of the database. I then reloaded the endpoint page, and it set everything up by itself :)
I inserted two triples, and successfully queried for a specific predicate. So, it works. I wonder what will happen if I dump all my old data back in there? (I validated the raw file with all the triples in RDF/XML, and they're fine).
I inserted the rest: LOAD <path/to/rdf.rdf> INTO <>
Ran a test query, aaaand... it's fine.
So what the hell was wrong with my other database? Perhaps I'll never know...
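For reference, the failing query shape versus the working one, as they'd be sent to the endpoint — the endpoint URL and the predicate are just examples:

```python
from urllib.parse import urlencode

# Hypothetical endpoint location; adjust for your ARC2 installation.
ENDPOINT = "http://example.org/sparql"

# Querying for a specific predicate: the shape that silently returned
# no results on the broken database.
by_predicate = "SELECT ?s ?o WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?o }"

# A less specific pattern: this always worked, which is how I knew the
# data was really in there.
anything = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"

def request_url(query):
    """Build a GET URL for the endpoint (ARC2 also accepts POST)."""
    return ENDPOINT + "?" + urlencode({"query": query})
```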
I had an idea for a tiny wee project to do with quantified self. More on that later.
Because I'm trying to use linked data for everything, for reasons beyond the scope of this post, the first thing I did was sketch out the data I need to store in a graph structure. I need to record emotions, so I did a quick search for ontologies that represent emotions, figuring psychologists and the like must have been at this for years already.
Sure enough I found a few, but the most convincing one, the HUMAINE Emotion Annotation & Representation Language (EARL) is in XML rather than OWL.
Yay! Time to convert a well structured and useful dataset into RDF. Always a Good Thing.
EARL comes as many files, and goes beyond what I need. But it's not huge, and with a little effort (and looking some stuff up on Wikipedia) I think I can understand what's going on enough to convert the lot.
Note: There's apparently a lot of disagreement about terms and stuff in this area. Not something I'm invested in, so I'm just going to roll with this XML.
There are:
Emotional occurrences can have all of the above as properties, as well as probability and intensity. Complex emotional occurrences have times, and contain a minimum of two emotional occurrences with the above properties.
The terms are all taken from various different psychological experiments or schools of thought. There are alternative versions of some of these things from something called AIBO. Arbitrarily I'm ignoring everything prefixed AIBO for now.
I'm going through the files and writing everything relevant out, then drawing it as a graph.
First juncture: do I use all the attributes (like the list of 55 emotions) as properties (as they are demonstrated in the original XML) or as classes? Using properties seems messy, and feels less extensible, even though technically I suppose it's not.
Maybe they should be properties. Except the categories, they all (or at least most) have corresponding DBPedia entries that it would be stupid not to take advantage of. But the dimensions, regulation and appraisal might be better suited to being properties, otherwise I'm having pointless identifiers or blank nodes everywhere. And nobody wants that.
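The compromise above might come out something like this — the namespace is where the vocabulary ended up, but the specific term names are made up for illustration:

```python
EARL = "http://vocab.amy.so/earl#"

# Categories as classes, so each can point at its DBPedia entry;
# dimensions as plain datatype properties, avoiding blank nodes.
triples = "\n".join([
    "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
    "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
    "<%sJoy> a owl:Class ;" % EARL,
    "    rdfs:seeAlso <http://dbpedia.org/resource/Joy> .",
    "<%sintensity> a owl:DatatypeProperty ." % EARL,
])
print(triples)
```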
I adjusted the Samples thing a bit, mostly to simplify it, and I may have got it wrong, but I think it makes sense.
Then I typed it all into WebProtege. As a result, I think quite a few things are overspecified. What do you think? Check it out: http://vocab.amy.so/earl.
Last modified: