Build me a Firefox plugin to do the following: when a LINK is ROLLED OVER, the browser background-scrapes the header of the target page and displays any metadata in a popout box.
Metadata could be the usual (title, keywords, author, etc.), embedded Dublin Core, or a linked RDF file if present, and so on. The more the better. This would allow us to judge the contents of links before clicking them, something I personally would find very useful. Like all such moves towards a semantic web, this will depend on the quality of the metadata in the target file.
Semantic extensions could come later if, e.g., a linked RDF file points to an ontology providing context, or the browser itself finds the context independently of the target metadata.
The 'remote scrape' could be extended to body content: tags such as H1, or any microformats, could then be extracted and listed...
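To make that concrete, here's a rough sketch of the scraping side, assuming the raw HTML of the link target has already been fetched in the background. The function name and the regex approach are my own, purely illustrative; a real extension would wire this to a mouseover listener and a popup panel.

```javascript
// Hypothetical helper: pull common metadata out of the raw HTML of a
// link target. Regexes are a rough cut; a real extension could parse
// the fetched document properly instead.
function extractMetadata(html) {
  const meta = {};

  // <title>...</title>
  const title = html.match(/<title[^>]*>([\s\S]*?)<\/title>/i);
  if (title) meta.title = title[1].trim();

  // First <h1>, as suggested for the body-content extension
  const h1 = html.match(/<h1[^>]*>([\s\S]*?)<\/h1>/i);
  if (h1) meta.h1 = h1[1].trim();

  // <meta name="..." content="..."> — covers description, keywords,
  // author, and Dublin Core names like DC.creator (assumes the
  // name attribute comes before content)
  const tagRe = /<meta\s+name=["']([^"']+)["']\s+content=["']([^"']*)["']/gi;
  let m;
  while ((m = tagRe.exec(html)) !== null) {
    meta[m[1].toLowerCase()] = m[2];
  }
  return meta;
}

// In the extension itself (browser-only, not run here):
//   link.addEventListener("mouseover", ev => {
//     /* background-fetch the target, call extractMetadata, show popup */
//   });
```

The popup box would then just list whatever keys come back, so pages with rich Dublin Core metadata show more than pages with only a TITLE.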
I'd like to see this in the core App to be honest, but maybe one of you guys/gals is up to it?
That's a very nice idea, though I'm not really sure how you would go about it currently (I've only been developing my first extension for a few weeks myself).
I have read, though, that Firefox can prefetch pages by loading the links on a page, so I'm pretty sure it would be possible to enumerate all the links on a page and fetch the response headers (possibly using XMLHttpRequest to grab just the headers, and possibly the metadata).
Not sure of the performance implications of such an extension (though I'm sure it wouldn't be very bandwidth-intensive fetching just headers, and you could give the user the option to fetch full data for the links on any one page).
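For the header-only fetch, an XMLHttpRequest HEAD request avoids pulling the whole body, which keeps the bandwidth cost down to the headers themselves. A sketch — the parseHeaders helper is my own name; the XHR wiring is browser-only:

```javascript
// Parse the flat string returned by xhr.getAllResponseHeaders()
// (lines like "Content-Type: text/html\r\n...") into a plain object
// keyed by lowercase header name.
function parseHeaders(raw) {
  const headers = {};
  raw.split(/\r?\n/).forEach(line => {
    const idx = line.indexOf(":");
    if (idx > 0) {
      headers[line.slice(0, idx).trim().toLowerCase()] =
        line.slice(idx + 1).trim();
    }
  });
  return headers;
}

// Browser-only wiring (not run here):
//   const xhr = new XMLHttpRequest();
//   xhr.open("HEAD", url);       // headers only, no body
//   xhr.onload = () => {
//     const h = parseHeaders(xhr.getAllResponseHeaders());
//     // h["content-type"], h["last-modified"], ...
//   };
//   xhr.send();
```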
Nice idea, let us know how you got on!
Hey, I'm no developer. I'm an HTML/CSS hacker/webmaster with some PHP/JSP. I'm really interested in the idea of the semantic web and am looking for some kind soul to try developing demonstrators of parts of this approach. See also Dave Karger's Piggy Bank extension.
I'm VERY interested in doing something like this. I think I could do it soon if no one else will develop it. I don't think it will take very long to get info from the hovered link and display the metadata from the linked-to page in a small window attached to the cursor, or even as a tooltip.
I am finishing up development on the 1.0 version of my XML Developer's Toolbar
Starting a Musicians Toolbar soon
And moving to DC in the next month, so I am pressed for time, but I will check in again to see how anyone is doing with this. If you really want me to spearhead the development I can; just post to me.
Here's a screenshot of how I visualise it, from a talk I gave last year on semantic web possibilities.
http://homepage.ntlworld.com/mike.lownd ... /links.gif
The second version is the 'real' semantic web, because it looks up the meaning of the link via the target metadata in order to generate all the 'accurate, meaningful' (copyright TimBL) links.
The first version would be a great start, however!
Microformats are definitely one of the ways this could be extended. In an educational context, for instance, Learning Objects (for which LOM, Learning Object Metadata, already exists) could be referenced via the link, to automagically show what part of a curriculum a resource is relevant to. Most of this, the way I'm thinking it through, is for human viewers of course, but semantic web engines could do a lot more with data like this.
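For what it's worth, spotting microformats in the fetched markup could start as simply as scanning class attributes for the well-known root class names. A toy sketch (function name mine; a real parser would walk the DOM and pull out the nested properties rather than just noting that a format is present):

```javascript
// Well-known microformat root class names: hCard, hCalendar,
// hReview, hAtom, geo.
const MICROFORMAT_ROOTS = ["vcard", "vevent", "hreview", "hentry", "geo"];

// Scan raw HTML for class attributes containing a microformat root,
// returning each root found once, in document order.
function findMicroformats(html) {
  const found = [];
  const classRe = /class=["']([^"']+)["']/gi;
  let m;
  while ((m = classRe.exec(html)) !== null) {
    m[1].split(/\s+/).forEach(cls => {
      if (MICROFORMAT_ROOTS.includes(cls) && !found.includes(cls)) {
        found.push(cls);
      }
    });
  }
  return found;
}
```

The popout box could then flag "contains an hCard / an hCalendar event" next to the link before the user clicks through.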
I'm looking to Piggy Bank http://simile.mit.edu/piggy-bank/ (I'd call that a mashup machine) and similar things to work with Microformats as they appear.
David Karger gave an interesting talk at a 'Data Webs' day here in London the other week: http://www.rin.ac.uk/data-webs-presentations
So... anyone want to do this with me?
I'd be interested. I've been working on something (fairly) similar with <a href="http://www.mutube.com/intelligent-referencing">referencing</a> to give readers information on the quality of information in articles. This itself involves pulling remote data (using bindings, etc.).
I'm having a total nightmare with development to be honest, but I could probably knock together an example extension for what you're doing.
For my referencing I chose against using information in the target files, in favour of information taken from an independent (publicly editable) server. It's the only way to guarantee its accuracy and lack of spam, really... but for a start...
Maybe it would be simpler to just store a publicly editable 'level of trust' on that independent server? If present, the level could be displayed with the other details. Your own project seems to do something similar, but displays confidence as a hatched underline; that's introducing a new 'standard' to web interfaces and also changing the use of an old one that is often ignored.
I think it's two different approaches for two different things. The referencing thing will take into account a lot of factors (source, author and even referee) to give a confidence in a piece of text.
For the semantic links it's easier to pull information on a remote source directly from there (if it exists). In effect you will be creating new standards here also: only once people add correct data to pages will the extension be useful. Chicken & egg.
I'll happily put together something that does what your visualisation image shows, if you're interested. Quite into all this stuff. Also interested in the highlighting aspect (i.e. not relying on links being defined), although this would take more work to get sense out of what's highlighted beyond a Google search.
(NB: It's actually a "dotted" underline, but it doesn't render properly in IE)
Thing one: what is built should be able to pull whatever is there: TITLE is simplest, DESCRIPTION important... any other metadata (PICS, Dublin Core, etc.), any linked metadata (RDF), any H1... I agree there would be a new standard if we needed a microformat for this, but I think not. It could highlight any RSS feeds etc. too, though.
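On the linked-metadata side, pages usually advertise these in the head with <link> elements: rel="meta" is the old autodiscovery convention for a linked RDF file, and rel="alternate" with an RSS/Atom MIME type flags feeds. A sketch of discovering both (function name mine, regexes only illustrative):

```javascript
// Scan raw HTML for <link> elements advertising linked metadata:
// rel="meta" -> linked RDF documents; rel="alternate" with an
// RSS/Atom type -> feeds. Returns { rdf: [...], feeds: [...] }.
function findLinkedMetadata(html) {
  const result = { rdf: [], feeds: [] };
  const linkRe = /<link\s+([^>]+)>/gi;
  let m;
  while ((m = linkRe.exec(html)) !== null) {
    const attrs = m[1];
    // Attribute-level matching, so attribute order doesn't matter
    const rel = (attrs.match(/rel=["']([^"']+)["']/i) || [])[1] || "";
    const type = (attrs.match(/type=["']([^"']+)["']/i) || [])[1] || "";
    const href = (attrs.match(/href=["']([^"']+)["']/i) || [])[1];
    if (!href) continue;
    if (/\bmeta\b/i.test(rel)) result.rdf.push(href);
    if (/alternate/i.test(rel) && /(rss|atom)/i.test(type)) {
      result.feeds.push(href);
    }
  }
  return result;
}
```

Anything found could simply be added as extra rows in the popout, alongside the TITLE/DESCRIPTION/Dublin Core values.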
Thing two: auto-highlighting, or ad hoc links. Take a look at Magpie by the Open University (http://kmi.open.ac.uk/projects/magpie/main.html); it forces you to be explicit about the ontology, but once it's loaded it is very clever with any page browsed.
Are one or more of you up for taking this idea on?
It's all gone very quiet. Has anyone had a chance to try some of these ideas out?
Like... FindScripts (UserScript.org)... it finds Greasemonkey scripts related to the site you are currently viewing. To use, just hit CTRL+ALT+f while viewing a site you are interested in. Results will appear in a tab at the top of the page.
A chance to try some of these ideas? Or am I wrong? Ignore me if I didn't read the post thoroughly.
— Iravanan (an end user)
Obviously my first language is not English but Tamil (தமிழ்).
Just a quick note to say I've started putting a demo of this together using the Dublin Core references you've given me. As luck would have it, work I've been doing on my own referencing extension means most of the necessary groundwork is done.
I'll let you know how I get on.