Posted by sh1mmer on Sep 24, 2006 in Accessibility
It often occurs to me that people try to do too much with new technology, and that maybe if they thought a little more simply they would get better results. RFID tags are a case in point. The number of justifications I have heard for RFID tags based around intricate information models or complex device interactions is incredible.
One idea I have had, which RFID tags could achieve simply and effectively, is object identification. One thing RFIDs are great at is identifying stuff. You write a bit of code into the tag that says “I am an X”. Simple.
If you want to be a tiny bit more complex, you add certification to say who created this information, along the lines of SSL certificates on the internet. This means that you not only know that the tag is an X but also that Y said so. So Y has identified X for us.
Now, the cool bit. RFIDs are basically transponders. They are passive devices, require no batteries, and can just sit around happily all day doing nothing until called on. So if you embedded an RFID in, say, a door, you wouldn’t need anything but glue to fit it.
The idea of RFID eyes is to take this simplest of features of RFID tags and use it to give people with visual impairments an auditory indication of where and what something is. RFID tags only activate when a reader is close enough. There is also nothing to stop someone from making a ‘narrowband’ reader that is highly directional. Think of something like a satellite dish, but on a smaller scale. The man from Sky has to come and fit it so that it points in the right direction, otherwise you can’t receive the broadcast.
What I would like to see is a device which a person could cast around with, and get an auditory (or even dynamic braille) output describing what they are pointing at. “You are pointing at the Ladies’ toilets”, for example, could save some embarrassment for the gentleman.
Posted by sh1mmer on Sep 15, 2006 in General
Following the discussion about SHW after BarCamp, a number of possible implementation ideas came up. Below is my current favourite, for which I am writing a WordPress plugin with the help of Mike Davies.
Most people I spoke to agreed that using REST was the least invasive way to report that someone has linked to an error. It is also the most ubiquitous technology on the web (obviously). So how could we implement this with REST?
The answer we came up with is a mix of two excellent suggestions. The first is how we deal with notification, the most important part of the system. If we are going to use REST, then any informative or updating action should be done with the POST method. When Mike and I first started to hack an implementation we were originally going to use an XML format. However, in order to separate these notifications from other POST transactions, we decided to use an HTTP header instead. By adding a header “X-Self-Healing-Notification” (1), we give the resource a chance to handle it separately from other POST traffic. This reduces the chance of a conflict with other systems, and allows notifications to be gathered globally, even when they are being sent to a resource which doesn’t know how to handle POST requests (say an HTML document).
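As a sketch, building such a notification might look like the following. The header name is the one proposed above; the body fields (`url`, `status`) are my own assumption about what gets reported, not part of any agreed format:

```python
from urllib.parse import urlencode

def build_notification(error_url, status_code):
    """Return (headers, body) for a self-healing notification POST.

    The X-Self-Healing-Notification header lets the receiving resource
    separate these notifications from its ordinary POST traffic.
    """
    headers = {
        "X-Self-Healing-Notification": "1",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    # Hypothetical body: which of the receiver's links failed, and how.
    body = urlencode({"url": error_url, "status": status_code})
    return headers, body
```

Anything that can send an HTTP POST could then deliver this to the referring page, even one that normally ignores POSTs.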
Once you have made a POST with the notification, you should expect a response with a header like “X-Self-Healing-Accepted” (1). If you don’t get that header in the response, then you shouldn’t send notifications to this resource for a given period. This might be a week, a month or forever; whatever is the right length of time for your application. This is important because it makes the system polite and non-invasive, unlike previous systems, such as those that email domain owners about 404s.
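The polite-backoff behaviour could be sketched like this. This is a minimal in-memory version; the one-week default and the dictionary store are assumptions for illustration:

```python
import time

BACKOFF_SECONDS = 7 * 24 * 3600  # e.g. one week; tune per application

def record_response(store, resource, accepted):
    """Remember whether a resource acknowledged our notification.

    If no X-Self-Healing-Accepted header came back, back off from
    that resource for a while rather than pestering it.
    """
    if not accepted:
        store[resource] = time.time() + BACKOFF_SECONDS

def should_notify(store, resource):
    """Politeness check: skip resources that ignored us recently."""
    return store.get(resource, 0) <= time.time()
```

A real implementation would persist the store, but the shape of the logic is the same.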
One of the important things to remember is that this is a notification system. While additional headers could be used to pass along information, such as the new location of a 301, it should be taken as purely informative. As the previous post discussed, there are plenty of issues around security and authentication. Personally I wouldn’t update without visiting the resource in question to validate that it really is throwing an error.
Finally, at BarCampLondon, someone also suggested the use of an X-header when you deliver resources to users. This would allow browser plugins to do the same job on pages which accepted self healing. So, to put it in context, you install a browser plugin and do some surfing. One of the sites you visit supports self healing notifications and links to an error. The site with the error doesn’t support self healing, however, so there aren’t any self healing X-headers in the HTTP response. Your browser plugin knows that the referring site does support self healing, though, so it sends the notification itself. By simply browsing the web you are helping to fix it. Super!
Note 1: A prize for the best suggestion for the names of these headers.
Posted by sh1mmer on Sep 14, 2006 in General
Note: I originally wrote this article before BarCampLondon. After my presentation at that event and the discussion following it I’ve rewritten the article with the current ideas which seem most appropriate. This is still intended to explain the idea and I’ll follow up with the best of the suggested implementation ideas.
There are a number of errors that can occur on the Web. A 404 error (file not found) is the most common but other errors or non-errors like a 301 (permanently moved) are also common. You have probably experienced both of these responses and more whilst surfing the web.
Why is this a problem? Well, in the case of a 404 a visitor has tried to access a resource, for example a web page, which is no longer available. This could be for a number of reasons, because a 404 is a generic error. It means the web server can’t find the requested resource. The reason could be a typographical error in the name of the resource or it might have been deleted – all the web server knows is that it isn’t there. There are more specific errors for page deletion but these are seldom used, even by content management systems. In the end, this means that visitors get a pretty bad experience. A real-life equivalent would be to ask for directions to the cinema and, when you arrive where you thought the cinema was, to find a sign stating “I’m sorry the cinema isn’t here, I’m not sure why, or where it is, I’m not sure it was ever here. People keep asking me, but I just don’t know.”
The best kind of error you can expect is a 301. This response indicates the resource has been permanently moved to another location. The web server knows what the original address was and what the new address is. The web browser should never ever use the old address again thank you very much. Of course, the visitor doesn’t really see much of that. There is an HTTP request from the browser to the web server, and the web server passes back some HTTP headers with the 301 and the information about the new location. Then the browser makes another request and the visitor gets the page they wanted with a different address in their URL bar, without any extra notifications or unnecessary clicking of links.
“Good”, you might say, “the visitor didn’t get an error message and ended up where she wanted to go”. True, but on the other hand the visitor’s browser just made one HTTP request it didn’t need to. If the visitor requested the old address again, their browser wouldn’t even bother trying it and would go straight to the new address. The 301 it got the first time tells it the resource has moved for good, no need to try again. So, why should a web page send a visitor to the wrong address more than once? If we go back to the example of directions to the cinema, a 301 is like a sign saying “The cinema has moved one block over to the right. We hope to see you there!” It sure is a lot better than finding no help, but wouldn’t it be even better to get up-to-date directions instead of being sent via the old address?
In an ideal world visitors would never get any errors. The resources would never be inaccessible, require credentials or any of the other possible causes of an error. Visitors would never be sent to a resource they couldn’t use. Unfortunately the strength of the web is that no one person owns all of it. This means that there will always be errors as the addresses to access resources shift and change.
The problem with the way we deal with errors right now is we tell the visitor that there was a problem but not the resource they used to get to the error. That’s like the sign telling each person where the cinema moved to. The sign is ok, but wouldn’t it be much better if the people in the cinema discovered who was sending so many people the wrong way and told that person the new location of the cinema? In the real world this would be pretty tricky, and quite a lot of work.
On the web this is a heck of a lot easier. In a normal configuration, the visitor’s browser tells the web server which page referred it. This is how web tracking works. This new suggestion is that when an error occurs, the web server which is giving the error to the visitor also notifies any referrer it gets about that error. This is like telling the person giving bad directions where the cinema is. We don’t know if they will stop giving bad directions, but they might possibly start giving good ones instead.
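On the server side, all the error handler needs is the `Referer` request header (HTTP’s historical spelling) to know who to tell. A minimal sketch, where `notify_queue` is a hypothetical list that a background job would later drain by POSTing to each referrer:

```python
def handle_error(status_code, request_headers, notify_queue):
    """On an error response, queue a notification to the referring page.

    The Referer header (misspelled in the HTTP spec itself) names the
    page whose link brought the visitor here; if it is present and we
    are returning an error or redirect, remember to tell that page.
    """
    referrer = request_headers.get("Referer")
    if referrer and status_code >= 300:
        notify_queue.append((referrer, status_code))
```

Direct visits (no referrer) produce nothing to notify, which is exactly what you want.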
The format of this notification could vary. There are a number of commonly used server to server communication protocols. XMLRPC and SOAP are two possible options; XMLRPC particularly is widely used in the blogosphere for ‘trackbacks’.
When a web server receives a notification that it is linking to an error, the response should differ from error to error. A notification of a 301 could cause a resource to update automatically to use the new URL, because after all it is just a new reference to the same place. A 404 notification, however, needs more care. Since the 404 is such a generic error, there is no new location to use, and you can’t just delete a link. The best course of action is probably to send a notification to the content owner to re-examine the link they used in their content.
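That per-status behaviour could be sketched as a small dispatcher. Here `link_map` and `owner_inbox` are hypothetical stand-ins for a CMS’s link table and a moderation queue:

```python
def handle_notification(status, old_url, new_url, link_map, owner_inbox):
    """Hypothetical handling of an incoming error notification.

    link_map maps URLs as written in our content to the URL we should
    actually use; owner_inbox collects links for a human to review.
    """
    if status == 301 and new_url:
        # A permanent move is safe-ish to update automatically,
        # though as argued above you should verify it first.
        link_map[old_url] = new_url
    elif status == 404:
        # Generic "not found": no replacement known, flag for review.
        owner_inbox.append(old_url)
```

Other statuses (410 Gone, say) could get their own branches as the system matures.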
All this is simple enough, but there are some more things to consider. Most important is authenticating these error notifications. If a stranger told you that the cinema really isn’t where you thought it was, and that you should really send people some place else, would you believe them? This stranger could be sent by a rival cinema, or a night club. Unless you know for sure, why should you believe them? This is the real-life equivalent of the ever-present problem of spam on the Web. It’s pretty simple to sort out, though; all you need to do is ask the stranger for an ID that proves they work for the cinema. If that is not enough (IDs can be faked, too), you could just go and visit the new address. As always, it is much easier in the computer world: you can use a reverse Domain Name System (RDNS) lookup on the IP address that sent the notification, or attempt to access the resource in question.
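A sketch of that RDNS check follows. The name comparison is the mechanical part; as argued above, a reverse-DNS match is only circumstantial evidence, and re-fetching the resource yourself is the stronger test:

```python
import socket
from urllib.parse import urlparse

def names_match(rdns_name, host):
    """Does a reverse-DNS name equal, or sit under, the claimed host?"""
    return rdns_name == host or rdns_name.endswith("." + host)

def notifier_looks_genuine(notifier_ip, claimed_url):
    """Weak authenticity check for an incoming notification.

    Reverse-resolve the notifier's IP and compare the result to the
    host named in the URL the notification is about.
    """
    host = urlparse(claimed_url).hostname
    try:
        rdns_name, _, _ = socket.gethostbyaddr(notifier_ip)
    except socket.herror:
        return False  # no reverse record at all: treat as unverified
    return names_match(rdns_name, host)
```

Failing this check shouldn’t mean dropping the notification outright, just not acting on it automatically.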
We know that we want to send a notification to a resource telling them there is a problem with the links they are providing. But there is a question of ownership. The ownership of domain.com/me may be entirely independent of domain.com/you. This raises the question of whether notifications should go to some global controller or directly to the resource in question. A global controller at the root of the domain has some advantages: much of the blogging community already uses domain-level controllers for things like trackback. However, if the ownership of subfolders on a domain is diverse then the global controller would have to know about each owner, or at least how to delegate to a sub-controller. Alternatively, the notification could go directly to the resource. In this case, however, notifications still have to be valid HTTP requests, which means the controller has to handle every resource on the web server, including for example its POST requests. This could be configured as some kind of global mask, but it starts to become more awkward. Realistically, it’s probably more sensible to try both and let the web decide which works best with plain ol’ market forces.
If a response at either the global or the resource level is not received then the other should be tried. If neither works then the erroneous resource should notify its own content owner to manually contact the referring resource. That way you’ll get notified when you are throwing 404s, even if the other guy isn’t fixing it. That isn’t to say you have to be notified for every 404 ever thrown. If a resource is only accessed once every few months it may not be economical to fix. There are a lot of options here about what we can do. I don’t expect the web will ever be perfect. It’s unreasonable to expect every web site owner to fix every human error that has ever occurred on their site, but if twenty people a minute are hitting a 404 or even a 301, someone should be doing something about it.
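That fallback order could look like the sketch below. Here `post` stands in for actually sending the notification (returning True when the target answers with the acceptance header), and using the domain root as the global controller’s address is an assumption on my part:

```python
from urllib.parse import urlparse

def notify(referrer_url, error_info, post):
    """Try the domain-level controller first, then the resource itself.

    Returning False means neither responded: fall back to telling our
    own content owner to contact the referring site manually.
    """
    parts = urlparse(referrer_url)
    root = "{}://{}/".format(parts.scheme, parts.netloc)
    for target in (root, referrer_url):
        if post(target, error_info):
            return True
    return False
```

Trying the cheap, cacheable global address first keeps the per-resource traffic down.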
All this assumes, of course, that errors are being used correctly. If someone is using a 301 to refer visitors to their homepage, then implementing this system could break all the links on your site. It is also feasible that someone might try to use it to stop you from deep linking. Deep linking is creating a link directly to a resource rather than to the homepage of a site. Some content owners have objected to, and even sued over, deep linking to their content – they argue it unfairly damages their revenue stream.
In summary, right now we don’t use half the power of our error pages because we only tell the visitors about the error, and not the people who sent them there in the first place. We have the technology to automatically update our content with fixes and optimizations, and notify content owners when an automatic fix isn’t possible. The only thing we need to do is go out there and do it!