REST brings new hope. We have a clean, uniform interface for clients and services to interact with one another. Tools like ActiveResource reinforce the idea that the interface to a distributed resource can be just the same as the interface to a local one. But which clients should interact with which services, and why, remain questions answered only by human intervention. What if we could find a layer of abstraction to help with this problem? Actually "solving" the problem might be the realm of AI fantasyland. But I think we can begin to chip away at it. And that's progress.
We've had something basic like this in the past. For example, back in the day, using the Internet meant finding a list of servers that implemented a particular protocol: gopher, IRC, etc. Interoperability occurred at the level of the protocol. Can we make interoperability happen at the level of the semantic description of a service with REST?
This is the thought that occurred to me while reading Richardson and Ruby's excellent RESTful Web Services. As they point out:
Computers do not know what "modifyPlace" means or what data might be a good value for "latitude".
Well, why not? A computer can certainly know what a good value for an email address is. We take it for granted that a compiler knows the difference between an integer and a string. Why not latitude? We just need one more layer of abstraction above what we have today before clients can consume services without having been programmed in advance.
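To see how little is actually missing, here's a minimal sketch in Python of the kind of semantic checking I mean. The names and rules are mine, not the book's; the point is only that "latitude" is no harder to pin down than "email address":

```python
import re

# We already treat email as a checkable semantic type.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def valid_email(value: str) -> bool:
    """A computer can already know what a good email value looks like."""
    return bool(EMAIL_RE.match(value))

def valid_latitude(value: float) -> bool:
    """So why not latitude? Degrees north or south of the equator."""
    return -90.0 <= value <= 90.0

assert valid_email("alice@example.com")
assert valid_latitude(37.77)
assert not valid_latitude(123.4)  # out of range: not a latitude
```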
All this prompts the question: how distributed do we want programming to be? I can imagine the ontology itself being a Web service. If a client did not know what latitude was, it could just look it up. What would the service provide? I can imagine lots: an ontology, validations, algorithms, related services.
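To make that concrete, here's a sketch of what such a lookup might feel like. Everything in it, the service URL, the endpoint, and the response shape, is hypothetical; no such service exists today:

```python
import json
from urllib.request import urlopen

# Hypothetical ontology service: a client that doesn't know what
# "latitude" means looks the term up and gets a machine-readable answer.
ONTOLOGY_SERVICE = "http://ontology.example.com/terms/"

def describe(term: str) -> dict:
    """Fetch a machine-readable description of a term."""
    with urlopen(ONTOLOGY_SERVICE + term) as resp:
        return json.load(resp)

# An imagined response carrying the pieces listed above: an ontology
# entry, a validation rule, and pointers to related services.
#
# describe("latitude") =>
# {
#   "definition": "Angular distance north or south of the equator",
#   "datatype": "decimal degrees",
#   "range": {"min": -90, "max": 90},
#   "related": ["http://ontology.example.com/terms/longitude"]
# }
```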
REST is real progress, because it simplifies things. Hopefully we'll be spending less time wiring together our Web service stacks and more time thinking about the big picture.