Andrew Wahbe
4 comments posted · 5 followers · following 0
13 years ago @ mca blog ... - mca blog [hypermedia a... · 0 replies · +1 points
Will try again later over email!
13 years ago @ mca blog ... - mca blog [hypermedia a... · 2 replies · +2 points
Adaptive controls are more powerful because they allow a browser to act as a mediator between two domains: the browser maps some application domain (defined by the hypermedia format and client) onto the uniform interface. For HTML browsers that domain is a GUI; for VoiceXML browsers it is a voice UI; for Atom clients it is CRUD. CRUD is so close (even though not identical) to the uniform interface that the mediation isn't very flexible; it's essentially a pass-through.

Another reason CRUD is a bad domain is that the messages received by the browser are commands (create, retrieve, update, delete) rather than events. Events (like a mouse-click event) have no associated behavioural constraints, so they can be mapped to anything. An Atom client, by contrast, is restricted to passing CRUD commands through to the associated HTTP methods on the named resources. Atom is only a small step beyond a spider (which is restricted to retrieve).

You seem to be trying to take the basic Atom model further via link relations. This has a fundamental flaw: it forces the client to adapt its own domain to the resource domain in order to interpret the hypermedia document and make its choices. Instead, the hypermedia format should be designed around the client's domain, in such a way that the client processes raised events rather than issued commands. That methodology yields a multitude of hypermedia formats catering to different classes of client, rather than one format to rule them all.
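The command/event distinction above can be sketched in a few lines of code. This is a hypothetical illustration, not any real hypermedia format: `FakeHttp`, the CRUD table, and the document structure are all made-up names. The point is that the command client's behaviour is fixed by the message itself, while the event client's behaviour is supplied by the hypermedia document, so the same client code can mediate any domain.

```python
# Hypothetical sketch of command-style vs event-style hypermedia clients.
# All names here are illustrations, not part of Atom, HTML, or HTTP.

class FakeHttp:
    """Records requests instead of sending them, for illustration."""
    def request(self, method, url):
        return (method, url)

# Command style (Atom-like): the message names the behaviour, so the
# client can only pass it through to the matching HTTP method.
CRUD_TO_HTTP = {"create": "POST", "retrieve": "GET",
                "update": "PUT", "delete": "DELETE"}

def handle_command(command, resource_url, http):
    return http.request(CRUD_TO_HTTP[command], resource_url)

# Event style (HTML-like): the event carries no behavioural constraints.
# The hypermedia document supplies the mapping, so the client acts as a
# mediator between its own domain and the uniform interface.
def handle_event(event_name, document, http):
    method, target = document[event_name]   # control chosen by the server
    return http.request(method, target)

http = FakeHttp()
# The command client is locked to the uniform interface:
print(handle_command("update", "/orders/1", http))       # ('PUT', '/orders/1')
# The event client maps the same "click" to whatever the document says:
doc = {"submit-clicked": ("POST", "/orders/1/payment")}
print(handle_event("submit-clicked", doc, http))         # ('POST', '/orders/1/payment')
```

Note that `handle_event` contains no application knowledge at all; swapping the document swaps the application, which is the flexibility the command client gives up.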
13 years ago @ mca blog ... - mca blog [HTTP is not ... · 0 replies · +3 points
Maybe Mike's implying something with the use of the word "routine" that I'm missing... not sure...
13 years ago @ mca blog ... - mca blog [HTTP is not ... · 2 replies · +3 points
BUT... RPC protocols are usually accompanied by mechanisms for defining endpoint "types" with associated protocol constraints. That is, I can define an endpoint type that supports two methods, foo and bar, and I can document constraints on those methods that clients can depend on. We can think of RPC as a mechanism for defining protocols. In fact, you could envision using an RPC mechanism to define something similar to HTTP (similar to what was done with WS-Transfer).
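A rough sketch of that idea, using Python abstract base classes as a stand-in for an RPC interface-definition mechanism. The class and method names are hypothetical; the second type is only loosely modelled on WS-Transfer's get/put/delete operations. The point is that the same type-definition machinery can define both an arbitrary foo/bar endpoint and an HTTP-like uniform interface.

```python
# Hypothetical sketch: RPC endpoint "types" as protocol definitions.
from abc import ABC, abstractmethod

class FooBarEndpoint(ABC):
    """An endpoint type supporting exactly two methods, foo and bar,
    whose documented constraints clients can depend on."""
    @abstractmethod
    def foo(self, arg): ...
    @abstractmethod
    def bar(self, arg): ...

class TransferEndpoint(ABC):
    """An HTTP-like uniform interface defined with the same machinery
    (loosely in the spirit of WS-Transfer)."""
    @abstractmethod
    def get(self): ...
    @abstractmethod
    def put(self, representation): ...
    @abstractmethod
    def delete(self): ...

class Document(TransferEndpoint):
    """A concrete resource implementing the HTTP-like endpoint type."""
    def __init__(self):
        self.state = None
    def get(self):
        return self.state
    def put(self, representation):
        self.state = representation
    def delete(self):
        self.state = None

doc = Document()
doc.put("hello")
print(doc.get())   # hello
```

Seen this way, HTTP is one particular endpoint type among many that an RPC definition mechanism could express, which is why "layering RPC over HTTP" and "layering RPC over an RPC-defined protocol" look similar.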
That's what I was getting at in the Twitter thread. Yes, calling HTTP "RPC" was incorrect. But I do see a similarity between layering RPC over HTTP and layering RPC over an RPC-defined protocol -- e.g. using WS-Transfer to exchange SOAP messages.
But the point you made that "HTTP is not designed to execute routines remotely" was the main thing I disagreed with. And I don't see how anything you've posted here supports that. Can you elaborate?