Andrew Wahbe

12 years ago @ mca blog ... - mca blog [hypermedia a... · 0 replies · +1 points

Yes, it's definitely not fun to work with... it just ate my very long response... :-(
I'll try again later over email!

12 years ago @ mca blog ... - mca blog [hypermedia a... · 2 replies · +2 points

Following down this path will limit you to spiders and will not yield new forms of browsers. There are two main types of hypermedia controls IMO: 1) adaptive controls and 2) referential controls. A tag can be either or both depending on how it is used (the type is a function of the format AND the client). <a>, when used in a browser, is an adaptive control -- it adapts a GUI interface to the network interface. The subject of affordance is not a resource but a piece of text on the screen (signalled via colour and underlining). The choice the user makes is between those pieces of text, and the action they can take is a click. The text itself allows the user to infer the probable outcome of their action. When <a> is used in a spider, the subject of affordance is the resource identified by the URI, and the actions that can be taken are the uniform-interface methods. The text and the rel value allow the spider to infer the probable outcome of the action.
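
To make the distinction concrete, here's a minimal Python sketch (all names and structures are hypothetical, not from any real browser or spider) of the two readings of the same <a> control -- a spider treating it as a referential control, and a browser treating it as an adaptive one:

    import urllib.request

    def http_get(href):
        # uniform-interface GET; stands in for the transfer step
        with urllib.request.urlopen(href) as resp:
            return resp.read()

    # Referential reading (spider): the subject of affordance is the resource
    # identified by the URI, and the available actions are the uniform-interface
    # methods; rel + text let the spider infer the probable outcome of a GET.
    def spider_follow(link):
        if link.get("rel") in ("next", "item"):
            return http_get(link["href"])

    # Adaptive reading (browser): the subject of affordance is the on-screen
    # text (colour, underlining); the user's action is a click, which the
    # browser maps onto the uniform interface.
    class Widget:
        def __init__(self, text):
            self.text = text
            self._handler = None
        def on_click(self, handler):
            self._handler = handler
        def click(self):  # simulate the GUI event
            return self._handler()

    def browser_render(link):
        widget = Widget(link["text"])  # drawn blue and underlined
        widget.on_click(lambda: http_get(link["href"]))
        return widget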

Adaptive controls are more powerful because they allow a browser to act as a mediator between two domains. The browser maps some domain (defined by the hypermedia format and the client) to the uniform interface. For HTML browsers that domain is the GUI, for VoiceXML browsers it is a voice UI, and for Atom clients it is CRUD. CRUD is close (though not identical) to the uniform interface, so the mediation isn't very flexible -- it's essentially a pass-through. Another reason CRUD is a bad domain is that the messages received by the browser are commands (create, retrieve, update, delete) instead of events. Events (like a mouse-click event) have no associated behavioural constraints, so they can be mapped to anything. An Atom client is restricted to passing CRUD commands through to the associated HTTP methods on the named resources; Atom is only a small step beyond a spider (which is restricted to retrieve).

You seem to be trying to take the basic Atom model further via link relations. This has a fundamental flaw: it forces the client to adapt its own domain to the resource domain in order to interpret the hypermedia document and make its choices. Instead, the hypermedia format should be designed around the client's domain, in such a way that the client processes raised events rather than issued commands. This methodology yields a multitude of hypermedia formats that cater to different classes of client, rather than one format to rule them all.
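
A rough sketch of the commands-vs-events point (made-up names, Python for illustration): a CRUD command dictates its own mapping, while a click event means only whatever the hypermedia control says it means:

    # An Atom-style client receives commands; each command fixes its own
    # pass-through mapping onto the uniform interface.
    CRUD_TO_HTTP = {"create": "POST", "retrieve": "GET",
                    "update": "PUT", "delete": "DELETE"}

    def atom_client(command, uri):
        return (CRUD_TO_HTTP[command], uri)  # nothing left to mediate

    # A browser receives events; an event carries no behavioural constraints,
    # so the format is free to map the same click to any request it describes.
    def browser_on_click(event, control):
        return (control["method"], control["href"], control.get("body"))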

12 years ago @ mca blog ... - mca blog [HTTP is not ... · 0 replies · +3 points

Well, I'm not sure if it was the 140-character limit, but it doesn't really make sense to me -- "executing routines remotely" is essentially the client-server REST constraint, so it would be strange if HTTP was not designed to do that.

Maybe Mike's implying something with the use of the word "routine" that I'm missing... not sure...

12 years ago @ mca blog ... - mca blog [HTTP is not ... · 2 replies · +3 points

At first glance, when you look at what's going over the wire, there isn't much difference -- you see request/response pairs and a method name. While HTTP takes a stream instead of input/output parameters, if your data stream consists of serialized name-value pairs, that looks the same too. Of course, the key difference is in what is constrained by HTTP vs. an RPC/RMI protocol -- HTTP constrains the method names (and the semantics of those methods) while RPC does not, yielding the benefits described in the paragraph you quoted.
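
For illustration, here are two hand-written wire messages (invented, not captured traffic) showing how similar the two can look -- a plain HTTP form post and an XML-RPC call, each a method name plus serialized name-value pairs:

    http_request = (
        "POST /orders HTTP/1.1\r\n"
        "Host: example.org\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        "\r\n"
        "item=42&qty=1"
    )

    xml_rpc_request = (
        "POST /rpc HTTP/1.1\r\n"
        "Host: example.org\r\n"
        "Content-Type: text/xml\r\n"
        "\r\n"
        "<methodCall><methodName>placeOrder</methodName>"
        "<params><param><value><int>42</int></value></param></params>"
        "</methodCall>"
    )
    # Both are request/response pairs carrying a method name and serialized
    # name-value pairs; the difference is which layer constrains the method
    # semantics (HTTP fixes them, RPC leaves them to the application).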

BUT... RPC protocols are usually accompanied by mechanisms for defining endpoint "types" with associated protocol constraints -- i.e., I can define an endpoint type that supports two methods, foo and bar, and I can document constraints on those methods that clients can depend on. We can think of RPC as a mechanism for defining protocols. In fact, you could envision using an RPC mechanism to define something similar to HTTP (similar to what was done with WS-Transfer).
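
A sketch of that idea in Python (the endpoint types and constraints here are invented for illustration): RPC machinery lets you define an endpoint "type" with documented per-method constraints, and the same machinery could define something HTTP-shaped, as WS-Transfer did:

    from abc import ABC, abstractmethod

    class FooBarEndpoint(ABC):
        """A hypothetical endpoint type: two methods plus documented
        constraints that clients can depend on."""
        @abstractmethod
        def foo(self, key: str) -> bytes: ...  # constraint: safe, no side effects
        @abstractmethod
        def bar(self, key: str, value: bytes) -> None: ...  # constraint: idempotent

    # The same mechanism can define an HTTP-like protocol, as WS-Transfer did:
    class TransferEndpoint(ABC):
        @abstractmethod
        def get(self, uri: str) -> bytes: ...
        @abstractmethod
        def put(self, uri: str, representation: bytes) -> None: ...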

That's what I was getting at in the Twitter thread. Yes, calling HTTP "RPC" was incorrect. But I do see a similarity between layering RPC over HTTP and layering RPC over an RPC-defined protocol -- e.g. using WS-Transfer to exchange SOAP messages.

But the point you made that "HTTP is not designed to execute routines remotely" was the main thing I disagreed with. And I don't see how anything you've posted here supports that. Can you elaborate?