I use an email client on my iMac. When it can’t get to the server, it still works. But sometimes, on a slow day with an unreliable network like the one I’m on right now, I don’t realize that I have mail waiting for me. The disconnect between my client-side logic and the server-side data camouflages the fact that the network isn’t working.
By contrast, when I use GMail’s web interface to read my mail, I know when I have new messages. Because Google controls the processing (on its servers) and the data (right next to them), the two are connected. No camouflage there: if the network sucks, I know it.
This is a recurring problem in Rich Internet Applications. Once, we knew a page was loading because the little Netscape logo swirled, or the little Internet Explorer logo rotated. Now, we don’t have those visual cues for many client-side apps. If I’m using a Flash-based client, I often don’t realize the server’s gone away for quite a while. By then, I may have done some local work.
Until cloud application developers learn to properly instrument and inform their users about network health, rather than sweeping problems under the rug, I prefer my application logic next to my stored data. I trust Google’s network between its app and its database more than I trust my own. And that’s why I like my data near my logic.
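As a rough illustration of the kind of instrumentation I mean, here is a minimal sketch of a client-side heartbeat monitor. The names (`HeartbeatMonitor`, `ping`, `onChange`) are hypothetical, not from any particular framework; the idea is simply that the client periodically checks the server and loudly tells the user when reachability flips, instead of hiding the failure.

```javascript
// Hypothetical sketch: a heartbeat monitor that surfaces network health
// to the user instead of sweeping outages under the rug.
class HeartbeatMonitor {
  // ping: async function resolving true if the server answered
  // onChange: callback fired whenever reachability flips
  constructor(ping, onChange) {
    this.ping = ping;
    this.onChange = onChange;
    this.online = true; // assume connected until proven otherwise
  }

  async check() {
    let reachable;
    try {
      reachable = await this.ping();
    } catch {
      reachable = false; // a thrown error counts as unreachable
    }
    if (reachable !== this.online) {
      this.online = reachable;
      this.onChange(reachable); // show the user, e.g. a banner or icon
    }
    return reachable;
  }
}
```

In a real browser client, `ping` might fetch a tiny health endpoint on a timer, and `onChange` might toggle an offline banner, so the user knows local work isn’t reaching the server.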