Why Tech Companies Should Celebrate MLK Day

January 17, 2011

In technology companies, coworkers and customers often span the globe, representing a spectrum of races, backgrounds, and nationalities. Look around your own US-based office and across the halls of your global offices, and you will probably see many races and nationalities represented at your company.

Martin Luther King Day was signed into law in 1983 and first observed as a United States federal holiday in 1986, commemorating the birthday of Rev. Dr. Martin Luther King Jr.

To some, this day is just another Monday, the beginning of another week. But to others, it is an extraordinary day. It’s a day that reminds us that the merit of our performance and the content of our character, not the color of our skin, are what matter. It’s a day that celebrates the courage and efficacy of non-violent activism. It’s a day that inspires us to be better citizens of the world and champions of justice. And it’s a day that reminds us that, privileged as we are to work in technology, our responsibility to support human rights and to combat inequality in all its forms (political, economic, racial) extends beyond our borders.

We should celebrate that we live in a country where such ideals matter. And we should nurture such ideals.

When individuals are inspired to improve their own character, your company as a whole is better off. A shared moral high ground lets you see vistas of opportunity that others do not. It inspires you and your employees to do their best in all facets of life and work. That vantage point becomes a competitive advantage as your company’s image is elevated in the eyes of your customers and of fellow technology workers. You will attract the best people if you have the best character.

Some of your competitors already celebrate MLK Day, giving employees the day off, and some even go so far as to customize their logos and home pages to commemorate it. That might be a token marketing ploy (flag wrapping) to curry goodwill, but artifice or not, it speaks volumes about the recognition, accountability, and stewardship some companies feel when it comes to inspiring potential and making a positive impact on public consciousness.

To be clear, Martin Luther King wasn’t perfect: there was womanizing and plagiarism, for sure. Lincoln, Jefferson, and Gandhi weren’t perfect either. But the ideals they all stood for (liberty, freedom, justice, and equality) are incontrovertible. So make your company one of the roughly one in three (as of 2007, 33% of companies celebrated MLK Day) that celebrate the indomitable righteousness of equality.

Today I am taking a float day to reread “Letter from Birmingham Jail” and to watch Dr. King’s epic “I Have a Dream” speech. I hope others do the same.

To be a great company, you need to recognize, fight for, and celebrate justice. And that movement begins by mounting small steps of liberty. If you are motivated by enlightened public interest, then once a year (at least), “let freedom ring” at your company.

Overlooked CAP paper by Eric Brewer

April 9, 2010

Thanks to Mike Stonebraker for an eloquent explanation of CAP considerations.

In recent discourse about the CAP theorem, I wonder if another of Eric Brewer’s important contributions, “Harvest, Yield, and Scalable Tolerant Systems” (co-authored with Armando Fox), has been overlooked. The 1999 paper describes strategies for dealing with CAP based on probabilistic availability and application decomposition, resulting in graceful degradation and partial failure modes. By decomposing an application into independent subsystems, for example, the failure of one subsystem can be isolated from the others, enabling the system at large to keep operating (possibly with reduced functionality).
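
To make graceful degradation concrete, here is a minimal sketch (my own construction, not code from the paper) of the harvest idea in Python: fan a query out to independent subsystems, tolerate per-subsystem failures or timeouts, and report the harvest, i.e., the fraction of the full answer actually obtained, alongside the partial result.

    import concurrent.futures

    def query_with_harvest(subsystems, request, timeout=0.5):
        # Fan the request out to independent subsystems. A failed or slow
        # subsystem reduces harvest (completeness) instead of failing the
        # whole query.
        pool = concurrent.futures.ThreadPoolExecutor()
        futures = [pool.submit(subsystem, request) for subsystem in subsystems]
        done, _ = concurrent.futures.wait(futures, timeout=timeout)
        pool.shutdown(wait=False)
        results = [f.result() for f in done if f.exception() is None]
        harvest = len(results) / len(subsystems)  # 1.0 means the full answer
        return results, harvest

Each “subsystem” here is just a callable standing in for a network call; a harvest of 1.0 is the complete answer, and anything less is a degraded but still useful partial result.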

Since individual subsystems have different requirements for what can be relaxed (one subsystem may prioritize CA while another may prioritize AP), decomposition also hints at the need for “tunable CAP”: the ability for system designers to choose where in their system-wide workflows to use CA or CP or AP. Not all activities (subsystems) in a complex workflow have the same data requirements; a billing subsystem might prioritize consistency, whereas a web recommendation subsystem might deprioritize consistency and permit eventually consistent, stale reads. Hence, being able to relax C or A or P at different places, times, and subsystems can yield an optimal system-wide design. To evoke another Stonebrakerism: “one size does not fit all”. Tunable CAP ideas have in fact already been incorporated, for several years now, into state-of-the-art data fabric products adopted by top banking institutions for scalable, continuously available, high-speed trading applications.
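
A hedged sketch of what “tunable CAP” might look like at the API level, borrowing the N/R/W quorum vocabulary of Dynamo-style stores (not the configuration model of any particular data fabric product): with N replicas, choosing read and write quorum sizes such that R + W > N gives consistent reads, while smaller quorums trade consistency for availability.

    N = 3  # replicas per partition

    # Per-subsystem tuning: billing demands strong consistency, while
    # recommendations tolerate stale reads in exchange for availability.
    TUNING = {
        "billing":         {"R": 2, "W": 2},  # R + W > N: consistent reads
        "recommendations": {"R": 1, "W": 1},  # R + W <= N: may read stale data
    }

    def is_strongly_consistent(subsystem):
        quorum = TUNING[subsystem]
        return quorum["R"] + quorum["W"] > N

    assert is_strongly_consistent("billing")
    assert not is_strongly_consistent("recommendations")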

As an approach to dealing with CAP issues, application decomposition also enables system designers to identify and exploit “orthogonal mechanisms” (Brewer, page 3) for non-invasive subsystem management. For example, data schema interdependence policies can become an orthogonal mechanism that lets users configure the requisite availability semantics for individual subsystems. The value is twofold. First, data interdependency policies can be declared at “compile-time” (system configuration time), enabling early detection of problematic or risky configurations that administrators can quickly correct. Second, at “run-time”, process survivability decisions can be guided by the rational logic of schema data dependencies rather than by the randomness of primary network partitioning models (where the losing “side” of a partition gets killed arbitrarily).
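
As a purely hypothetical illustration of the compile-time idea (the policy names here are mine, not Brewer’s): declare each subsystem’s data dependencies and survivability requirements up front, then reject configurations where a subsystem that must survive partitions depends on data owned by one that may not.

    # Declarative data-dependency policies, checked at configuration time.
    POLICIES = {
        "orders":  {"depends_on": ["billing"], "must_survive_partition": True},
        "billing": {"depends_on": [],          "must_survive_partition": False},
    }

    def validate(policies):
        problems = []
        for name, policy in policies.items():
            for dep in policy["depends_on"]:
                if (policy["must_survive_partition"]
                        and not policies[dep]["must_survive_partition"]):
                    problems.append(
                        f"{name} must survive partitions but depends on {dep}, which may not")
        return problems

    print(validate(POLICIES))  # flags the risky orders -> billing dependency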

And finally, a consequence of CAP and the use of application decomposition is that deconstructing a system into holistic, autonomous business chunks that communicate via message passing and are transactionally independent leads to a service “entity” abstraction à la Pat Helland’s “Life Beyond Distributed Transactions”. This service entity model, in turn, can be thought of as an Actor model for concurrent computation. To summarize:

CAP ==> Application Decomposition ==> Service Entities ==> Actor Model
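
A minimal sketch of the service-entity-as-actor idea: each entity owns its private state, is reachable only via message passing, and processes one message at a time, so no distributed transaction spans entities. Python’s standard queue and threading modules stand in for real middleware here.

    import queue
    import threading
    import time

    class Actor:
        # One entity: private state, a mailbox, one message at a time.
        def __init__(self):
            self.state = {}
            self.mailbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):
            self.mailbox.put(msg)  # the only way in: message passing

        def _run(self):
            while True:
                msg = self.mailbox.get()
                self.receive(msg)  # serialized: no locks needed on self.state

    class OrderEntity(Actor):
        def receive(self, msg):
            if msg["type"] == "add_item":
                self.state.setdefault("items", []).append(msg["item"])

    order = OrderEntity()
    order.send({"type": "add_item", "item": "widget"})
    time.sleep(0.1)  # give the daemon thread a moment to drain the mailbox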

Business Glue: Webhooks for the Enterprise

June 2, 2009

Jeff Lindsay of webhooks.org has single-handedly been championing webhooks across the Web 2.0-scape for years now. Responding to webhooks.org’s call to action to help spread the word about webhooks, I contacted Jeff a few weeks ago expressing my interest in helping blog about webhooks too. I think Jeff has pretty much got the pitch down: if you haven’t checked out his latest webhooks slide deck, it’s slick. My two-word definition is that webhooks are simply URL callbacks. I think of webhooks as the push-based integration counterpart to pull-based REST APIs (i.e., WORK, Webhook-Oriented Resource Knitting).

Many Web 2.0 use cases for webhooks are well known, including post-receive hooks for GitHub and PayPal Instant Payment Notification. My perspective on webhooks has been in the context of enterprise computing: using webhooks for internal business process automation as well as building webhooks into a realtime distributed enterprise middleware product targeted at Fortune 100 companies. For internal automation within my company, we’re using webhooks from our hosted tech support system, Zendesk, to connect customer help request events to our in-house Yammer stream. For our product customers (enterprise middleware buyers), the endless mandate is improving operational efficiency by linking together disparate islands of information and business logic that span multiple business domains and geographical regions. The business goal is not only to connect or integrate processes and data flows, but also to create what industry analysts call Event Driven Architectures (EDA). EDA essentially means realtime event processing: when any activity in the business transaction workflow occurs, invoke a webhook. In other words, “notify me when I should know something and send it to me so I can process it”.
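
In code, that slogan is nearly a one-liner. A minimal sketch, with a made-up subscriber URL and payload and the third-party requests library assumed, of firing a webhook when a business event occurs:

    import requests

    def on_support_ticket_created(ticket):
        # "Notify me when I should know something and send it to me":
        # POST the event to whoever registered interest in it.
        requests.post(
            "https://example.com/hooks/new-ticket",  # hypothetical subscriber URL
            json={"event": "ticket.created",
                  "id": ticket["id"],
                  "subject": ticket["subject"]},
            timeout=5,
        )

    on_support_ticket_created({"id": 42, "subject": "Login broken"})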

Webhooks are a natural fit for this crucial EDA event notification role because they can be instantly developed and deployed in any language using your URL service library of choice. Webhooks thus become a no-sweat mechanism for broadcasting important data changes and enterprise events between business stakeholders. Furthermore, the cascading flow of webhooks invoking webhooks creates a business “crazy glue” that can quickly “stick together” workflow processes and aggregate data resources.

So why aren’t the goals of realtime business already met by existing middleware solutions like SOA, Enterprise Service Buses, Enterprise Information Integration, or Enterprise Application Integration? The basic problem with these approaches is simply their girth: their cost and complexity make them far too heavyweight for rapid experimentation and ubiquitous adoption. Bulky, heavy solutions are expensive and time-consuming to prototype, much less integrate, because they require massive retrofitting of existing IT infrastructure. Rather than wholesale re-plumbing, a non-invasive “pointcut” on existing infrastructure is needed. Virtually invisible, a webhook can be “dropped in” quickly to link business processes and data resources with minimal programming design, effort, and maintenance.

To be fair, from a scalability and flexibility viewpoint, making webhooks truly as effortless as squeezing a drop of crazy glue between pipes in a business workflow requires a webhook framework that takes cues from well-proven asynchronous distributed messaging protocols like JMS, MQ, or AMQP. Without a flexible and efficient publish/subscribe routing mechanism, casual, ad-hoc synchronous invocation and assembly of webhooks could lead to brittle, one-off point-to-point couplings. To avoid “pouring concrete on business processes” with baked-in webhook couplings, what is needed is a messaging-inspired code execution architecture that lets new webhooks be flexibly and dynamically added as URL invocation endpoints triggered by arbitrary enterprise events.

Adoption of webhooks inside (and outside) the enterprise can thus be accelerated by a “hookcloud”: a logical enterprise-wide (or Web-wide) cloud that serves as a clearinghouse for matching up enterprise activities/events (via topics) with webhook subscribers. Invocation of webhooks occurs asynchronously to permit loose coupling between business processes, avoid workflow deadlock and livelock, and let webhooks run at independent speeds. Furthermore, a scalable hookcloud needs to anticipate failure: distributed systems theory teaches us that in near-infinite scaling architectures, failure detection of processes, message timeouts, and retry settings become vital components for enabling and simplifying idempotent webhook processing. Jeff’s scriplets.org and Hookah are a great start. A worthwhile extension might be a webhook-based pubsub messaging cloud with message selector event filtering.
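
Here is a rough sketch of that hookcloud idea, assuming only the Python standard library plus requests (none of this reflects Hookah’s actual design): subscribers register a URL against a topic, events fan out asynchronously, and each delivery carries an idempotency key and is retried with backoff so receivers can safely deduplicate redelivered events.

    import threading
    import time
    import uuid

    import requests

    SUBSCRIBERS = {}  # topic -> list of webhook URLs

    def subscribe(topic, url):
        SUBSCRIBERS.setdefault(topic, []).append(url)

    def publish(topic, payload):
        event_id = str(uuid.uuid4())  # idempotency key for receivers
        for url in SUBSCRIBERS.get(topic, []):
            # Asynchronous fan-out: publishers never block on subscribers.
            threading.Thread(target=_deliver, args=(url, event_id, payload)).start()

    def _deliver(url, event_id, payload, attempts=3):
        for attempt in range(attempts):
            try:
                response = requests.post(url, json=payload,
                                         headers={"X-Event-Id": event_id},
                                         timeout=5)
                if response.ok:
                    return
            except requests.RequestException:
                pass  # subscriber down: anticipate failure and retry
            time.sleep(2 ** attempt)  # exponential backoff between attempts

    subscribe("order.shipped", "https://example.com/hooks/notify-customer")
    publish("order.shipped", {"order_id": 1001})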

Speaking of cloud-based resources, let’s rally to get cloud databases like Amazon SimpleDB, Microsoft SQL Data Services, and Google App Engine (BigTable) to make webhook trigger callouts when data changes. Nice to see Google Wave robots are using webhooks!

Multi-user document versioning enables the realtime Web

May 29, 2009

I had the pleasure of speaking with data management visionary Pat Helland earlier in the week about scalable distributed data management. Since then, I’ve been seeing the world through a lens of loosely coupled distributed workflow and have started wondering if a pattern is emerging on the Web: the use of multi-user document versioning to enable the realtime Web.

One of the most familiar use cases of multi-user document versioning, one programmers are certainly aware of, is source code control. CVS, SVN, and Git all do the paramount job of managing source code repositories, making sure that one person’s changes are not stomped on by another’s. They archive the history of source code documents and detect conflicts in source files. From a database practitioner’s viewpoint, these source code control systems are very much transactional databases. When a user commits a file, the user is apprised of line-by-line source code conflicts, if any. If there are conflicts, the user must resolve them, potentially rolling back their own changes, and then retry the commit operation. With only a handful of programmers and the typically bulk nature of source code repository commits, the expense of handling these “transaction conflicts” is usually manageable.

But what happens when the source files, or more generally any document, are shared simultaneously by many people who all might make changes very quickly? What happens, for example, in a groupware scenario (a.k.a. Computer-Supported Cooperative Work) where many users simultaneously edit and read a shared document? Increased concurrent activity means more opportunity for conflicts. And if each document mutation is done under a pessimistic locking model, blocking introduces user-visible responsiveness latencies.

To mask latency, optimistic concurrency control, i.e., multi-version concurrency control, avoids blocking on reads at the cost of potentially incurring more rollback work in cases of conflict, e.g., more work must be undone and more apologies must be made. With the increasingly distributed (e.g., client/server or cloud-based) nature of computing, adoption of multi-version concurrency mechanisms is growing, since clients can perform local modifications immediately rather than waiting on high-latency communication with servers.
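
The optimistic pattern boils down to a compare-and-swap loop. A minimal single-process sketch (my own, with a version counter playing the role the commit history plays in CVS/SVN/Git):

    def optimistic_commit(store, key, mutate, retries=5):
        # Read a version, apply the change locally without blocking, then
        # commit only if nobody else committed in between. A real system
        # would perform the check-and-swap atomically.
        for _ in range(retries):
            version, value = store[key]
            new_value = mutate(value)      # local, latency-free edit
            if store[key][0] == version:   # conflict check at commit time
                store[key] = (version + 1, new_value)
                return new_value
            # Conflict: discard our change ("apologize") and retry.
        raise RuntimeError("too many conflicts")

    store = {"doc": (0, "hello")}
    optimistic_commit(store, "doc", lambda text: text + " world")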

Being optimistic is great because it eliminates latency; however, the amount of concurrency control management is not fundamentally altered. There is still opportunity for conflict; it’s just that, compared to pessimistic approaches, the discovery of conflicts occurs at the end (rather than the beginning) of a workflow. It would be nice to scale the number of concurrent participants making frequent commits to a shared document while masking, i.e., automating, more of the concurrency management. In other words, it would be nice if collaborators on a shared document could enjoy a rich, live, realtime user experience and be spared as much of the drudgery of rollback as possible.

Patterns, idioms, and topologies for such scalable document management seem to be appearing. In particular, concurrency semantics seem to be evolving from simple reasoning semantics like linearizability to sophisticated domain-specific languages that lay down strict parameters and rules for what, when, and how concurrent operations are managed and resolved in a given domain, e.g., group editing of a shared text file. One benefit of parameterization is that it articulates the rules of the game for concurrency, creating a well-understood playing field for multi-user operations. In other words, users become aware of where their workflow boundaries intersect with other concurrent user workflows.

Most important, parameterization of domain-specific concurrency idioms can define how competing (conflicting) operations are transformed and composed into operations that, when applied by each participant independently, create a sensible shared outcome. This enables automatic system reconciliation: two conflicting operations can be “transformed” into operations that can be applied at a client and a server independently so that active participants see the same agreed-upon semantic document changes. Participants see their local changes immediately, with no latency, while server changes are applied back asynchronously. A groupware editing environment can thus build tools and semantics that define how concurrent operations are combined to make sense for multiple users. As a trivial idiom, in a source code file, two concurrent modifications each occurring in a separate method within the same module are merged without conflict. In a groupware text editing scenario, two concurrent modifications (e.g., character insertions) in separate paragraphs do not conflict and can thus be rendered concurrently (without delay) to multiple users.

A great example (literally yesterday’s news) of more advanced concurrency parameterization and multi-user versioning is Google Wave. Google Wave extends Operational Transformation (OT) to create a rich, live collaborative editing experience: “The result is that Wave allows for a very engaging conversation where you can see what the other person is typing, character by character much like how you would converse in a cafe.” Google’s OT defines which mutation operations are legit on a document and how two concurrent operations are morphed and reconciled, creating a de facto domain-specific language for collaborative editing. Operational transformation mutations include skip, insert characters, insert elements, delete characters, set attributes, etc. These mutation components define and parameterize scopes for concurrency. Furthermore, Wave defines a composition operation that enables two operations to be composed into a single composite operation, which enables conflation of operations.
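
The heart of OT is the transform function itself. Below is a minimal sketch of the classic character-insert case (my own toy version, not Wave’s implementation): given two inserts made concurrently against the same base document, each side transforms the other’s operation before applying it, and both converge to the same text.

    def transform_insert(p_mine, p_theirs):
        # Shift my insert position right if their concurrent insert landed
        # at or before mine. (Equal positions need a site-ID tiebreak,
        # omitted here for brevity.)
        return p_mine + 1 if p_theirs <= p_mine else p_mine

    def apply_insert(doc, pos, ch):
        return doc[:pos] + ch + doc[pos:]

    base = "abcd"
    op_a = (1, "x")  # client A: insert "x" at position 1
    op_b = (3, "y")  # client B: insert "y" at position 3, concurrently

    # A applies its own op, then B's op transformed against A's:
    doc_a = apply_insert(apply_insert(base, *op_a),
                         transform_insert(op_b[0], op_a[0]), op_b[1])
    # B applies its own op, then A's op transformed against B's:
    doc_b = apply_insert(apply_insert(base, *op_b),
                         transform_insert(op_a[0], op_b[0]), op_a[1])

    assert doc_a == doc_b == "axbcyd"  # both replicas converge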

For a glimpse of what Google Wave collaboration might be like, try out EtherPad. EtherPad is made by the folks (former Google employees, I think) at Appjet. Appjet is a slick platform-as-a-service that enables server-side JavaScript, streamlining application development from client to server (since all you need is one magical language called JavaScript).

Finally, because document space within a single, centralized physical repository will ultimately overflow, there is a need to delegate, shard, and partition the global document and history space among multiple machine resources, i.e., a distributed system of resource nodes that can scale with the document and user versioning load. Google Wave prescribes such a federation protocol and data model to accommodate scale-up of the “wavespace”.

Git as a Google Wave server?

So let’s say you’re looking for a federated, home-based document repository that tracks fine-grained mutations for rollback and time travel as needed, has well-defined user idioms for managing and resolving conflicts, does efficient compression of deltas, and is very scalable and fast. To me this sounds like Git, and thus I’m wondering if Git could be a natural as a Google Wave server! I’m not sure if this will work, but if you’re interested, I’ve started prototyping a “gitwave” server and welcome help and feedback on this idea.
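
To make the thought experiment concrete, here is a rough sketch, using stock git commands via subprocess (the file names and one-mutation-per-commit granularity are just my assumptions), of treating a wave as a repository where each applied mutation becomes a commit, so history, deltas, and rollback come for free:

    import pathlib
    import subprocess

    def git(repo, *args):
        subprocess.run(["git", "-C", str(repo), *args], check=True)

    def init_wave(repo):
        pathlib.Path(repo).mkdir(exist_ok=True)
        git(repo, "init")
        git(repo, "config", "user.email", "gitwave@example.com")  # placeholder identity
        git(repo, "config", "user.name", "gitwave")

    def apply_mutation(repo, doc, new_content, description):
        # One OT mutation == one commit: fine-grained history gives us
        # rollback and "time travel" over the wave for free.
        (pathlib.Path(repo) / doc).write_text(new_content)
        git(repo, "add", doc)
        git(repo, "commit", "-m", description)

    init_wave("wavespace")
    apply_mutation("wavespace", "wave1.txt", "hello", "insert characters 'hello'")
    apply_mutation("wavespace", "wave1.txt", "hello world", "insert characters ' world'")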

One final thought

Looking at the Google Wave OT document mutation operations (a toy interpreter for a few of these follows the list):

skip
insert characters
insert element start
insert element end
insert anti-element start
insert anti-element end
delete characters
delete element start
delete element end
delete anti-element start
delete anti-element end
set attributes
update attributes
commence annotation
conclude annotation
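
As a toy illustration, entirely my own sketch, of the cursor-based flavor of these components (characters only; element, attribute, and annotation operations omitted), here is a tiny interpreter that applies a mutation sequence to a string document:

    def apply_ops(doc, ops):
        # Cursor-based interpretation: each component either advances the
        # cursor ("skip"), splices text in, or removes text at the cursor.
        out, cursor = doc, 0
        for op, arg in ops:
            if op == "skip":
                cursor += arg
            elif op == "insert characters":
                out = out[:cursor] + arg + out[cursor:]
                cursor += len(arg)
            elif op == "delete characters":
                out = out[:cursor] + out[cursor + len(arg):]
        return out

    # "Hxllo world" -> "Hello world"
    print(apply_ops("Hxllo world", [("skip", 1),
                                    ("delete characters", "x"),
                                    ("insert characters", "e")]))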

If we replace “character” with “object”, it’s interesting to see what sort of “object graph” semantics appear. In particular, the concept of object sequences, or what have been called arrables, emerges. Unlimited “style attribute runs” could then be applied over arbitrary sequences as ad-hoc schema markup tags. I’m wondering if OT + Git = temporal database?

