Monday, June 15, 2009

Cloud on Opensource

Nowadays cloud computing is a buzzword and is becoming a popular model of IT service. Everyone talks about the benefits and business agility of enterprises that use services offered through cloud computing, as opposed to conventional in-house hosted applications. Generally we term the three famous ‘aaS’s (IaaS, PaaS and SaaS) collectively as cloud computing. The business model of all three works in a similar manner: the user is charged either based on usage or on a monthly subscription. In either case the user need not bear the purchase and maintenance cost of the IT assets through which they enjoy the required service. Most analysts and architects believe that this model encourages enterprises to adopt business changes faster and helps them improve their business processes by leveraging these new services. As we can see, the main attraction of this model is near-zero initial cost and the ability to scale as required. Though it is in an early stage of evolution, most analysts unanimously vote it the next generation of IT service.
All these days we have had another strong business model, the open source model, which evangelizes the freedom to replicate and scale without additional cost. Open source product vendors charge users for maintenance service and technical help rather than a license fee. The open source philosophy stands for the freedom of the user to use, modify and distribute his favorite software without any copyright issue. This is indeed a good business model for customers, as they need not pay extra for initial setup or for distributing the software to other machines. More importantly, there is no vendor lock-in.
In the initial days most programs were single-user based, and hence a program implicitly meant the executable as well as the data used by it. But there was a paradigm shift once networked services appeared on the horizon. In a networked service, whether a simple web application or a complex ERP process, the program runs on the server and the users ‘use’ the software through the permissible interfaces, most commonly a web browser.
The introduction of cloud computing extends this use-and-pay model. People like Richard Stallman and some other open source philosophers went to the extreme of labeling cloud computing a sin and protested that it traps the user in a vendor lock-in. Other groups, like Tim O’Reilly’s, believe this is a natural evolution of the open source model: open source is about enabling innovation and re-use, and at their best, cloud computing implementations can be bent to serve those same aims. Though we may not be able to predict the future, it is interesting to see if there is any common space where both these models can complement each other and converge for a better user experience. As we saw earlier, most cloud computing implementations are at an initial stage and still lack many features and standards, which hinders enterprises from adopting this model. Most users worry about the safety of their critical data in the cloud environment. What happens to my data if the provider shuts shop and runs away? What happens to the program if the platform and/or the framework changes? How can I change the provider if I am unhappy with the current one? I think we can answer these questions by applying the same open source philosophy.
First and foremost is the adoption of open source platform stacks in a cloud computing implementation. This not only allows the platform to be replicated but also reduces the overall cost and hence the user fee. Google App Engine is an example of this: it provides Java- and Python-based application frameworks for users to develop their applications and deploy them on the cloud. Another available open source framework that helps to create a cloud environment is EUCALYPTUS (Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems). The current interface to EUCALYPTUS is compatible with Amazon's EC2 interface, but the infrastructure is designed to support multiple client-side interfaces. EUCALYPTUS is implemented using commonly available Linux tools and basic web-service technologies, making it easy to install and maintain. This approach of abstracting and providing open interfaces to the user enables hassle-free movement between providers.
Ensuring the use of the AGPL (GNU Affero General Public License), which is designed specifically for networked services, guarantees that a user of a particular service is allowed to get the source code of the software from a publicly accessible server. This reduces the fear of the provider closing its shutters: even in that case we can get the code and host the service somewhere else. But the data and the other collaborating users/services still remain a problem. The main issue with data is the format in which it is stored for the use of the program. Providing APIs, tools and open standards to retrieve the data could minimize this issue. Data published this way is known as open knowledge, and such services can be classified as open software services. The definition of open knowledge is available at
http://opendefinition.org/1.0. The open microblogging site “http://identi.ca” has almost achieved openness in code as well as in knowledge. You can freely download the data and the code from their server and set up your own service if required. It also uses open standards like the Open Microblogging protocol (http://openmicroblogging.org/) and OpenID for authentication, so it is easy to collaborate with other communities and avoid any vendor lock-in. The non-profit organization named the Open Cloud Consortium (OCC) is a big step towards making open standards and frameworks for cloud computing.
We can see that cloud computing has borrowed from open source in terms of its governing principles, which could well be open source's lasting contribution to the cloud.

Tuesday, September 11, 2007

Federated Trust and SAML

The word ‘security’ is an important one in the IT arena. Enterprises spend millions of dollars implementing security measures for their wide range of IT assets, from physical security to application security. Authentication is a basic requirement for access control. It can be a sophisticated biometric strategy or a simple application-level user ID/password combination. What exactly is authentication? I consider it the process of recognizing the user.

Historically software applications, whether custom developed or enterprise applications, incorporated an authentication module in themselves. The user ID and password (probably a digested password) is challenged against a store. The store could be an LDAP server or a simple database. Modern applications use declarative authentication via a Pluggable Authentication Module (PAM). Even though this is relatively easy, every application in an enterprise needs to implement its own authentication module. If these applications need to converse among themselves, or the user needs to navigate from one application to another, he has to log in to each respective application, which creates inconvenience and reduces the user experience of the participating applications. Then a new philosophy appeared on the horizon: ‘Single Sign-On’. This concealed the individual authentication process from the user and gave him an experience of smooth and seamless navigation from one application to another within an enterprise.

Now a new thought came into the picture. Why do we need to authenticate the user in each and every application in an enterprise? Can’t we trust our fellow applications? If a user is authenticated against one of the applications in a trustworthy environment like an enterprise, why does another application need to validate the user again? These questions guided the architect community to the very thought of federated trust. In this model, all or some of the applications in an enterprise form a trusted group; each application in this group considers its fellow applications as trusted. They keep the digital certificate, or some other means of proof, of the trusted applications.

If a user is authenticated against application A and wants to talk to application B from A, then application A sends a detailed covering letter about the user to B on its letterhead. Seeing the covering letter, B verifies whether it is really from A, and if so, it assumes the user is valid without a second authentication. In the digital world, the most preferred covering letter is in the form of a SAML message. Security Assertion Markup Language (SAML) is an XML-based protocol for exchanging authentication and authorization information. It is standardized by OASIS, and the latest version of the standard is 2.0. A SAML message can carry three important kinds of statements: authentication statements, authorization decision statements and attribute statements.


  • Authentication statements assert to the service provider that the principal did indeed authenticate with the identity provider at a particular time using a particular method of authentication.

  • Authorization decision statements assert that a subject is permitted to perform an action.

  • Attribute statements assert that a subject is associated with certain attributes. An attribute is simply a name-value pair. Relying parties use attributes to make access control decisions.

The SAML message is signed and sent across to the destination. By validating the signature of the message, the target application decides whether the message is from a trusted source or not.
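To make this concrete, a trimmed SAML 2.0 assertion carrying an authentication statement and an attribute statement might look like the following. The issuer URL, subject, attribute and timestamps are illustrative, and the XML signature that the receiving application validates is omitted for brevity:

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                ID="_a75adf55" Version="2.0"
                IssueInstant="2007-09-11T09:22:05Z">
  <saml:Issuer>https://applicationA.example.com</saml:Issuer>
  <!-- <ds:Signature> of application A would appear here -->
  <saml:Subject>
    <saml:NameID>jdoe</saml:NameID>
  </saml:Subject>
  <saml:AuthnStatement AuthnInstant="2007-09-11T09:22:00Z">
    <saml:AuthnContext>
      <saml:AuthnContextClassRef>
        urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
      </saml:AuthnContextClassRef>
    </saml:AuthnContext>
  </saml:AuthnStatement>
  <saml:AttributeStatement>
    <saml:Attribute Name="role">
      <saml:AttributeValue>manager</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
```

Application B checks the signature against A's certificate, then reads the statements to decide who the user is and what he may do.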

Monday, August 20, 2007

RESTful SOAP

Last week there was a big debate going on between the REST and SOAP camps in the office. Mails were flying both ways with arguments. It was very interesting, as both sides were right in their views. Here I would like to put forward my views on web services based on SOAP as well as REST.

REST is the acronym for the architectural style described as REpresentational State Transfer. In this philosophy, web services are viewed as resources that can be uniquely identified by their URLs. This is an extension of the current web architecture, and the style merges with the underlying HTTP protocol. As a matter of style, URLs need not be physical ones and need not reveal the implementation technique used; one needs to be free to change the implementation without impacting clients or having misleading URLs. The protocol method itself is used to denote the operation. For example, to retrieve a purchase order whose order ID is ‘abcd’, make an HTTP request using a GET operation. In this case, the requested URL would be
http://servicehost:8080/restfulwebservice-war/orderservice/abcd. Applications can consume a RESTful service either programmatically or through the browser. It is highly suitable for AJAX-based applications, as the integration is implicit.
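As a sketch of the programmatic route, a minimal Java client could issue that GET like this. The host, port and path are the hypothetical ones from the URL above, so the request is built but not actually sent:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class OrderClient {

    // Hypothetical service endpoint from the example above.
    static final String BASE =
            "http://servicehost:8080/restfulwebservice-war/orderservice";

    /** Builds the resource URL for a given order ID. */
    static String orderUrl(String orderId) {
        return BASE + "/" + orderId;
    }

    public static void main(String[] args) throws Exception {
        URL url = new URL(orderUrl("abcd"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // The HTTP verb itself names the operation: GET retrieves the order,
        // PUT would update it and DELETE would remove it.
        conn.setRequestMethod("GET");
        // conn.getInputStream() would fetch the representation; not invoked
        // here because the host is illustrative.
        System.out.println(conn.getRequestMethod() + " " + url);
    }
}
```

Note that the client needs nothing beyond the JDK: no stub generation and no service description, which is exactly the lightness the REST camp likes.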

SOAP-based services are popular and more numerous as far as implementations are concerned. They are more formal in nature, with various standards and description mechanisms. There is an evolving web service stack based on SOAP, grouped under the WS-* standards; it includes standards for security, transactions, reliable messaging, etc. SOAP-based services are also well defined using WSDL. SOAP is an application-level protocol and is transport independent, so the underlying transport layer is abstracted.
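For contrast with the purchase order example above, a bare-bones SOAP request for the same hypothetical order service might look like this. The operation and namespace are illustrative; note that the operation is named inside the message body rather than by an HTTP verb, which is what makes the message independent of the transport:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:ord="http://example.com/orderservice">
  <soap:Header>
    <!-- WS-* headers (security tokens, reliable messaging, etc.) go here -->
  </soap:Header>
  <soap:Body>
    <ord:getOrder>
      <ord:orderId>abcd</ord:orderId>
    </ord:getOrder>
  </soap:Body>
</soap:Envelope>
```

The same envelope could travel over HTTP, JMS or SMTP without change, at the cost of the extra XML around the payload.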

So which is better, SOAP or REST? As both camps back their style with valid points, it is confusing for a practitioner to choose. In REST the service leverages HTTP transport for accomplishing its tasks, but this is useful only for developing services over HTTP. Even though most current services are HTTP based, there are a large number of candidate services, especially asynchronous web services, that work better with other transports like JMS and SMTP. More than that, a basic architectural philosophy is to “abstract wherever possible”, and SOAP-based services very much obey this rule.

Another principle is to program against a well-defined contract. In the case of a SOAP-based service, WSDL gives a proper definition of the service, and consumers need only this to invoke it. I am not denying the existence of WADL, but it is not mature enough to compare with WSDL.

In the case of web service security, the REST camp argues that the existing server and network infrastructure can implement security, and that no message-level security is required. This is true when only a provider and a consumer are involved. But when more services participate in a business operation and the message contains various parts, each intended only for its respective service, the scenario is different. Here message-level encryption is the only way. In most enterprises this is the case, as point-to-point integration is fading out.

Another advantage the REST camp mentions is performance and the size of the data transfer. SOAP carries the overhead of the protocol, and hence the data size is much larger than that of its REST counterpart.


After hearing all this I could draw an analogy with a formal programming language like Java and a scripting language like Perl. Both have their own merits and demerits.
In an enterprise, integration of various applications is unavoidable. In such conditions, adhering to standards is most important, because individual applications should not be affected by changes in the implementation of other applications in the field. Here the vote goes to the more standards-based SOAP service. A large software application can be designed as a collection of loosely coupled modules, with the interaction between the modules made as service invocations. Here performance and ease of development are vital, and it is worth giving RESTful services a shot.

These are not competing technologies but complement each other. Both have their own space in the technology landscape, and the selection of either style should depend on the scenario.

Tuesday, August 07, 2007

Be Pessimistic!

Pessimism is not considered a flattering side of human nature, and people are often criticized for their skepticism. Is it all that bad? If we take it as being extra cautious, or being alert, the status of this trait may be elevated. But what is the relationship between pessimism and software architecture? Let me try to unwind this thread. A software architect, especially an enterprise architect, is the one who takes the technology-affecting decisions in an enterprise. His decisions can make or break IT projects and even affect the business. He takes the final call on the technology, software and infrastructure in an enterprise. The current technology landscape is crowded with products and technologies decorated with nice buzzwords. There could be fakes too, and it is very easy to be deceived by these traps. So a good architect should be vigilant and should not accept things at face value. He should really have a taste of it before serving it to the rest. This may not be possible in all cases, but any enterprise-wide product evaluation should undergo thorough inspection of all the alternative options against cost vs. benefit, and not merely in monetary terms but in all possible aspects.

The term SOA has been a popular mantra for the last few years, and some IT managers and architects have tried to make their projects SOA based. Service Oriented Architecture (SOA) is a very good architectural style for making IT assets reusable and loosely coupled. But quite a few projects adopted this style by making web services for everything, just for the sake of being the front runner. Some of these projects failed due to poor performance and evolving standards, which made them obsolete. The same was the fate of many J2EE projects in the late nineties, when EJB was a craze. An architect who aspires to success should approach technology evolution with a critic's nose.

I know a friend of mine who used to Google anything with the suffix ‘S..K’. Every coin has two sides!

Friday, August 03, 2007

Why do we need comments?

Yesterday I was reviewing some Java code, and it was more or less similar to the one given below.



//Assign i with the value 0

i=0;

//increments the variable i

i++;



Here each and every line is accompanied by some sort of comment. Some programmers believe that good programming practice is to write as many comments as you can. In the case of programming in machine language or assembly language, this is required and helpful for understanding the program. But does it make any sense in the case of contemporary high-level languages?

I certainly believe not. All modern programming languages are English-like and human readable. So if the code itself is meaningful, what is the need of repeating it again? I believe that the code itself is its best document. The benefits of practicing this are manifold: apart from saving a lot of human effort, the code looks clean and compact, which gives a macro picture of the logic at a glance.

So does it mean commenting a program is that bad? Not at all.
Comments are a good aid for the reader when given to a set of cohesive statements. They also help to visually separate such blocks, like paragraphs in an essay. Languages like Java and C# allow document-style comments, which are highly helpful for generating API documentation. It is good practice to document all classes and methods with API-document-style comments.
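For instance, a Javadoc comment on a method documents the contract once, instead of narrating each line. The method itself is a made-up example:

```java
public class InterestCalculator {

    /**
     * Calculates simple interest for a principal amount.
     *
     * @param principal the amount lent
     * @param rate      annual interest rate as a fraction (0.05 means 5%)
     * @param years     duration of the loan in years
     * @return the interest accrued over the full period
     */
    static double simpleInterest(double principal, double rate, int years) {
        return principal * rate * years;
    }

    public static void main(String[] args) {
        System.out.println(simpleInterest(1000.0, 0.05, 2)); // prints 100.0
    }
}
```

The body needs no comments at all, while `javadoc` can turn the header into browsable API documentation for free.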

Well-written code is like a poem: it can be appreciated even over the phone. So you can do a simple telephone test to find out the readability of your code. If the listener is able to understand the logic, then obviously you have passed the test!

Thursday, April 06, 2006

How Is Technology Accepted?
Building business success on technology is not easy—myths abound based on common sense,
tales told by those who have won, analogies to things like evolution, and appeals to
inventiveness and innovation. When we look closely at how technology is accepted and how
success is built on it, the picture is quite different, and the process of acceptance is both
lengthy and unpredictable. In this talk we’ll look at the myths and the realities, we’ll look at
many specific examples, and we’ll conjecture a set of principles that might work.