Wednesday, December 16, 2009

ASP vs SaaS: Old Wine in a New Bottle

Back in the dot-com boom, the concept of the Application Service Provider (ASP) was introduced as a new breed of software delivery. ASPs got the application-outsourcing business rolling by hosting third-party client-server or early web applications. This was a novel delivery model at a time when the industry knew only in-house applications. ASPs took full advantage of the internet to be miles away from the end user: applications were hosted somewhere on the internet and managed remotely, and users worked with them through web forms. Enterprises did not need to acquire or maintain the software; they could use it for a subscription fee. The model burst along with the dot-com bubble. Though it was a revolutionary business model, the immaturity of tools and standards was the real villain of the story; in other words, the idea was a forerunner of the technology. Now we have started talking about Software as a Service (SaaS) again, but with a basic difference: this time the foundation is stronger. Broadband connections are faster and cheaper, and web services and Web 2.0 standards are more mature and industry proven. These improvements make it possible to host applications that are on par with owned applications in terms of usability and speed. The other '*aaS' models also help achieve elasticity, which in turn leads to effective usage of the available resources. The emergence of multi-geo organizations has accelerated this paradigm shift as well.

Now let us discuss how they differ in outlook. Though the medium is the same (the internet), ASP targeted delivery, whereas SaaS focuses more on the service aspect of the product. An ASP was something like a middleman that packaged and hosted third-party applications in a data centre for the end customer; the original application may not have been designed for hosting at all. The new breed, SaaS, is designed specifically to be consumed as a hosted service. SaaS offerings are generic services, as opposed to the customer-specific, bulky applications of the ASP era; because ASP applications were customer specific, their maintenance cost was high. Another visible difference is billing granularity. ASP billed per server or per user, but SaaS can bill by CPU cycles or bytes transferred, which makes it much easier to scale a system.
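As a rough illustration of that difference in granularity, here is a minimal Java sketch contrasting an ASP-style flat per-user charge with a SaaS-style metered charge. All the class names and rates are hypothetical, made up purely for illustration.

// Hypothetical billing sketch: ASP-style flat billing vs SaaS-style
// metered billing. All names and rates are invented for illustration.
public class BillingExample {

    // ASP era: a flat fee per provisioned user per month.
    static double aspMonthlyCharge(int users, double feePerUser) {
        return users * feePerUser;
    }

    // SaaS era: charge for what was actually consumed.
    static double saasMonthlyCharge(long cpuSeconds, long bytesTransferred,
                                    double ratePerCpuSecond, double ratePerGb) {
        double gb = bytesTransferred / (1024.0 * 1024.0 * 1024.0);
        return cpuSeconds * ratePerCpuSecond + gb * ratePerGb;
    }

    public static void main(String[] args) {
        System.out.printf("ASP bill : $%.2f%n", aspMonthlyCharge(50, 20.0));
        System.out.printf("SaaS bill: $%.2f%n",
                saasMonthlyCharge(120_000, 5L * 1024 * 1024 * 1024, 0.0005, 0.10));
    }
}

Because the metered bill tracks actual consumption, a provider can let a tenant scale up or down without renegotiating a per-server contract.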

ASP never proved to be a grand success, though the concept was novel, whereas SaaS is gaining momentum in the industry. Let us see what changed the mindset of decision makers to embrace software services. I can see both financial and technological reasons behind this shift. The turbulence in the global economy is one compelling reason for CIOs to use a rented service to run the show. Another is ever-changing technology and continuous business optimization: every enterprise has to implement new business ideas rapidly to survive in the market, and building a quick solution from limited existing IT resources is near to impossible, so the solution is to rent a service. All of these developments, coupled with the IT outsourcing habit, have caused many CIOs to relinquish their company's IT assets to a SaaS provider, which was not the case a couple of decades ago.

Though the objectives and technology stacks differ, I still like to believe SaaS is a successor of ASP with technological and business-model improvements. At least conceptually, both give a similar outcome to the end user, though the usage patterns may differ. I would like to see more perspectives on this.

Friday, July 10, 2009

SPML in a cloud view

Security standards have always fascinated me, and no wonder the ones for cloud computing do too. Though I believe the future of computing is in the cloud, I am a bit skeptical about the security mechanisms in place. A standard way of provisioning is the first and foremost security measure that delights a cloud user, especially in the corporate sector. Service Provisioning Markup Language (SPML), a standard from OASIS, is for exchanging user information, resource information, and service provisioning information between systems. What follows is my first-hand feeling on the usage of SPML in the cloud, though not an expert opinion.

Let me start by asking the question: What is provisioning?
As per the OASIS Provisioning Services Technical Committee, provisioning is the automation of all the steps required to manage (set up, amend and revoke) user or system access entitlements or data relative to electronically published services. Before we get into the details of provisioning, let us take the scenario of an employee joining a company. In most modern enterprises he or she will be greeted with a set of documents, followed by a PC or laptop. Now the hard work of HR starts: setting up the working environment for the employee. It begins with credentials and a mail account; beyond that, based on the role, the employee may need access to various business applications in the enterprise. The earlier the better!
So now our HR executive is busy with a series of calls, accompanied by emails to the IT admin, saying that we have a new joiner who needs accounts set up and a PC. The IT service requires a set of details, such as last name and SSN, to create the account. Arguably, in some big enterprises this may be automated as part of a workflow process, and that is sufficient for an in-house IT setup. But consider an enterprise with services spread across the cloud, where each service may be hosted by a different cloud provider. This makes the situation of our HR rep really complex: he needs to ensure that everything is set up everywhere. The situation is even more dangerous when an employee resigns. It is really important that this user is de-provisioned (who knew there was a word 'deprovision'!) the minute he leaves the organisation; otherwise the organisation's assets could be in danger. So we need a standards-based automatic provisioning system wherever we live in a heterogeneous IT ecosystem. Here comes the importance of SPML: it provides a standard for securely communicating provisioning details between various applications and services.
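To get a feel for the wire format, here is a trimmed, illustrative SPML 2.0 add request for the new-joiner scenario, held in a Java text block. It is simplified and not schema-valid as written: the payload under <data> is something the parties agree on, and real requests travel to the provider over a binding such as SOAP.

// Illustrative only: a trimmed SPML 2.0 addRequest. The payload under
// <data> is assumed here; real requests follow the OASIS schema.
String addRequest = """
    <addRequest xmlns="urn:oasis:names:tc:SPML:2:0" requestID="hr-req-001">
      <containerID ID="ou=Employees,dc=example,dc=com"/>
      <data>
        <employee lastName="Smith" role="analyst"/>
      </data>
    </addRequest>
    """;

De-provisioning the leaver maps to a corresponding deleteRequest against the same standard.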

In SPML theory, a provisioning system contains three essential components: a Requesting Authority (RA), a Provisioning Service Provider (PSP), and a Provisioning Service Target (PST). A minimal sketch of how they interact follows the definitions below.

Requesting Authority (RA): In a typical provisioning system the RA is the client. The RA creates well-formed SPML documents and sends them to the PSP. These requests describe an operation to be performed at the PSP end.
Provisioning Service Provider (PSP): The component that listens for and processes well-formed SPML documents.
Provisioning Service Target (PST): The target is the actual software or application on which the action is taken.
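Here is that sketch in Java. Every interface and method name below is hypothetical, invented only to show the flow of a request from the RA through the PSP to its targets; a real deployment would exchange SPML XML documents instead of plain method calls.

// Hypothetical sketch of the SPML roles; all names invented.

// The PST is the system that actually holds the accounts
// (mail server, LDAP directory, a business application...).
interface ProvisioningServiceTarget {
    void createAccount(String userId);
    void deleteAccount(String userId);
}

// The PSP listens for provisioning requests and applies them to targets.
class ProvisioningServiceProvider {
    private final java.util.List<ProvisioningServiceTarget> targets;

    ProvisioningServiceProvider(java.util.List<ProvisioningServiceTarget> targets) {
        this.targets = targets;
    }

    // In real SPML these arrive as addRequest/deleteRequest documents.
    void handleAdd(String userId)    { targets.forEach(t -> t.createAccount(userId)); }
    void handleDelete(String userId) { targets.forEach(t -> t.deleteAccount(userId)); }
}

// The RA is the client, e.g. the HR system reacting to joiners and leavers.
class HrSystem {
    private final ProvisioningServiceProvider psp;

    HrSystem(ProvisioningServiceProvider psp) { this.psp = psp; }

    void onEmployeeJoined(String userId)   { psp.handleAdd(userId); }
    void onEmployeeResigned(String userId) { psp.handleDelete(userId); } // de-provision
}

The point of the standard is exactly this indirection: HR talks to one provider in one language, and the provider fans the change out to every target, however heterogeneous.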

Though the SPML standard has been around for a few years (too lazy to check the exact year!), its importance is augmented by the dawn of cloud computing.



Monday, June 15, 2009

Cloud on Open Source

Nowadays cloud computing is a buzzword and is becoming a popular model of IT service. Everyone talks about the benefits and the business agility of enterprises that use services offered through cloud computing, as opposed to conventional in-house hosted applications. Generally we term the three famous 'aaS's (IaaS, PaaS and SaaS) collectively as cloud computing. The business model of all three works in a similar manner: the user is charged either based on usage or on a monthly subscription. In either case the user need not bear the purchase and maintenance cost of the IT assets through which they enjoy the required service. Most analysts and architects believe that this model encourages enterprises to adopt business changes faster and helps improve their business processes by leveraging these new services. As we can see, the main attractions of this model are a near-zero initial cost and the ability to scale as required. Though it is in the early stage of its evolution, most analysts are unanimously voting it the next generation of IT service.
All these days we have had another strong business model, the open source model, which evangelizes the freedom to replicate and scale without additional cost. Open source product vendors charge users for maintenance service and technical help rather than a license fee. The open source philosophy stands for the freedom of users to use, modify and distribute their favorite software without any copyright issue. This is indeed a good business model for customers, as they need not pay extra for the initial setup or for distributing the software to other machines. More importantly, there is no vendor lock-in.
In the initial days most programs were single-user based, and hence a program implicitly meant the executable as well as the data used by the program. But there was a paradigm shift once networked services appeared on the horizon. In a networked service, whether a simple web application or a complex ERP process, the program runs on the server and the users 'use' the software through the permissible interfaces, most commonly a web browser.
The introduction of cloud computing extends this use-and-pay model. People like Richard Stallman and some open source philosophers went to the extreme of labeling cloud computing a sin, protesting that it traps the user in a vendor lock-in. Other groups, like Tim O'Reilly's, believe that this is a natural continuation of the open source model: open source is about enabling innovation and re-use, and at its best, cloud computing can be bent to serve those same aims. Though we may not be able to predict the future, it is interesting to see if there is a common space where both these models can complement each other and converge for a better user experience. As we saw earlier, most cloud computing implementations are at an initial stage and still lack many features and standards, which hinders enterprises from adopting this model. Most users worry about the safety of their critical data in the cloud environment. What happens to my data if the provider shuts shop and runs away? What happens to the program if the platform and/or the framework changes? How can I change providers if I am unhappy with the current one? I think we can answer these questions by applying the same open source philosophy.
First and foremost is the adoption of open source platform stacks in a cloud computing implementation. This not only allows the platform to be replicated but also reduces the overall cost and hence the user fee. Google App Engine is an example: it provides Java- and Python-based application frameworks for users to develop applications and deploy them on the cloud. Another open source framework that helps create a cloud environment is EUCALYPTUS (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems). The current interface to EUCALYPTUS is compatible with Amazon's EC2 interface, but the infrastructure is designed to support multiple client-side interfaces. EUCALYPTUS is implemented using commonly available Linux tools and basic web service technologies, making it easy to install and maintain. This approach of abstracting and providing open interfaces to the user enables hassle-free movement between providers.
Using the AGPL (GNU Affero General Public License), which is designed specifically for networked services, ensures that a user of a particular service can get the source code of the software from a publicly accessible server. This reduces the fear of the provider closing its shutters: even in that case, we can take the code and host the service somewhere else. But the data, and the other collaborating users and services, still remain a problem. The main problem with data is the format in which it is stored for the use of the program; providing APIs, tools and open standards to retrieve the data could minimize this issue. This is known as open knowledge, and such services can be classified as open software services. The definition of open knowledge is available at http://opendefinition.org/1.0. The open microblogging site http://identi.ca has almost achieved openness in code as well as in knowledge: you can freely download the data and the code from their server and set up your own service if required. It also uses open standards like the Open Microblogging protocol (http://openmicroblogging.org/) and OpenID for authentication, so it is easier to collaborate with other communities and avoid vendor lock-in. The non-profit organization named the Open Cloud Consortium (OCC) is a big step towards making open standards and frameworks for cloud computing.
We can see that cloud computing has borrowed from open source in terms of its governing principles, which could well be open source's lasting contribution to the cloud.

Tuesday, September 11, 2007

Federated Trust and SAML

The word 'security' is an important one in the IT arena. Enterprises spend millions of dollars implementing security measures for their wide range of IT assets, from physical security to application security. Authentication is a basic requirement for access control. It can be a sophisticated biometric strategy or a simple application-level user ID/password combination. What exactly is authentication? I consider it the process of recognizing the user.

Historically, software applications, whether custom developed or packaged enterprise applications, incorporated an authentication module in themselves. The user ID and password (probably a digested password) are challenged against a store; the store could be an LDAP server or a simple database. Modern applications use declarative authentication via a Pluggable Authentication Module (PAM). Even though this is relatively easy, every application in an enterprise needs to implement its own authentication module. If these applications need to converse among themselves, or the user needs to navigate from one application to another, he has to log in to each application separately; this creates inconvenience and reduces the user experience of the participating applications. Then suddenly a new philosophy appeared on the horizon: 'Single Sign-On'. This concealed the individual authentication processes from the user and gave him an experience of smooth, seamless navigation from one application to another within an enterprise.
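As a side note, here is a minimal sketch of what challenging a digested password against a store can look like, using the JDK's MessageDigest. The 'store' is a hard-coded map here, purely for illustration; a real system would use LDAP or a database, and would also salt the password before digesting it.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Map;

// Sketch: verify a password against a stored SHA-256 digest. Real stores
// are LDAP servers or databases, and real systems salt the password.
public class DigestAuthExample {

    static String sha256Hex(String s) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(md.digest(s.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> store = Map.of("alice", sha256Hex("s3cret"));

        String user = "alice", password = "s3cret";
        boolean ok = store.containsKey(user)
                && store.get(user).equals(sha256Hex(password));
        System.out.println(ok ? "authenticated" : "rejected");
    }
}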

Now a new thought comes into the picture. Why do we need to authenticate the user in each and every application in an enterprise? Can't we believe our fellow applications? If a user is authenticated against one application in a trustworthy environment such as an enterprise, why does another application need to validate the user again? These questions guided the architect community to the very thought of federated trust. Here, all or some of the applications in an enterprise form a trusted group: each application in the group considers its fellow applications trusted, and each keeps the digital certificates, or other proofs of identity, of the trusted applications.

If a user is authenticated against application A and wants to talk to application B from A, then application A sends B a detailed covering letter about the user, on A's letterhead. Seeing the covering letter, B verifies that it really came from A, and if so it assumes that the user is valid, without authenticating a second time. In the digital world, the preferred covering letter is a SAML message. Security Assertion Markup Language (SAML) is an XML-based protocol for exchanging authentication and authorization information; it is standardized by OASIS, and the latest version of the standard is 2.0. A SAML message carries three important kinds of statement, and an example assertion follows the list below.


  • Authentication statements assert to the service provider that the principal did indeed authenticate with the identity provider at a particular time using a particular method of authentication.

  • Authorization decision statements assert that a subject is permitted to perform a given action.

  • Attribute statements assert that a subject is associated with certain attributes. An attribute is simply a name-value pair; relying parties use attributes to make access control decisions.
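To make those statements concrete, here is a trimmed, illustrative SAML 2.0 assertion carrying an authentication statement and an attribute statement, held in a Java text block. The issuer and subject are made up, and IDs, validity windows, audience restrictions and the signature are omitted.

// Illustrative only: a trimmed SAML 2.0 assertion; real assertions also
// carry IDs, timestamps, conditions and an XML signature.
String assertion = """
    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
      <saml:Issuer>https://idp.example.com</saml:Issuer>
      <saml:Subject>
        <saml:NameID>alice@example.com</saml:NameID>
      </saml:Subject>
      <saml:AuthnStatement AuthnInstant="2007-09-11T10:00:00Z">
        <saml:AuthnContext>
          <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef>
        </saml:AuthnContext>
      </saml:AuthnStatement>
      <saml:AttributeStatement>
        <saml:Attribute Name="role">
          <saml:AttributeValue>manager</saml:AttributeValue>
        </saml:Attribute>
      </saml:AttributeStatement>
    </saml:Assertion>
    """;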

The SAML message is signed and sent across to the destination. By validating the signature of the message, the target application decides whether the message came from a trusted source.
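A sketch of that validation step, using the XML Digital Signature API that ships with the JDK (javax.xml.crypto.dsig), might look like the following. Loading the DOM document and obtaining the sender's trusted public key (for example, from its certificate) are assumed to happen elsewhere.

import java.security.PublicKey;
import javax.xml.crypto.dsig.XMLSignature;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMValidateContext;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Sketch: validate the XML signature on a received SAML message.
public class SamlSignatureCheck {

    static boolean isFromTrustedSource(Document doc, PublicKey trustedKey)
            throws Exception {
        NodeList nl = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");
        if (nl.getLength() == 0) {
            return false; // unsigned message: reject it outright
        }
        DOMValidateContext ctx = new DOMValidateContext(trustedKey, nl.item(0));
        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
        XMLSignature signature = fac.unmarshalXMLSignature(ctx);
        return signature.validate(ctx); // verifies digests and signature value
    }
}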

Monday, August 20, 2007

RESTful SOAP

Last week there was a big debate in the office between the REST and SOAP camps. Mails were flying both ways with arguments. It was very interesting, as both sides were right in their views. Here I would like to put forward my views on web services based on SOAP as well as REST.

REST is the acronym for the architectural style called REpresentational State Transfer. In this philosophy, web services are viewed as resources that can be uniquely identified by their URLs. This is an extension of the current web architecture, and the style merges with the underlying HTTP protocol. As a matter of style, URLs need not be physical and need not reveal the implementation technique used; one should be free to change the implementation without impacting clients or leaving misleading URLs behind. The protocol method itself denotes the operation. For example, to retrieve a purchase order whose order ID is 'abcd', make an HTTP request using the GET operation; the requested URL would be
http://servicehost:8080/restfulwebservice-war/orderservice/abcd. Applications can consume a RESTful service either programmatically or through the browser. It is highly suitable for AJAX-based applications, as the integration is implicit.
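For instance, a resource serving that URL could be written with JAX-RS annotations roughly as follows. This is a minimal sketch: the class name and the inline XML response are assumptions, and a real service would look the order up in a repository.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Minimal JAX-RS sketch: GET /orderservice/{orderId} returns the order.
@Path("orderservice")
public class OrderService {

    @GET
    @Path("{orderId}")
    @Produces("application/xml")
    public String getOrder(@PathParam("orderId") String orderId) {
        // Assumed behaviour: a real service would query an order repository.
        return "<order id='" + orderId + "'><status>OPEN</status></order>";
    }
}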

SOAP-based services are popular, and in terms of implementations they are greater in number. They are more formal in nature, with various standards and description mechanisms. There is an evolving web service stack based on SOAP, grouped under the WS-* standards; it includes standards for security, transactions, reliable messaging and so on. SOAP-based services are also well defined using WSDL. SOAP is an application-level protocol and is transport independent, so the underlying transport layer is abstracted.
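For comparison, the equivalent SOAP endpoint sketched with the JAX-WS annotations from javax.jws could look like this. The names are again invented, and the runtime generates the WSDL contract from the annotated class.

import javax.jws.WebMethod;
import javax.jws.WebService;

// Minimal JAX-WS sketch: the WSDL is generated from this class.
@WebService(serviceName = "OrderService")
public class OrderServiceSoap {

    @WebMethod
    public String getOrderStatus(String orderId) {
        // Assumed behaviour: a real service would query an order repository.
        return "OPEN";
    }
}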

So which is better, SOAP or REST? As both camps back their style with valid points, it is confusing for a practitioner to choose. In REST the service leverages HTTP transport to accomplish its tasks, but that makes it useful for developing services over HTTP only. Even though most current services are HTTP based, there is a large number of candidate services, especially asynchronous web services, that work better with other transports like JMS and SMTP. More than that, a basic architectural philosophy is to 'abstract wherever possible', and SOAP-based services obey this rule very well.

Another principle is to program against a well-defined contract. For a SOAP-based service, the WSDL gives a proper definition of the service, and consumers need only this to invoke it. I am not denying the existence of WADL, but it is not mature enough to compare with WSDL.

On web service security, the REST camp argues that the existing server and network infrastructure implements security and that no message-level security is required. This is true when there is only a single provider and consumer involved. But when more services are involved in a business operation, and the message contains various parts, each intended only for its respective service, the scenario is different: message-level encryption is the only way. In most enterprises this is the case, as point-to-point integration is fading out.

Another advantage the REST camp mentions is performance and the size of the data transferred. In SOAP, the overhead of the protocol is always there, and hence the data size is larger than that of a REST counterpart.
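The overhead is easy to see side by side. Here are illustrative payloads for the same order, one bare (as a RESTful response might return it) and one wrapped in a SOAP envelope; the element names are made up, but the Envelope/Body wrapper is what every SOAP message carries.

// Illustrative payloads for the same order; the SOAP version adds the
// Envelope/Body wrapper, plus headers in real deployments.
String restPayload = """
    <order id="abcd"><status>OPEN</status></order>
    """;

String soapPayload = """
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <getOrderResponse>
          <order id="abcd"><status>OPEN</status></order>
        </getOrderResponse>
      </soap:Body>
    </soap:Envelope>
    """;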


After hearing all this, I could draw an analogy with a formal programming language like Java and a scripting language like Perl: each has its own merits and demerits.
In an enterprise, integration of various applications is unavoidable. In such conditions, adhering to standards is most important, because individual applications should not be affected by changes in the implementation of other applications in the field; here the vote goes to the more standards-based SOAP service. On the other hand, a large software application can be designed as a collection of loosely coupled modules, with the interactions between modules made as service invocations; here performance and ease of development are vital, and it is worth giving RESTful services a shot.

They are not competing technologies but complement each other. Both have their own space in the technology landscape, and the selection of either style should depend on the scenario.

Tuesday, August 07, 2007

Be Pessimistic!

Pessimism is not considered a frill of human nature, and people are often criticized for their skepticism. But is it all that bad? If we take it as being extra cautious, or simply being alert, we may elevate the status of this trait. So what is the relationship between pessimism and software architecture? Let me try to unwind this thread. A software architect, especially an enterprise architect, is the one who takes the technology-affecting decisions in an enterprise. His decisions can make or break IT projects and even affect the business; he takes the final call on the technology, software and infrastructure in an enterprise. The current technology landscape is crowded with products and technologies decorated with nice buzzwords. There could be fakes among them, and it is very easy to be deceived by these traps. So a good architect should be vigilant and should not accept things at face value: he should really have a taste of a thing before serving it to the rest. That may not be possible in all cases, but any enterprise-wide product evaluation should at least go through an inspection of all the alternative options against cost versus benefit, and not merely in monetary terms but in all possible aspects.

The term SOA has been a popular mantra for the last few years, and some IT managers and architects have tried to make their projects 'SOA based'. Service-Oriented Architecture (SOA) is a very good architectural style for making IT assets reusable and loosely coupled. But quite a few projects adopted this style by making web services for everything, just for the sake of being early adopters. Some of these projects failed due to poor performance and evolving standards that made them obsolete. The same was the fate of many J2EE projects in the late nineties, when EJB was the craze. An architect who aspires to success should approach technology evolution with a critic's nose.

I know a friend of mine who used to Google anything with the suffix 'S..K' appended. Every coin has two sides!

Friday, August 03, 2007

Why do we need comments?

Yesterday I was reviewing some Java code, and it was more or less similar to the snippet given below.



// Assign i with the value 0
i = 0;

// increments the variable i
i++;



Here each and every line is accompanied by some sort of comment. Some programmers believe that good programming practice means writing as many comments as you can. For programming in machine language or assembly language this is required and helps in understanding the program, but does it make any sense for contemporary high-level languages?

I certainly believe not. All modern programming languages are English-like and human readable, so if the code itself is meaningful, what is the need of repeating it again? I believe the code itself is the best documentation. The benefits of practicing this are manifold: apart from saving a lot of human effort, the code looks clean and compact, which gives a macro picture of the logic at a glance.

So does it mean commenting a program is that bad? Not at all.
Comments are a good aid for the reader when they are given for a cohesive set of statements; they also help to separate such blocks visually, like paragraphs in an essay. Languages like Java and C# allow document-style comments, and these are highly helpful for generating API documentation. It is good practice to document all classes and methods with API-document-style comments.
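For example, in Java a Javadoc comment like the one below (a made-up transfer method, purely for illustration) can be turned into browsable API documentation with the javadoc tool:

/**
 * Transfers the given amount from one account to another.
 * <p>
 * Hypothetical example, written only to illustrate Javadoc style.
 *
 * @param from   the account to debit
 * @param to     the account to credit
 * @param amount the amount to transfer, in cents; must be positive
 * @throws IllegalArgumentException if {@code amount} is not positive
 */
public void transfer(Account from, Account to, long amount) {
    if (amount <= 0) {
        throw new IllegalArgumentException("amount must be positive");
    }
    from.debit(amount);
    to.credit(amount);
}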

Well-written code is like a poem: it can be appreciated even over the phone. So you can do a simple telephone test to find out the readability of your code; if the listener is able to understand the logic, then obviously you have passed the test!