
Remote Datasources

February 18, 2010

Almost all applications today deal with persisted data. The rare exceptions tend to be simple tools such as a calculator, a unit converter, or some other minor utility. Let us assume that any application we're working on is going to have some sort of persistent state that must be kept between executions. Typically that state is stored in either a file or a database. File storage works well for many single-user desktop applications: the format can be derived easily from the data model, the data is easy to transfer to another user via email or some other file transfer process, and backing up means just storing the file somewhere safe.
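As a minimal sketch of that file-based approach, assuming a hypothetical Settings class as the data model, plain Java serialization is often enough for a single-user desktop application:

    import java.io.*;

    // Hypothetical data model; Serializable so it can be written as-is.
    class Settings implements Serializable {
        private static final long serialVersionUID = 1L;
        String username;
        int windowWidth;
    }

    public class FileStore {
        // Persist the model between executions by writing it to a file.
        static void save(Settings s, File f) throws IOException {
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
                out.writeObject(s);
            }
        }

        // Read the model back on the next run.
        static Settings load(File f) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
                return (Settings) in.readObject();
            }
        }
    }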

Many applications that involve multiple users, including web applications, will instead employ a database; even single-user desktop applications sometimes use one. Database access is often performed by a section of code that converts objects or structures into database tuples and stores them in tables. When the data is required again, select statements retrieve the rows and turn them back into objects. Usually this storage and retrieval of data lives in the same code base as the rest of the application.
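A minimal sketch of that pattern, assuming a hypothetical User object and a matching users table, might look like this with plain JDBC:

    import java.sql.*;

    // Hypothetical domain object stored as a row in a "users" table.
    class User {
        long id;
        String name;
    }

    class UserDao {
        private final Connection conn;

        UserDao(Connection conn) { this.conn = conn; }

        // Object -> tuple: the fields become column values.
        void save(User u) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO users (id, name) VALUES (?, ?)");
            ps.setLong(1, u.id);
            ps.setString(2, u.name);
            ps.executeUpdate();
            ps.close();
        }

        // Tuple -> object: a select statement rebuilds the structure.
        User find(long id) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, name FROM users WHERE id = ?");
            ps.setLong(1, id);
            ResultSet rs = ps.executeQuery();
            User u = null;
            if (rs.next()) {
                u = new User();
                u.id = rs.getLong("id");
                u.name = rs.getString("name");
            }
            rs.close();
            ps.close();
            return u;
        }
    }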

There are a few problems with this approach, however. For one, the database code can become tightly coupled with the business logic of the application itself. The application knows its data is stored in a database and will often be shaped accordingly to facilitate database access. Switching to a different database system can then involve very large rewrites, not only of the database code but also of application code that relies on database-specific functionality. Think of functions that pass along snippets of SQL in order to construct the correct objects; that SQL is often database specific. MySQL's LIMIT clause, for example, has no direct equivalent in Oracle, which paginates with ROWNUM instead.

Besides tying you to a single database, the same coupling ties applications to databases in general. If your application expects its data to live in a relational database, its business logic will make assumptions accordingly. That inhibits your ability to transition to a file store or a NoSQL solution if required. Because of these concerns, my topic for today is abstraction and remote data sources.

If you mix your database logic in with your application code, coupling can form. If you separate them completely, however, your application doesn't need to know where the data comes from or where it goes when it's saved. All the application has to do is follow a simple, generic interface for storing and retrieving data. This not only eliminates coupling, but has some other scalability benefits that I'll discuss a bit later.
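Such an interface can be very small; the names below are illustrative rather than from any particular library:

    // A generic storage interface: the application codes against this
    // and never learns whether the backing store is a file, an RDBMS,
    // or something else entirely.
    interface DataStore {
        void put(String key, byte[] value);   // store a record
        byte[] get(String key);               // retrieve a record, or null
        void delete(String key);              // remove a record
    }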

To really appreciate interface-based design and loose coupling, try removing your data access code from your application code base completely. Instead, use a language-neutral format such as JSON or XML to communicate between your application and your data source, itself a small application. The application sends a request, along with some data if it needs something stored. The data source then transforms that data into a storable format and places it in a database, a file, or any other storage medium you desire. Later, if you want to change how your data is stored, perhaps a different DB system is better suited, or you're moving from a file-based system to a database, you can modify the implementation of the data source application without changing a line of code in your actual application. Thanks to the loose coupling, as long as the data source maintains a stable interface for the application to interact with, the two can be developed independently.
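For illustration, a store request and its reply might look something like this on the wire (the action and field names here are invented for the example):

    Request from the application to the data source:

        { "action": "store", "type": "user", "data": { "id": 42, "name": "Alice" } }

    Response from the data source:

        { "status": "ok", "id": 42 }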

If each interaction between your application and your data source is atomic and independent of the others, in other words stateless, you can horizontally scale your data source using replication. Each data source can read from a different replicated, read-only data store. Meanwhile, all writes go to a master data store that is replicated out to the read-only slaves. Read requests from an application can then be distributed across multiple data sources, which is especially useful if the data source does any CPU-intensive filtering or processing.
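Here's one sketch of that read/write split, reusing the DataStore interface from earlier; the master and replica implementations are assumed to exist:

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Routes writes to the master and spreads reads across replicas.
    class ReplicatedDataStore implements DataStore {
        private final DataStore master;          // accepts all writes
        private final List<DataStore> replicas;  // read-only copies
        private final AtomicInteger next = new AtomicInteger();

        ReplicatedDataStore(DataStore master, List<DataStore> replicas) {
            this.master = master;
            this.replicas = replicas;
        }

        public void put(String key, byte[] value) {
            master.put(key, value);  // writes always hit the master
        }

        public byte[] get(String key) {
            // round-robin reads across the replicated slaves
            int i = Math.abs(next.getAndIncrement() % replicas.size());
            return replicas.get(i).get(key);
        }

        public void delete(String key) {
            master.delete(key);
        }
    }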

The main benefit, however, remains that by separating your data source from your application as a remote service, you ensure that your code stays loosely coupled and that your business logic does not rely on the underlying method by which your data is stored. This leads to greater flexibility in your code and a more agile platform on which to build.


Breaking a monolithic application into Services

January 22, 2010

Today I shall be tackling Service Oriented Architecture. That particular buzz phrase has annoyed me a lot in the past. CTOs and middle management talk about how they read in some magazine that SOA or SaaS (Software as a Service) is the next big thing and that we should switch all of our stuff over to it, yet there's never a very good explanation of how or why to do such a thing. I had mostly written it off, as I never saw how independent services could be pulled together into an application. Sure, it sounded neat for some other project that wasn't mine, but no way did that model fit the project I was doing.

Recently, however, I've been tasked with rewriting the section of code that generates reports in our application. The entire application is a series of Java classes nested together and packaged up as one WAR file that is deployed to a Tomcat server. To scale, we deploy the same WAR file to another server (ideally everything is stateless and shared-nothing, so there are no sessions to worry about). The whole thing sits behind a load balancer that uses a round-robin strategy to send requests to the various machines. Seems like a pretty good way to scale horizontally, I thought.

However, the horizontal scaling of this system is fairly coarse-grained. If a flood of report-generation requests is slowing down a server, our only option is to duplicate the entire application on another server to spread the load. That distributes not only report generation but every other aspect of the application as well, so now we have an additional front-end, service, and database layer running just to speed up one area. It seems like a waste of resources.

So instead of scaling a monolithic application, how about we break the whole thing up into 'services'? Instead of the report system being just a set of APIs and implementations inside the application's source code, we make an entirely new application whose sole job is generating reports. It has a basic interface and API that takes a request describing what type of report you want and in what format, and returns that report. It can be a completely stand-alone application that sits listening on a port for a request to come in. When one arrives, it processes the request, generates the report, sends it back over that port to the client, and waits for the next request. Naturally we make it multi-threaded so that it can handle several requests at a time, putting a limit on that number and queuing any overflow.
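A bare-bones sketch of such a listener, with a capped thread pool and a bounded queue for overflow (the report generation itself is stubbed out, and the port number is arbitrary):

    import java.io.*;
    import java.net.*;
    import java.util.concurrent.*;

    public class ReportService {
        public static void main(String[] args) throws IOException {
            // At most 8 concurrent reports; up to 100 requests queue up,
            // and anything beyond that is rejected outright.
            ExecutorService pool = new ThreadPoolExecutor(
                    8, 8, 0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<Runnable>(100));

            ServerSocket server = new ServerSocket(9090);
            while (true) {
                final Socket client = server.accept();  // wait for a request
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            BufferedReader in = new BufferedReader(
                                    new InputStreamReader(client.getInputStream()));
                            PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                            String request = in.readLine();  // e.g. report criteria
                            out.println(generate(request));  // send the report back
                            client.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
        }

        // Stub: the real service would build the requested report here.
        static String generate(String criteria) {
            return "report for: " + criteria;
        }
    }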

The benefits of this are manifold. For starters, you gain fine-grained horizontal scalability. If you find that one service is too slow or receives a lot of requests, you can deploy just that service on additional machines rather than the whole application. The service can listen for requests over RPC, direct sockets, web services, or whatever else you like. A controller near the front end then calls each individual service and gathers up the end data to pass back to the user. Put it all behind some sort of load balancer with failover and you have fine-grained horizontal scalability.
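On the calling side, the controller can be as simple as fanning out to each service and assembling the responses; the host names and ports below are placeholders:

    import java.io.*;
    import java.net.*;

    // Front-end controller: calls each backing service and gathers the results.
    class PageController {
        String buildPage(String user) throws IOException {
            String report = call("reports.internal", 9090, user);   // hypothetical hosts
            String profile = call("profiles.internal", 9091, user);
            return profile + "\n" + report;  // assemble the end data for the user
        }

        // One request/response exchange over a plain socket.
        private String call(String host, int port, String request) throws IOException {
            Socket s = new Socket(host, port);
            PrintWriter out = new PrintWriter(s.getOutputStream(), true);
            BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
            out.println(request);
            String response = in.readLine();
            s.close();
            return response;
        }
    }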

Second, you gain faster deployment. Since each service is self-contained, it can be deployed independently of the other services. Launching a new version does not have to replace the entire system, just the services you're upgrading. In addition, since each service can (and should) be running on multiple machines, you can upgrade them piecemeal and perform bucket tests. For instance, if a service runs on 10 machines, you can upgrade only 2 of them and monitor user interaction with those instances. That way 20% of your users exercise the new code while the rest hit the old code, and once everything looks fine you upgrade the other 8 servers to the new version of the service.

Because each part of the program is a small, separate program itself, problems become easier to manage and bugs become easier to fix. You can write tests for just the service you're working on and make sure it meets its contract obligations. This helps manage the problem of cyclic dependencies and tightly coupled code, and it reinforces the object-oriented strategy of message passing between components.

Furthermore, try to avoid language-specific formats such as Java serialization for communicating between your services. Use a language-agnostic format such as JSON or Google's Protocol Buffers instead. I would avoid XML, as it's fairly verbose and slower to transmit over a network and parse than the others. The advantage of a language-agnostic format is that each service can be written in a different language depending on its needs. If you have a service that really needs to churn through a lot of data and you need every ounce of speed out of it, you can write it in C or even assembly. For areas where it would be good to do something concurrently, you might use Scala, Erlang, or Clojure. The main idea is that each part of your program can be written in the language best suited to solving that service's problem.
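For instance, a Protocol Buffers definition like the following (message and field names invented for this example) lets a Java front end talk to a report generator written in C++ or anything else, since each side generates its own bindings from the same .proto file:

    // report.proto - a language-agnostic message definition; clients and
    // services in any supported language generate their own bindings from it.
    message ReportRequest {
        required string type = 1;    // which report to generate
        optional string format = 2;  // e.g. "pdf" or "csv"
    }

    message ReportResponse {
        required bytes payload = 1;  // the generated report
    }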