Chapter 10. Remote services and the cloud – OSGi in Depth



This chapter covers

  • Importing and exporting a remote OSGi service
  • Understanding distribution providers and their properties
  • Selecting a specific service endpoint and negotiating endpoint policies
  • Learning the semantics of dealing with remote entities
  • Understanding cloud computing and how it relates to OSGi


In today’s world, it’s almost guaranteed that your machine is networked and the services you use are provided through the internet. Remote communication is essential, and OSGi supports it by means of the OSGi Remote Service specification, which is the subject of this chapter.

We’ll start by examining how to export a service for remote access and conversely how to import a remote service for local consumption. Next, we’ll investigate how the actual transport is realized through the use of distribution providers, highlighting the fact that OSGi doesn’t implement its own transport but rather allows existing transports to be plugged in. Finally, we’ll look into how to negotiate common policies between the exporter and the importer of the remote services, for instance, to make sure that both sides agree on a common set of requirements, such as encryption.

Remote Services opens up OSGi to being run on a remote server or, increasingly, in the cloud. We'll briefly discuss cloud computing and look at why OSGi is an ideal platform for it. But before getting into cloud computing, we'll start the chapter by discussing remote service invocation.

10.1. Remote invocation

Recall from chapter 6 that one of the shortcomings of the Event Admin service is that it doesn’t handle remote clients. Ideally, subscribers that have registered the EventHandler interface should be invoked regardless of their location.

For example, you could have a scenario where the publisher, the actual Event Admin implementation, and the subscriber are hosted in three different OSGi framework instances, as depicted in figure 10.1.

Figure 10.1. The publisher, subscriber, and Event Admin communicate remotely.

This distributed configuration can be achieved using OSGi’s Remote Service. In this setup, distribution providers are responsible for creating proxies that represent remote services, therefore achieving a distributed system. Let’s investigate the pieces, starting with a remote EventHandler subscriber in the next section.

10.1.1. Exporting a remote service

A service is specified as remote by tagging it with the following service properties:

  • service.exported.interfaces
  • service.exported.configs

The former specifies which service interfaces are to be exported in a remote fashion, and the latter specifies how those interfaces are to be made remote. The following listing provides an example for this remote subscriber scenario.

Listing 10.1. Remote EventHandler service

As usual, we need to register the EventHandler service. But here we've added the service.exported.interfaces property with a value of *, stating that all of the service interfaces used for registration can be made remote, which in this case would include the EventHandler interface. Next, we need to specify which remote configuration type to use for the actual transport implementation. We selected the Web Services–based messaging provided by Apache's CXF project.



Some providers support a default configuration type, which is used when the service.exported.configs property isn’t specified.


Finally, we need to add any configuration type–specific properties. In the case of CXF's Web Services stack, we have only one: the WS address to be used.
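Putting the three properties together, a registration along the following lines could be used. This is a minimal sketch: the configuration type name `org.apache.cxf.ws`, its address property, and the URL are assumptions based on CXF's distribution provider and may differ in your environment; in a real bundle activator the properties would be passed to `bundleContext.registerService()`.

```java
import java.util.Hashtable;

public class RemoteExportSketch {

    // Builds the service properties that mark an EventHandler as remote.
    public static Hashtable<String, Object> exportProperties() {
        Hashtable<String, Object> props = new Hashtable<>();
        // Export all interfaces the service is registered under
        props.put("service.exported.interfaces", "*");
        // Hypothetical CXF Web Services configuration type
        props.put("service.exported.configs", "org.apache.cxf.ws");
        // Configuration type-specific property: the WS address (assumed value)
        props.put("org.apache.cxf.ws.address", "http://localhost:9090/eventhandler");
        return props;
    }

    public static void main(String[] args) {
        // In an activator: bundleContext.registerService(
        //     EventHandler.class.getName(), handler, exportProperties());
        System.out.println(exportProperties());
    }
}
```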



The OSGi Remote Services specification is recent, and there are still only a few implementations around and some rough edges to smooth out. One of the available implementations is Apache CXF. But as of this writing, some of the properties and configuration used in this open source product aren't fully compliant with version 4.3 of the OSGi Service Platform Core specification.


It’s important to highlight that the actual service interface is invariant with regard to local or remote usage. This is noteworthy because it means that your service implementation likewise doesn’t change. The only changes are constrained to the set of service properties used during registration.

As you’ve seen, the registration of a remote service is simple enough. Next, let’s look at the consumer side.

10.1.2. Consuming a remote service

Consuming a remote service is quite transparent: you just retrieve the service as usual, and the underlying framework takes care of proxying it accordingly, should it be remote. As an example, the following code retrieves our remote EventHandler:

ServiceReference ref =
    bundleContext.getServiceReference(EventHandler.class.getName());

There's no difference between retrieving a local or a remote service. As in the export case, the consumer is shielded from the fact that a service is local or remote. Again, this is to our benefit, but you should keep in mind that a remote service invocation might unexpectedly fail or yield high latency because of problems with the underlying transport.

Is there a way of knowing if a service is remote? Yes, all remote services have the following service property set (to any value):

service.imported
This property is useful for filtering. For example, the following expression filters out all remote services, making sure that only local services are used:

(!(service.imported=*))
Imported services are also set with the service property service.imported.configs, which specifies the configuration type used for the particular endpoint of the imported service. In our case, it would be set to the CXF Web Services configuration type we exported with. We'll discuss configuration types in depth later.

Finally, keep in mind that imported services are not set with the service.exported.* properties, such as service.exported.interfaces.

Next, let’s take a quick look at how things actually work under the covers.

10.2. Distribution providers

It’s the role of distribution providers to implement the actual transport between the bundle that’s exporting a remote service in one OSGi framework instance and the bundle that’s importing the service in another framework instance. A distribution provider can support different transports, such as RMI (Remote Method Invocation), SOAP (Simple Object Access Protocol), and REST (Representational State Transfer). The transport to be used is determined by the configuration type specified in the service.exported.configs service property. An endpoint description is created for each remote service and a particular configuration type. Consuming bundles use the endpoint description to establish a connection to the remote service.



The OSGi Remote Service specification is a framework for remote communication that supports different transports, such as RMI and SOAP. The specification itself doesn’t specify which transports are supported. Further, it doesn’t define any new transport protocols.


A remote service may have multiple endpoints. For example, a remote service may be exported with both RMI and SOAP endpoint descriptions. Clients can then select the proper endpoint for invocation. This approach allows the support of different transports, in a manner agnostic to the remote service and its consumers. Figure 10.2 describes the interaction between distribution providers, their supported configuration types, and the endpoints created by each configuration type for each remote service.

Figure 10.2. Distribution providers support different configuration types, which create separate endpoints for each remote service. In this example, the provider supports RMI and SOAP transports. Each transport endpoint is associated with the remote service and is tied to the client when a connection is established.

As you can infer, a distribution provider is needed for both the consuming and the providing sides, as shown in figure 10.3. On the consuming side, the distribution provider takes care of importing the remote service by establishing a local proxy endpoint and connecting it to the remote endpoint. We’ll call it the importing distribution provider. On the providing side, the exporting distribution provider is responsible for establishing an endpoint that accepts connections and delegates to the proper service.

Figure 10.3. There’s a distribution provider in each distributed framework instance, each responsible for its local endpoint and connecting to a remote endpoint.

The distribution providers are also responsible for discovering and proxying the remote services as needed. That’s why you don’t need to explicitly mention the address of a remote service in the consuming bundle.

You’ve learned how a distribution provider works. In the next section, we’ll look at how to handle multiple distribution providers and transports.

10.2.1. Selecting the proper endpoint

It’s not uncommon for a system to support multiple transports. For example, RMI for general application communication, SOAP for web development, and perhaps some proprietary socket implementation (perhaps running on InfiniBand) when performance is essential.

When exporting a remote service, you should generally be as flexible as possible and therefore specify several or the entire set of supported configuration types. For example, we could export the EventHandler setting the service.exported.configs service property to net.rmi, net.soap, net.socket. If we did so, and the distribution provider supported all the aforementioned transports, then three distinct endpoints would be created, one for each transport.

But on the consuming side, we want to import the service using a single specific endpoint, as the situation demands. For example, if we wanted to use the event handler interface from a web application, then it's likely that we'd prefer to import the EventHandler service using the SOAP endpoint. This can be done using the service.imported.configs service property. The following code fragment retrieves an EventHandler service that's tied to the SOAP endpoint:

ServiceReference[] refs =
    bundleContext.getServiceReferences(EventHandler.class.getName(),
        "(service.imported.configs=net.soap)");

Unlike the service.exported.configs property, the service.imported.configs property contains the single configuration type that’s being used by the supporting endpoint, as shown in figure 10.4.

Figure 10.4. The service.imported.configs property reflects the configuration type of the underlying endpoint for an imported service.

If the importing distribution provider in the OSGi framework supports all three transports we specified when exporting the remote service, then three endpoints are created and likewise three services are imported, each containing a different configuration type for its service.imported.configs property.



The fact that endpoints for all configuration types are created may seem wasteful at first glance, but in reality a distribution provider implementation may do this on demand by using the OSGi service hooks.


But what happens if the importing distribution provider doesn't support the transport we specified? In other words, in the preceding example, how would we know there's a SOAP endpoint for us to depend upon? We can find the supported configuration types of a distribution provider by using the service property remote.configs.supported. Each distribution provider registers an anonymous service, setting this property to the list of configuration types it supports. In our case, should the importing distribution provider support only the configuration types net.rmi and net.soap, it would register a service whose remote.configs.supported service property is set to net.rmi, net.soap.

Knowing this, our consuming bundle should first find out if the net.soap configuration type is supported and only then retrieve the SOAP endpoint for the EventHandler service. We can find out if the net.soap configuration type is supported as follows:

ServiceReference[] refs =
    bundleContext.getServiceReferences(null,
        "(remote.configs.supported=net.soap)");

Finding out the supported configuration types is useful, but sometimes you may need more information, such as whether a particular policy is supported. This is the subject of the next section.

10.2.2. Negotiating policies

There are cases when not only do you need a particular endpoint but you also need to certify that a policy from the remote service or distribution provider is being supported. It’s sometimes even necessary to negotiate policies between the consuming and the providing systems and make sure that both sides of the equation agree on common vocabularies.

For example, let's say that our providing bundle wants to make sure that the (remote) service it is exporting is handled in a manner in which the invocations are kept in order. In other words, the underlying endpoint needs to keep the client invocations ordered, perhaps by using a connection-oriented protocol, such as TCP, instead of a connectionless, message-oriented protocol, such as UDP. To relay this information, the remote service should be registered with the service property service.exported.intents set to the hypothetical value of ordered.

When the exporting distribution provider sees this service with the ordered intent, it must make sure it’s able to support it with whatever configuration type it uses to establish the endpoint. If the exporting distribution provider isn’t able to fulfill this requirement, then the remote service in question won’t be exported. Next, the importing distribution provider goes through the same process. It realizes the presence of the ordered intent and likewise has to make sure it’s able to fulfill this requirement. If it’s successful in doing so, then the imported service is made available for consuming bundles with the service property service.intents set to ordered.

Therefore, client bundles can use the service.intents property to select the appropriate service endpoint in a manner similar to that of service.imported.configs. In our case, it would allow a consuming bundle to select an endpoint that keeps the remote method invocations ordered. In other words, on the importing side, the service.intents property specifies the capabilities of the remote service, as shown in figure 10.5.

Figure 10.5. A remote service requirement is exported as an imported capability.

The service.exported.intents property specifies requirements without which the remote service won’t function properly. You can use the service property service.exported.intents.extra to specify additional requirements that may not always be present. For example, depending on the environment that’s hosting the system, you may need to encrypt the messages being exchanged between the distribution providers. You can use the service.exported.intents.extra property to configure whether the service needs to be kept confidential or not, depending on whether the system is running within the intranet or outside it. You can do so by allowing this property to be configured through the Configuration Admin service. The distribution provider then merges service.exported.intents and service.exported.intents.extra together, after which their values are treated equally, as shown in figure 10.6.

Figure 10.6. Exported intents are merged and treated equally by distribution providers.
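The merge step described above can be sketched as a simple set union. This is an illustration only, assuming intents are expressed as whitespace-separated strings; a real distribution provider reads both properties from the service registration before exporting.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class IntentMergeSketch {

    // Merges service.exported.intents and service.exported.intents.extra;
    // after merging, the distribution provider treats all intents equally.
    public static Set<String> mergeIntents(String intents, String extra) {
        Set<String> merged = new LinkedHashSet<>();
        for (String s : new String[] { intents, extra }) {
            if (s == null) continue;              // either property may be absent
            for (String intent : s.split("\\s+")) {
                if (!intent.isEmpty()) merged.add(intent);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        // For example, "ordered" plus an extra "confidentiality" intent
        System.out.println(mergeIntents("ordered", "confidentiality"));
    }
}
```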

Finally, let’s say that the implementation of the remote service itself is able to fulfill the requirement. Continuing with our example, let’s say that the implementation of our remote service could cope with unordered messages by reordering them internally through the use of monotonic IDs or timestamps. In this case, the remote service can be registered with the service property service.intents set to ordered. This tells the distribution provider that it doesn’t need to support the requirement of keeping the invocations ordered because the remote service itself is doing it.

The distribution provider still propagates the ordered requirement downstream, eventually setting this value in the property service.intents of the imported service in the consuming OSGi framework instance. This allows clients to still filter on the ordered capability of the importing distribution provider endpoint. This makes sense, considering that from the point of view of the consuming bundle, it generally doesn’t matter whether the capability is being served by the distribution provider or by the remote service implementation itself.

Nonetheless, for those cases where it does matter, the distribution provider also publishes an anonymous service containing the service property remote.intents.supported, which specifies the intents supported by the provider itself. This follows the same pattern as the remote.configs.supported service property, as shown in figure 10.7.

Figure 10.7. Remote services can implement their own requirements using the service.intents property.

Protocol negotiation is a common theme for most communication stacks, and the intents framework allows for a succinct and flexible, albeit somewhat complicated, approach for doing it. In the next section, we’ll examine an approach to directly selecting an endpoint without having to rely on the use of service properties.

10.2.3. Endpoint descriptions

In addition to the service.imported.* properties, a client can also explicitly select an endpoint. This is done using the MANIFEST.MF entry header Remote-Service, which should point to an endpoint description configuration file. The following listing describes our EventHandler endpoint.

Listing 10.2. EventHandler endpoint description

The endpoint description includes enough information to allow a client bundle to connect to a remote endpoint: particularly the service interface, the configuration type, and, most important, the actual transport address for the selected configuration type.
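Because the listing itself isn't reproduced here, the following is an approximate sketch of what such an endpoint description file could look like, based on the Remote Service Admin endpoint description format. The namespace, the property names beyond the standard ones, and the address are assumptions and should be checked against your distribution provider's documentation.

```xml
<endpoint-descriptions xmlns="http://www.osgi.org/xmlns/rsa/v1.0.0">
  <endpoint-description>
    <property name="objectClass">
      <array>
        <value>org.osgi.service.event.EventHandler</value>
      </array>
    </property>
    <!-- Hypothetical CXF configuration type and address -->
    <property name="service.imported.configs" value="org.apache.cxf.ws"/>
    <property name="org.apache.cxf.ws.address"
              value="http://localhost:9090/eventhandler"/>
    <property name="endpoint.id" value="http://localhost:9090/eventhandler"/>
  </endpoint-description>
</endpoint-descriptions>
```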

For both OSGi Remote Services and OSGi data-access technologies, the usage pattern is common: provider bundles use special service properties, and consuming bundles use special manifest header entries that point to configuration files. This similarity is a good thing, and it applies to most OSGi services.

Now that you understand how to export and import remote services, in the next section we’ll take a look at some caveats of using distributed services in general.

10.3. Dealing with the semantics of distributed systems

As you’ve seen, OSGi remote services let us deal with distribution in a transparent form. This simplifies application development, but it also hides some of the intrinsic issues of dealing with remote entities and transport protocols. Let’s investigate a couple of scenarios where this is a problem.

First, consider a simple service, such as the following:

public interface CustomerRegistry { /* remote interface */

    public Customer findCustomer(String name);
}

public interface Customer {

    public String getName();

    public String getAddress();

    public void setAddress(String address);
}

Next, let’s look at a client bundle that retrieves this service and updates the address of a customer:

ServiceReference servRef =
    bundleContext.getServiceReference(CustomerRegistry.class.getName());

if (servRef != null) {
    CustomerRegistry registry =
        (CustomerRegistry) bundleContext.getService(servRef);

    Customer customer = registry.findCustomer("Alex");

    if (customer != null) {
        customer.setAddress("Updated Address!");
    }
}
This works fine in a local setup, where changes to the Customer object instance would be reflected to any other client within the same JVM or, more precisely, that shares the same class space. But what happens if the CustomerRegistry is a remote service, and the client bundles that are updating the customer information reside in different OSGi framework instances? As you’ve guessed, updates to the Customer object instance wouldn’t be reflected in other clients or even at the remote service itself. The reason is that remote services deal with call-by-value as opposed to call-by-reference, which means that the Customer object returned by the findCustomer() method is actually a new object instance that reflects the same value that’s present in the exporting distribution provider.
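The call-by-value semantic can be demonstrated without any OSGi machinery: if the registry hands out a copy of its state (a real distribution provider produces the copy by serializing the object over the wire), mutating that copy leaves the registry untouched. The class and field names below are illustrative only.

```java
public class CallByValueDemo {

    static class Customer {
        String name;
        String address;
        Customer(String name, String address) {
            this.name = name;
            this.address = address;
        }
    }

    // The "remote" side's state
    static Customer stored = new Customer("Alex", "Original Address");

    // Mimics a remote findCustomer(): returns a value copy, not a reference
    public static Customer findCustomer(String name) {
        return new Customer(stored.name, stored.address);
    }

    public static void main(String[] args) {
        Customer copy = findCustomer("Alex");
        copy.address = "Updated Address!";   // only the local copy changes
        System.out.println(stored.address);  // the registry still holds the old value
    }
}
```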

Solving this problem becomes a matter of interface design. The remote service developer should be aware of the call-by-value semantic and design the interface accordingly to avoid the issue. For instance, the following interfaces do the trick:

public interface RemoteCustomerRegistry { /* remote interface */

    public Customer findCustomer(String name);

    public void updateCustomerAddress(String name, String address);
}

public interface Customer {

    public String getName();

    public String getAddress();
}
In other words, an update is done by invoking the remote service itself, passing the address information directly (as a value). Likewise, to avoid confusion, we remove the setAddress() method from the Customer object, which mostly becomes a structure of values. Because the consuming bundle generally wouldn’t know it’s dealing with a remote service, the onus of the interface design falls largely on the provider.

The next problem we’ll look at is that a communication transport may go down for a variety of reasons at any arbitrary time. There isn’t much we can do to deal with this in the provider, and instead we need to design for this possibility in the client itself. For instance, just before we invoke the updateCustomerAddress(), our wired or wireless network connection may be lost, and the importing distribution provider would lose connectivity to the exporting distribution provider. In this case, the imported service should raise the runtime exception ServiceException, and a consuming bundle should be ready to handle this exception.

A naïve exception-handling implementation for a ServiceException might wait a few seconds in case the network has simply run into some transient problem and try again, as demonstrated in the following code:

ServiceReference servRef =
    bundleContext.getServiceReference(RemoteCustomerRegistry.class.getName());

if (servRef != null) {
    RemoteCustomerRegistry registry =
        (RemoteCustomerRegistry) bundleContext.getService(servRef);

    try {
        registry.updateCustomerAddress("Alex", "Updated Address!");
    } catch (ServiceException e) {
        Thread.sleep(2000); // wait for two seconds and try again

        registry.updateCustomerAddress("Alex", "Updated Address!");
    }
}
One final issue to consider is the underlying communication stack being used. The different transports have different characteristics and limitations. For example, if CORBA is being used, there are restrictions on the Java types that can be referenced in the remote services. This is because CORBA IDL (Interface Definition Language) doesn’t support the entire spectrum of the Java types.

Remote services, sometimes called Distributed OSGi, are a major foundation for the emerging cloud technology, which we’ll investigate in the next section.

10.4. Elasticity in the cloud

Needless to say, cloud computing is a vast subject, and it’s beyond the scope of this book. But as you’ll see, OSGi has become an interesting technology for supporting the concept of Platform as a Service (PaaS). In the following sections, we’ll make some assumptions and simplifications as we discuss why and how that is.

Let’s begin by exploring a use case. Suppose we’d like to make the auction application, which was explained in chapter 3, a public service for a website. Somewhat similar to eBay, the goal is to support web clients making offers of items to sell and bidding for these items. The first step toward achieving this is to create a web application that exposes a proper web page, where a customer makes offers and bids.

Simply put, the web application internals implement the Participant class and invoke the Auction class. And we are, in a naïve sense, finished. The web application is implemented as an OSGi bundle, and together with the auction application bundles and the bundle containing the web server (for example, Jetty), they’re all installed in a single OSGi framework instance and hosted in a machine that’s made accessible via the net. We’ll refer to this whole environment as the auction system. That is, the auction system includes all the software components needed to run the auction application, such as the web server and any other supporting components, as shown in figure 10.8.

Figure 10.8. A single OSGi framework instance running the auction system

Life goes on; the auction system is running successfully and in fact is a major success—so much so that we get more customers than we expected, and because of that, performance degrades significantly. Whereas before buyers and sellers were able to bid and make offers instantly, it now takes several seconds, sometimes minutes if the item is very popular. The solution is to expand and acquire another machine. This trend continues, and we reach the point of having to manage a network of eight machines. This setup has become extremely expensive; not only do we need to buy several fully configured machines, but we also need to manage them. We decide to do some profiling to look for the bottleneck and find out what's causing the performance problems.

10.4.1. Designing for the cloud

As it turns out, the bottleneck occurs when correlating the offers and the bids in the Auction bundle implementation. Although we could try to improve the algorithm, this already tells us that the bottleneck isn’t I/O bound and likewise isn’t in the web application.

We decide to change our architecture. Rather than having eight instances of the OSGi framework each running the auction system, which consists of the web application, the auction application bundles, and the supporting infrastructure bundles, we change it to a two-tier architecture, as shown in figure 10.9. In our two-tier architecture, the first tier consists of the web application and the web server, which provides the client interface to the customers and then forwards the requests to the second tier, called the business tier. The business tier contains the auction bundles, because they perform the actual business-related code.

Figure 10.9. An auction system partitioned into two separate OSGi framework instances, communicating through an OSGi remote service

In this new setup, we host the web applications in only two machines, which communicate via REST with four other machines containing the auction application bundles. Because of the performance improvements, we’re able to serve the same number of clients with only six machines, two fewer than before.



In reality, we should have a three-tier architecture. The auction application would store its state in the third tier, the data tier. You’ll see how to change the auction application to persist its data by using JPA in a future chapter. For the time being, we’ll ignore this issue and focus on the communication aspect of the system rather than the persistence issues. But keep in mind that to function properly in the absence of a data tier, we’d need to make sure that all requests pertaining to an item are routed to a single auction bundle instance.


By doing this, we gain on several fronts:

  • The machines are easier to manage. When a web server security patch is released, we only need to update the two machines of the client tier. Likewise, we only need to set up the firewall in these two machines. Further, had we been persisting the data, we’d only need to install the persistence storage, such as the RDBMS, in the machines of the data tier.
  • The overall environment is less expensive, because we can customize the machines for their specific roles. For example, the machines in the client tier need better I/O but don’t need particularly faster CPUs or disks. But the machines in the business tier should have fast CPUs, because we know from the profile that the auction application is CPU bound. By segregating the system into separate nodes, we can avoid having to buy a machine that has everything, such as fast I/O, fast CPUs, and fast disks. Instead, we can be smart and use cheaper machines without impacting the result.
  • Performance improves, because we allow the machines that had the bottleneck to focus on just one thing, correlating the bids and offers, rather than also spending CPU cycles on the web application.
  • The system is overall more scalable, because we have a flexible setup, where we understand which components can be replicated and how best to partition the requests.

Surprisingly, it’s quite easy to change our existing implementation to the new two-tier architecture. Instead of collocating all the bundles of the auction system in a single OSGi framework instance, we partition the system into several separate OSGi framework instances, placing the web application bundle in the OSGi framework instances hosted in the client tier machines and placing the auction application bundles in the business tier machines, as shown in figure 10.10. The main effort is in making sure that the web application bundle is able to locate and use the remote Auction service. This, as you’ve seen in this chapter, is done by using distribution providers and setting the proper service properties, such as service.exported.configs.

Figure 10.10. Two-tier architecture for the auction system

During this exercise, you’ve learned an important albeit subtle lesson. The reason we’ve been able to easily move from a local environment to a multiple-machine environment and finally to a two-tier multiple-machine environment is that our auction system is modular. Had we designed the whole system as a single monolithic (web) application, we wouldn’t have been able to partition it when needed. As you’ll see next, being able to partition our system is a prerequisite to running in the cloud.

Let’s continue our story. We’ve been running our auction system successfully for several months now. During this period, we’ve noticed that business is very slow on some dates, like at the end of January, and business is very good in other cases, like near Valentine’s Day. This burst-like behavior is difficult to manage and eventually becomes a problem. If the business continues doing well, we may have to double the number of machines we have for Christmas, only to have them sit idle in the months that follow. We need to be able to dynamically cope with this variation of demand. Coping with this dynamism is the exact proposition of cloud computing, as you’ll see next.

10.4.2. Cloud computing

The National Institute of Standards and Technology (NIST) defines cloud computing as follows:



Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.


To utilize cloud computing, we need to move our auction system from being hosted on our own machines to being hosted in machines managed by a cloud provider, as shown in figure 10.11. In fact, we don’t even need to know the physical layout and characteristics of these machines; they’re simply computing nodes performing their work. To do this, we create a virtual machine (VM) containing all of our software components—the operating system (OS), generally Linux, the OSGi framework distribution, and all of the application bundles and supporting libraries, like the Jetty web server. Our VM not only contains the binaries but also knows how to start them appropriately. For example, the VM should bootstrap the OSGi framework instance and make sure that our bundles have been installed and are activated. We can do this by setting up a shell script as an OS service. OS services are automatically started when the VM and thus the OS is booted.

Figure 10.11. Auction system running in the cloud

Next, we give the VM to the cloud provider and tell the provider to assign six computing nodes to the VM. That is, our VM is deployed in six nodes somewhere in the cloud. As it gets close to Christmas, we ask the cloud provider to assign four additional nodes to our VM. After the holiday season ends, we ask the cloud provider to remove the four additional nodes; in fact, we ask it to keep only four nodes total running our VM. The cloud provider has a pool of resources, these being computing nodes, disks, and the like, which can be dynamically provisioned to run the customer’s software.



The cloud is made up of a pool of resources that are provisioned dynamically on behalf of the customers.


In our particular case, we’re assigning the nodes per calendar date, but we could be even more ambitious and monitor the utilization of the resources. For example, we could set up provisioning rules, as follows:

  • If the CPU crosses an 80% utilization threshold, then automatically provision a new node for our VM.
  • If the CPU utilization falls to below 20% for more than two days, then automatically shut down the node.
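The two rules above can be sketched in plain Java. This is a minimal, hypothetical autoscaler: the CloudProvider interface and the thresholds are illustrative stand-ins, not a real cloud provider API.

```java
import java.util.concurrent.TimeUnit;

public class AutoScaler {

    /** Hypothetical provisioning operations a cloud provider might expose. */
    interface CloudProvider {
        void provisionNode();
        void shutdownNode();
    }

    private static final double SCALE_UP_THRESHOLD = 0.80;
    private static final double SCALE_DOWN_THRESHOLD = 0.20;
    private static final long SCALE_DOWN_MILLIS = TimeUnit.DAYS.toMillis(2);

    private final CloudProvider provider;
    private long lowUtilizationSince = -1; // -1 means "not currently low"

    public AutoScaler(CloudProvider provider) {
        this.provider = provider;
    }

    /** Evaluates the two rules for one CPU utilization sample (0.0 to 1.0). */
    public void onCpuSample(double utilization, long nowMillis) {
        if (utilization > SCALE_UP_THRESHOLD) {
            lowUtilizationSince = -1;
            provider.provisionNode();            // rule 1: add a node
        } else if (utilization < SCALE_DOWN_THRESHOLD) {
            if (lowUtilizationSince < 0) {
                lowUtilizationSince = nowMillis; // start the two-day clock
            } else if (nowMillis - lowUtilizationSince >= SCALE_DOWN_MILLIS) {
                provider.shutdownNode();         // rule 2: remove the node
                lowUtilizationSince = -1;
            }
        } else {
            lowUtilizationSince = -1;            // utilization back to normal
        }
    }
}
```

Note that the scale-down rule is stateful: the node is shut down only after utilization has stayed below the threshold for the full two-day window.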

As you can see, cloud computing allows us to dynamically and efficiently change our environment based on demand. It also has other advantages common to any hosted service, such as not having to manage our own machines and networks. But keep in mind that the main value proposition of cloud computing is its elasticity.



Cloud computing also has its drawbacks, one of the most serious being security. Because services from different customers are all sharing resources, there’s always a higher likelihood of security breaches. Another disadvantage of cloud computing is the increased latency because services may hop several times between computing nodes before being fully served.


The question that remains to be answered is how OSGi relates to cloud computing. We’ll address this question in the next section.

10.4.3. OSGi as the cloud platform

Let’s say that to keep our business competitive, we need to fine-tune our auction algorithm frequently. Updating our auction system in the cloud isn’t a trivial task; we need to change the auction bundle, generate a new VM image, upload it to the cloud, shut down the running VM instances, and finally replace them with instances of the new image.

Generating VM images and running the whole workflow just described can be quite costly, especially if it must be done on a daily basis. Moreover, keep in mind that when generating a new VM image, we need to test all of its components, and the time spent testing is proportional to the number of components being tested. By using OSGi, we can modularize our systems and therefore partition them adequately, creating simpler and smaller VMs. You already saw an example of this when we partitioned the auction system into two tiers. The result is that we can have two separate VM images, one containing the components of the client tier and the second containing the components of the business tier. Managing smaller VMs is obviously better. For example, if we’re updating just the auction bundle, it would prevent us from having to repackage and retest the web application and server. Potentially, we could also rent cheaper computing nodes for the client tier VM, which contains the less-demanding web application bundles.



It’s a common misconception that you don’t need to design your system to run in the cloud. To fully utilize cloud computing, the system must be able to scale. Modular systems scale better; therefore, OSGi is a clear infrastructure choice, because it emphasizes and helps in the creation of modular applications.


That’s not the complete story, though. Suppose that instead of uploading and managing VMs in the cloud, we were allowed to manage OSGi framework instances and their bundles. If this were the case, we’d just need to tell the cloud provider to update our auction bundle. The auction application, being an OSGi application, already has to deal with the fact that dynamic changes and updates may happen, so whether those changes happen in the cloud or in a standalone local application is mostly immaterial. Thus, if our supposition were true, OSGi would give us two important advantages:

  • OSGi would let us update our system quite easily and efficiently, without us having to get into the business of generating full VM images.
  • OSGi would avoid vendor lock-in, because we could design and validate our applications using any OSGi implementation in a standalone environment before moving to the cloud.

We could even take it a step further. The cloud provider could give us access to an OSGi bundle repository (OBR), where we could install, update, and uninstall bundles as needed. Then, after having updated the auction bundle, we could select an OSGi framework instance and tell it to move forward and take the new versions of the bundles from the OBR. If there are any problems, we could always tell it to revert to the previous versions.

Let’s take a step back and review. We have several OSGi bundles that have been uploaded to the cloud’s OBR. In addition, we’ve provisioned OSGi framework instances to computing nodes, as shown in figure 10.12. For example, we have two OSGi framework instances in the client tier; therefore, we could assign each to a different computing node. Next, we have the remaining eight OSGi instances, which we could likewise assign each to a different computing node. Finally, we could install the web application in the OSGi instances of the client tier and the auction application bundles in the OSGi instances of the business tier. And, of course, we would need to make sure that the web application bundle could invoke remote services residing in the bundles of the business tier. In this environment, we can easily manage updates to our system by leveraging the OSGi framework. But there’s still one problem to consider: we still have to explicitly provision the OSGi framework instances and target the bundles at specific framework instances.

Figure 10.12. OBR and OSGi framework as a platform for cloud computing

A better scenario would be the following: we upload the bundles to the OBR in the cloud, and each bundle tells the cloud its requirements, so that the infrastructure can dynamically provision the system. For example, the web application bundle could state that it depends on a web server and a cloud resource that has fast public access to the internet. Likewise, the auction bundles could state that they depend on a cloud resource where they get at least 80% CPU utilization.

The cloud infrastructure would consider all these requirements and assign the bundles to the correct OSGi framework instances that it provisions to its computing nodes. For example, the cloud infrastructure would spawn a new computing node that has internet access and install the web application bundle into it.

Requirements and Capabilities

Does it seem far-fetched? Perhaps, but we already have most of the pieces. Dependencies between bundles can already be specified in OSGi using the regular Import-Package mechanism. Furthermore, starting in version 4.3 of the OSGi Service Platform, there’s a generic mechanism for specifying general requirements and capabilities. You can think of the Import-Package and Export-Package headers as particular cases of a requirement and a capability specification, respectively.

For example, consider a bundle that specifies the following OSGi 4.3 Require-Capability MANIFEST header entry:

    Require-Capability: com.cloudprovider; filter:="(&(web-access)(cpu<90))"

This header is placed in the namespace of com.cloudprovider, which means that its interpretation is intended for some particular cloud provider. It states that this bundle requires the attribute web-access to be true and that the attribute cpu, which is a numeric value, must be less than 90 (percent).

On the opposite side, the cloud provider has to find an OSGi framework instance whose System bundle has been launched with the following property:

        "com.cloudprovider; web-access:Boolean=true; cpu:Long=80"

Essentially, the framework instance is telling us that it’s capable of providing web access and it will monitor the CPU to make sure that it’s never more than 80% busy. This is shown in figure 10.13.

Figure 10.13. The cloud infrastructure matches a bundle’s requirements with a resource node that’s able to fulfill them.

There’s still much work to be done before cloud computing is considered a mature technology. But the OSGi platform has three intrinsic qualities that make it ideal for supporting elasticity in the cloud:

  • Application behavior dynamism provided by the OSGi framework API
  • Transport abstraction through OSGi Remote Services
  • Dependency management using the OSGi requirement/capability framework

In other words, cloud providers could expose the OSGi service platform as a service to their clients, so that they can manage their applications. In this environment, an OSGi bundle becomes the cloud’s deployment unit, and OSGi the ideal Platform as a Service.

10.5. Summary

The OSGi Remote Service specification allows a consuming bundle to invoke a service that’s provided by a bundle in a remote OSGi framework instance. You register a service using the service properties service.exported.interfaces and service.exported.configs to mark it as remote.
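As a reminder of how the export side looks, the following sketch builds the service properties that mark a service as remote. The config type "org.acme.ws" and the AuctionService name are placeholders; the actual values depend on the distribution provider and application in use:

```java
import java.util.Dictionary;
import java.util.Hashtable;

public class RemoteExportProps {

    /**
     * Builds the service properties that mark a service as remote.
     * "org.acme.ws" is a placeholder config type for some hypothetical
     * distribution provider.
     */
    public static Dictionary<String, Object> remoteProperties() {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("service.exported.interfaces", "*"); // export all interfaces
        props.put("service.exported.configs", "org.acme.ws");
        return props;
    }

    // Inside a bundle activator, the service would then be registered with:
    // context.registerService(AuctionService.class.getName(), impl, props);
}
```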

Distribution providers are responsible for transparently discovering and handling the remote services for the consuming bundles, and they support different transport protocols, as dictated by the service.exported.configs property. A remote service may be exported through different endpoints. A consuming bundle may optionally select a particular endpoint by filtering on the service property service.imported.configs.

It’s not uncommon for the consumer and the provider of the remote services to negotiate policies, such as whether encryption is required or what quality of service the transport must offer. For this purpose, the provider can specify its requirements in the service.exported.intents property. The distribution providers must be able to fulfill these requirements, which are surfaced as capabilities on the consuming side in the service property service.intents.

Cloud computing is all about being able to efficiently meet the varying demand of customers by dynamically provisioning resources from a resource pool. The OSGi platform is an ideal Platform as a Service, because it has native support for the dynamic behavior of applications, remote services, and a generic requirements/capability framework for dependency resolution.

In the next chapter, we’ll look into the crucial issue of managing this complex environment: the OSGi framework and its applications.