Book Review: “Java EE 7 Developer Handbook” by PACKT / Pilgrim
https://www.kai-waehner.de/blog/2013/11/26/book-review-java-ee-7-developer-handbook-by-packt-pilgrim/ (26 Nov 2013)

Java EE 7 Developer Handbook is a book for experienced Java developers, published by PACKT. The author is Peter A. Pilgrim.

Content
The book introduces many important Java EE 7 specifications: CDI, EJB, JPA, Servlets, JMS, Bean Validation, JAX-RS, and other topics such as WebSockets, HTML5 support and the Java Transaction API. Each chapter contains an introduction, source code examples and explanations of the most important features and configurations. The source code examples can be downloaded, too.

Cool side note
The book introduces and uses Gradle as its build system and Arquillian for writing integration tests.

Summary
Even though the book starts with a short introduction to Java EE in general, it is not suited for beginners. If you have no experience with Java EE yet, the information in this book will be too much for you. Get another book which offers step-by-step introductory examples for getting started with Java EE.
The book is perfect for getting an overview of the many new Java EE 7 features. If you already have experience with Java EE, then this book is for you! The book does not go into every detail, of course. Java EE is too extensive for one book; a single book could be written about each specification. So, this book is a very good introduction to Java EE 7 and can also be used as a reference book. If you need more details, you have to buy additional books on specific topics such as EJB or JSF.
One disappointing aspect is that some new Java EE 7 features are covered with no more than one or two sentences. IMO, this is fine for minor updates (e.g. JSF or JCA). However, important new specifications (especially the Batch API) are missing, too.

Have fun with Java EE 7…

Best regards,
Kai Wähner (@KaiWaehner)

Progress Report from the Java EE Conference “Confess 2012” in Leogang, Salzburg (Austria)
https://www.kai-waehner.de/blog/2012/05/08/progress-report-from-the-java-ee-conference-confess-2012-in-leogang-salzburg-austria/ (08 May 2012)

This week, I was at Confess 2012 in Leogang, Salzburg (Austria). Confess is an international conference for Java professionals in its fifth year, organized by IRIAN and the EJUG Austria. It is reasonably priced at 275 € for the two-day conference and 500 € for the workshop day. The speaker lineup is very good, with many well-known international speakers, such as JSF spec lead Edward Burns from Oracle America, Hazem Saleh from IBM Egypt, or Jürgen Höller from SpringSource.

Sessions

There were six main topics for this year’s conference:

  • Concurrent Programming
  • Mobile Development
  • Cloud Computing
  • HTML5 and Modern Web Architectures
  • Latest and Greatest in Java EE
  • NoSQL

The sessions were split into introductory and advanced sessions. There were introductions to several important technology trends, such as Java EE 7, NoSQL databases, the build management system Gradle, or cross-platform mobile development. Besides, several advanced sessions went into more detail, e.g. on the DVCS Git, Java EE 6’s CDI, or web frameworks (JSF, Wicket, Tapestry). Surprisingly, Scala and other modern JVM languages besides Groovy were missing completely.

Below are some more details about two of the sessions:

Tiggzi

Tiggzi – which I had never heard of before – is an interesting cloud-based rich internet application for building mobile HTML5, iPhone, and Android apps. Internally, it uses jQuery and PhoneGap. Tiggzi is definitely worth a look. Try it out for yourself and see whether its promise to be “the fastest & easiest way to create mobile apps” holds true.

Better Presentations of Software Developers and Architects

An awesome off-topic presentation by Michael Plöd from Senacor Technologies has to be mentioned, too: “Better presentations of software developers and architects” explains how anyone can give good presentations. If you sometimes have to give internal or external presentations, take a look at these (German) slides!

My Session: Systems Integration in the Cloud Era

My talk was about Systems Integration in the Cloud Era with the lightweight open source integration framework Apache Camel.

Abstract

“Cloud Computing is the future! Nevertheless, everybody should be aware that there won’t be one single cloud solution, but several clouds. These clouds will be hosted at different providers, use different deployment models (IaaS, PaaS, SaaS) and use products, technologies and APIs from different vendors. Thus, in the future you will have to integrate these clouds as you integrate applications today.

The open source integration framework Apache Camel is already prepared for this challenging task. Apache Camel implements the well-known Enterprise Integration Patterns (EIP) and therefore offers a standardized, domain-specific language to integrate applications and clouds. Efficiency is increased even further through the use of modern JVM languages such as Groovy or Scala. It can be used in almost every integration project within the JVM environment. All integration projects can be realized in a consistent way without redundant boilerplate code. Even automatic testing is supported.

This session demonstrates the elegance of Apache Camel for cloud integration. Several examples are shown for all deployment models by integrating cloud services from Amazon Web Services (IaaS), Google App Engine (PaaS) and salesforce.com (SaaS). If the required cloud service is not supported by Apache Camel, you can easily create your own Camel component with very low effort. This procedure is explained at the end of the session.”
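To give a concrete flavor of such a cloud integration, here is a minimal sketch of my own (not part of the talk itself): it uploads local order files to an Amazon S3 bucket via the camel-aws component. The bucket name and the credential values are placeholders, not real values.

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class CloudIntegrationRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // pick up order files locally and upload them to an S3 bucket (IaaS example)
        from("file:target/orders")
            // use the original file name as the S3 object key
            .setHeader("CamelAwsS3Key", header(Exchange.FILE_NAME))
            .to("aws-s3://my-order-bucket?accessKey=myAccessKey&secretKey=mySecretKey");
    }
}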

Slides

Here are the slides from the talk:

Location

The conference was held in Leogang, Salzburg (Austria) at the Krallerhof – a really nice location! Presentation rooms, food, hotel rooms, etc. were awesome. Nevertheless, the location had a huge problem: it was in the middle of nowhere. No airport, no good train connection, no motorway. So, reaching the hotel was not really comfortable. Due to this, the number of attendees was far smaller than in previous years. Still, the location itself was great and everybody enjoyed it. Nevertheless, the conference will probably head back to Vienna next year. I heard some rumors that a cinema is being considered as the location for 2013 – which I think is the best kind of venue for conferences (see Jazoon, Devoxx, etc.).

Conclusion

Confess 2012 was a great conference with many high-quality sessions. Due to the low number of attendees and the family-like atmosphere, there were great discussions throughout the day. Confess is definitely a nice alternative to larger, more expensive conferences in the DACH countries, such as JAX or Jazoon.

 

Best regards,

Kai Wähner (Twitter: @KaiWaehner)

Apache Camel Tutorial – Introduction to EIP, Routes, Components, Testing, and other Concepts
https://www.kai-waehner.de/blog/2012/05/04/apache-camel-tutorial-introduction/ (04 May 2012)

Data exchange between companies is increasing a lot. The number of applications which must be integrated increases, too. The interfaces use different technologies, protocols and data formats. Nevertheless, the integration of these applications should be modeled in a standardized way, realized efficiently and supported by automatic tests. Such a standard exists with the Enterprise Integration Patterns (EIP) [1], which have become the industry standard for describing, documenting and implementing integration problems. Apache Camel [2] implements the EIPs and offers a standardized, internal domain-specific language (DSL) [3] to integrate applications. This article gives an introduction to Apache Camel, including several code examples.

 

Enterprise Integration Patterns

EIPs can be used to split integration problems into smaller pieces and model them using standardized graphics. Everybody can understand these models easily. Besides, there is no need to reinvent the wheel every time for each integration problem.

Using EIPs, Apache Camel closes a gap between modeling and implementation. There is almost a one-to-one relation between EIP models and the DSL of Apache Camel. This article explains the relation of EIPs and Apache Camel using an online shop example.

 

Use Case: Handling Orders in an Online Shop

The main concepts of Apache Camel are introduced by implementing a small use case. Starting your own project should be really easy after reading this article. The easiest way to get started is using a Maven archetype [4]. This way, you can rebuild the following example within minutes. Of course, you can also download the whole example at once [5].

Figure 1 shows the example from the EIP perspective. The task is to process orders of an online shop. Orders arrive in CSV format. First, the orders have to be transformed to the internal format. The order items of each order must be split because the shop only sells DVDs and CDs. Other order items are forwarded to a partner.

Figure 1: EIP Perspective of the Integration Problem

 

This example shows the advantages of EIPs: The integration problem is split into several small, recurring subproblems. These subproblems are easy to understand and are solved the same way each time. After describing the use case, we will now look at the basic concepts of Apache Camel.

 

Basic Concepts

Apache Camel runs on the Java Virtual Machine (JVM). Most components are realized in Java. However, this is not a requirement for new components. For instance, the camel-scala component is written in Scala. The Spring framework is used in some parts, e.g. for transaction support. However, Spring dependencies were reduced to a minimum in release 2.9 [6]. The core of Apache Camel is very small and just contains commonly used components (i.e. connectors to several technologies and APIs) such as Log, File, Mock or Timer.

Further components can be added easily due to the modular structure of Apache Camel. Maven is recommended for dependency management, because most technologies require additional libraries. However, libraries can also be downloaded manually and added to the classpath, of course.

The core functionality of Apache Camel is its routing engine. It dispatches messages based on the related routes. A route contains flow and integration logic. It is implemented using EIPs and a specific DSL. Each message contains a body, several headers and optional attachments. The messages are sent from a provider to a consumer. In between, the messages may be processed, e.g. filtered or transformed. Figure 1 shows how the messages can change within a route.

Messages between a provider and a consumer are managed by a message exchange container, which contains a unique message id, exception information, the incoming and outgoing messages (i.e. request and response), and the used message exchange pattern (MEP). The “In Only” MEP is used for one-way messages such as JMS, whereas the “In Out” MEP executes request-response communication, such as a client-side HTTP request and its response from the server side.
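As a minimal illustration (my own sketch, not from the original article), the exchange container can be inspected inside a Processor; the reply text below is just an example value:

import org.apache.camel.Exchange;
import org.apache.camel.ExchangePattern;
import org.apache.camel.Processor;

public class ExchangeInspector implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        // every exchange carries a unique id and the message exchange pattern (MEP)
        String id = exchange.getExchangeId();
        ExchangePattern mep = exchange.getPattern();

        String requestBody = exchange.getIn().getBody(String.class);

        if (mep == ExchangePattern.InOut) {
            // request-response: set a reply on the outgoing message
            exchange.getOut().setBody("processed: " + requestBody);
        }
        // for InOnly (one-way) exchanges no out message is needed
    }
}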

Having briefly explained the basic concepts of Apache Camel, the following sections give more details and code examples. Let’s begin with the architecture of Apache Camel.

 

Architecture

Figure 2 shows the architecture of Apache Camel. A CamelContext provides the runtime system. Inside it, processors handle the work between endpoints, such as routing or transformation. Endpoints connect the technologies to be integrated. Apache Camel offers different DSLs to realize the integration problems.

Figure 2: Architecture of Apache Camel

 

CamelContext

The CamelContext is the runtime system of Apache Camel and connects its different concepts such as routes, components or endpoints. The following code snippet shows a Java main method, which starts the CamelContext and stops it after 30 seconds. Usually, the CamelContext is started when loading the application and stopped at shutdown.

 

public class CamelStarter {

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new IntegrationRoute());
        context.start();
        Thread.sleep(30000);
        context.stop();
    }
}

 

The runtime system can be included anywhere in the JVM environment, including a web container (e.g. Tomcat), a JEE application server (e.g. IBM WebSphere AS), an OSGi container, or even the cloud.

 

Domain Specific Languages

DSLs facilitate the realization of complex projects by using a higher abstraction level. Apache Camel offers several different DSLs. Java, Groovy and Scala use object-oriented concepts and offer a specific method for most EIPs. On the other hand, the Spring XML DSL is based on the Spring framework and uses XML configuration. Besides, OSGi Blueprint XML is available for OSGi integration.

The Java DSL has the best IDE support. The Groovy and Scala DSLs are similar to the Java DSL; in addition, they offer typical features of modern JVM languages such as concise code or closures. Contrary to these programming languages, the Spring XML DSL requires a lot of XML. On the other hand, it offers a very powerful Spring-based dependency injection mechanism and nice abstractions to simplify configuration (such as JDBC or JMS connections). The choice is purely a matter of taste in most use cases. Even a combination is possible. Many developers use Spring XML for configuration whilst routes are realized in Java, Groovy or Scala.

Routes

Routes are a crucial part of Apache Camel. The flow and logic of an integration is specified here. The following example shows a route using Java DSL:

public class IntegrationRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("file:target/inbox")
            .process(new LoggingProcessor())
            .bean(new TransformationBean(), "makeUpperCase")
            .to("file:target/outbox/dvd");
    }
}

 

The DSL is easy to use. Everybody should be able to understand the above example without even knowing Apache Camel. The route realizes a part of the described use case. Orders are put in a file directory from an external source. The orders are processed and finally moved to the target directory.

Routes have to extend the "RouteBuilder" class and override the "configure" method. The route itself begins with a "from" endpoint and finishes at one or more "to" endpoints. In between, all necessary processing logic is implemented. Any number of routes can be implemented within one "configure" method.
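As a small sketch of my own illustrating that last point (the endpoint names are arbitrary examples), two routes can live in the same configure method and be chained via the in-memory direct: component:

import org.apache.camel.builder.RouteBuilder;

public class TwoRoutesExample extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // first route: pick up incoming orders and hand them over in-memory
        from("file:target/inbox")
            .process(new LoggingProcessor())
            .to("direct:transform");

        // second route: transform the order and write it to the outbox
        from("direct:transform")
            .bean(new TransformationBean(), "makeUpperCase")
            .to("file:target/outbox/dvd");
    }
}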

 

The following snippet shows the same route realized via Spring XML DSL:

 

<beans ... >

    <bean class="mwea.TransformationBean" id="transformationBean"/>
    <bean class="mwea.LoggingProcessor" id="loggingProcessor"/>

    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <package>mwea</package>

        <route>
            <from uri="file:target/inbox"/>
            <process ref="loggingProcessor"/>
            <bean ref="transformationBean"/>
            <to uri="file:target/outbox"/>
        </route>

    </camelContext>

</beans>

 

Besides routes, another important concept of Apache Camel is its components. They offer integration points for almost every technology.

 

Components

In the meantime, over 100 components are available. Besides widespread technologies such as HTTP, FTP, JMS or JDBC, many more technologies are supported, including cloud services from Amazon, Google, GoGrid, and others. New components are added in each release. Often, the community also builds new custom components because it is very easy to do.

The most amazing feature of Apache Camel is its uniformity. All components use the same syntax and concepts. Every integration and even its automatic unit tests look the same. Thus, complexity is reduced a lot. Consider changing the above example: If orders should be sent to a JMS queue instead of a file directory, just change the "to" endpoint from "file:target/outbox" to "jms:queue:orders". That’s it! (JMS must be configured once within the application beforehand, of course.)

While components offer the interface to technologies, Processors and Beans can be used to add custom integration logic to a route.

 

Processors and Beans

Besides using EIPs, you often have to add individual integration logic. This is very easy and again always uses the same concepts: Processors or Beans. Both were used in the route example above.

Processor is a simple Java interface with one single method: "process". Inside this method, you can do whatever you need to solve your integration problem, e.g. transform the incoming message, call other services, and so on.

 

public class LoggingProcessor implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        System.out.println("Received Order: " +
                exchange.getIn().getBody(String.class));
    }
}

 

The "exchange" parameter contains the Message Exchange with the incoming message, the outgoing message, and other information. By implementing the Processor interface, you get a dependency on the Camel API. This might be a problem sometimes. Maybe you already have existing integration code which cannot be changed (i.e. you cannot implement the Processor interface)? In this case, you can use Beans, also called POJOs (Plain Old Java Objects). You get the incoming message (which is the parameter of the method) and return an outgoing message, as shown in the following snippet:

 

public class TransformationBean {

    public String makeUpperCase(String body) {
        String transformedBody = body.toUpperCase();
        return transformedBody;
    }
}

 

The above bean receives a String, transforms it, and finally sends it to the next endpoint. Look at the route above again: the incoming message is a File. You may wonder why this works. Apache Camel offers another powerful feature: more than 150 automatic type converters are included out of the box, e.g. FileToString, CollectionToObject[] or URLtoInputStream. By the way, further type converters can be created and added to the CamelContext easily [7].
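As a sketch of my own (the Order class is purely hypothetical), a custom converter is just an annotated class; Camel picks it up if a META-INF/services/org/apache/camel/TypeConverter file lists the package of the converter:

import org.apache.camel.Converter;

@Converter
public class OrderConverter {

    // called automatically whenever a route needs an Order but receives a String
    @Converter
    public static Order toOrder(String csvLine) {
        String[] fields = csvLine.split(",");
        return new Order(fields[0].trim(), fields[1].trim());
    }
}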

If a Bean only contains one single method, its name can even be omitted in the route. The above call could therefore also be .bean(new TransformationBean()) instead of .bean(new TransformationBean(), "makeUpperCase").

 

Adding some more Enterprise Integration Patterns

The above route transforms incoming orders using the Translator EIP before processing them. Besides this transformation, some more work is required to realize the whole use case. Therefore, some more EIPs are used in the following example:

 

public class IntegrationRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("file:target/inbox")
            .process(new LoggingProcessor())
            .bean(new TransformationBean())
            .unmarshal().csv()
            .split(body().tokenize(","))
            .choice()
                .when(body().contains("DVD"))
                    .to("file:target/outbox/dvd")
                .when(body().contains("CD"))
                    .to("activemq:CD_Orders")
                .otherwise()
                    .to("mock:others");
    }
}

 

Each CSV file represents one single order containing one or more order items. The camel-csv component is used to convert the CSV message. Afterwards, the Splitter EIP separates each order item of the message body. In this case, the default separator (a comma) is used. However, complex regular expressions or expression languages such as XPath, XQuery or SQL can also be used for splitting.

Each order item has to be sent to a specific processing unit (remember: there are DVD orders, CD orders, and other orders which are sent to a partner). The Content-Based Router EIP solves this problem without any individual coding effort. DVD orders are processed via a file directory whilst CD orders are sent to a JMS queue.

ActiveMQ is used as the JMS implementation in this example. To add ActiveMQ support to a Camel application, you only have to add the related Maven dependency for the camel-activemq component or add the JARs to the classpath manually. That’s it. Some other components need a little more one-time configuration. For instance, if you want to use WebSphere MQ or another JMS implementation instead of ActiveMQ, you have to configure the JMS provider.
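For illustration, here is a minimal sketch of my own (assuming the ActiveMQ Camel component JAR is on the classpath; the broker URL is an example value) showing how the ActiveMQ component can be registered on the CamelContext so that the activemq: endpoint in the route above finds the broker:

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class CamelStarterWithActiveMQ {

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // register the ActiveMQ component under the name used in the route ("activemq:...")
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
        context.addRoutes(new IntegrationRoute());
        context.start();
        Thread.sleep(30000);
        context.stop();
    }
}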

All other order items besides DVDs and CDs are sent to a partner. Unfortunately, this interface is not available yet. The Mock component is used instead to simulate this interface for the time being.

The above example shows impressively how different interfaces (in this case File, JMS, and Mock) can be used within one route. You always apply the same syntax and concepts despite very different technologies.

 

Automatic Unit and Integration Tests

Automatic tests are crucial. Nevertheless, they are usually neglected in integration projects. The reason is too much effort and very high complexity due to the many different technologies involved.

Apache Camel solves this problem: It offers test support via JUnit extensions. The test class must extend CamelTestSupport to use Camel’s powerful testing capabilities. Besides additional assertions, mocks are supported implicitly. No other mock framework such as EasyMock or Mockito is required. You can even simulate sending messages to a route or receiving messages from it via a producer or consumer template, respectively. All routes can be tested automatically using this test kit. It is worth mentioning that the syntax and concepts are, again, the same for every technology.

The following code snippet shows a unit test for our example route:

 

public class IntegrationTest extends CamelTestSupport {

    @Before
    public void setup() throws Exception {
        super.setUp();
        context.addRoutes(new IntegrationRoute());
    }

    @Test
    public void testIntegrationRoute() throws Exception {

        // Body of test message containing several order items
        String bodyOfMessage = "Harry Potter / dvd, Metallica / cd, Claus Ibsen – Camel in Action / book ";

        // Initialize the mock and set expected results
        MockEndpoint mock = context.getEndpoint("mock:others", MockEndpoint.class);
        mock.expectedMessageCount(1);
        mock.setResultWaitTime(1000);

        // Only the book order item is sent to the mock
        // (because it is not a cd or dvd)
        String bookBody = "Claus Ibsen – Camel in Action / book".toUpperCase();
        mock.expectedBodiesReceived(bookBody);

        // ProducerTemplate sends a message (i.e. a File) to the inbox directory
        template.sendBodyAndHeader("file://target/inbox", bodyOfMessage, Exchange.FILE_NAME, "order.csv");

        Thread.sleep(3000);

        // Was the file moved to the outbox directory?
        File target = new File("target/outbox/dvd/order.csv");
        assertTrue("File not moved!", target.exists());

        // Was the file transformed correctly (i.e. to uppercase)?
        String content = context.getTypeConverter().convertTo(String.class, target);
        String dvdbody = "Harry Potter / dvd".toUpperCase();
        assertEquals(dvdbody, content);

        // Was the book order (i.e. "Camel in Action", which is not a cd or dvd) sent to the mock?
        mock.assertIsSatisfied();
    }
}

 

The setup method creates an instance of CamelContext (and does some additional work). Afterwards, the route is added so that it can be tested. The test itself creates a mock and sets its expectations. Then, the producer template sends a message to the "from" endpoint of the route. Finally, some assertions validate the results. The test can be run the same way as any other JUnit test: directly within the IDE or inside a build script. Even agile Test-Driven Development (TDD) is possible: first, the Camel test is written, then the corresponding route is implemented.

If you want to learn more about Apache Camel, the first address should be the book "Camel in Action" [8], which describes all the basics and many advanced features in detail, including working code examples for each chapter. After whetting your appetite, let’s now discuss when to use Apache Camel…

 

Alternatives for Systems Integration

Figure 3 shows three alternatives for integrating applications:

 

  • Own custom solution: Implement an individual solution that works for your problem without separating the problem into little pieces. This works and is probably the fastest alternative for small use cases. You have to code everything by yourself.

 

  • Integration Framework: Use a framework which helps to integrate applications in a standardized way using several integration patterns. It reduces effort a lot. Every developer will easily understand what you did. You do not have to reinvent the wheel each time.

 

  • Enterprise Service Bus (ESB): Use an ESB to integrate your applications. Under the hood, the ESB often also uses an integration framework. But there is much more functionality, such as business process management, a registry or business activity monitoring. You can usually configure routing and similar things within a graphical user interface (you have to decide for yourself whether that reduces complexity and effort). Usually, an ESB is a complex product. The learning curve is much higher than with a lightweight integration framework. In return, you get a very powerful tool which should fulfill all your requirements in large integration projects.

Figure 3: Alternatives for Systems Integration

 

If you decide to use an integration framework, you still have three good alternatives in the JVM environment: Spring Integration [9], Mule [10], and Apache Camel. They are all lightweight, easy to use and implement the EIPs. Therefore, they offer a standardized way to integrate applications and can be used even in very complex integration projects. A more detailed comparison of these three integration frameworks can be found at [11].

My personal favorite is Apache Camel due to its awesome Java, Groovy and Scala DSLs, combined with many supported technologies. Spring Integration and Mule only offer XML configuration. I would only use Mule if I need some of its awesome unique connectors to proprietary products (such as SAP, TIBCO Rendezvous, Oracle Siebel CRM, PayPal or IBM’s CICS Transaction Gateway). I would only use Spring Integration in an existing Spring project and if I only need to integrate widespread technologies such as FTP, HTTP or JMS. In all other cases, I would use Apache Camel.

Nevertheless: No matter which of these lightweight integration frameworks you choose, you will have much fun realizing complex integration projects easily and with low effort. Remember: Often, a fat ESB has too much functionality, and therefore too much unnecessary complexity and effort. Use the right tool for the right job!

 

Apache Camel is ready for Enterprise Integration Projects

Apache Camel already celebrated its fourth birthday in July 2011 [12] and represents a very mature and stable open source project. It supports all the requirements to be used in enterprise projects, such as error handling, transactions, scalability, and monitoring. Commercial support is also available.
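As a small illustration of the error handling support (a sketch of my own, not from the article; the endpoint URIs and redelivery settings are example values), a dead letter channel with redelivery can be configured directly in a route builder:

import org.apache.camel.builder.RouteBuilder;

public class ErrorHandlingRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // retry failed exchanges a few times, then move them to a dead letter queue
        errorHandler(deadLetterChannel("jms:queue:deadOrders")
                .maximumRedeliveries(3)
                .redeliveryDelay(1000));

        from("file:target/inbox")
            .to("jms:queue:orders");
    }
}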

The most important gains are its available DSLs, the many components for almost every thinkable technology, and the fact that the same syntax and concepts can always be used – even for automatic tests – no matter which technologies have to be integrated. Therefore, Apache Camel should always be evaluated as a lightweight alternative to heavyweight ESBs. Get started by downloading the example from this article. If you need any help or further information, there is a great community and a well-written book available.

 

About the author

Kai Wähner works as an IT-Consultant at MaibornWolff et al in Munich, Germany. His main area of expertise lies within the fields of Java EE, SOA and Cloud Computing.

He is a speaker at international IT conferences such as Jazoon or Confess, writes articles for professional journals, and shares his experiences with new technologies on his blog (www.kai-waehner.de/blog). Contact: kontakt@kai-waehner.de or Twitter: @KaiWaehner.

 

Sources:

Spoilt for Choice: Which Integration Framework to use – Spring Integration, Mule ESB or Apache Camel?
https://www.kai-waehner.de/blog/2012/01/10/spoilt-for-choice-which-integration-framework-to-use-spring-integration-mule-esb-or-apache-camel/ (10 Jan 2012)

Data exchange between companies is increasing a lot. The number of applications which must be integrated increases, too. The interfaces use different technologies, protocols and data formats. Nevertheless, the integration of these applications should be modeled in a standardized way, realized efficiently and supported by automatic tests.

Three integration frameworks which fulfil these requirements are available in the JVM environment: Spring Integration, Mule ESB and Apache Camel. They implement the well-known Enterprise Integration Patterns (EIP, http://www.eaipatterns.com) and therefore offer a standardized, domain-specific language to integrate applications.

These integration frameworks can be used in almost every integration project within the JVM environment – no matter which technologies, transport protocols or data formats are used. All integration projects can be realized in a consistent way without redundant boilerplate code.

This article compares all three alternatives and discusses their pros and cons. If you want to know when to use a more powerful Enterprise Service Bus (ESB) instead of one of these lightweight integration frameworks, then you should read this blog post: http://www.kai-waehner.de/blog/2011/06/02/when-to-use-apache-camel/ (it explains when to use Apache Camel, but the title could also be "When to use a lightweight integration framework").

Comparison Criteria

Several criteria can be used to compare these three integration frameworks:

  • Open source
  • Basic concepts / architecture
  • Testability
  • Deployment
  • Popularity
  • Commercial support
  • IDE-Support
  • Errorhandling
  • Monitoring
  • Enterprise readiness
  • Domain specific language (DSL)
  • Number of components for interfaces, technologies and protocols
  • Expandability

Similarities

All three frameworks have many similarities. Therefore, many of the above comparison criteria come out even! All implement the EIPs and offer a consistent model and messaging architecture to integrate several technologies. No matter which technologies you have to use, you always do it the same way, i.e. same syntax, same API, same automatic tests. The only difference is the configuration of each endpoint (e.g. JMS needs a queue name while JDBC needs a database connection URL). IMO, this is the most significant feature. Each framework uses different names, but the idea is the same. For instance, "Camel routes" are equivalent to "Mule flows", and "Camel components" are called "adapters" in Spring Integration.

Besides, there are several other similarities that set these frameworks apart from heavyweight ESBs. You just have to add some libraries to your classpath. Therefore, you can use each framework everywhere in the JVM environment, no matter if your project is a Java SE standalone application or if you want to deploy it to a web container (e.g. Tomcat), a JEE application server (e.g. GlassFish), an OSGi container, or even to the cloud. Just add the libraries, do some simple configuration, and you are done. Then you can start implementing your integration logic (routing, transformation, and so on).

All three frameworks are open source and offer familiar, public features such as source code, forums, mailing lists, issue tracking and voting for new features. Good communities write documentation, blogs and tutorials (IMO Apache Camel has the most noticeable community). Only the number of released books could be better for all three. Commercial support is available via different vendors.

IDE support is very good; visual designers are even available for all three alternatives to model integration problems (and let them generate the code). Each of the frameworks is enterprise ready, because all offer the required features such as error handling, automatic testing, transactions, multithreading, scalability and monitoring.

Differences

If you know one of these frameworks, you can learn the others very easily due to their shared concepts and many other similarities. Next, let’s discuss their differences in order to decide when to use which one. The two most important differences are the number of supported technologies and the DSL(s) used. Thus, I will concentrate especially on these two criteria in the following. I will use code snippets implementing the well-known "Content-Based Router" EIP in all examples. Judge for yourself which one you prefer.

Spring Integration

Spring Integration is based on the well-known Spring project and extends the programming model with integration support. You can use Spring features such as dependency injection, transactions or security as you do in other Spring projects.

Spring Integration is awesome if you already have a Spring project and need to add some integration functionality. It takes almost no effort to learn Spring Integration if you know Spring itself. Nevertheless, Spring Integration only offers very rudimentary support for technologies – just "basic stuff" such as File, FTP, JMS, TCP, HTTP or Web Services. Mule and Apache Camel offer many, many further components!

Integrations are implemented by writing a lot of XML code (without a real DSL), as you can see in the following code snippet:

 

<file:inbound-channel-adapter
        id="incomingOrders"
        directory="file:incomingOrders"/>

<payload-type-router input-channel="incomingOrders">
    <mapping type="com.kw.DvdOrder" channel="dvdOrders" />
    <mapping type="com.kw.VideogameOrder" channel="videogameOrders" />
    <mapping type="com.kw.OtherOrder" channel="otherOrders" />
</payload-type-router>

<file:outbound-channel-adapter
        id="dvdOrders"
        directory="dvdOrders"/>

<jms:outbound-channel-adapter
        id="videogamesOrders"
        destination="videogameOrdersQueue"
        channel="videogamesOrders"/>

<logging-channel-adapter id="otherOrders" level="INFO"/>

 

You can also use Java code and annotations for some things, but in the end, you need a lot of XML. Honestly, I do not like that much XML declaration. It is fine for configuration (such as JMS connection factories), but not for complex integration logic. At least it should be a DSL with better readability, but more complex Spring Integration examples are really tough to read.

Besides, the visual designer for Eclipse (called integration graph) is okay, but not as good and intuitive as its competitors. Therefore, I would only use Spring Integration if I already have an existing Spring project and just need to add some integration logic requiring only "basic technologies" such as File, FTP, JMS or JDBC.

Mule ESB

Mule ESB is – as the name suggests – a full ESB including several additional features instead of just an integration framework (you can compare it to Apache ServiceMix, which is an ESB based on Apache Camel). Nevertheless, Mule can be used as a lightweight integration framework, too – by simply not adding or using any additional features besides the EIP integration functionality. Like Spring Integration, Mule only offers an XML DSL. At least it is much easier to read than Spring Integration, in my opinion. Mule Studio offers a very good and intuitive visual designer. Compare the following code snippet to the Spring Integration code from above. It is more like a DSL than Spring Integration. This matters if the integration logic is more complex.

 

<flow name="muleFlow">
    <file:inbound-endpoint path="incomingOrders"/>
    <choice>
        <when expression="payload instanceof com.kw.DvdOrder" evaluator="groovy">
            <file:outbound-endpoint path="incoming/dvdOrders"/>
        </when>
        <when expression="payload instanceof com.kw.VideogameOrder" evaluator="groovy">
            <jms:outbound-endpoint queue="videogameOrdersQueue"/>
        </when>
        <otherwise>
            <logger level="INFO"/>
        </otherwise>
    </choice>
</flow>

 

The major advantage of Mule is its very interesting connectors to important proprietary interfaces such as SAP, TIBCO Rendezvous, Oracle Siebel CRM, PayPal or IBM’s CICS Transaction Gateway. If your integration project requires some of these connectors, then I would probably choose Mule!

A disadvantage for some projects might be that Mule says no to OSGi: http://blogs.mulesoft.org/osgi-no-thanks/

Apache Camel

Apache Camel is almost identical to Mule. It offers many, many components (even more than Mule) for almost every technology you could think of. If there is no component available, you can create your own component very easily, starting with a Maven archetype! If you are a Spring guy: Camel has awesome Spring integration, too. Like the other two, it offers an XML DSL:

 

<route>
    <from uri="file:incomingOrders"/>
    <choice>
        <when>
            <simple>${in.header.type} is 'com.kw.DvdOrder'</simple>
            <to uri="file:incoming/dvdOrders"/>
        </when>
        <when>
            <simple>${in.header.type} is 'com.kw.VideogameOrder'</simple>
            <to uri="jms:videogameOrdersQueue"/>
        </when>
        <otherwise>
            <to uri="log:OtherOrders"/>
        </otherwise>
    </choice>
</route>

 

Readability is better than Spring Integration and almost identical to Mule. Besides, a very good (but commercial) visual designer called Fuse IDE is available from FuseSource – it generates XML DSL code. Nevertheless, it is a lot of XML, no matter if you use a visual designer or just your XML editor. Personally, I do not like this.

Therefore, let me show you another awesome feature: Apache Camel also offers DSLs for Java, Groovy and Scala. You do not have to write so much ugly XML. Personally, I prefer using one of these fluent DSLs instead of XML for integration logic. I only do configuration such as JMS connection factories or JDBC properties in XML. Here you can see the same example using a Java DSL code snippet:

 

from("file:incomingOrders")
    .choice()
        .when(body().isInstanceOf(com.kw.DvdOrder.class))
            .to("file:incoming/dvdOrders")
        .when(body().isInstanceOf(com.kw.VideogameOrder.class))
            .to("jms:videogameOrdersQueue")
        .otherwise()
            .to("mock:OtherOrders");

 

The fluent programming DSLs are very easy to read (even in more complex examples). Besides, these programming DSLs have better IDE support than XML (code completion, refactoring, etc.). Due to these awesome fluent DSLs, I would always use Apache Camel if I do not need some of Mule’s excellent connectors to proprietary products. Due to its very good integration with Spring, I would even prefer Apache Camel to Spring Integration in most use cases.

By the way: Talend offers a visual designer generating Java DSL code, but it generates a lot of boilerplate code and does not allow round-trip editing (i.e. you cannot edit the generated code). This is a no-go criterion and has to be fixed soon (hopefully)!

And the winner is…

… all three integration frameworks, because they are all lightweight and easy to use – even for complex integration projects. It is awesome to integrate several different technologies by always using the same syntax and concepts – including very good testing support.

My personal favorite is Apache Camel due to its awesome Java, Groovy and Scala DSLs, combined with many supported technologies. I would only use Mule if I need some of its unique connectors to proprietary products. I would only use Spring Integration in an existing Spring project and if I only need to integrate "basic technologies" such as FTP or JMS. Nevertheless: No matter which of these lightweight integration frameworks you choose, you will have much fun realizing complex integration projects easily and with low effort. Remember: Often, a fat ESB has too much functionality, and therefore too much unnecessary complexity and effort. Use the right tool for the right job!

 

Best regards,

Kai Wähner (Twitter: @KaiWaehner)

Why I will use Java EE (JEE, and not J2EE) instead of Spring in new Enterprise Java Projects in 2012
https://www.kai-waehner.de/blog/2011/11/21/why-i-will-use-java-ee-jee-and-not-j2ee-instead-of-spring-in-new-enterprise-java-projects-in-2012/ (21 Nov 2011)

The question comes up often. It came up in my new project in November 2011, too. I will use Java EE (JEE) instead of the Spring framework in this new Enterprise Java project.

I know: Several articles, blogs and forum discussions are available regarding this topic. Why is there a need for one more? Because many blogs talk about older versions of Java EE or because they are not neutral (I hope to be neutral). And because many people still think that EJBs are heavy! And because times have changed: It is Java EE 6 time now; J2EE is dead. Finally! Finally, because not only is JEE 6 available, but also several application servers (not just GlassFish as the reference implementation). I do not want to start a flame war (too many exist already), I just want to describe my personal opinion of the JEE vs. Spring "fight"…

Therefore, I think it is very important to start with a short overview and history of both alternatives. Afterwards, I will list the differences between them and explain why these differences lead me to JEE instead of Spring for most new Java projects. I am explicitly talking about new applications. If you have to extend an existing application, continue using the existing framework!

One more disclaimer: I am talking about mission-critical Enterprise Java applications. I am not talking about a little internal application or other non-critical stuff – for those, I would also prefer using a combination of Scala, Groovy and Clojure, persisting to a NoSQL database while being deployed to a PaaS cloud service such as JBoss OpenShift or VMware CloudFoundry…

General Information about JEE and Spring

First, I want to summarize some general information about JEE and Spring:

  • In the end, both alternatives consist of several libraries which can be used by developers to create enterprise applications.
  • Both can be used in most use cases; they have very similar functionality (business logic, transactions, web frameworks, whatever…) – they only differ in realization (e.g. declarative transactions in Spring vs. conventions in JEE).
  • You can also use only one or some of the available libraries. You can even combine JEE and Spring features.
  • Usually, the crucial question is: "Should I use JEE (i.e. especially EJB, JPA, CDI, etc.) or the Spring core framework (i.e. especially the Spring Application Context, Spring beans, etc.) for realizing my new application?" Mostly, you can choose either one; it does not matter from the point of view of the end user. But you should not merge both, as this only creates higher complexity.
  • There has always been a debate about which alternative to choose. It is very difficult to discuss this question in a neutral way. That’s why almost all discussions end up praising one framework and bashing the other (I hope to be neutral in this blog post).

History: J2EE was horrible, so Spring helped!

J2EE was horrible. So much XML configuration, so many interfaces, and such lame application servers. This is why the Spring framework was created. It solved many problems of J2EE. It was lightweight, easy to use, and applications could be deployed in a web container (such as Tomcat) instead of a heavy J2EE application server. Deployment took seconds instead of 15 minutes. Unfortunately, JRebel did not exist at that time. The Spring framework is not a standard like J2EE; nevertheless, it became very widespread and a large community arose.

Today: J2EE is dead. JEE "stole" the lightweight Spring ideas!

Everything started with a little change of the abbreviation. J2EE was dead. The new abbreviation was JEE. JEE 5 was born in 2006. It "stole" many good, lightweight ideas such as "convention over configuration" or "dependency injection" from Spring and other frameworks. Yes, JEE application servers were still heavy, and testing was almost impossible. Nevertheless, developing JEE applications was fun with JEE 5. You did not have to write 20 interfaces when creating an EJB. Wow, amazing!

Then, in 2009, JEE 6 was released. Development is so easy. Finally! For example, you only have to add one annotation and your EJB is ready! Of course, the developers of the Spring framework did not sleep. Much new functionality was added. Today, you can create a Spring application without a single XML file, as I read in a "No Fluff Just Stuff" article some weeks ago. Besides, several really cool frameworks were added to the Spring stack, e.g. Spring Integration, Spring Batch or Spring Roo.

Today (November 2011), both JEE and Spring are very widespread and have a large community. Much information is available for both, e.g. books, blogs, tutorials, etc.

So, now that I have described the evolution of JEE and Spring, why will I use JEE in most new Java projects?

Pros and Cons of JEE and Spring

A decision must be made. Which alternative should be used in new projects? Let’s look at the pros and cons of both. I will add a "BUT" to the Spring advantages – these "BUTs" are the reason why I prefer JEE to Spring.

Advantages of JEE

  • JEE is a set of standard specifications, and thus it is vendor-independent. Usually, several implementations of a specification exist.
  • Sustainability: Well, this is the advantage of a standard which is supported by several big players.
  • Yes, believe it or not, testing is possible! Lightweight application servers and frameworks such as Arquillian have arrived in the JEE world!
  • Convention over configuration is used everywhere instead of explicit configuration (I know that some people will disagree that this is an advantage).

Advantages of Spring

  • You do not need a heavy JEE application server, you can deploy your application in a web container such as Tomcat.

BUT: JEE application servers are not as heavy as they were some years ago. Besides, the JEE web profile can be used, too. You do not have to use a Tomcat or Jetty to be lightweight!

  • Spring offers features which are not available as JEE standards, such as Spring Batch.

BUT: You can add such a library to a JEE project without problems. You can also add other Spring libraries such as JdbcTemplate or JmsTemplate (which help reduce some boilerplate code) if you want.

  • Spring offers much more flexibility and power, e.g. aspect-oriented programming is more powerful than JEE interceptors.

BUT: In most projects you do not need this flexibility or power. If you do need it, then use Spring, not JEE – of course!

  • Faster releases (because it is not a standard and there is only one vendor). The reaction to market requirements is much faster. Some current examples: cloud, mobile, social computing.

BUT: All the enterprise projects – at many different clients – that I have seen are not that flexible. Enterprise applications do not change every month or year. If there is a project where you can change versions very easily, Spring might be better than JEE under some circumstances. But in most enterprise projects, you cannot simply upgrade from Spring 2.5 to Spring 3.x or from JEE 5 to JEE 6. I wish this were possible, but low flexibility and politics rule in large companies with thousands of employees.

Conclusion: I will use JEE in most new Enterprise Java Projects

Due to the reasons I explained against Spring in the "BUT" parts, I will choose JEE in most new Enterprise Java projects. Nevertheless, I will sometimes use some Spring libraries, too (such as Spring Batch). Sometimes, I will even have to use Spring (if I need its flexibility or power), but only then will I choose it. Of course, for existing projects, I will continue using the framework that is already in use. I would probably not migrate a Spring 2.5 application to JEE, but I would migrate it to Spring 3.x instead!

So, I have described my reasons why I will use JEE in most new Enterprise Java projects. If I have missed something, or if you have another opinion (probably many of you have), you can bash me in the comments. I appreciate all "non-flame-war" discussions…

 

Best regards,

Kai Wähner (Twitter: @KaiWaehner)

Pros and Cons – When to use a Portal and Portlets instead of just Java Web-Frameworks
https://www.kai-waehner.de/blog/2011/10/07/pros-and-cons-when-to-use-a-portal-and-portlets-instead-of-just-java-web-frameworks/ (07 Oct 2011)

I had to answer the following question: Shall we use a Portal, and if yes, should it be Liferay Portal or Oracle Portal? Or shall we just use one or more Java web frameworks? This article shows my results. I had to look especially at Liferay and Oracle products; nevertheless, the results can be applied to other products, too. The short answer: A Portal makes sense only in a few use cases; in the majority of cases you should not use one. In my case, we will not use one.

What is a Portal?

It is important to know that we are talking about an Enterprise Portal. Wikipedia has a good definition:

An enterprise portal […] is a framework for integrating information, people and processes across organizational boundaries. It provides a secure unified access point, often in the form of a web-based user interface, and is designed to aggregate and personalize information through application-specific portlets.

Several Portal servers are available in the Java / JVM environment. Liferay Portal and GateIn Portal (formerly JBoss Portal) are examples of open source products, while Oracle Portal and IBM WebSphere Portal are proprietary products.

You develop Portlets ("simple" web applications) and deploy them in your portal. If you need to know more about a Portal or the Portlet JSR standards, ask Wikipedia: http://en.wikipedia.org/wiki/Portlet.

Should we use a Portal or not?

I found several pros and cons for using a Portal instead of just web applications.

 

Disadvantages of using a Portal:

  • Higher complexity
    • Additional configuration (e.g. portlet.xml, Portal server)
    • Communication between Portlets using events is not trivial (it is also not trivial if two applications communicate without portlets, of course)
    • Several restrictions when developing a web application within a Portlet
  • Additional testing effort (test your web applications and also test them within a Portal with all its Portal features)
  • Additional costs
    • Open source usually offers enterprise editions which include support (e.g. Liferay)
    • Proprietary products have very high initial costs. Besides, you need support, too (e.g. Oracle)
  • You still have to customize the portal and integrate applications. A portal product does not give you corporate identity or systems integration for free. Software licensing often is only ten percent of the total price.
  • Developers need additional skills besides using a web framework
  • Several restrictions must be considered when choosing a web framework and implementing the web application
    • Rethinking web application design is necessary
    • Portlets use other concepts such as events or an action and render phase instead of only one phase (see the sketch after this list)
    • Frameworks (also called bridges) help to solve this problem (but these are standardized for JSF only; a few other plugins are available, e.g. for GWT or Grails)
    • Actually, IMO you have to use JSF if you want to realize Portlets in a stable, relatively „easy“ and future-proof way. There is no standard bridge for other frameworks. There are no books, best practices, articles or conference sessions about Portlets without JSF, right?
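To make the different programming model more concrete, here is a minimal sketch of a portlet based on the standard javax.portlet API. The class name, parameter name and markup are made up for illustration; the point is only the split into an action phase and a render phase:

import java.io.IOException;
import java.io.PrintWriter;

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

// Hedged sketch: a portlet has separate action and render phases instead of one request/response cycle
public class HelloPortlet extends GenericPortlet {

    @Override
    public void processAction(ActionRequest request, ActionResponse response)
            throws PortletException, IOException {
        // Action phase: handle the form submit and pass data to the render phase as a render parameter
        response.setRenderParameter("name", request.getParameter("name"));
    }

    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        // Render phase: produce only the markup fragment that the portal aggregates into the page
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("Hello " + request.getParameter("name"));
    }
}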

 

Advantages of using a Portal

Important: Many of the pros can be realized by oneself with relatively low effort (see the notes below each item).

  • Single Sign On (SSO)

Several Java frameworks are available, e.g. OpenSSO (powerful, but complicated) or JOSSO (not so powerful, but easy to use).

Good products are available, e.g. Atlassian Crowd (I love Atlassian products such as Crowd, Jira or Confluence, because they are very intuitive and easy to use).

  • Integration of several applications within one GUI
    • A portal gives you layout and sequence of the applications for free (including stuff such as drag & drop, minimizing windows, and so on)
  • Communication between Portlets (i.e. between different applications)

This is required without a portal, too.

Several solutions can be used, such as a database, messaging, web services, events, and so on.

Even „push“ has been possible for some time now (using specific web framework features or HTML5 WebSockets).

  • Uniform appearance

CSS can solve this problem (the keyword „corporate identity“ exists in almost every company).

Create an HTML template and include your applications within this template. Done.

  • Personalization
    • Regarding content, structure or graphical presentation
    • Based on individual preferences or metadata

Some of these features can be realized very easily by oneself (e.g. a simple role concept).

Nevertheless, GUI features such as drag & drop require more effort (although component libraries can help you a lot).

  • Many addons are included
    • Search
    • Content management
    • Document management
    • Web 2.0 tools (e.g. blogs or wikis)
    • Collaboration suites (e.g. team pages)
    • Analytics and reporting
    • Development platforms

But:

A) Do you really need these features?

B) Is the offered functionality sufficient? Portals only offer „basic“ versions of stand-alone products. For instance, the content management system or search engine of a Portal is less powerful than „real“ stand-alone products offering this functionality.

 

Thus, you have to think about the following central question: Do we really need all those features offered by a portal?

Conclusion:

The total cost of ownership (TCO) is much higher when using a portal. You have to be sure that you really need the offered features.

 

In some situations, you can defer your decision. Create your web applications as before. You can still integrate them in a Portal later, if you really need one. For instance, the following Oracle blog describes how you can use iFrames to do this: http://blogs.oracle.com/jheadstart/entry/integrating_a_jsf_application

 

If you decide to use a Portal, you have to choose a Portal product.

Should we use an Open Source or Proprietary Portal Product?

Both open source and proprietary Portal products have pros and cons. I especially looked at Oracle Portal and Liferay Portal, but most aspects probably apply when evaluating other products, too.

Advantages of Oracle Portal:

  • Oracle offers a full-stack suite for development (including JSF and Portlets): Oracle Application Development Framework (ADF)
  • Oracle JDeveloper offers good support for ADF.
  • Everything from one product line increases efficiency (database, application server, ESB, IDE, Portal, …) – at least in theory ;-)

Disadvantages of Oracle Portal:

  • Proprietary product with very high initial license costs (plus support costs, as mentioned above)
  • Heavyweight product compared to a lightweight open source alternative

Advantages of Liferay Portal:

  • Open source
  • Drastically lower initial costs
  • Lightweight product (1-Click-Install, etc.)

Disadvantages of Liferay Portal:

  • Not everything is from one product line (this cannot always be considered a disadvantage, but in our case the customer preferred very few different vendors – keyword “IT consolidation”)
  • Portlets are still Portlets. Although Liferay is lightweight, realizing Portlets still sucks as it does with a proprietary product

 

When to use a Portal?

Well, the conclusion is difficult. In my opinion, a Portal makes sense only in a few use cases. If you really need many or all of those Portal features, and they are also sufficient, then use a Portal product. Usually, though, it is much easier to create a simple web application which integrates your applications. Use an SSO framework, create a template, and you are done. Your developers will appreciate not having to work with Portlets and their increased complexity and restrictions.

 

Did I miss any pros or cons? Do you have another opinion (probably, many people do)? Then please write a comment and let’s discuss…

 

 

Best regards,

Kai Wähner (Twitter: @KaiWaehner)

The post Pros and Cons – When to use a Portal and Portlets instead of just Java Web-Frameworks appeared first on Kai Waehner.

]]>
Cloud Integration with Apache Camel and Amazon Web Services (AWS): S3, SQS and SNS https://www.kai-waehner.de/blog/2011/08/30/cloud-integration-with-apache-camel-and-amazon-web-services-aws-s3-sqs-and-sns/ Tue, 30 Aug 2011 08:13:29 +0000 http://www.kai-waehner.de/blog/?p=289 The integration framework Apache Camel already supports several important cloud services. This article describes the combination of Apache Camel and the Amazon Web Services (AWS) interfaces of Simple Storage Service (S3), Simple Queue Service (SQS) and Simple Notification Service (SNS). Thus, the concept of Infrastructure as a Service (IaaS) is used to access messaging systems and data storage without any need for configuration.

The post Cloud Integration with Apache Camel and Amazon Web Services (AWS): S3, SQS and SNS appeared first on Kai Waehner.

]]>
The integration framework Apache Camel already supports several important cloud services (see my overview article at http://www.kai-waehner.de/blog/2011/07/09/cloud-computing-heterogeneity-will-require-cloud-integration-apache-camel-is-already-prepared for more details). This article describes the combination of Apache Camel and the Amazon Web Services (AWS) interfaces of Simple Storage Service (S3), Simple Queue Service (SQS) and Simple Notification Service (SNS). Thus, the concept of Infrastructure as a Service (IaaS) is used to access messaging systems and data storage without any need for configuration.

Registration to AWS and Setup of Camel

First, you have to register with Amazon Web Services (for free). Most AWS services include a free monthly quota, which is absolutely sufficient to play around and develop some simple applications. As its name states, AWS uses technology-independent web services. Besides, APIs for several different programming languages are available to ease development. By the way, Camel uses the AWS SDK for Java (http://aws.amazon.com/sdkforjava), of course. The documentation is detailed and easy to understand, including tutorials, screenshots and code examples.

Hint 1:

You should read the introductions to S3, SQS and SNS (go to http://aws.amazon.com and click on „products“) and play around with the AWS Management Console (http://aws.amazon.com/console) before you continue. This step is very easy and takes less than one hour. Then, you will have a much better understanding about AWS and where Camel can help you!

Hint 2:

It really helps to look at the source code of the camel-aws component to understand how Camel uses the AWS Java API internally. If you want to write tests, you can do it the same way. In the past, I was afraid of looking at the „complex“ source code of open source frameworks. But there is no need to be scared! The camel-aws component (and most other Camel components) consists of only a few classes. Everything is easy to understand. It helps you to understand Camel internals and the AWS API, and to spot and solve errors caused by exceptions in your code.

Meanwhile, the current Camel version 2.8 supports three AWS services: S3, SQS and SNS. All of them use similar concepts. Therefore, they are included in one single Camel component: „camel-aws“. You have to add the libraries to your existing Camel project. As always, the simplest way is to use Maven and add the following dependency to the pom.xml:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-aws</artifactId>
    <version>${camel-version}</version>
</dependency>

Configuration of the Camel Endpoint

The implementation and configuration of all three services is very similar. The URI looks like this (the code shows the SQS service):

aws-sqs://queue-name[?options]

There are two alternatives to configure your endpoint.

Using Parameters

The easy way is to use two parameters in the URI of your endpoint: „accessKey“ and „secretKey“ (you receive both after your AWS registration).

aws-sqs://unique-queue-name?accessKey=INSERT_ME&secretKey=INSERT_ME

Be aware of the following problem, which can result in a strange, cryptic exception (thanks to Brendan Long):

You’ll need to URL encode any +’s in your secret key (otherwise, they’ll be treated as spaces). + = %2B, so if your secret key was “my+secret\key”, your Camel URL should have “secretKey=my%2Bsecret\key”.

“Within the query string, the plus sign is reserved as shorthand notation for a space. Therefore, real plus signs must be encoded. This method was used to make query URIs easier to pass in systems which did not allow spaces.”

Source: W3C URI Recommendations <http://www.w3.org/Addressing/URL/4_URI_Recommentations.html#z5>
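A quick way to apply that encoding in plain Java before assembling the endpoint URI – a minimal, hedged sketch (the key value is of course made up):

public class SecretKeyEncoding {

    public static void main(String[] args) {
        // Hedged sketch: escape '+' characters in the AWS secret key before putting it into a Camel endpoint URI
        String rawSecretKey = "my+secretkey";   // hypothetical value
        String encodedSecretKey = rawSecretKey.replace("+", "%2B");

        System.out.println("aws-sqs://unique-queue-name?accessKey=INSERT_ME&secretKey=" + encodedSecretKey);
    }
}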

Adding a configured AmazonClient to the Registry

If you need to do more configuration (e.g. because your system is behind a firewall), you have to add an AmazonClient object to your registry. The following code shows an example using SQS, but SNS and S3 use exactly the same concept.

@Override
protected JndiRegistry createRegistry() throws Exception {

    JndiRegistry registry = super.createRegistry();

    AWSCredentials awsCredentials = new BasicAWSCredentials("INSERT_ME", "INSERT_ME");

    ClientConfiguration clientConfiguration = new ClientConfiguration();
    clientConfiguration.setProxyHost("http://myProxyHost");
    clientConfiguration.setProxyPort(8080);

    AmazonSQSClient client = new AmazonSQSClient(awsCredentials, clientConfiguration);
    registry.bind("amazonSQSClient", client);

    return registry;
}

This example overwrites the createRegistry() method of a JUnit test (extending CamelTestSupport). You can also add this information to your runtime Camel application, of course.

Apache Camel and the Simple Storage Service (S3)

Simple Storage Service (S3) is a key-value-store. You can store small to very large data. The usage is very easy. You create buckets and put key-value data into these buckets. You can also create folders within buckets to organize your data. That’s it. You can monitor your buckets using the AWS Management Console – an intuitive GUI supporting most AWS services.

The following example shows both alternatives for accessing the Amazon services (as described above): parameters and the AmazonClient.

// Transfer data from your file inbox to the AWS S3 service
from("file:files/inbox")
    // This is the key of your key-value data
    .setHeader(S3Constants.KEY, simple("This is a static key"))
    // Using parameters for accessing the AWS service
    .to("aws-s3://camel-integration-bucket-mwea-kw?accessKey=INSERT_ME&secretKey=INSERT_ME&region=eu-west-1");

// Transfer data from the AWS S3 service to your file outbox
from("aws-s3://camel-integration-bucket-mwea-kw?amazonS3Client=#amazonS3Client&region=eu-west-1")
    .to("file:files/outbox");

There are some additional parameters; for instance, you can specify the desired AWS region or delete data after receiving it (see http://camel.apache.org/aws-s3.html and the corresponding SQS and SNS pages for more details about parameters and message headers).

As you see in the code, you can use the AWS-S3 endpoint for producing and for consuming messages. Each bucket name must be globally unique, thus you have to add some specific information such as your company name to it.

Hint:

If a bucket does not exist, Camel creates it automatically (as the AWS API does). The same applies to SQS queues and SNS topics.

Apache Camel and the Simple Queue Service (SQS)

The Simple Queue Service (SQS) is similar to a JMS provider such as WebSphere MQ or ActiveMQ (but with some differences). You create queues and send messages to them. Consumers receive the messages. Contrary to most other AWS services, you cannot monitor queues by using the AWS Management Console directly. You have to use the „CloudWatch“ service (http://aws.amazon.com/cloudwatch) and start an EC2 instance to monitor queues and their content.

As you can see in the following code example, the syntax and concepts are almost the same as for the S3 service:

from("file:inbox")
    .to("aws-sqs://camel-integration-queue-mwea-kw?accessKey=INSERT_ME&secretKey=INSERT_ME");

from("aws-sqs://camel-integration-queue-mwea-kw?amazonSQSClient=#amazonSQSClient")
    .to("file:outbox?fileName=sqs-${date:now:yyyy.MM.dd-hh:mm:ss:SS}");


Again, you can use the AWS-SQS endpoint for producing and for consuming messages. Each queue name must be unique.

There are two important differences compared to JMS (copied from the AWS documentation):

Q: How many times will I receive each message?

Amazon SQS is engineered to provide “at least once” delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.

Q: Why are there separate ReceiveMessage and DeleteMessage operations?

When Amazon SQS returns a message to you, that message stays in the queue, whether or not you actually received the message. You are responsible for deleting the message; the delete request acknowledges that you’re done processing the message. If you don’t delete the message, Amazon SQS will deliver it again on another receive request.
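If the „at least once“ delivery described above is a concern for your route, Camel’s Idempotent Consumer EIP can filter duplicates on the consumer side. The following is only a sketch (placed inside a RouteBuilder’s configure() method like the examples above, using org.apache.camel.processor.idempotent.MemoryIdempotentRepository); the exact message id header name is an assumption on my side:

// Hedged sketch: drop duplicate deliveries of the same SQS message
// (header name "CamelAwsSqsMessageId" assumed, following the camel-aws naming scheme)
from("aws-sqs://camel-integration-queue-mwea-kw?amazonSQSClient=#amazonSQSClient")
    .idempotentConsumer(header("CamelAwsSqsMessageId"),
        MemoryIdempotentRepository.memoryIdempotentRepository(200))
    .to("file:outbox");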

Apache Camel and the Simple Notification Service (SNS)

The Simple Notification Service (SNS) acts like JMS topics. You create a topic, consumers subscribe to the topic and then receive notifications. Several transport protocols are supported: HTTP(S), Email and SQS. Further interfaces will be added in the future, e.g. the Short Message Service (SMS) for mobile phones.

Contrary to S3 and SQS, Camel only offers a producer endpoint for this AWS service. You can only create topics and send messages via Camel. The reason is simple: Camel already offers endpoints for consuming these messages: HTTP, Email and SQS are already available.
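A minimal producer sketch (again inside a RouteBuilder), assuming the aws-sns URI follows the same topic-name-plus-credentials pattern as S3 and SQS and that the subject is passed as a message header:

// Hedged sketch: publish the file content as a notification to an SNS topic
// (subject header name "CamelAwsSnsSubject" is an assumption based on the S3/SQS naming scheme)
from("file:files/inbox")
    .setHeader("CamelAwsSnsSubject", constant("New file arrived"))
    .to("aws-sns://camel-integration-topic-mwea-kw?accessKey=INSERT_ME&secretKey=INSERT_ME");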

There is one tradeoff: A consumer cannot subscribe to topics using Camel – at the moment. The AWS Management Console has to be used. A very interesting discussion can be read on the Camel JIRA issue regarding the following questions: Should Camel be able to subscribe to topics? Should the producer contain this feature or should there be a consumer? In my opinion, there should be a consumer which is able to subscribe to topics, otherwise Camel is missing a key part of the AWS SNS service! Please read the discussion and contribute your opinion: https://issues.apache.org/jira/browse/CAMEL-3476.

Apache Camel is already ready for the Cloud Computing Era

AWS offers many more services for the cloud. It probably does not make sense to integrate every one of them into Camel, but more AWS services will be supported in the future. For instance, SimpleDB and the Relational Database Service (RDS) are already planned and make sense, too: http://camel.apache.org/aws.html.

The conclusion is easy: Apache Camel is already ready for the cloud computing era. Several important cloud services are already supported. Cloud integration will become very important in the future. Thus, Camel is on the right track. Hopefully, we will see more cloud components soon.

I will continue to write articles about other Camel cloud components (and new AWS add-ons, of course). For instance, a component for the Platform as a Service (PaaS) product Google App Engine (GAE) is already available.

If you have any additional important information, questions or other feedback, please write a comment. Thank you in advance…

Best regards,

Kai Wähner (Twitter: @KaiWaehner)

The post Cloud Integration with Apache Camel and Amazon Web Services (AWS): S3, SQS and SNS appeared first on Kai Waehner.

]]>
Rapid Cloud Development with Spring Roo – Part 2: VMware Cloud Foundry https://www.kai-waehner.de/blog/2011/08/12/rapid-cloud-development-with-spring-roo-part-2-vmware-cloud-foundry/ Fri, 12 Aug 2011 07:49:38 +0000 http://www.kai-waehner.de/blog/?p=284 Spring Roo is a tool to offer rapid application development on the Java platform. It supports two solutions for Cloud Computing at the moment: Google App Engine (GAE) and VMware Cloud Foundry. Both provide the Platform as a Service (PaaS) concept. This article will discuss the Cloud Foundry support of Spring Roo. GAE was discussed in part 1 of this article series (http://www.kai-waehner.de/blog/2011/07/18/rapid-cloud-development-with-spring-roo-%E2%80%93-part-1-google-app-engine-gae).

The post Rapid Cloud Development with Spring Roo – Part 2: VMware Cloud Foundry appeared first on Kai Waehner.

]]>
Spring Roo is a tool to offer rapid application development on the Java platform. I already explained when to use it: http://www.kai-waehner.de/blog/2011/04/05/when-to-use-spring-roo.  Spring Roo supports two solutions for Cloud Computing at the moment: Google App Engine (GAE) and VMware Cloud Foundry. Both provide the Platform as a Service (PaaS) concept. This article will discuss the Cloud Foundry support of Spring Roo. GAE was discussed in part 1 of this article series (http://www.kai-waehner.de/blog/2011/07/18/rapid-cloud-development-with-spring-roo-%E2%80%93-part-1-google-app-engine-gae).

Deployment of a Cloud Foundry Application to the Cloud

The reference guide of Spring Roo gives an introduction at http://www.springsource.org/roo/guide?w=base-cloud-foundry, which describes the combination of Spring Roo and Cloud Foundry. In a nutshell, there is not much to do to deploy your (CRUD-) application in the Cloud Foundry cloud.

You have to login to your Cloud Foundry account, create a WAR file and deploy it. Three Roo commands execute these tasks. If you use any Cloud Foundry services (such as MySQL, Redis or RabbitMQ), then you have to create and bind these services using other Roo commands. The deployment is very easy. You can choose to deploy your application to a private cloud (your own servers) or to the public cloud (VMware servers).

I got a strange, cryptic exception (unfortunately a frequent problem with Spring Roo): „Operation could not be completed: 400 Bad Request“ – but no further details or stack trace. Forum support was necessary. The problem was that the name of my cloud app was already used by another developer; it was not unique (I tried to use the name „SimpleCloudFoundry“). A more descriptive error message would be nice! Using another (unique) name solved the problem.

Cloud Foundry is just a traditional Web Application – Contrary to GAE

So, after reading the previous paragraph, the conclusion is the following: Spring Roo supports deploying its applications to the Cloud Foundry cloud. Thus, everything is fine? Yes, more or less surprisingly, that is true! The statement of the Cloud Foundry documentation is also true: „You won’t need to architect your applications in a special way or make do with a restricted subset of language or framework features, nor will you need to call Cloud Foundry specific APIs. You just develop your application as you do without Cloud Foundry, then you deploy it.“

So, why should you think about using another PaaS solution instead of Cloud Foundry? Cloud Foundry applications are traditional Java web applications which use Spring and are deployed to a Tomcat web container. You do not have many limitations (remember the Java class white list of GAE) or database restrictions (remember the BigTable concepts of GAE). Be aware that, due to this advantage, you have to use the services offered by Cloud Foundry! At the moment, you can use MySQL, Redis, MongoDB and RabbitMQ. No other databases or messaging solutions can be used. If the offered services meet your demands, everything is fine.

Almost all Cloud Foundry Commands are available in the Roo Shell

Usually, you develop a Cloud Foundry application in an IDE such as Eclipse. Besides, you use the VMware CLI (which is a command line tool) to login to Cloud Foundry, create and bind services, deploy, start and stop your application, and so on.

Spring Roo offers more than 30 unique Cloud Foundry commands. With Roo’s Cloud Foundry integration, you can now manage the entire life cycle of your application from the Roo shell. That is great! Of course, VMware wants to push both, Cloud Foundry and Spring Roo, so the connection between both products is really good. But …

There is no Reason to use Spring Roo for Cloud Foundry Development

Spring Roo’s goal is to help the developer realize applications more easily and faster. It is awesome for creating prototypes or CRUD web applications. Nevertheless, it does not help to create Cloud Foundry applications. Sure, you can use all VMC commands directly within the Roo shell, but that’s it. I wonder whether this is really an advantage. I found it annoying to always type „cloud foundry“ in the Roo shell before entering the actual command I wanted to use. Thus, I quickly switched back to the VMC command line tool. The SpringSource Tool Suite also offers a Cloud Foundry plugin to bind services and deploy applications via „drag and drop“. Very nice!

In my opinion, there is no benefit in using Spring Roo for developing Cloud Foundry applications. There is one exception, of course: if you develop a Spring Roo application (let’s say a CRUD app), then you can do everything within the same shell, which is cool.

By the way: Though I do think that the combination with Spring Roo brings no benefits, I really like Cloud Foundry. It is one of the first PaaS solutions (besides Amazon Elastic Beanstalk) which offers relational database support. Besides, it is possible to deploy to public AND private clouds. It is open source, thus much more support and services will be available in the future. But be aware: Contrary to GAE, Cloud Foundry is still BETA at the moment.

The current conclusion of this article series is that Spring Roo does not really help to develop applications for the cloud. Nevertheless, I like Spring Roo and I like PaaS solutions such as GAE and Cloud Foundry – but not combined. I will write further articles if this situation changes or if further PaaS products are supported by Spring Roo.

Best regards,

Kai Wähner (Twitter: @KaiWaehner)

The post Rapid Cloud Development with Spring Roo – Part 2: VMware Cloud Foundry appeared first on Kai Waehner.

]]>
Rapid Cloud Development with Spring Roo – Part 1: Google App Engine (GAE) https://www.kai-waehner.de/blog/2011/07/18/rapid-cloud-development-with-spring-roo-part-1-google-app-engine-gae/ Mon, 18 Jul 2011 10:43:39 +0000 http://www.kai-waehner.de/blog/?p=276 Spring Roo is a tool to offer rapid application development on the Java platform. I already explained when to use it: http://www.kai-waehner.de/blog/2011/04/05/when-to-use-spring-roo. Spring Roo supports two solutions for Cloud Computing at the moment: Google App Engine (GAE) and VMware Cloud Foundry. Both provide the Platform as a Service (PaaS) concept. This article will discuss the GAE support of Spring Roo. Cloud Foundry will be analyzed in part 2 of this article series.

The post Rapid Cloud Development with Spring Roo – Part 1: Google App Engine (GAE) appeared first on Kai Waehner.

]]>
Spring Roo is a tool to offer rapid application development on the Java platform. I already explained when to use it: http://www.kai-waehner.de/blog/2011/04/05/when-to-use-spring-roo.  Spring Roo supports two solutions for Cloud Computing at the moment: Google App Engine (GAE) and VMware Cloud Foundry. Both provide the Platform as a Service (PaaS) concept. This article will discuss the GAE support of Spring Roo. Cloud Foundry will be analyzed in part 2 of this article series.

Deployment of a GAE Application to the Cloud

A very good introductory article, which describes the combination of Spring Roo and GAE, already exists here: http://java.dzone.com/articles/creating-application-using. In a nutshell, there is not much to do to deploy your (CRUD) application in the GAE cloud. You have to choose another database provider, enter your GAE application id in a configuration file and deploy the application using one single Maven command (mvn gae:deploy). That is the difference to „traditional“ Roo applications. Thus, no rocket science! Nevertheless, there are several restrictions for developing GAE applications; for instance, you cannot use @OneToMany annotations to specify relations due to NoSQL concepts. Deployment will fail, or the application will not work as expected, if you do not follow these rules.
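For illustration, the usual workaround looks roughly like the following hedged sketch: instead of a JPA relation, the child entity is referenced by its datastore Key and loaded explicitly when needed. Class and field names are made up; treat it as a sketch of the pattern, not as Roo-generated code:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

import com.google.appengine.api.datastore.Key;

// Hedged sketch: the GAE datastore (via JPA/DataNucleus) favors referencing other entities
// by their datastore Key instead of mapping an @OneToMany relation
@Entity
public class Invoice {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Key id;

    // Reference to the Customer entity by Key; load it explicitly when needed
    private Key customerKey;

    public Key getCustomerKey() {
        return customerKey;
    }

    public void setCustomerKey(Key customerKey) {
        this.customerKey = customerKey;
    }
}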

GAE is much more than just deploying a traditional Web Application to the Cloud

So, after reading the previous paragraph, the conclusion is the following: Spring Roo supports deploying its applications to the GAE cloud. Thus, everything is fine? No, not at all!

Yes, you can deploy your CRUD application to the GAE cloud (if you do not use relations), but GAE is much more. You can, or rather should, use Task Queues to segment your long-running work, the BigTable datastore and blobstore to store your data, the URL Fetch service to communicate with other applications via HTTP(S), and several other GAE services such as XMPP, Memcache, Mail, and so on. The number of available services increases further with new GAE releases.
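As a small example of what such service-specific code looks like, here is a hedged sketch of the URL Fetch service – you write this by hand, no Roo command generates it for you, and the target URL is made up:

import java.net.URL;

import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

// Hedged sketch: call another application via the GAE URL Fetch service
// instead of opening sockets or threads yourself
public class RemoteCall {

    public int callOtherApplication() throws Exception {
        URLFetchService fetchService = URLFetchServiceFactory.getURLFetchService();
        HTTPResponse response = fetchService.fetch(new URL("http://example.com/some-service"));
        return response.getResponseCode();
    }
}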

These GAE services exist for good reasons: you should be able to create a cloud application which scales automatically without any manual server configuration and similar chores. That is the reason why you have to use NoSQL database concepts and URL Fetch instead of a SQL database, threads, socket programming and the other techniques you used in the past when not developing an application for the cloud. Google developers are NOT too dumb to support SQL databases; it is simply not the appropriate technology for highly scaling cloud applications. A nice article about „SQL versus NoSQL“ can be found here: http://java.dzone.com/news/sql-vs-nosql-cloud-which

Several Spring Roo Commands are missing for developing GAE Applications

Spring Roo has no special GAE commands. You use the persistence command to create support for BigTable, and you use a Maven goal to deploy the GAE application. Beyond that, there are no GAE commands, although you would need them to create your Task Queues, BigTable datastore access (including relations), URL fetches, and so on. You have to code everything by yourself, as you would have to without Spring Roo. Thus, there is no real support for GAE yet – contrary to Cloud Foundry (as we will see in part 2 of this article series). Of course, VMware wants to push its own PaaS solution; I understand that. Nevertheless, Spring Roo should also offer good support for other solutions, as it does for web frameworks (in the meantime, there is official support for Spring MVC and GWT; besides, plugins for Vaadin, Flex and JSF are available or in progress).

GAE is the only stable, production-ready PaaS Solution in the Java environment

Be aware that GAE is the only stable and production-ready PaaS solution in the Java environment at the moment. Other offerings such as Cloud Foundry or Red Hat OpenShift are still in BETA status. Also be aware that there are reasons why Google does not offer SQL database support yet. They will probably add this feature in the future, because public criticism is huge. Nevertheless, NoSQL databases will be required in many use cases where you want to deploy your application in the cloud. Thus, I hope that Spring Roo will offer better GAE support in future versions.

Best regards,

Kai Wähner (Twitter: @KaiWaehner)

The post Rapid Cloud Development with Spring Roo – Part 1: Google App Engine (GAE) appeared first on Kai Waehner.

]]>
Cloud Computing Heterogeneity will require Cloud Integration – Apache Camel is already prepared! https://www.kai-waehner.de/blog/2011/07/09/cloud-computing-heterogeneity-will-require-cloud-integration-apache-camel-is-already-prepared/ Sat, 09 Jul 2011 14:06:30 +0000 http://www.kai-waehner.de/blog/?p=271 Cloud Computing is the future – if you believe market forecasts from companies such as Gartner. I think so, too. But everybody should be aware that there won’t be one single cloud solution, but several clouds. These clouds will be hosted at different providers, use products and APIs from different vendors and use different concepts (IaaS, PaaS, SaaS). Thus, in the future you will have to integrate these clouds as you integrate applications today.

The post Cloud Computing Heterogeneity will require Cloud Integration – Apache Camel is already prepared! appeared first on Kai Waehner.

]]>
Cloud Computing is the future – if you believe market forecasts from companies such as Gartner. I think so, too. But everybody should be aware that there won’t be one single cloud solution, but several clouds. These clouds will be hosted at different providers, use products and APIs from different vendors and use different concepts (IaaS, PaaS, SaaS). Thus, in the future you will have to integrate these clouds as you integrate applications today.

Apache Camel already offers Components for several Cloud Interfaces

Probably, it will take some months (or years) until enterprise projects need to integrate different clouds. Nevertheless, Apache Camel already offers several components for these tasks. You can integrate clouds using the same concepts as you use for integration of HTTP, FTP, JMS, JDBC, and all other Camel components. That is the biggest advantage of this integration framework, in my opinion.

IaaS Integration

Infrastructure as a Service (IaaS) offers a broad range of use cases. You can rent whole servers where you install whatever operating system and applications you want. You can integrate everything as you do it today with your common servers. IaaS also offers computing services (e.g. Amazon Elastic Compute Cloud – EC2) and storage services (e.g. Amazon Relational Database Service – RDS, SimpleDB or Simple Storage Service – S3). Camel already offers components to communicate directly with some of these IaaS services.

PaaS Integration

Platform as a Service (PaaS) offers a development container where you can deploy your application. Several restrictions exist, e.g. Google App Engine (GAE) has a white list of Java classes which are allowed. Further, no SQL database can be used at the moment. VMware Cloud Foundry is an open source example which offers MySQL support besides NoSQL databases. Camel already offers components to connect to GAE applications. The benefit of PaaS is that once you know the programming model, you can develop and deploy cloud applications very easily with automatic, elastic high availability.

SaaS Integration

Software as a Service (SaaS) means using web applications in your web browser. Gmail is a very well-known, simple example. Salesforce is a better example for business applications. In fact, it is easy to use these SaaS applications. But if you want to integrate them, you still need a programming interface to each SaaS application which you want to integrate. For instance, Camel offers a component to send emails via Gmail. Some products already offer documentation on how to integrate their SaaS application with Camel (here you can see an example from Hippo CMS: http://hst-salesforce.forge.onehippo.org/usingtasks.html).
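Such a Gmail integration with the camel-mail component could look roughly like the following hedged sketch (the route belongs into a RouteBuilder; account, password and recipient are placeholders, and the header names are the ones I would expect the mail component to pick up):

// Hedged sketch: send generated reports as emails via Gmail using the camel-mail (smtps) component
from("file:files/outgoing-reports")
    .setHeader("subject", constant("Report generated"))
    .setHeader("to", constant("recipient@example.com"))
    .to("smtps://smtp.gmail.com?username=my.account@gmail.com&password=INSERT_ME");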

The Number of Cloud Computing Solutions will increase a lot in the Future

Above, I mentioned some examples of IaaS, PaaS and SaaS alternatives. Of course, the number of products and solutions will increase a lot within the next months and years. Let’s list some more brands which are already available: Rackspace Cloud, CloudBees, Windows Azure, Elastic Beanstalk, vCloud, AppForce, Hyper-V Cloud.

Apache Camel is future-proof for the Cloud Computing Era

As you can see, Apache Camel already offers several components for cloud computing offerings. Hopefully, many more components will be created for other upcoming cloud interfaces (BTW: I am sure this will happen). Different clouds will need to be integrated. Apache Camel has great potential to use the same concepts (routes, processors, test support, and so on) to integrate all these different cloud concepts and technologies. Nevertheless, you can still choose between „old style“ DSLs (using Java or Spring XML) and modern JVM programming languages (using Groovy or Scala). In the next months, I will publish more blog posts with code examples describing how to integrate the different cloud interfaces, starting with Amazon services (IaaS), Google App Engine (PaaS) and Salesforce (SaaS), as these components are already available…

Best regards,

Kai Wähner (Twitter: @KaiWaehner)

The post Cloud Computing Heterogeneity will require Cloud Integration – Apache Camel is already prepared! appeared first on Kai Waehner.

]]>