Enterprise Application Integration Archives - Kai Waehner
https://www.kai-waehner.de/blog/tag/enterprise-application-integration/

Agile Cloud-to-Cloud Integration with iPaaS, API Management and Blockchain
https://www.kai-waehner.de/blog/2017/04/23/agile-cloud-cloud-integration-ipaas-api-management-blockchain/
Sun, 23 Apr 2017

Cloud-to-Cloud integration is part of a hybrid integration architecture. It enables you to implement quick and agile integration scenarios without the burden of setting up complex VM- or container-based infrastructures. One key use case for cloud-to-cloud integration is innovation using a fail-fast methodology, where you realize new ideas quickly. You typically think in days or weeks, not months. If an idea fails, you throw it away and start the next one. If the idea works well, you scale it out and bring it into production on an on-premise, cloud or hybrid infrastructure. Finally, you expose the idea and make it easily available to any interested service consumer in your enterprise, to partners, or to public end users.

A great example of where you need agile, fail-fast development is blockchain: it is a very hot topic, but frameworks are immature and change very frequently these days. Note that blockchain is not just about Bitcoin and the finance industry. That is just the tip of the iceberg. Blockchain, the infrastructure underlying Bitcoin, will change various industries, beginning with banking, manufacturing, supply chain management, energy, and others.

Middleware and Integration as Key for Success in Blockchain Projects

A key to successful blockchain projects is middleware, as it allows integration with the rest of an enterprise architecture. Blockchain only adds value if it works together with your other applications and services. See an example of how to combine streaming analytics with TIBCO StreamBase and Ethereum to correlate blockchain events in real time and act proactively, e.g. for fraud detection.

The drawback of blockchain is its immaturity today. APIs change with every minor release, development tools are buggy and lack many features required for serious development, and new blockchain cloud services, frameworks and programming languages come and go every quarter.

This blog post shows how to leverage cloud integration with iPaaS and API Management to realize innovative projects quickly, fail fast, and adopt changing technologies and business requirements easily. We will use TIBCO Cloud Integration (TCI), TIBCO Mashery and Hyperledger Fabric running as an IBM Bluemix cloud service.

IBM Hyperledger Fabric as Bluemix Cloud Service

Hyperledger is a vendor-independent open source blockchain project which consists of various components. Many enterprises and software vendors have committed to it to build different solutions for various problems and use cases. One example is IBM’s Hyperledger Fabric. Being a very flexible construction kit makes Hyperledger much more complex to get started with than other blockchain platforms, such as Ethereum, which are less flexible but therefore easier to set up and use.

This is the reason why I use the Bluemix BaaS (Blockchain as a Service) to get started quickly without spending days just setting up my own Hyperledger network. You can start a Hyperledger Fabric blockchain infrastructure with four peers and a membership service with just a few clicks. It takes around two minutes. Hyperledger Fabric is evolving fast, which makes it a great example of quickly changing features, evolving requirements and (potentially) fast-failing projects.

My project uses Hyperledger Fabric 0.6 as a free beta cloud service. I leverage its Swagger-based REST API in my middleware microservice to interconnect other cloud services with the blockchain:

[Screenshot: Hyperledger Fabric’s Swagger-based REST API]

However, when I began the project, it was already clear that the REST interface was deprecated and would no longer be included in the upcoming 1.0 release. Thus, we were aware from the start that an easy move to another API would be needed within a few weeks or months.
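For illustration, here is a minimal Java sketch of such a call against the 0.6 REST API. The peer URL, chaincode name and enrollment ID (“secureContext”) are placeholders, and the JSON-RPC payload shape follows the Fabric 0.6 documentation as I understand it — treat this as a sketch, not a definitive client:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

/** Queries customer data via the (deprecated) Hyperledger Fabric 0.6 REST API. */
public class FabricRestClient {

    // Assumption: the Bluemix-hosted peer exposes the 0.6 "/chaincode" JSON-RPC endpoint.
    private static final String PEER_URL = "https://my-bluemix-peer.example.com/chaincode";

    public static void main(String[] args) throws Exception {
        // JSON-RPC 2.0 payload as documented for Fabric 0.6; chaincode name and
        // enrollment ID ("secureContext") are placeholders for this sketch.
        String payload = "{"
            + "\"jsonrpc\":\"2.0\",\"method\":\"query\",\"id\":1,"
            + "\"params\":{\"type\":1,"
            + "\"chaincodeID\":{\"name\":\"customer_cc\"},"
            + "\"ctorMsg\":{\"function\":\"query\",\"args\":[\"customer-4711\"]},"
            + "\"secureContext\":\"user_type1_0\"}}";

        HttpURLConnection con = (HttpURLConnection) new URL(PEER_URL).openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // The chaincode result comes back wrapped in a JSON-RPC envelope.
        try (Scanner s = new Scanner(con.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            System.out.println("Fabric response: " + (s.hasNext() ? s.next() : ""));
        }
    }
}
```

In the actual project, this call is modeled graphically in TCI instead of hand-coded, which is exactly what makes replacing the deprecated interface later so cheap.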

Cloud-to-Cloud Integration, iPaaS and API Management for Agile and Fail-Fast Development

As mentioned in the introduction, middleware is key for success in blockchain projects to interconnect it with the existing enterprise architecture. This example leverages the following middleware components:

  • iPaaS: TIBCO Cloud Integration (TCI) is hosted and managed by TIBCO. It can be used without any setup or installation to quickly build a REST microservice which integrates with Hyperledger Fabric. TCI also allows you to configure caching, throttling and security to ensure controlled communication between the blockchain and other applications and services.
  • API Management: TIBCO Mashery is used to expose the TCI REST microservice to other service consumers. These can be internal consumers, partners or public end users, depending on the use case.
  • On-Premise / Cloud-Native Integration Infrastructure: TIBCO BusinessWorks 6 (BW6) or TIBCO BusinessWorks Container Edition (BWCE) can be used to deploy and productionize successful TCI microservices in your existing infrastructure, either on premise or on a cloud-native platform like Cloud Foundry, Kubernetes or any other Docker-based platform. Of course, you can also continue to run and scale the production services within TCI itself in the public cloud, hosted and managed by TIBCO.

Scenario: TIBCO Cloud Integration + Mashery + Hyperledger Fabric Blockchain + Salesforce

I implemented the following technical scenario with the goal of showing agile cloud-to-cloud integration with a fail-fast methodology from a technical perspective (not of building a great business case): the scenario implements a new REST microservice with TCI via visual coding and configuration. This REST service connects to two cloud services: Salesforce and Hyperledger Fabric. The following picture shows the architecture:

[Architecture diagram: a TCI REST microservice connecting Salesforce and the Hyperledger Fabric blockchain, exposed to consumers via TIBCO Mashery]

Here are the steps to realize this scenario:

  • Create a REST service which receives a customer ID and customer name as parameters.
  • Enrich the input data from the REST call with additional data from Salesforce CRM. In this case, we get a block hash which is stored in Salesforce as a reference to the blockchain data. The block hash allows us to double-check whether Salesforce has the most up-to-date information about a customer, while the blockchain itself ensures that customer updates from various systems like CRM, ERP or mainframe are stored and updated in a correct, secure, distributed chain – which can be accessed by all related systems, not just Salesforce.
  • Use the Hyperledger REST API to validate, via Salesforce’s block hash, that the current information about the customer in Salesforce is up to date. If Salesforce has an older block hash, update Salesforce with the up-to-date values (both the most recent block hash and the current customer data stored on the blockchain), as sketched in the code after this list.
  • Return the up-to-date customer data to the service consumer of the TCI REST service.
  • Configure caching, throttling and security requirements so that service consumers cannot cause unexpected behavior.
  • Leverage API Management to expose the TCI REST service as a public API so that external service consumers can subscribe to a payment plan and use it in other applications or microservices.
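For readers who prefer code over prose, here is a hypothetical Java sketch of the validation logic from the third step. All types and helper methods are made up for illustration; in TCI, this flow is modeled visually rather than hand-coded:

```java
/**
 * Minimal sketch of the block-hash validation step. SalesforceClient and
 * FabricClient are hypothetical wrappers around the Salesforce REST API
 * and the Fabric 0.6 REST API, respectively.
 */
public class CustomerSyncService {

    /** Simplified customer record carrying the reference block hash. */
    public static class Customer {
        public final String id;
        public final String name;
        public final String blockHash;
        public Customer(String id, String name, String blockHash) {
            this.id = id;
            this.name = name;
            this.blockHash = blockHash;
        }
    }

    /** Hypothetical wrapper around the Salesforce REST API. */
    public interface SalesforceClient {
        Customer findById(String customerId);   // includes the stored block hash
        void update(Customer upToDateCustomer); // writes data + new block hash
    }

    /** Hypothetical wrapper around the Fabric 0.6 REST API. */
    public interface FabricClient {
        Customer queryCustomer(String customerId); // authoritative state on the chain
    }

    private final SalesforceClient salesforce;
    private final FabricClient fabric;

    public CustomerSyncService(SalesforceClient salesforce, FabricClient fabric) {
        this.salesforce = salesforce;
        this.fabric = fabric;
    }

    /** Returns up-to-date customer data, syncing Salesforce from the chain if stale. */
    public Customer getUpToDateCustomer(String customerId) {
        Customer sfCustomer = salesforce.findById(customerId);
        Customer chainCustomer = fabric.queryCustomer(customerId);

        // Salesforce references an older block: push the current chain state back.
        if (!chainCustomer.blockHash.equals(sfCustomer.blockHash)) {
            salesforce.update(chainCustomer);
            return chainCustomer;
        }
        return sfCustomer; // hashes match: Salesforce is already up to date
    }
}
```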

Implementation: Visual Coding, Web Configuration and Automation via DevOps

This section discusses the relevant steps of the implementation in more detail, including development and deployment of the chain code (Hyperledger’s term for smart contracts on a blockchain), implementation of the REST integration service with visual coding in TCI’s IDE, and exposing the service as an API via TIBCO Mashery.

Chain Code for Hyperledger Fabric on IBM Bluemix Cloud

Hyperledger Fabric has some detailed “getting started” tutorials. I used a “Hello World” tutorial and adapted it to our use case so that you can store, update and query customer data on the blockchain network. Hyperledger Fabric uses Go for chain code and will allow other programming languages like Java in future releases. This is both a pro and a con. The benefit is that you can leverage your existing expertise in these programming languages. However, you also have to be careful not to use “wrong” features like threads, indefinite loops or other functions which cause unexpected behavior on a blockchain. Personally, I prefer the concept of Ethereum: this platform uses Solidity, a programming language explicitly designed for developing smart contracts (i.e. chain code).

Cloud-to-Cloud Integration via REST Service with TIBCO Cloud Integration

The following steps were necessary to implement the REST microservice:

  • Create the REST service interface with the API Modeler web UI, leveraging Swagger
  • Import the Swagger interface into TCI’s IDE
  • Use the Salesforce activity to read the block hash from Salesforce’s cloud interface
  • Use the REST Client activity to make an HTTP request to Hyperledger Fabric’s REST interface
  • Configure graphical mappings between the activities (REST request → Salesforce → Hyperledger Fabric → REST response)
  • Deploy the microservice to TCI’s runtime in the cloud and test it from the Swagger UI
  • Configure caching and throttling in TCI’s web UI

The last point is very important in blockchain projects. A blockchain infrastructure does not respond with millisecond latency. Furthermore, every blockchain update costs money – Bitcoin, Ether, or whatever currency you use to “pay for mining” and to reach consensus in your blockchain infrastructure. Therefore, you can leverage caching and throttling in blockchain projects much more than in many other projects (only if it fits your business scenario, of course).
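To illustrate why caching pays off, here is a generic Java sketch of a time-to-live cache in front of an expensive blockchain lookup. It is not TCI’s implementation (there you simply configure caching in the web UI), just the underlying idea:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Minimal TTL cache in front of an expensive lookup such as a blockchain query. */
public class TtlCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    /** Returns the cached value, or loads and caches it if missing or expired. */
    public V get(K key, Function<K, V> loader) {
        long now = System.currentTimeMillis();
        Entry<V> cached = entries.get(key);
        if (cached != null && cached.expiresAtMillis > now) {
            return cached.value; // fresh enough; no blockchain round trip
        }
        V value = loader.apply(key); // e.g. the REST call to Hyperledger Fabric
        entries.put(key, new Entry<>(value, now + ttlMillis));
        return value;
    }
}

// Usage sketch: cache chain lookups for 30 seconds.
// TtlCache<String, String> cache = new TtlCache<>(30_000);
// String state = cache.get("customer-4711", id -> fabricClient.queryCustomer(id));
```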

Here is the implemented REST service in TCI’s graphical development environment:

[Screenshot: the REST service flow in TCI’s graphical development environment]

And here is the caching configuration in TCI’s web user interface:

[Screenshot: the caching configuration in TCI’s web UI]

Exposing the REST Service to Service Consumers through API Management via TIBCO Mashery

The deployed TCI microservice can either be accessed directly by service consumers, or you can expose it via API Management, e.g. with TIBCO Mashery. The latter option allows you to define rules, payment plans and security configuration in a centralized manner with easy-to-use tooling.

Note that deploying to TCI and exposing the API via Mashery can also be done in one single step. Both products are loosely coupled, but highly integrated. Also note that this is typically not done manually (as in this technical demonstration), but integrated into a DevOps infrastructure leveraging tools like Maven or Jenkins to automate the deployment and testing steps.

Agile Cloud-to-Cloud Integration with Fail-Fast Methodology

Our scenario is now implemented successfully. However, it is already clear that the REST interface is deprecated and will not be included in the upcoming 1.0 release. In blockchain frameworks, you can expect changes like this very frequently.

While I was implementing this example, IBM announced at its big user conference, IBM InterConnect in Las Vegas, that Hyperledger Fabric 1.0 is now publicly available. Well, it has really been available on Bluemix since that day, but if you try to start the service, you get the following message: “Due to high demand of our limited beta, we have reached capacity.” A strange error, as I thought we use public cloud infrastructures like Amazon AWS, Microsoft Azure, Google Cloud Platform or IBM Bluemix for exactly this reason: to scale out easily… :-)

Anyway, the consequence is that we still have to work with our 0.6 version and do not yet know when we will be able to – or have to – migrate to version 1.0. The good news is that iPaaS and cloud-to-cloud integration let you realize changes quickly. In the case of our TCI REST service, we just need to replace the REST activity calling the Hyperledger REST API with a Java activity leveraging Hyperledger’s Java SDK – as soon as it is available. Right now, only a client SDK for Node.js is available – not really an option for “enterprise projects” where you want to leverage the JVM and the Java platform instead of JavaScript. Side note: the topic of using Java vs. JavaScript in blockchain projects is also well discussed in “Static Type Safety for DApps without JavaScript”.

This blog post focuses on just a small example, and Hyperledger Fabric 1.0 will of course bring other new features and conceptual changes with it. The same is true for SaaS cloud offerings such as Salesforce. You cannot control when they change their API or what exactly they change, but you have to adapt within a relatively short timeframe or your service will stop working. An iPaaS is the perfect solution for these scenarios, as you do not need a complex setup in your private data center or on a public cloud platform. You just use it “as a service”: change it, replace logic, or stop it and throw it away to start a new project. The implicit integration with API Management and DevOps support also allows you to expose new versions to your service consumers easily and quickly.

Conclusion: Be Innovative by Failing Fast with Cloud Solutions

Blockchain is at a very early stage. This is true for platforms, tooling, and even the basic concepts and theories (like consensus algorithms or security enforcement). Don’t trust the vendors when they say blockchain is now 1.0 and ready for prime time. It will still feel more like a 0.5 beta these days. This is not just true for Hyperledger and its related projects such as IBM’s Hyperledger Fabric, but also for all the others, including Ethereum and the interesting frameworks and startups emerging around it.

Therefore, you need to be flexible and agile in blockchain development today. You need to be able to fail fast. This means setting up a project quickly, trying out new innovative ideas, and throwing them away if they do not work, in order to start the next one. The same is true for other innovative ideas – not just for blockchain.

Middleware helps to integrate new innovative ideas with existing applications and enterprise architectures. It is used to interconnect everything, correlate events in real time, and find insights and patterns in correlated information to create new added value. To support innovative projects, middleware itself needs to be flexible, agile and supportive of fail-fast methodologies. This post showed how you can leverage iPaaS with TIBCO Cloud Integration and API Management with TIBCO Mashery to build middleware microservices in innovative cloud-to-cloud integration projects.

TIBCO BusinessWorks and StreamBase for Big Data Integration and Streaming Analytics with Apache Hadoop and Impala
https://www.kai-waehner.de/blog/2015/04/14/tibco-businessworks-and-streambase-for-big-data-integration-and-streaming-analytics-with-apache-hadoop-and-impala/
Tue, 14 Apr 2015

Apache Hadoop is getting more and more relevant, not just for Big Data processing (e.g. MapReduce), but also for Fast Data processing (e.g. stream processing). Recently, I published two blog posts on the TIBCO blog to show how you can leverage TIBCO BusinessWorks 6 and TIBCO StreamBase to realize Big Data and Fast Data Hadoop use cases.

Micro Services Architecture = Death of Enterprise Service Bus (ESB)?
https://www.kai-waehner.de/blog/2015/01/08/micro-services-architecture-death-enterprise-service-bus-esb/
Thu, 08 Jan 2015

These days, it seems like everybody is talking about microservices. You can read about them in hundreds of articles and blog posts, but my recommended starting point is this article by Martin Fowler, which initiated the huge discussion about this new architectural concept. This article is about the challenges, requirements and best practices for creating a good microservices architecture, and about the role an Enterprise Service Bus (ESB) plays in this game.

Branding and Marketing: EAI vs. SOA vs. ESB vs. Microservices

Let’s begin with a little bit of history about Service-Oriented Architecture (SOA) and the Enterprise Service Bus to find out why microservices have become so trendy.

Many years ago, software vendors offered middleware for Enterprise Application Integration (EAI), often called an EAI broker or EAI backbone. This middleware was a central hub. Back then, SOA was just emerging, and the tool of choice was an ESB. Many vendors simply rebranded their EAI tool as an ESB; nothing else changed. Some time later, new ESBs came up without a central hub, using distributed agents instead. So “ESB” came to stand for different kinds of middleware. Many people do not like the term, as they only know the central variant, not the distributed one. As of today, some people even discuss this topic under the term “NoESB” or the Twitter hashtag #noesb (similar to NoSQL). Let’s see whether the term “NoESB” shows up more often in the future…

Therefore, vendors often avoid talking about an ESB. They cannot sell a central integration middleware anymore, because everything has to be distributed and flexible. Today, you can buy a service delivery platform. In the future, it might be a microservices platform or something similar. In some cases, the code base might still be the same as that of the EAI broker from 20 years ago. What all these products have in common is that you can solve integration problems by implementing Enterprise Integration Patterns.

To summarise the history of branding and marketing of integration products: pay no attention to sexy, impressive-sounding names! Instead, make looking at the architecture and features your top priority. Ask yourself what business problems you need to solve, and evaluate which architecture and product might help you best. It is amazing how many people still think of a “central ESB hub” when I say “ESB”.

Requirements for a Good Microservices Architecture

Six key requirements to overcome the challenges and leverage the full value of microservices (a sketch of one of them, service discovery, follows the list):

  • Services Contract
  • Exposing microservices from existing applications
  • Discovery of services
  • Coordination across services
  • Managing complex deployments and their scalability
  • Visibility across services
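As promised above, here is a deliberately naive Java sketch of the “discovery of services” requirement. Real projects would use dedicated infrastructure such as a registry service or platform-level discovery rather than hand-rolling this:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

/** Toy in-memory registry illustrating the "discovery of services" requirement. */
public class ServiceRegistry {

    // service name -> base URLs of registered instances
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    /** Called by a microservice instance on startup. */
    public void register(String serviceName, String baseUrl) {
        instances.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>())
                 .add(baseUrl);
    }

    /** Called by a client (or an intermediary) to find an instance to talk to. */
    public String lookup(String serviceName) {
        List<String> urls = instances.getOrDefault(serviceName, Collections.emptyList());
        if (urls.isEmpty()) {
            throw new IllegalStateException("No instance registered for " + serviceName);
        }
        // Naive load balancing: pick a random registered instance.
        return urls.get((int) (Math.random() * urls.size()));
    }
}
```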

The full article discusses these six requirements in detail, and also answers the question of how a modern ESB relates to a microservices architecture. Read the full article here:

Do Good Microservices Architectures Spell the Death of the Enterprise Service Bus?

Comments are appreciated, either on this blog post or on the full article. Thanks.

Real World Use Cases and Success Stories for In-Memory Data Grids
https://www.kai-waehner.de/blog/2014/11/24/real-world-use-cases-success-stories-memory-data-grids/
Mon, 24 Nov 2014

NoSQL Matters Conference 2014

NoSQL Matters is a great conference about different NoSQL topics, where a lot of great NoSQL products and use cases are presented. In November 2014, I gave a talk about “Real World Use Cases and Success Stories for In-Memory Data Grids” in Barcelona, Spain. I discussed several different use cases which our TIBCO customers implemented using our in-memory data grid, TIBCO ActiveSpaces. I will present the same content at data2day, a German conference on big data topics in Karlsruhe.

In-Memory Data Grids: TIBCO ActiveSpaces, Oracle Coherence, Infinispan, IBM eXtreme Scale, Hazelcast, Gigaspaces, etc.

A lot of in-memory data grid products are available: TIBCO ActiveSpaces, Oracle Coherence, Infinispan, IBM WebSphere eXtreme Scale, Hazelcast, GigaSpaces, GridGain and Pivotal GemFire, to name the most important ones. See the great graphic by the 451 Research Group, which shows different databases and how data grids fit into that landscape. You can always get the newest version: 451 Database Landscape.

It is important to understand that an in-memory data grid offers much more than just caching and storing data in memory. Further in-memory features are event processing, publish/subscribe, ACID transactions, continuous queries and fault tolerance – to name a few. Therefore, let’s discuss one example in the next section to get a better understanding of what an in-memory data grid actually is.
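To make this difference tangible, here is a toy Java sketch (not any vendor’s API) of a “space” that combines a key-value store with publish/subscribe: every write is pushed to subscribers, which is the seed of features like continuous queries:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.BiConsumer;

/** Toy "space": a key-value store that also notifies subscribers on every put. */
public class MiniDataGridSpace<K, V> {

    private final Map<K, V> data = new ConcurrentHashMap<>();
    private final List<BiConsumer<K, V>> listeners = new CopyOnWriteArrayList<>();

    /** Subscribe to all updates (the seed of continuous queries / eventing). */
    public void onPut(BiConsumer<K, V> listener) {
        listeners.add(listener);
    }

    public void put(K key, V value) {
        data.put(key, value);
        // Publish/subscribe: push the change to all subscribers immediately.
        for (BiConsumer<K, V> l : listeners) {
            l.accept(key, value);
        }
    }

    public V get(K key) {
        return data.get(key);
    }
}

// Usage sketch: react to updates instead of polling.
// MiniDataGridSpace<String, Double> prices = new MiniDataGridSpace<>();
// prices.onPut((symbol, price) -> System.out.println(symbol + " -> " + price));
// prices.put("TIBX", 24.50);
```

A real data grid adds partitioning, replication, transactions and fault tolerance on top of this basic idea.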

TIBCO ActiveSpaces In-Memory Data Grid

TIBCO ActiveSpaces combines the best out of NoSQL and In-Memory features. The following description is taken from TIBCO’s website:

To lift the burden of big data, TIBCO ActiveSpaces provides a distributed in-memory data grid that can increase processing speed so you can reduce reliance on costly transactional systems.

ActiveSpaces EE provides an infrastructure for building highly scalable, fault-tolerant applications. It creates large virtual data caches from the aggregate memory of participating nodes, scaling automatically as nodes join and leave. Combining the features and performance of databases, caching systems, and messaging software, it supports very large, highly volatile data sets and event-driven applications. And it frees developers to focus on business logic rather than on the complexities of distributing, scaling, and making applications autonomously fault-tolerant.

ActiveSpaces EE supplies configurable replication of virtual shared memory. This means that the space autonomously re-replicates and re-distributes lost data, resulting in an active-active fault-tolerant architecture without resource overhead.

Benefits

  • Reduce Management Cost: Off-load slow, expensive, and hard-to-maintain transactional systems.
  • Deliver Ultra-Low, Predictable Latency: Use peer-to-peer communication, avoiding intervention by a central server.
  • Drastically Improve Performance: Create next-generation elastic applications including high performance computing, extreme transaction processing, and complex event processing.
  • Simplify Administration: Eliminate the complexity of implementing and configuring a distributed caching platform using a command-line administration tool with shell-like control keys that provide command history, syntax completion, and context-sensitive help.
  • Become Platform Independent: Store database rows and objects and use the system as middleware to exchange information between heterogeneous platforms.
  • Speed Development: Enable data virtualization and let developers focus on business logic rather than on the details of data implementation.

If you want to learn more about TIBCO ActiveSpaces, take a look at a great recording from QCon 2013, in which TIBCO Fellow Jean-Noel Moyne discusses in-memory data grids in more detail.

SAP HANA is not an In-Memory Data Grid

I should write an additional blog post about this topic. Nevertheless, to make it clear: SAP HANA is not an in-memory data grid. This is important to mention, as everybody thinks of SAP HANA when talking about in-memory, right? Take a look at the 451 database landscape mentioned above. SAP HANA is put into the “relational zone” under appliances (SAP HANA is only available as an appliance), whereas all the other products I named are put into the “Grid / Cache Zone”.

SAP HANA is primarily used to reduce dependency on other relational databases (e.g. Oracle). It is designed to make SAP run faster, not to speed up other (non-SAP) applications. SAP HANA is more like a traditional database that is meant to “run reports faster” by leveraging the large amount of RAM on its servers. It is great for some analytical use cases, e.g. faster reporting and after-the-fact analysis.

Compared to other in-memory products (i.e. “real data grids”) such as TIBCO ActiveSpaces and the other products mentioned above, SAP HANA misses several features such as implicit eventing (publish/subscribe) or deployment with flexible elasticity on commodity hardware. You can implement custom logic on SAP HANA with JavaScript or a proprietary SQL-like language (SQLScript), of course. However, building several of the use cases in my presentation below is much more difficult with SAP HANA than with other “real data grid” products.

Be aware: I am not saying that SAP HANA is a bad product. However, it serves different use cases than in-memory data grids such as TIBCO ActiveSpaces! For example, SAP HANA is great for replacing Oracle RACs as the database backend for SAP ERP to speed up the system and improve the user experience.

Real World Use Cases and Success Stories for In-Memory Data Grids

My talk was not meant to be very technical. Instead, I discussed several different real world use cases and success stories for using in-memory data grids. Here is the abstract for my talk:

NoSQL is not just about different storage alternatives such as document stores, key-value stores, graphs or column-based databases. The hardware is also getting much more important. Besides common disks and SSDs, enterprises increasingly use in-memory storage, because a distributed in-memory data grid provides very fast data access and updates. While its performance varies depending on multiple factors, it is not uncommon for it to be 100 times faster than corresponding database implementations. For this reason and others described in this session, in-memory computing is a great solution for lifting the burden of big data, reducing reliance on costly transactional systems, and building highly scalable, fault-tolerant applications.

The session begins with a short introduction to in-memory computing. Afterwards, different frameworks and product alternatives for implementing in-memory solutions are discussed. Finally, the main part of the session shows several real world use cases where in-memory computing delivers business value by supercharging the infrastructure.

Here is the slide deck:

As always, I appreciate any feedback. Please post a comment or contact me via email, Twitter, LinkedIn or Xing…

Intelligent Business Process Management Suites (iBPMS) – The Next-Generation BPM for a Big Data World
https://www.kai-waehner.de/blog/2014/08/27/intelligent-business-process-management-suites-ibpms-next-generation-bpm-big-data-world/
Wed, 27 Aug 2014

In August 2014, I gave an interesting talk at ECSA 2014 in Vienna about iBPMS, called The Next-Generation BPM for a Big Data World: Intelligent Business Process Management Suites (iBPMS). iBPMS is a term introduced by Gartner some time ago: see the Magic Quadrant for Intelligent Business Process Management Suites.

I want to share the slides with you. As always, I appreciate every comment or feedback…

Abstract: iBPMS / iBPM

Here is the abstract of my session about iBPMS:

Business Process Management (BPM) is established, tools are stable, and many companies use it successfully. However, today’s business processes are based on “dumb” data from relational databases or web services, and humans make decisions based on this information. Instead, the value of big data analytics should be integrated into business processes, too. Besides, user interfaces are inflexible, and modern concepts such as mobile devices or social media are not integrated into business processes. That is the status quo. Companies miss a huge opportunity here!
This session explains the idea behind next-generation BPM (also called Intelligent Business Process Management, iBPMS or iBPM), which includes big data analytics, social media, and mobile device support. The talk focuses on real world use cases. The audience will learn how to realize intelligent business processes technically by combining BPM, integration, big data and analytics.

Use Case: TIBCO AMX BPM + BusinessWorks + StreamBase + Tibbr

The content of the slides is vendor-independent. It will help you to understand the concepts of iBPMS and how different parts such as BPM, Big Data Analytics or Integration are related. It does not matter if you want to / have to use IBM, Oracle, TIBCO, or any other software for realizing iBPMS.

To demonstrate the implementation of a real world use case, the slides also include an example of how to implement iBPMS with the TIBCO middleware stack. The solution uses:

  • TIBCO ActiveMatrix BPM for business process management to combine human interaction and automatic tasks
  • TIBCO ActiveMatrix BusinessWorks – an Enterprise Service Bus (ESB) – for integration of applications (SAP, Salesforce, Mainframe, EDI, etc.) and technologies (SOAP Web Services, REST APIs, JMS, TCP, etc.)
  • TIBCO StreamBase for stream processing (fast data processing and streaming analytics)
  • TIBCO Tibbr as social enterprise network for work distribution to occasional users

A huge benefit of the TIBCO stack is that the products are loosely coupled, yet integrated. Thus, it is easy to implement iBPMS.

Slides: iBPMS at ECSA 2014

Here are the slides:

TIBCO BusinessWorks (ESB) for Integration of Salesforce (CRM), SAP (ERP) and Tibbr (Social Enterprise Network)
https://www.kai-waehner.de/blog/2014/08/18/tibco-businessworks-esb-integration-salesforce-crm-sap-erp-tibbr-social-enterprise-network/
Mon, 18 Aug 2014

This short video shows how you can integrate different technologies and enterprise software easily with TIBCO Businessworks (ESB).

Demo Content

The demo shows live development and integration of Salesforce (CRM), SAP (ERP) and Tibbr (TIBCO’s Social Enterprise Network).
Any other technology or application can be integrated in the same easy way with the same concepts. Visual development, mapping, testing and debugging are shown. No coding is required.

Video: Integration of Salesforce, SAP and Tibbr with TIBCO BusinessWorks

Watch yourself:

Enterprise Integration Patterns (EIP) Revisited in 2014
https://www.kai-waehner.de/blog/2014/07/17/enterprise-integration-patterns-eip-revisited-in-2014/
Thu, 17 Jul 2014

Today, I gave a talk about “Enterprise Integration Patterns (EIP) Revisited in 2014” at Java Forum Stuttgart 2014, a great conference for developers and architects with 1,600 attendees.

Enterprise Integration Patterns

Data exchange between companies is increasing a lot. Hence, the number of applications which must be integrated increases, too. The emergence of service-oriented architectures and cloud computing boosts this even more. Realizing these integration scenarios is a complex and time-consuming task, because different applications and services do not use the same concepts, interfaces, data formats and technologies.

Originated and published over ten years ago by Gregor Hohpe and Bobby Woolf, Enterprise Integration Patterns (EIP) became the worldwide de facto standard for describing integration problems. They offer a standardized way to split huge, complex integration scenarios into smaller recurring problems. These patterns appear in almost every integration project. Most developers have already used some of these patterns, such as the filter, splitter or content-based router – some of them without being aware of using EIPs. Today, EIPs are still used to reduce effort and complexity a lot. This session revisits EIPs and gives an overview of the status quo.
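As a quick illustration of how directly these patterns map to today’s frameworks, here is a sketch of a Message Filter combined with a Content-Based Router in Apache Camel’s Java DSL; the endpoint URIs and header names are invented for this example:

```java
import org.apache.camel.builder.RouteBuilder;

/** Message Filter + Content-Based Router, as named in the EIP book. */
public class OrderRoutingRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("jms:queue:orders")                          // incoming order messages
            .filter(header("valid").isEqualTo("true"))    // Message Filter: drop invalid orders
            .choice()                                     // Content-Based Router
                .when(header("orderType").isEqualTo("electronic"))
                    .to("jms:queue:electronicOrders")
                .when(header("orderType").isEqualTo("manual"))
                    .to("jms:queue:manualOrders")
                .otherwise()
                    .to("jms:queue:unknownOrders")
            .end();
    }
}
```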

Open Source, Apache Camel, Talend ESB, JBoss, WSO2, TIBCO BusinessWorks, StreamBase, IBM WebSphere, Oracle, …

Fortunately, EIPs offer more possibilities than just modelling integration problems in a standardized way. Several frameworks and tools already implement these patterns, so the developer does not have to implement EIPs on his own. Therefore, the end of the session shows different frameworks and tools which can be used for modelling and implementing complex integration scenarios using EIPs.

Slides

Slides online: “Enterprise Integration Patterns Revisited” – Talk at OBJEKTspektrum Information Days 2013
https://www.kai-waehner.de/blog/2013/11/19/slides-online-enterprise-integration-patterns-revisited-talk-at-objektspektrum-information-days-2013/
Tue, 19 Nov 2013

I gave a brand new talk at OBJEKTspektrum Information Days 2013 in Frankfurt and Munich this week: Enterprise Integration Patterns Revisited. I want to share my slides with you.

Content

Applications have to be integrated – no matter which programming languages, databases or infrastructures are used. However, the realization of integration scenarios is a complex and time-consuming task. Over 10 years ago, Enterprise Integration Patterns (EIP) became the worldwide de facto standard for splitting huge, complex integration scenarios into smaller recurring problems. These patterns appear in almost every integration project.
This session revisits EIPs and shows the status quo. After a short introduction with several examples, the audience will learn which EIPs still have a “right to exist”, and which new EIPs have emerged in the meantime. The end of the session shows different frameworks and tools which already implement EIPs and therefore help the architect to reduce effort a lot.

Slides

JBoss OneDayTalk 2013: “NoSQL Integration with Apache Camel – MongoDB, CouchDB, Neo4j, Cassandra, HBase, Hazelcast, Riak, etc.”
https://www.kai-waehner.de/blog/2013/10/24/jboss-onedaytalk-2013-nosql-integration-with-apache-camel-mongodb-couchdb-neo4j-cassandra-hbase-hazelcast-riak-etc/
Thu, 24 Oct 2013

JBoss OneDayTalk is a great annual event around open source development. I gave a talk there about “NoSQL Integration with Apache Camel”. This blog post shares the updated slide deck of this talk.

Abstract

SQL cannot solve several problems emerging with big data; a distributed, fault-tolerant architecture is necessary. NoSQL comes to the rescue, although it does not use SQL as its query language or give full ACID guarantees. Thus, in the future you will have to learn new concepts and integrate these NoSQL databases just as you integrate SQL databases today. The open source integration framework Apache Camel is already prepared for this challenging task.

Apache Camel implements the well-known Enterprise Integration Patterns (EIP) and therefore offers a standardized, domain-specific language to integrate applications and clouds. It can be used in almost every integration project within the JVM environment. All integration projects can be realized in a consistent way without redundant boilerplate code.

This session demonstrates the elegance of Apache Camel for NoSQL integration. Several examples are shown for the different concepts by integrating NoSQL databases such as CouchDB (document store), HBase (column-oriented), Neo4j (graph), Amazon Web Services (key-value store), and others.
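To give a flavour of the DSL, here is a sketch of a Camel route that persists incoming messages into MongoDB via the camel-mongodb component; the connection bean, database and collection names are placeholders for this example:

```java
import org.apache.camel.builder.RouteBuilder;

/** Persists incoming messages into a NoSQL store via Camel's MongoDB component. */
public class NoSqlIntegrationRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // "myMongoClient" must be registered in the Camel registry as a Mongo client bean.
        from("jms:queue:customerEvents")
            .to("mongodb:myMongoClient"
                + "?database=crm"
                + "&collection=customers"
                + "&operation=insert");
    }
}
```

Swapping the target store is mostly a matter of exchanging the endpoint URI, which is exactly the consistency the abstract promises.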

If the required NoSQL database is not supported by Apache Camel, you can easily create your own Camel component with very little effort. This procedure is explained at the end of the session.

Slides

Yet another new Camel book: “Apache Camel Messaging Systems” (Danger of Confusion) – PACKT PUBLISHING
https://www.kai-waehner.de/blog/2013/10/08/yet-another-new-camel-book-apache-camel-messaging-systems-danger-of-confusion-packt-publishing/
Tue, 08 Oct 2013

“Apache Camel Messaging System” is a new book (see http://www.packtpub.com/apache-camel-messaging-system/book), published on September 25th, 2013 by PACKT PUBLISHING (ISBN: 9781782165347). The author is Evgeniy Sharapov. As its subtitle says, the book describes how to “tackle integration problems and learn practical ways to make data flow between your application and other systems using Apache Camel”.

Apache Camel is the best integration framework “on the market”. It has very good domain-specific languages, many connectors, different companies behind it, and an awesome worldwide open source community. So, seeing a new book about Apache Camel is always good news!

Danger of Confusion

There are two new Apache Camel books from PACKT, published within one month of each other, with the same name and almost the same content. Therefore, both reviews are very similar. Sorry PACKT, this is ridiculous! Why not connect both authors to write ONE book?! Be sure to buy just one of the two books!

Content

First of all, the book has only 70 pages, so it does not contain that much content. It offers a short introduction to Apache Camel. The book explains in detail what Camel is, how to install it, and how to get started. You will also learn about the different domain-specific languages and some Enterprise Integration Patterns.

Compared to the other new Camel book by PACKT, this one differs in two things:

  • The introduction / theory is more detailed
  • The practical examples are less detailed

Both alternatives are a good introduction to Camel; you can get started easily with either!

Conclusion

Good news:

  • Yet another new Apache Camel book
  • Easy to read
  • Good examples
  • Good starting point for newbies

Bad news:

  • Just 70 pages, just for getting started – therefore, it is very expensive.
  • You can find all the information in this book on Camel’s website for free (though, newbies might not find all of it easily by searching the website)
  • “Camel in Action” is also available. That is another awesome Apache Camel book which also explains all the basics, but adds many, many more details (500 pages)
  • If you already know Apache Camel, you do NOT need this book

Summary:

If you are looking for an easy-to-read book for getting started with Apache Camel, then this book is for you. Afterwards, you will still need to buy “Camel in Action”, too. “Camel in Action” is not as easy for getting started, as even the first chapters contain many details. This might be too much for newbies. So, it is not a bad idea to start with this book and then buy “Camel in Action” for using Apache Camel in your projects.

 

Best regards,

Kai Wähner

Twitter: @KaiWaehner

Website: www.kai-waehner.de
