docker Archives - Kai Waehner https://www.kai-waehner.de/blog/tag/docker/ Technology Evangelist - Big Data Analytics - Middleware - Apache Kafka

Service Mesh and Cloud-Native Microservices with Apache Kafka, Kubernetes and Envoy, Istio, Linkerd https://www.kai-waehner.de/blog/2019/09/24/cloud-native-apache-kafka-kubernetes-envoy-istio-linkerd-service-mesh/ Tue, 24 Sep 2019 15:50:45 +0000 This blog post takes a look at cutting-edge technologies like Apache Kafka, Kubernetes, Envoy, Linkerd and Istio to implement a cloud-native service mesh for a scalable, robust and observable microservice architecture.

The post Service Mesh and Cloud-Native Microservices with Apache Kafka, Kubernetes and Envoy, Istio, Linkerd appeared first on Kai Waehner.

Microservice architectures are no free lunch! Microservices need to be decoupled, flexible, operationally transparent, data-aware and elastic. Most material from recent years only discusses point-to-point architectures built with tightly coupled, non-scalable technologies like REST / HTTP. This blog post takes a look at cutting-edge technologies like Apache Kafka, Kubernetes, Envoy, Linkerd and Istio to implement a cloud-native service mesh that solves these challenges and brings microservices to the next level of scale, speed and efficiency.

Here are the key requirements for building a scalable, reliable, robust and observable microservice architecture:

Key Requirements for Microservices

Before we go into more detail, let’s take a look at the key takeaways first:

  • Apache Kafka decouples services, including event streams and request-response
  • Kubernetes provides a cloud-native infrastructure for the Kafka ecosystem
  • Service Mesh helps with security and observability at ecosystem / organization scale
  • Envoy and Istio sit in the layer above Kafka and are orthogonal to the goals Kafka addresses

The following sections cover these points in more detail. The end of the blog post contains a slide deck and video recording with more detailed explanations.

Microservices, Service Mesh and Apache Kafka

Apache Kafka became the de facto standard for microservice architectures. It goes far beyond reliable and scalable high-volume messaging. The distributed storage allows high availability and real decoupling between the independent microservices. In addition, you can leverage Kafka Connect for integration and the Kafka Streams API for building lightweight stream processing microservices in autonomous teams.
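Decoupling in Kafka rests on partitioned, replicated topics: every record with the same key lands on the same partition, so independent consumers can process events in order without knowing about each other. Here is a simplified sketch of key-based partition assignment (Kafka's default partitioner uses murmur2 hashing; the hash below is only a stand-in for illustration):

```python
def assign_partition(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition deterministically.

    Kafka's default partitioner uses murmur2; this simple
    stand-in hash illustrates the same idea.
    """
    h = 0
    for b in key:
        h = (h * 31 + b) & 0x7FFFFFFF  # keep the hash non-negative
    return h % num_partitions

# Records with the same key always map to the same partition,
# so per-key ordering is preserved across decoupled consumers.
p1 = assign_partition(b"customer-42", 6)
p2 = assign_partition(b"customer-42", 6)
```

Because partition assignment depends only on the key, producers and the many independent consumer groups never need to coordinate with each other.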

A Service Mesh complements the architecture. It describes the network of microservices that make up such applications and the interactions between them. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.

I explore the problem of distributed microservice communication and how both Apache Kafka and service mesh solutions address it. This blog post looks at some approaches for combining the two to build a reliable and scalable microservice architecture with decoupled and secure microservices.

Kafka Service Mesh Domain Driven Design

Discussions and architectures include various open source technologies like Apache Kafka, Kafka Connect, Kubernetes, HAProxy, Envoy, Linkerd and Istio.

Learn more about decoupling microservices with Kafka in this related blog post about “Microservices, Apache Kafka, and Domain-Driven Design (DDD)“.

Cloud-Native Kafka with Kubernetes

Cloud-native infrastructures are scalable, flexible, agile, elastic and automated. Kubernetes became the de facto standard. Deployment of stateless services is pretty easy and straightforward. However, deploying stateful and distributed applications like Apache Kafka is much harder. A lot of manual operations work is required. Kubernetes does NOT automatically solve Kafka-specific challenges like rolling upgrades, security configuration or data balancing between brokers. A Kafka Operator – implemented with K8s Custom Resource Definitions (CRDs) – can help here!

The Operator pattern for Kubernetes aims to capture the key aim of a human operator who is managing a service or set of services. Human operators who look after specific applications and services have deep knowledge of how the system ought to behave, how to deploy it, and how to react if there are problems.

People who run workloads on Kubernetes often like to use automation to take care of repeatable tasks. The Operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides.
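At its core, an operator runs a reconciliation loop: compare the desired state declared in a custom resource with the actual state of the cluster, and act on the difference. Here is a toy sketch of one reconciliation pass for a hypothetical Kafka custom resource (the field names are invented for illustration; real operators like Strimzi handle far more, e.g. rolling restarts and certificate rotation):

```python
def reconcile(desired: dict, actual: dict) -> list:
    """One reconciliation pass: compare desired vs. actual state and
    return the actions a (hypothetical) Kafka operator would take.
    Real operators run this loop continuously against Kubernetes
    custom resources."""
    actions = []
    if actual.get("brokers", 0) < desired["brokers"]:
        actions.append(f"scale brokers {actual.get('brokers', 0)} -> {desired['brokers']}")
    if actual.get("version") != desired["version"]:
        actions.append(f"rolling upgrade to {desired['version']}")
    return actions

actions = reconcile(
    desired={"brokers": 5, "version": "2.3.0"},
    actual={"brokers": 3, "version": "2.2.1"},
)
```

The point is that the operator encodes the human runbook ("add brokers first, then upgrade in a rolling fashion") as code that runs whenever the observed state drifts from the declared one.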

Different implementations of a Kafka Operator for Kubernetes exist: Confluent Operator, IBM's / Red Hat's Strimzi, Banzai Cloud. I won't go into more detail about the characteristics and advantages of a K8s Kafka Operator here. I already explained them in detail in another blog post (and the video below also discusses this topic):

Service Mesh with Kubernetes-based Technologies like Envoy, Linkerd or Istio

Service Mesh is a microservice pattern to move visibility, reliability, and security primitives for service-to-service communication into the infrastructure layer, out of the application layer.

A great, detailed explanation of the design pattern “service mesh” can be found here, including the following diagram which shows the relation between a control plane and the microservices with proxy sidecars:

design pattern service mesh

You can find much more great content about service mesh concepts and their implementations from the creators of frameworks like Envoy or Linkerd. Check out these two links or just use Google for more information about the competing alternatives and their trade-offs.

(Potential) Features for Apache Kafka and Service Mesh

An event streaming platform like Apache Kafka and a service mesh on top of Kubernetes are cloud-native, orthogonal and complementary. Together they solve the key requirements for building a scalable, reliable, robust and observable microservice architecture:

Key Requirements for Microservices Solved with Kafka Kubernetes Envoy Istio

Companies already use Kafka together with service mesh implementations like Envoy, Linkerd or Istio today. You can easily combine them to add security, enforce rate limiting, or implement other related use cases. Banzai Cloud published one of the most interesting architectures: they use Istio to add security to Kafka brokers and ZooKeeper via Envoy proxies.

However, support has since gotten even better: the pull request for Kafka support in Envoy was merged in May 2019. This means you now have native Kafka protocol support in Envoy. The very interesting discussions about the challenges and potential features of implementing a Kafka protocol filter are also worth reading.
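For context, every Kafka request starts with a fixed header that a protocol-aware proxy must parse before it can classify, route or rate-limit traffic by API type. A minimal sketch of that first step (a real filter like Envoy's also decodes the nullable client_id string and the request body):

```python
import struct

def parse_request_header(frame: bytes) -> dict:
    """Parse the fixed part of a Kafka request header, as an L7 proxy
    filter must do before it can route, log, or rate-limit by API type.
    Wire layout: int32 size | int16 api_key | int16 api_version |
    int32 correlation_id (big-endian)."""
    size, api_key, api_version, correlation_id = struct.unpack(">ihhi", frame[:12])
    return {"size": size, "api_key": api_key,
            "api_version": api_version, "correlation_id": correlation_id}

# api_key 0 is Produce in the Kafka protocol; the payload here is a
# hand-built frame just to exercise the parser.
frame = struct.pack(">ihhi", 8, 0, 7, 12345)
hdr = parse_request_header(frame)
```

Once the proxy knows the API key and correlation ID, everything in the feature list below (stats, routing, shadowing, rate limiting per message type) becomes possible.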

With native Kafka protocol support, you can do many more interesting things beyond L4 TCP filtering. Here are just some ideas (partly from the GitHub discussion above) of what you could do with L7 Kafka protocol support in a service mesh:

Protocol conversion from HTTP / gRPC to Kafka

  • Tap feature to dump to a Kafka stream
  • Protocol parsing for observability (stats, logging, and trace linking with HTTP RPCs)
  • Shadow requests to a Kafka stream instead of HTTP / gRPC shadow
  • Integrate with Kafka Connect and its whole ecosystem of connectors

Proxy features

  • Dynamic Routing
  • Rate limiting at both the L4 connection and L7 message level
  • Filter, add compression, …
  • Automatic topic name conversion (e.g. for canary release or blue/green deployment)
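To illustrate the rate-limiting idea, here is a minimal token-bucket limiter of the kind a proxy could apply per connection (L4) or per Kafka message (L7). This is a conceptual sketch, not Envoy's actual implementation:

```python
class TokenBucket:
    """Minimal token-bucket rate limiter, as a proxy might apply at the
    L4 connection or L7 message level (e.g. one token per produce request)."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2.0, capacity=2.0)
# Two quick requests pass, the third exhausts the bucket, and after a
# pause the bucket has refilled.
results = [bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)]
```

The same primitive, keyed by client ID or topic extracted from the parsed protocol, gives per-tenant rate limiting without touching the Kafka brokers themselves.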

Monitoring and Tracing

  • Request logs and stats
  • Data lineage / audit log
  • Audit log built by enriching request logs with user info
  • Client-specific metrics (byte rate per client ID / consumer group, versions of the client libraries, consumer lag monitoring for the entire data center)
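Consumer lag, for example, is simple in principle: the distance between a partition's log-end offset and the consumer group's committed offset. A sketch of the calculation a protocol-aware monitoring layer could perform:

```python
def consumer_lag(log_end_offsets: dict, committed_offsets: dict) -> dict:
    """Consumer lag per partition: how far the committed offset trails
    the log-end offset. A proxy or monitoring agent that understands
    the Kafka protocol can derive this for an entire data center."""
    return {p: log_end_offsets[p] - committed_offsets.get(p, 0)
            for p in log_end_offsets}

# Partition 0 is 10 messages behind; partition 1 is fully caught up.
lag = consumer_lag({0: 1000, 1: 500}, {0: 990, 1: 500})
```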

Security

  • SSL Termination
  • Mutual TLS (mTLS)
  • Authorization

Validation of Events

  • Serialization format (JSON, Avro, Protobuf, etc.)
  • Message schema
  • Headers, attributes, etc.
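As a toy illustration of message validation at the proxy layer, the sketch below checks required fields and their types; a production setup would instead validate against registered Avro, Protobuf or JSON schemas from a schema registry:

```python
def validate_event(event: dict, required: dict) -> list:
    """Very small schema check of the kind an L7 filter could apply:
    verify that required fields exist and have the expected types.
    Real deployments use a schema registry with Avro/Protobuf/JSON."""
    errors = []
    for field, ftype in required.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], ftype):
            errors.append(f"wrong type for {field}")
    return errors

schema = {"customer_id": str, "amount": float}
# "amount" arrives as a string, so validation flags it.
errs = validate_event({"customer_id": "42", "amount": "oops"}, schema)
```

Rejecting malformed events at the mesh layer keeps bad data out of the topics before any consumer ever sees it.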

That’s awesome, isn’t it?

Microservices, Kafka and Service Mesh – Slide Deck and Video Recording

Let’s take a look at my slide deck and video recording to understand the requirements, challenges and opportunities of building a Service Mesh with Apache Kafka, its ecosystem, Kubernetes and Service Mesh technologies in more detail…

Here is the slide deck:

The video recording walks you through the slide deck:

Any thoughts or feedback? Please let me know via a comment or Tweet or let’s connect on LinkedIn.


Apache Kafka + Kafka Streams + Mesos = Highly Scalable Microservices https://www.kai-waehner.de/blog/2018/01/12/apache-kafka-kafka-streams-mesos-highly-scalable-microservices/ Fri, 12 Jan 2018 16:34:12 +0000 This blog post discusses how to build a highly scalable, mission-critical microservice infrastructure with Apache Kafka, Kafka Streams, and Apache Mesos, along with their vendor-supported platforms from Confluent and Mesosphere.

The post Apache Kafka + Kafka Streams + Mesos = Highly Scalable Microservices appeared first on Kai Waehner.

My latest article about Apache Kafka, Kafka Streams and Apache Mesos was published on Confluent’s blog:

Apache Mesos, Apache Kafka and Kafka Streams for Highly Scalable Microservices

This blog post discusses how to build a highly scalable, mission-critical microservice infrastructure with Apache Kafka, Kafka Streams, and Apache Mesos, along with their vendor-supported platforms from Confluent and Mesosphere.

https://www.confluent.io/blog/apache-mesos-apache-kafka-kafka-streams-highly-scalable-microservices/

Have fun reading it and let me know if you have any feedback…

Apache Kafka + Kafka Streams + Mesos / DCOS = Scalable Microservices https://www.kai-waehner.de/blog/2017/10/27/mesos-kafka-streams-scalable-microservices/ Fri, 27 Oct 2017 08:05:16 +0000 Apache Kafka + Kafka Streams + Apache Mesos = Highly Scalable Microservices. Mission-critical deployments via DC/OS and Confluent, on-premises or in the public cloud.

The post Apache Kafka + Kafka Streams + Mesos / DCOS = Scalable Microservices appeared first on Kai Waehner.

I gave a talk at MesosCon 2017 Europe in Prague about building highly scalable, mission-critical microservices with Apache Kafka, Kafka Streams and Apache Mesos / DC/OS. I would like to share the slides and a video recording of the live demo.

Abstract

Microservices bring many benefits, such as agile, flexible development and deployment of business logic. However, a microservice architecture also creates many new challenges, including increased communication between distributed instances, the need for orchestration, new fail-over requirements, and resiliency design patterns.

This session discusses how to build a highly scalable, performant, mission-critical microservice infrastructure with Apache Kafka, Kafka Streams and Apache Mesos / DC/OS. Apache Kafka brokers are used as a powerful, scalable, distributed message backbone. Kafka's Streams API allows you to embed stream processing directly into any external microservice or business application, without the need for a dedicated streaming cluster. Apache Mesos can be used as scalable infrastructure for both the Apache Kafka brokers and the external applications using the Kafka Streams API, to leverage the benefits of a cloud-native platform like service discovery, health checks, or fail-over management.
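To make "stream processing embedded in the application" concrete, here is the classic word-count example reduced to its core logic in plain Python. The real Kafka Streams API is a Java library and keeps this state in fault-tolerant, Kafka-backed state stores rather than in a local dict:

```python
from collections import defaultdict

def process_stream(records):
    """Stateful word count over a stream of (key, value) records --
    the canonical Kafka Streams example, reduced to its core logic.
    The dict stands in for a Kafka-backed, fault-tolerant state store."""
    counts = defaultdict(int)
    for _key, line in records:
        for word in line.lower().split():
            counts[word] += 1
    return dict(counts)

counts = process_stream([(None, "Kafka streams"), (None, "kafka brokers")])
```

Because this logic is just a library call inside the microservice, the service scales by simply starting more instances; Kafka redistributes the partitions (and their state) among them.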

A live demo shows how to develop real-time applications for your core business with Kafka message brokers and the Kafka Streams API. You will see how to deploy, manage and scale them on a DC/OS cluster using different deployment options.

Key takeaways

  • Successful microservice architectures require a highly scalable messaging infrastructure combined with a cloud-native platform which manages distributed microservices
  • Apache Kafka offers a highly scalable, mission-critical infrastructure for distributed messaging and integration
  • Kafka's Streams API allows you to embed stream processing into any external application or microservice
  • Mesos and DC/OS allow management of both Kafka brokers and external applications using the Kafka Streams API, to leverage many built-in benefits like health checks, service discovery or fail-over control of microservices
  • See a live demo which combines the Apache Kafka streaming platform and DC/OS

Architecture: Kafka Brokers + Kafka Streams on Kubernetes and DC/OS

The following picture shows the architecture. You can either run Kafka brokers and Kafka Streams microservices natively on DC/OS via Marathon or leverage Kubernetes as a Docker container orchestration tool (which, in the meantime, is also supported by Mesosphere).


Architecture - Kafka Streams, Kubernetes and Mesos / DCOS

Slides

Here are the slides from my talk:

Live Demo

The following video shows the live demo. It is built on AWS, using Mesosphere's CloudFormation script to set up a DC/OS cluster in ten minutes.

Here, I deployed both – Kafka brokers and Kafka Streams microservices – directly to DC/OS without leveraging Kubernetes. I expect to see many people continue to deploy Kafka brokers directly on DC/OS. For microservices many teams might move to the following stack: Microservice –> Docker –> Kubernetes –> DC/OS.

Do you also use Apache Mesos or DC/OS to run Kafka? Only the brokers, or also Kafka clients (producers, consumers, Streams, Connect, KSQL, etc.)? Or do you prefer another tool like Kubernetes (maybe on DC/OS)?


Agile Cloud-to-Cloud Integration with iPaaS, API Management and Blockchain https://www.kai-waehner.de/blog/2017/04/23/agile-cloud-cloud-integration-ipaas-api-management-blockchain/ Sun, 23 Apr 2017 18:41:06 +0000 Agile cloud-to-cloud integration with iPaaS, API Management and blockchain. Scenario use case using IBM's open source Hyperledger Fabric on Bluemix, TIBCO Cloud Integration (TCI) and Mashery.

The post Agile Cloud-to-Cloud Integration with iPaaS, API Management and Blockchain appeared first on Kai Waehner.

Cloud-to-cloud integration is part of a hybrid integration architecture. It enables you to implement quick and agile integration scenarios without the burden of setting up complex VM- or container-based infrastructures. One key use case for cloud-to-cloud integration is innovation using a fail-fast methodology, where you realize new ideas quickly. You typically think in days or weeks, not in months. If an idea fails, you throw it away and start on a new one. If the idea works well, you scale it out and bring it into production on an on-premises, cloud or hybrid infrastructure. Finally, you expose the idea and make it easily available to any interested service consumer in your enterprise, among partners, or to public end users.

A great example of where you need agile, fail-fast development is blockchain: it is a very hot topic, but frameworks are immature and change very frequently these days. Note that blockchain is not just about Bitcoin and the finance industry. That is just the tip of the iceberg. Blockchain, the underlying infrastructure of Bitcoin, will change various industries, beginning with banking, manufacturing, supply chain management, energy, and others.

Middleware and Integration as Key for Success in Blockchain Projects

A key to successful blockchain projects is middleware, as it allows integration with the rest of an enterprise architecture. Blockchain only adds value if it works together with your other applications and services. See an example of how to combine streaming analytics with TIBCO StreamBase and Ethereum to correlate blockchain events in real time and act proactively, e.g. for fraud detection.

The drawback of blockchain is its immaturity today. APIs change with every minor release, development tools are buggy and lack many features required for serious development, and new blockchain cloud services, frameworks and programming languages come and go every quarter.

This blog post shows how to leverage cloud integration with iPaaS and API Management to realize innovative projects quickly, fail fast, and adopt changing technologies and business requirements easily. We will use TIBCO Cloud Integration (TCI), TIBCO Mashery and Hyperledger Fabric running as IBM Bluemix cloud service.

IBM Hyperledger Fabric as Bluemix Cloud Service

Hyperledger is a vendor-independent open source blockchain project which consists of various components. Many enterprises and software vendors have committed to it to build different solutions for various problems and use cases. One example is IBM's Hyperledger Fabric. Being a very flexible construction kit makes Hyperledger much more complex to get started with than other blockchain platforms like Ethereum, which are less flexible but therefore easier to set up and use.

This is the reason why I use the Bluemix BaaS (Blockchain as a Service) to get started quickly, without spending days on just setting up my own Hyperledger network. You can start a Hyperledger Fabric blockchain infrastructure with four peers and a membership service with just a few clicks. It takes around two minutes. Hyperledger Fabric is evolving fast, which makes it a great example of quickly changing features, evolving requirements and (potentially) fast-failing projects.

My project uses Hyperledger Fabric version 0.6 as a free beta cloud service. I leverage its Swagger-based REST API in my middleware microservice to interconnect other cloud services with the blockchain:


However, when I began the project, it was already clear that the REST interface was deprecated and would not be included in the upcoming 1.0 release anymore. Thus, we knew from the start that an easy move to another API would be needed within a few weeks or months.
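A practical consequence: keep the deprecated API behind a thin helper so the later migration touches only one place. The sketch below builds a JSON-RPC-style payload for the Fabric 0.6 REST API; the exact field names are recalled from the 0.6 documentation and should be treated as illustrative rather than a verified specification:

```python
def build_chaincode_request(method: str, chaincode_name: str,
                            function: str, args: list) -> dict:
    """Build a JSON-RPC-style payload for the (deprecated) Hyperledger
    Fabric 0.6 REST API. Field names are illustrative, not a verified
    spec -- isolating them in one helper is exactly what makes the
    later move to another API cheap."""
    return {
        "jsonrpc": "2.0",
        "method": method,  # e.g. "invoke" or "query"
        "params": {
            "chaincodeID": {"name": chaincode_name},
            "ctorMsg": {"function": function, "args": args},
        },
        "id": 1,
    }

# Hypothetical chaincode and function names, for illustration only.
req = build_chaincode_request("query", "customer_cc", "read", ["customer-42"])
```

When the 1.0 SDK arrives, only this helper (and the HTTP call that posts it) needs to be replaced.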

Cloud-to-Cloud Integration, iPaaS and API Management for Agile and Fail-Fast Development

As mentioned in the introduction, middleware is key for success in blockchain projects to interconnect it with the existing enterprise architecture. This example leverages the following middleware components:

  • iPaaS: TIBCO Cloud Integration (TCI) is hosted and managed by TIBCO. It can be used without any setup or installation to quickly build a REST microservice which integrates with Hyperledger Fabric. TCI also allows you to configure caching, throttling and security to ensure controlled communication between the blockchain and other applications and services.
  • API Management: TIBCO Mashery is used to expose the TCI REST microservice to other service consumers. These can be internal consumers, partners or public end users, depending on the use case.
  • On-Premises / Cloud-Native Integration Infrastructure: TIBCO BusinessWorks 6 (BW6) or TIBCO BusinessWorks Container Edition (BWCE) can be used to deploy and productionize successful TCI microservices in your existing infrastructure, on-premises or on a cloud-native platform like Cloud Foundry, Kubernetes or any other Docker-based platform. Of course, you can also continue to use the services within TCI itself to run and scale the production services in the public cloud, hosted and managed by TIBCO.

Scenario: TIBCO Cloud Integration + Mashery + Hyperledger Fabric Blockchain + Salesforce

I implemented the following technical scenario with the goal of showing agile cloud-to-cloud integration with fail-fast methodology from a technical perspective (and not to build a great business case): The scenario implements a new REST microservice with TCI via visual coding and configuration. This REST service connects to two cloud services: Salesforce and Hyperledger Fabric. The following picture shows the architecture:


Here are the steps to realize this scenario:

  • Create a REST service which receives a customer ID and customer name as parameters.
  • Enhance the input data from the REST call with additional data from Salesforce CRM. In this case, we get a block hash which is stored in Salesforce as a reference to the blockchain data. The block hash allows us to double-check whether Salesforce has the most up-to-date information about a customer, while the blockchain itself ensures that customer updates from various systems like CRM, ERP or mainframe are stored and updated in a correct, secure, distributed chain – which can be accessed by all related systems, not just Salesforce.
  • Use the Hyperledger REST API to validate, via Salesforce's block hash, that the current information about the customer in Salesforce is up to date. If Salesforce has an older block hash, update Salesforce to the up-to-date values (both the most recent block hash and the current customer data stored on the blockchain).
  • Return the up-to-date customer data to the service consumer of the TCI REST service.
  • Configure caching, throttling and security requirements so that service consumers cannot cause unexpected behavior.
  • Leverage API Management to expose the TCI REST service as a public API so that external service consumers can subscribe to a payment plan and use it in other applications or microservices.
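The core of the block-hash validation (steps two and three above) can be sketched in a few lines, with plain dicts standing in for the real Salesforce and Hyperledger APIs:

```python
def sync_customer(salesforce: dict, blockchain: dict) -> dict:
    """Sketch of the core validation step: if the block hash stored in
    Salesforce is stale, refresh Salesforce from the blockchain record.
    Both inputs are plain dicts standing in for the real cloud APIs."""
    if salesforce["block_hash"] == blockchain["block_hash"]:
        return salesforce  # Salesforce is already up to date
    # Stale hash: take name and hash from the authoritative chain record.
    return {
        "customer_id": salesforce["customer_id"],
        "name": blockchain["name"],
        "block_hash": blockchain["block_hash"],
    }

result = sync_customer(
    {"customer_id": "42", "name": "Old Name", "block_hash": "aaa"},
    {"customer_id": "42", "name": "New Name", "block_hash": "bbb"},
)
```

In TCI this logic lives in the graphical mapping between the Salesforce and Hyperledger activities rather than in hand-written code.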

Implementation: Visual Coding, Web Configuration and Automation via DevOps

This section discusses the relevant steps of the implementation in more detail, including development and deployment of the chain code (Hyperledger’s term for smart contracts in blockchain), implementation of the REST integration service with visual coding in the IDE of TCI, and exposing the service as API via TIBCO Mashery.

Chain Code for Hyperledger Fabric on IBM Bluemix Cloud

Hyperledger Fabric has some detailed “getting started” tutorials. I used a “Hello World” tutorial and adapted it to our use case so that you can store, update and query customer data on the blockchain network. Hyperledger Fabric uses Go for chain code and will allow other programming languages like Java in future releases. This is both a pro and a con. The benefit is that you can leverage your existing expertise in these programming languages. However, you also have to be careful not to use “wrong” features like threads, indefinite loops or other functions which cause unexpected behavior on a blockchain. Personally, I prefer the concept of Ethereum. This platform uses Solidity, a programming language explicitly designed to develop smart contracts (i.e. chain code).

Cloud-to-Cloud Integration via REST Service with TIBCO Cloud Integration

The following steps were necessary to implement the REST microservice:

  • Create the REST service interface with the API Modeler web UI, leveraging Swagger
  • Import the Swagger interface into TCI's IDE
  • Use the Salesforce activity to read the block hash from Salesforce's cloud interface
  • Use the REST Client activity to send an HTTP request to Hyperledger Fabric's REST interface
  • Configure graphical mappings between the activities (REST Request → Salesforce → Hyperledger Fabric → REST Response)
  • Deploy the microservice to TCI's runtime in the cloud and test it from the Swagger UI
  • Configure caching and throttling in TCI's web UI

The last point is very important in blockchain projects. A blockchain infrastructure does not have millisecond response latency. Furthermore, every blockchain update costs money – Bitcoin, Ether, or whatever currency you use to “pay for mining” and to reach consensus in your blockchain infrastructure. Therefore, you can leverage caching and throttling in blockchain projects much more than in some other projects (only if it fits into your business scenario, of course).
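The kind of caching TCI configures declaratively can be pictured as a simple TTL cache: a blockchain read is slow and a write costs currency, so even a short cache window pays off. A minimal sketch (TCI's actual caching is configuration, not code):

```python
class TtlCache:
    """Minimal time-to-live cache. Blockchain reads are slow and writes
    cost currency, so even a short cache window reduces both latency
    and cost. TCI configures this declaratively; this is the concept."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, stored_at)

    def get(self, key, now: float):
        entry = self.store.get(key)
        if entry and now - entry[1] < self.ttl:
            return entry[0]
        return None  # cache miss or expired entry

    def put(self, key, value, now: float):
        self.store[key] = (value, now)

cache = TtlCache(ttl_seconds=30.0)
cache.put("customer-42", {"hash": "aaa"}, now=0.0)
hit = cache.get("customer-42", now=10.0)   # within the TTL window
miss = cache.get("customer-42", now=60.0)  # expired
```

Throttling (see the token-bucket idea) complements this: the cache cuts redundant reads, the throttle caps how fast consumers can force expensive writes.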

Here is the implemented REST service in TCI’s graphical development environment:


And the caching configuration in TCI’s web user interface:

Exposing the REST Service to Service Consumers through API Management via TIBCO Mashery

The deployed TCI microservice can either be accessed directly by service consumers or exposed via API Management, e.g. with TIBCO Mashery. The latter option allows you to define rules, payment plans and security configuration in a centralized manner with easy-to-use tooling.

Note that the deployment to TCI and exposing its API in Mashery can also be done in one single step. Both products are loosely coupled, but highly integrated. Also note that this is typically not done manually (as in this technical demonstration), but integrated into a DevOps infrastructure leveraging tools like Maven or Jenkins to automate the deployment and testing steps.

Agile Cloud-to-Cloud Integration with Fail-Fast Methodology

Our scenario is now implemented successfully. However, it is already clear that the REST interface is deprecated and not included in the upcoming 1.0 release anymore. In blockchain frameworks, you can expect changes very frequently.

While implementing this example, IBM announced at its big user conference, IBM InterConnect in Las Vegas, that Hyperledger Fabric 1.0 is now publicly available. Well, it has really been available on Bluemix since that day, but if you try to start the service you get the following message: “Due to high demand of our limited beta, we have reached capacity.” A strange error, as I thought we use public cloud infrastructures like Amazon AWS, Microsoft Azure, Google Cloud Platform or IBM Bluemix for exactly this reason: to scale out easily…

Anyway, the consequence is that we still have to work with our 0.6 version and do not yet know when we will be able (or have) to migrate to version 1.0. The good news is that iPaaS and cloud-to-cloud integration can accommodate changes quickly. In the case of our TCI REST service, we just need to replace the REST activity calling the Hyperledger REST API with a Java activity leveraging Hyperledger's Java SDK – as soon as it is available. Right now only a client SDK for Node.js is available – not really an option for “enterprise projects” where you want to leverage the JVM / Java platform instead of JavaScript. Side note: the topic of using Java vs. JavaScript in blockchain projects is also well discussed in “Static Type Safety for DApps without JavaScript”.

This blog post focuses on just a small example, and Hyperledger Fabric 1.0 will of course bring other new features and concept changes with it. The same is true for SaaS cloud offerings such as Salesforce. You cannot control when they change their API or what exactly they change. But you have to adapt to it within a relatively short timeframe or your service will not work anymore. An iPaaS is the perfect solution for these scenarios, as you do not need a lot of complex setup in your private data center or on a public cloud platform. You just use it “as a service”, change it, replace logic, or stop it and throw it away to start a new project. The built-in integration with API Management and DevOps support also allows you to expose new versions to your service consumers easily and quickly.

Conclusion: Be Innovative by Failing Fast with Cloud Solutions

Blockchain is at a very early stage. This is true for platforms, tooling, and even the basic concepts and theories (like consensus algorithms or security enforcement). Don't trust the vendors when they say blockchain is now 1.0 and ready for prime time. It will still feel more like a 0.5 beta version these days. This is not just true for Hyperledger and its related projects such as IBM's Hyperledger Fabric, but also for all others, including Ethereum and all the interesting frameworks and startups emerging around it.

Therefore, you need to be flexible and agile in blockchain development today. You need to be able to fail fast. This means setting up a project quickly, trying out new innovative ideas, and throwing them away if they do not work, to start the next one. The same is true for other innovative ideas – not just for blockchain.

Middleware helps integrate new innovative ideas with existing applications and enterprise architectures. It is used to interconnect everything, correlate events in real time, and find insights and patterns in correlated information to create new added value. To support innovative projects, middleware also needs to be flexible and agile and support fail-fast methodologies. This post showed how you can leverage iPaaS with TIBCO Cloud Integration and API Management with TIBCO Mashery to build innovative middleware microservices in cloud-to-cloud integration projects.

Cloud Native Middleware Microservices – 10 Lessons Learned (O’Reilly Software Architecture 2017, New York) https://www.kai-waehner.de/blog/2017/04/05/cloud-native-middleware-microservices-10-lessons-learned-oreillysacon/ Wed, 05 Apr 2017 19:22:17 +0000 I want to share my slide deck and video recordings from the talk "10 Lessons Learned from Building Cloud Native Middleware Microservices" at O'Reilly Software Architecture in New York, USA, in April 2017.

The post Cloud Native Middleware Microservices – 10 Lessons Learned (O’Reilly Software Architecture 2017, New York) appeared first on Kai Waehner.

I want to share my slide deck and video recordings from the talk “10 Lessons Learned from Building Cloud Native Middleware Microservices” at O’Reilly Software Architecture in New York, USA, in April 2017.

Abstract
Microservices are the next step after SOA: Services implement a limited set of functions; services are developed, deployed, and scaled independently; continuous delivery automates deployments. This way you get shorter time to results and increased flexibility. Containers improve things even more, offering a very lightweight and flexible deployment option.

In the middleware world, you use concepts and tools such as an enterprise service bus (ESB), complex event processing (CEP), business process management (BPM), or API gateways. Many people still think about complex, heavyweight central brokers. However, microservices and containers are not only relevant for custom self-developed applications; they are also a key requirement for making the middleware world more flexible, agile, and automated.

Kai Wähner shares 10 lessons learned from building cloud-native microservices in the middleware world, including the concepts behind cloud native, choosing the right cloud platform, and when not to build microservices at all, and leads a live demo showing how to apply these lessons to real-world projects by leveraging Docker, CloudFoundry, and Kubernetes to realize cloud-native middleware microservices.

Slide Deck

Here is the slide deck “10 Lessons Learned from Building Cloud Native Middleware Microservices“:

Video Recordings / Live Demos

Two video recordings which demo how to apply the discussed lessons learned with middleware and open source frameworks:

Case Study: From a Monolith to Cloud, Containers, Microservices https://www.kai-waehner.de/blog/2017/02/24/case-study-monolith-cloud-containers-microservices/ Fri, 24 Feb 2017 15:14:12 +0000 Case study: how to move from a (middleware) monolith to cloud, containers and microservices, leveraging Docker, Cloud Foundry, Kubernetes, Consul, Hystrix, API Management, and other cool things.

The post Case Study: From a Monolith to Cloud, Containers, Microservices appeared first on Kai Waehner.

The following is a case study about successfully moving from a very complex monolithic system to a cloud-native architecture built on containers and microservices. This solves issues such as the high effort required to extend the system and a very slow deployment process. The old system consisted of a few huge Java applications and a complex integration middleware deployment.

The new architecture allows flexible development, deployment, and operations of business and integration services. In addition, it is vendor-agnostic, so you can leverage on-premise hardware, different public cloud infrastructures, and cloud-native PaaS platforms.

The session describes the challenges of the existing monolithic system and the step-by-step procedure for moving to the new cloud-native microservices architecture. It also explains why containers such as Docker play a key role in this scenario.

A live demo shows how container solutions such as Docker, PaaS cloud platforms such as Cloud Foundry, cluster managers such as Kubernetes or Mesos, and different programming languages are used to implement, deploy, and scale cloud-native microservices in a vendor-agnostic way.

Key Takeaways

Key takeaways for the audience:

– Best practices for moving to a cloud-native architecture

– How to leverage microservices and containers for flexible development, deployment and operations

– How to solve challenges in real world projects

– Understand key technologies, which are recommended

– How to stay vendor-agnostic

– See a live demo of how cloud-native applications and services differ from monolithic applications in development and at runtime

Slides and Video from Microservices Meetup Mumbai

Here are the slides and video recording. Presented in February 2017 at Microservices Meetup Mumbai, India.

Machine Learning Applied to Microservices
https://www.kai-waehner.de/blog/2016/10/20/machine-learning-applied-microservices/ (Thu, 20 Oct 2016)

Build intelligent Microservices by applying Machine Learning and Advanced Analytics. Leverage Apache Hadoop / Spark with Visual Analytics and Stream Processing.

The post Machine Learning Applied to Microservices appeared first on Kai Waehner.

I had two sessions at the O’Reilly Software Architecture Conference in London in October 2016, the first #OReillySACon in London: a very well-organized conference with plenty of great speakers and sessions. I can really recommend this conference and its siblings in other cities such as San Francisco or New York if you want to learn about good software architectures, new concepts, best practices, and technologies. Some of the hot topics this year besides microservices were DevOps, serverless architectures, and big data analytics and machine learning.

Intelligent Microservices by Leveraging Big Data Analytics

One of the two sessions was about how to apply machine learning and big data analytics to real-time event processing. I also covered the relation to microservices, i.e. how to leverage microservice concepts such as 12-factor apps, containers (e.g. Docker), cloud platforms (e.g. Kubernetes, Cloud Foundry), or DevOps to build agile, intelligent microservices.

Abstract: How to Apply Machine Learning to Microservices

Digital transformation is moving forward due to mobile, cloud, and the Internet of Things. Disruptive business models leverage big data analytics and machine learning.

“Big data” is currently a big hype topic. Large amounts of historical data are stored in Hadoop or other platforms. Business intelligence tools and statistical computing are used to extract new knowledge and find patterns in this data, for example for promotions, cross-selling, or fraud detection. The key challenge is how to apply these findings from historical data to new transactions in real time to make customers happy, increase revenue, or prevent fraud. “Fast data” via stream processing is the solution for embedding patterns, which were obtained from analyzing historical data, into future transactions in real time.

This session uses several real-world success stories to explain the concepts behind stream processing and its relation to Hadoop and other big data platforms. It discusses how patterns and statistical models from R, Spark MLlib, H2O, and other technologies can be integrated into real-time processing. The session also points out why a microservices architecture helps meet the agility requirements of these kinds of projects.
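As a small illustration of embedding a pre-built model into real-time processing, here is a minimal sketch in Java. The coefficients are made-up placeholders standing in for a model exported from R, Spark MLlib, or H2O; a real deployment would load them from the training environment rather than hard-coding them.

```java
// Sketch: scoring incoming transactions in real time with a pre-trained model.
// The weights below are illustrative placeholders, not a real fraud model.
public class FraudScorer {
    // Hypothetical model weights: transaction amount and a "foreign country" flag.
    private static final double W_AMOUNT = 0.002;
    private static final double W_FOREIGN = 1.5;
    private static final double BIAS = -4.0;

    /** Logistic-regression score in [0, 1]. */
    public static double score(double amount, boolean foreign) {
        double z = BIAS + W_AMOUNT * amount + (foreign ? W_FOREIGN : 0.0);
        return 1.0 / (1.0 + Math.exp(-z));
    }

    /** Flag the transaction if the score crosses a threshold. */
    public static boolean isSuspicious(double amount, boolean foreign) {
        return score(amount, foreign) > 0.5;
    }

    public static void main(String[] args) {
        System.out.println(isSuspicious(50, false));  // false: small domestic payment
        System.out.println(isSuspicious(3000, true)); // true: large foreign payment
    }
}
```

The point is that scoring is cheap and stateless, so it fits naturally into a stream-processing pipeline or a small microservice; only training is heavyweight and stays in the batch world.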

A brief overview of available open source frameworks and commercial products shows possible options for the implementation of stream processing, such as Apache Storm, Apache Flink, Spark Streaming, IBM InfoSphere Streams, or TIBCO StreamBase.

A live demo shows how to implement stream processing, how to integrate machine learning, and how human operations can be enabled in addition to automatic processing, via a web UI and push events.

How to Build Intelligent Microservices – Slide Deck from O’Reilly Software Architecture Conference

Comparison Of Log Analytics for Distributed Microservices – Open Source Frameworks, SaaS and Enterprise Products
https://www.kai-waehner.de/blog/2016/10/20/comparison-log-analytics-distributed-microservices-open-source-frameworks-saas-enterprise-products/ (Thu, 20 Oct 2016)

Log analytics is the right approach for monitoring distributed microservices. A comparison of open source frameworks, SaaS, and enterprise products, plus their relation to big data components such as Apache Hadoop / Spark.

The post Comparison Of Log Analytics for Distributed Microservices – Open Source Frameworks, SaaS and Enterprise Products appeared first on Kai Waehner.

I had two sessions at the O’Reilly Software Architecture Conference in London in October 2016, the first #OReillySACon in London: a very well-organized conference with plenty of great speakers and sessions. I can really recommend this conference and its siblings in other cities such as San Francisco or New York if you want to learn about good software architectures, new concepts, best practices, and technologies. Some of the hot topics this year besides microservices were DevOps, serverless architectures, and big data analytics.

I want to share the slides from my session comparing open source frameworks, SaaS, and enterprise products for log analytics in distributed microservice architectures:

Monitoring Distributed Microservices with Log Analytics

IT systems and applications generate more and more distributed machine data due to millions of mobile devices, the Internet of Things, social network users, and other emerging technologies. However, organizations experience challenges when monitoring and managing their IT systems and technology infrastructure. They struggle with distributed microservice and cloud architectures, custom application monitoring and debugging, network and server monitoring / troubleshooting, security analysis, compliance standards, and more.

This session discusses how to solve the challenges of monitoring and analyzing terabytes and more of different distributed machine data to leverage the “digital business”. The main part of the session compares different open source frameworks and SaaS cloud solutions for log management and operational intelligence, such as Graylog, the “ELK stack”, Papertrail, Splunk, or TIBCO LogLogic. A live demo demonstrates how to monitor and analyze distributed microservices and sensor data from the Internet of Things.
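To make the core idea of log analytics concrete, here is a minimal sketch of the parse-and-aggregate step that all of the mentioned tools implement at much larger scale. The log line layout (timestamp, service name, level, message) is an illustrative assumption, not a standard.

```java
import java.util.*;

// Sketch of the core of a log-analytics pipeline: parse raw lines,
// extract a field (the log level), and aggregate counts per service.
public class LogLevelCounter {
    /** Expects lines like "2016-10-20T19:32:01 service-a ERROR something broke". */
    public static Map<String, Integer> countErrorsPerService(List<String> lines) {
        Map<String, Integer> errors = new HashMap<>();
        for (String line : lines) {
            String[] parts = line.split("\\s+", 4); // timestamp, service, level, message
            if (parts.length >= 3 && "ERROR".equals(parts[2])) {
                errors.merge(parts[1], 1, Integer::sum);
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
            "2016-10-20T19:32:01 service-a ERROR timeout",
            "2016-10-20T19:32:02 service-a INFO ok",
            "2016-10-20T19:32:03 service-b ERROR disk full");
        Map<String, Integer> counts = countErrorsPerService(lines);
        System.out.println(counts.get("service-a") + " " + counts.get("service-b"));
    }
}
```

Real log-analytics products add full-text indexing, dashboards, and alerting on top of exactly this kind of extraction and aggregation.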

The session also explains how the discussed solutions differ from other big data components such as Apache Hadoop, data warehouses, or machine learning applied to real-time processing, and how they can complement each other in a big data architecture.

The session concludes with an outlook on the new, advanced concept of IT Operations Analytics (ITOA).

Slide Deck from O’Reilly Software Architecture Conference

Trends at JavaOne 2016: Microservices, Docker, Cloud-Native Middleware
https://www.kai-waehner.de/blog/2016/09/23/trends-at-javaone-2016-microservices-docker-cloud-native-middleware/ (Fri, 23 Sep 2016)

JavaOne 2016 trends: Besides the focus on Java platform updates (Java 9, Java EE 8, etc.), I saw three hot topics that are highly related to each other: microservices, Docker, and cloud. I also talked about these topics from a middleware perspective. See my slides and lessons learned.

The post Trends at JavaOne 2016: Microservices, Docker, Cloud-Native Middleware appeared first on Kai Waehner.

Like every year, I attended JavaOne (co-located with Oracle OpenWorld) in San Francisco in late September 2016. It is still one of the biggest conferences in the world for technical experts like developers and architects.

I planned to write a blog post about new trends from the program, the exhibition, and chats with other attendees. However, I can make it short: besides the focus on Java platform updates (Java 9, Java EE 8, etc.), I saw three hot topics that are highly related to each other: microservices, Docker, and cloud. It felt like 80% of the non-Java talks were about these three topics. The other 20% covered the Internet of Things (IoT), DevOps, and some other subjects. Middleware was also a hot topic, not always directly, but I attended several talks focusing on integration, orchestration of microservices, and (IoT) gateways.

My Talk at JavaOne 2016: Cloud-Native Microservices and Containers in the Middleware World

No surprise that my talk this year also focused on these hot topics, specifically from a middleware angle. However, the focus was different from most other presentations: I talked about the journey middleware has to undergo these days.

I discussed the move from classical middleware – often called an enterprise service bus (ESB) – to cloud-native middleware microservices. This session explained the relation to new concepts like Docker containers, DevOps, and modern open source cloud platforms like Cloud Foundry, Kubernetes, or Apache Mesos.

Is Middleware still necessary in the Era of Cloud and Microservices?

Interesting side note: Some attendees asked me: “Is middleware even needed after everybody moves to microservices?” I get this question often, not just at conferences, but also from customers.

One of them answered the question himself before I could respond: “Well, as there are so many independent microservices, different technologies, cloud services, and edge devices like IoT, I think the answer is YES, there is still a need for middleware, right?!”.

I added to this: “I can assure you: the answer is YES. You need even more middleware than before. You need to interconnect everything: within your enterprise, remote edge IoT devices, partner services, cloud services, and also open it up to the external world, i.e. users you do not even know today…”

However, middleware has changed in recent years. It is no longer the heavyweight central platform, but a hybrid integration platform that serves various use cases and different audiences (such as integration specialists, ad-hoc integrators, and even business users). The tooling has become more lightweight and cloud-native!

10 Lessons Learned from Building Cloud-Native Middleware Microservices

In addition to covering many related concepts, technologies, and cloud platforms, my session also discussed ten lessons learned from building cloud-native middleware microservices together with our customers in recent months:

  1. On premise will not die. Not everything will or should go to the cloud!
  2. Tooling (visual coding) works, even for very complex scenarios. Forget the early-2000s SOA days!
  3. Microservices are not a free lunch. They do not fit into every scenario!
  4. Design Microservices with open APIs in mind!
  5. Cloud-Native means much more than a “cloud-washed” application deployed to a cloud provider!
  6. Microservices and Containers are often used together, but also work very well without each other!
  7. Containers are a lower-level technology. The infrastructure provider should care about them, not the application developer!
  8. Be cloud platform agnostic. The world changes fast!
  9. Automation (CI / CD / DevOps) and the related cultural change are key to success, especially for cloud-native microservices!
  10. Establish a hybrid integration architecture to solve different business requirements!

I will publish a more detailed post about these 10 lessons learned soon.

Slide Deck from JavaOne 2016 about Cloud-Native Middleware Microservices

Here is my slide deck from my JavaOne presentation:

Live Demo: Build and Deploy a Middleware Microservice with Docker, Kubernetes, Cloud Foundry, Consul, Spring Cloud Config

The following 20-minute live demo shows how to deploy a single (i.e. built just once) TIBCO BusinessWorks Container Edition microservice to different cloud and container platforms: Docker, Kubernetes, and Pivotal Cloud Foundry. The video also shows how to leverage other cloud-native open source frameworks such as Consul and Spring Cloud Config for distributed configuration management and service discovery of middleware microservices.

As always, I appreciate any comments or feedback…

TIBCO’s Hybrid Integration Platform
https://www.kai-waehner.de/blog/2016/09/14/introducing-tibcos-hybrid-integration-platform/ (Wed, 14 Sep 2016)

This blog explains the different components of a hybrid integration architecture, their deployment models, when to use them, and their target audience. Afterward, each section shows how TIBCO’s Hybrid Integration Platform maps to this.

The post TIBCO’s Hybrid Integration Platform appeared first on Kai Waehner.


The IT world is moving forward rapidly. Digital transformation is changing entire industries and sweeping away existing business models. Cloud services, mobile devices, and the Internet of Things create wild spaghetti architectures across different departments and lines of business. Many different concepts, technologies, and deployment options are in use. A single integration backbone is no longer sufficient in this era.

A hybrid integration platform for core and edge services

“Hybrid Integration Platform (HIP)” is a term coined by Gartner and other analysts. It describes different components of a modern integration architecture. A key for success in today’s complex world is different integration platforms working together seamlessly. Read Gartner’s report “How to Implement a Hybrid Integration Platform to Tackle Pervasive Integration” to understand the new challenges of most enterprises in more detail.

Leveraging a well-conceived hybrid integration architecture allows different stakeholders of an enterprise to react quickly to new requirements. Mission critical core business processes (“core services”) are still operated by the central IT department. However, even these services are being changed much more frequently than they were just a few years ago and the organization must continually push for these to be agile.

On the other side, the line of business needs to try out new or adapt existing business processes quickly in an agile way without the use of delaying—and often frustrating—rules and processes governed by the central IT. Innovation by a “fail-fast” strategy and creating so-called “edge services” are becoming important to enhance or disrupt existing business models.

The following picture shows possible components of a hybrid integration platform:

TIBCO’s Hybrid Integration Platform

This blog explains the different components of a hybrid integration architecture, their deployment models, when to use them, and their target audience. Afterward, each section shows how TIBCO maps to these components.

Application integration with an enterprise service bus (ESB)

When to use it: 

ESBs are typically used in larger IT projects where mission-critical core business processes have to be integrated with high availability, reliability, and performance within the enterprise (“core services”). This involves many different technologies and applications: integrating standard software (e.g. CRM, ERP), legacy systems (e.g. mainframes), custom applications (e.g. Java, .NET), and external cloud services.

Deployment model and target audience:

Integration specialists with technical experience implement ESB clusters to leverage high performance, high availability, fault-tolerance and guaranteed delivery of transactions. Most of the integration happens on premise in the private data center. This makes sense and will probably not change in the coming decades. The ESB cluster will stay the core integration solution. Therefore, even “microservices do not spell the end of the ESB!”

You can deploy an ESB to the cloud, but this is a “cloud-washed” rather than a cloud-native deployment and operations model! Still, it sometimes makes sense, e.g. for DEV or TEST instances, or to use IaaS capabilities without necessarily leveraging cloud-native features like automatic elasticity or dynamic service discovery.

TIBCO’s offering

TIBCO ActiveMatrix BusinessWorks (BW6) is a leading integration and service delivery platform. A key strength of BW6 is the distributed nature of its BW and EMS deployments, instead of a central cluster with several nodes.

Cloud-native application integration

When to use it:

Integration in a cloud-native environment is quite different from the traditional approach to application integration. Usually, companies leverage platforms-as-a-service (PaaS) such as Cloud Foundry, Kubernetes, OpenShift, or Apache Mesos.

The applications created using these cloud-native application integration tools run natively within these PaaS environments and therefore benefit from the tools offered by these platforms. This means the PaaS takes care of provisioning infrastructure and solves challenges such as service discovery, load balancing, elasticity, cluster management, or fail-over out-of-the-box. It also allows more agile development to implement, adopt, and scale new features or innovations quickly and efficiently.

Application development and architecture need to be adapted to cloud-native concepts to allow agile development and quick innovation. Typically, you prefer to build, develop, deploy, and operate so-called microservices instead of building larger applications or monoliths on top of a PaaS. This especially includes applying “The Twelve-Factor App” principles, which recommend best practices for cloud-native applications such as stateless services, explicitly declared and isolated dependencies, or environment-independent backing services. Often, you leverage independent containers (e.g. Docker, CoreOS, or Cloud Foundry’s Warden) for building cloud-native microservices or applications. Automation using DevOps and continuous integration / continuous delivery (CI/CD) is another key concept for successful cloud-native middleware projects.
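As a small illustration of the Twelve-Factor configuration principle mentioned above, the following sketch reads a backing-service location from the environment instead of baking it into the artifact, so the same build runs unchanged on Docker, Kubernetes, or Cloud Foundry. The variable name `BACKING_DB_URL` is illustrative, not a platform standard.

```java
// Minimal 12-factor-style configuration: the environment supplies the
// backing-service location, keeping the build artifact environment-independent.
public class TwelveFactorConfig {
    public static String dbUrl(java.util.Map<String, String> env) {
        // Fail fast if the environment is incomplete; a silent default would
        // hide misconfiguration and make the artifact environment-dependent.
        String url = env.get("BACKING_DB_URL");
        if (url == null || url.isEmpty()) {
            throw new IllegalStateException("BACKING_DB_URL not set");
        }
        return url;
    }

    public static void main(String[] args) {
        // In production you would pass System.getenv() here.
        System.out.println(dbUrl(java.util.Map.of(
            "BACKING_DB_URL", "jdbc:postgresql://db:5432/app")));
    }
}
```

Taking the environment map as a parameter (rather than calling `System.getenv()` directly) keeps the configuration logic testable, which is itself in the spirit of the twelve factors.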

Deployment model and target audience:

Core IT is adopting these PaaS platforms, but it is doing so to drive agility for all developers, including integration specialists and ad-hoc integrators. As a result, both the traditional integration teams and the developers within the line of business benefit from these platforms and the capabilities offered by the integration technologies. In this respect, the organization becomes a cloud provider for its own internal developers.

PaaS offerings are widely adopted on premise, in the public cloud, or in hybrid deployments. In addition, several Container-as-a-Service offerings are emerging these days (e.g. Amazon EC2 Container Service). However, many enterprises do not yet have a complete, long-term strategy regarding cloud or hybrid deployment architectures. It is important not to become dependent on a specific cloud platform or container technology, but to develop cloud-platform-agnostic integration services that can be moved from one platform to another without effort or redevelopment.

In addition, PaaS-based middleware should support and integrate mature open source frameworks for cloud-native environments such as Spring Cloud Config, Consul, Eureka, or Hystrix instead of reinventing the wheel and introducing additional complexity. The article “A Cloud-Native Architecture for Middleware” explains the concepts behind cloud-native platforms and deployment options in much more detail.

TIBCO’s offering:

TIBCO BusinessWorks Container Edition (BWCE) allows cloud-agnostic integration projects supporting different PaaS and container platforms such as Cloud Foundry, Docker, Kubernetes, or AWS ECS. BWCE also supports available cloud-native frameworks and design patterns. For example, you can leverage service discovery frameworks such as Consul and Eureka or resiliency patterns like the circuit breaker with Hystrix (see the BWCE 2.1 release notes).
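To illustrate the circuit-breaker pattern that Hystrix implements, here is a deliberately minimal, hand-rolled sketch of the state machine. This is not the Hystrix API; real implementations add timeouts, a half-open state, thread isolation, and metrics.

```java
// Minimal circuit-breaker state machine in the spirit of Hystrix (sketch of
// the pattern only). After `threshold` consecutive failures the breaker opens
// and short-circuits calls until reset() is called.
public class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;
    private boolean open = false;

    public CircuitBreaker(int threshold) { this.threshold = threshold; }

    /** Should the next call be attempted, or failed fast? */
    public boolean allowRequest() { return !open; }

    public void recordSuccess() { consecutiveFailures = 0; }

    public void recordFailure() {
        if (++consecutiveFailures >= threshold) open = true;
    }

    /** Half-open / retry logic is omitted; a timer would normally drive this. */
    public void reset() { open = false; consecutiveFailures = 0; }

    public static void main(String[] args) {
        CircuitBreaker cb = new CircuitBreaker(3);
        cb.recordFailure(); cb.recordFailure(); cb.recordFailure();
        System.out.println(cb.allowRequest()); // false: breaker is open, calls fail fast
    }
}
```

The value of the pattern is that a failing downstream service is no longer hammered with requests, which prevents cascading failures across a microservice landscape.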

Integration platform as a service (iPaaS)

When to use it:

An iPaaS cloud integration middleware is a pure public cloud offering hosted by a specific vendor, in contrast to a classical ESB or cloud-native PaaS-based integration middleware. Using iPaaS allows the line of business to react quickly to new requirements or innovative ideas without struggling with the core IT team and its long release and quality management processes. Sometimes, enterprises also replace their on-premise deployments completely with iPaaS to outsource the burden and complexity of operations management. Therefore, iPaaS can be used both for mission-critical core services and for quickly changing, innovative edge services.

Deployment model and target audience:

The vendor takes care of cloud-native features such as provisioning of infrastructure, elasticity, or multi-tenancy. Like application integration on top of a PaaS, this needs to be a real cloud-native, enterprise-grade runtime and not just a “cloud-washed” offering. Otherwise, it is not possible to scale out quickly and easily while still meeting high demands for the stability and resiliency of integration services.

The target audience for iPaaS is not necessarily the integration specialist with extensive technical experience. It also allows colleagues with less technical expertise (sometimes called “ad-hoc integrators”) to define, deploy, and monitor services and APIs with their corresponding policies, such as security requirements or throttling service level agreements. Ad-hoc integrators often do not use the more powerful IDE but an intuitive, simple-to-use web user interface.
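Throttling policies like the ones mentioned above are often implemented as a token bucket. The following is a simplified, deterministic sketch of that idea: refill is triggered manually instead of by a timer, and capacity-per-period stands in for a real SLA definition.

```java
// Sketch of a throttling policy: a token bucket that admits at most
// `capacity` calls per refill period. Time handling is simplified
// (manual refill) to keep the sketch deterministic.
public class TokenBucket {
    private final int capacity;
    private int tokens;

    public TokenBucket(int capacity) {
        this.capacity = capacity;
        this.tokens = capacity;
    }

    /** Returns true if the call is admitted, false if it should be throttled. */
    public synchronized boolean tryAcquire() {
        if (tokens > 0) { tokens--; return true; }
        return false;
    }

    /** Called once per SLA period (e.g. every second) by a scheduler. */
    public synchronized void refill() { tokens = capacity; }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(2);
        System.out.println(bucket.tryAcquire()); // true
        System.out.println(bucket.tryAcquire()); // true
        System.out.println(bucket.tryAcquire()); // false: over the limit this period
    }
}
```

In a gateway, a rejected `tryAcquire()` would typically translate into an HTTP 429 response to the caller.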

TIBCO’s offering:

TIBCO Cloud Integration (TCI) is TIBCO’s iPaaS offering. iPaaS has to work well together with other integration solutions. This way you can easily develop an innovative iPaaS service and then deploy it on your on-premise ESB cluster later if the user rate increases significantly and the new service is promoted to an important core service.

A key benefit of TIBCO’s offering is the shared design experience, including the ability to move projects between environments. Each runtime is designed for the specific environment and use cases it is intended for.

You can use the same IDE and visual coding concepts for all three offerings and import services and projects from one offering or deployment option to another (with a few limitations due to some differing concepts).

For example, you can develop an edge service with TCI for trying out a new innovative idea. Later, if the service gets successfully adopted and creates revenue, you can simply move the service to BWCE or BW6 in the cloud or on premise.

Integration software as a service (iSaaS)

When to use it:

iSaaS tools are designed for the end user and make it very easy to integrate data between various cloud services, even though the end user does not think of this as integration at all. They simply need to share information between systems and want to eliminate the redundant, time-consuming copying they use today.

iSaaS offerings are complementary to on-premise, PaaS, and iPaaS middleware and serve “edge services” which are not strategic or mission-critical for the enterprise, but very relevant for a specific business user or group within a line of business. iSaaS focuses mostly on the integration of cloud services and therefore offers plenty of SaaS connectors. This simplicity does limit what is possible with these integration flows, so if high control (customized integration) is needed, then an iPaaS may be required.

For instance, a business user can create a daily automatic flow to synchronize data from a Google Sheet with Salesforce CRM. This removes the need to integrate these updates manually every day and saves a lot of time to focus on other topics.

Deployment model and target audience:

iSaaS is hosted and operated by the specific vendor. In contrast to the integration components discussed above, iSaaS focuses on business users (also called citizen integrators in this context). These employees can create basic integration flows for personal or departmental interests in a very intuitive web user interface without any technical knowledge or help from IT.

TIBCO’s offering:

TIBCO Simplr allows business users to connect their many cloud services in a very intuitive web user interface without coding skills. It also allows them to create forms in the same way to be able to interact with other humans and go beyond automatic integration scenarios. The Spotfire connector allows users to do data discovery from sources such as Simplr Forms, Marketo, JIRA, etc. to find insights without any technical knowledge.

IoT integration gateway

When to use it:

The Internet of Things (IoT) changes the role of integration. It raises several new challenges not relevant for classical application integration:

  • Devices are not connected to the cloud
  • Devices have low bandwidth to connect
  • Latency of connectivity is significant
  • Connectivity is not reliable
  • Connectivity is not cost-effective

Here you need to integrate data directly on the devices, as not all data should be sent to the private data center or public cloud. An IoT integration gateway interconnects all devices via various IoT standards such as MQTT, CoAP, WebSockets, Bluetooth, ZigBee, NFC, RFID, USB, SPI, I2C, or X-LINE. The runtime has to be very lightweight so that it can be deployed even on very small devices with very low resources and memory.

Deployment model and target audience:

Integration specialists and developers use the IoT integration gateway to filter and aggregate data streams “on site at the edge” and send only relevant information such as errors or alerts to the central integration platform or any service. Development can be done via an intuitive web user interface with out-of-the-box connectivity to various IoT standards. However, these offerings also allow (or sometimes require) some coding.
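The filter-and-aggregate idea can be sketched in a few lines: the gateway keeps raw readings local and forwards only alert values plus a per-batch summary. The threshold and the idea of batching readings are illustrative assumptions, not part of any specific gateway product.

```java
import java.util.*;

// Sketch of edge filtering and aggregation at an IoT gateway: raw readings
// stay on the device; only alerts and a compact summary leave the edge.
public class EdgeAggregator {
    /** Readings above the threshold are worth forwarding as alerts. */
    public static List<Double> alerts(List<Double> readings, double threshold) {
        List<Double> out = new ArrayList<>();
        for (double r : readings) if (r > threshold) out.add(r);
        return out;
    }

    /** A per-batch summary, much cheaper to transmit than every reading. */
    public static double average(List<Double> readings) {
        double sum = 0;
        for (double r : readings) sum += r;
        return readings.isEmpty() ? 0 : sum / readings.size();
    }

    public static void main(String[] args) {
        List<Double> batch = Arrays.asList(20.1, 21.0, 95.5, 19.8);
        System.out.println(alerts(batch, 90.0)); // only this value leaves the edge
        System.out.println(average(batch));      // plus one summary number
    }
}
```

Even this trivial scheme shrinks the traffic from four readings to one alert and one average, which is exactly why edge processing pays off over low-bandwidth, unreliable, or expensive connections.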

TIBCO’s offering:

TIBCO’s Project Flogo is implemented with Google’s Go programming language and is fully open source. It offers a very lightweight runtime to build IoT integrations at the edge (e.g. filtering and aggregating data and sending only relevant information to the cloud or enterprise network).

As Project Flogo is very lightweight (in contrast to similar projects with at least a 20x larger memory footprint), you can also leverage it to develop microservices with very low resource usage. This is an ideal fit for cloud-native platforms or even serverless architectures.

Project Flogo perfectly completes the integration portfolio together with BW6, BWCE, TCI, and Simplr.

API management

When to use it:

Until now, we have been talking about creating services that bring together two or more applications or sources of data. The trend is heading toward an “Open API Economy”, where services are exposed as APIs to other internal departments, partners, or public developers. API management provides the controls organizations need to expose or leverage APIs.

Think about examples such as Paypal whose API is integrated into almost every online shop as a payment option or Google Maps whose API is used on almost every website which includes a description of how to get somewhere.

Deployment model and target audience:

The target audience is the line of business, which leverages API portals to think about new digital products to increase revenue, make customers happy, or build innovative mashups. The API management portal and runtime are usually operated in the cloud to leverage elasticity and scalability for the unpredictable and often changing number of API calls. The API gateway is often deployed on premise to ensure security and other service level agreements within the firewall of the enterprise.

The key to success in a hybrid integration architecture is good cooperation between API management and the different integration components. This enables developers to reuse services and concentrate on new features, shorter time-to-market, and innovation instead of recreating existing services.

TIBCO’s offering:

TIBCO Mashery is the leading API management solution on the market. It offers out-of-the-box integration with all of TIBCO’s integration components (BW6, BWCE, TCI) via web user interfaces and command line / scripting tools for automation and DevOps. For example, you can develop a new service with TIBCO Cloud Integration and automatically offer it via your API portal to other users.

Complementary Add-Ons: Streaming Analytics and BPM

Streaming analytics and business process management are not part of application integration in the narrower sense, but they are relevant for a hybrid integration architecture. Streaming analytics (sometimes called stream processing) is used for integrating massive amounts of data streams and sensor data while the data is still in motion. You use continuous queries over sliding windows and correlations (e.g. for fraud detection or predictive maintenance) instead of single transactions via messaging or request-response service calls (e.g. for a bank transaction or flight booking). See more details and real-world use cases for streaming analytics in this InfoQ article: “Real-Time Stream Processing as Game Changer in a Big Data World with Hadoop and Data Warehouse”.
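To make the notion of a continuous query over a sliding window concrete, here is a minimal count-based sliding-window average in Java. Real streaming engines add time-based windows, out-of-order event handling, and distributed state on top of this basic mechanism.

```java
import java.util.*;

// Sketch of a continuous query over a sliding count-based window: keep the
// last `size` events and recompute an aggregate on every arrival, instead of
// processing each transaction in isolation.
public class SlidingWindowAverage {
    private final int size;
    private final Deque<Double> window = new ArrayDeque<>();
    private double sum = 0;

    public SlidingWindowAverage(int size) { this.size = size; }

    /** Add an event and return the current windowed average. */
    public double add(double value) {
        window.addLast(value);
        sum += value;
        if (window.size() > size) sum -= window.removeFirst();
        return sum / window.size();
    }

    public static void main(String[] args) {
        SlidingWindowAverage avg = new SlidingWindowAverage(3);
        System.out.println(avg.add(10)); // 10.0
        System.out.println(avg.add(20)); // 15.0
        System.out.println(avg.add(30)); // 20.0
        System.out.println(avg.add(40)); // 30.0 -- the 10 slid out of the window
    }
}
```

A fraud-detection continuous query would compare each incoming amount against such a windowed baseline and raise an alert on large deviations, all while the data is still in motion.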

TIBCO StreamBase is a leading offering with enterprise scale, maturity, and ease of use. TIBCO Live Datamart offers real-time visual analytics on top of the streaming engine for monitoring and for proactive action by humans.

Human interaction is relevant in almost every enterprise for exception handling and customer communication. Therefore, business process management (BPM) needs to be part of a complete hybrid integration platform and work seamlessly with other integration solutions. Today, BPM is more than just long-running processes with human interaction and calls to SOAP or REST web services. More demanding scenarios have to be realized with a BPM component, including case management, intelligent business processes, rapid application development platforms for web and mobile apps, and self-service BPM as SaaS.

TIBCO ActiveMatrix BPM is a leading BPM offering and highly integrated with TIBCO’s integration components. It includes sophisticated support for process modeling, resource planning, case management, rapid application development, and many other pretty cool features.

TIBCO Nimbus Maps can be used as a self-service BPM SaaS offering for business users. It enables teams to define, simplify, share, and change their processes in minutes, without the need for a powerful BPM engine or an extensive modeling standard like BPMN.

The need for a hybrid integration architecture

This blog post showed why a single integration platform is no longer sufficient in the era of cloud, mobile, big data, and IoT. Differentiating between core services (mission-critical but increasingly agile enterprise services) and edge services (focused on innovation and very agile development) is a key step toward a hybrid integration architecture.

TIBCO offers all key components for such a hybrid integration platform. Each component focuses on its specific use cases and deployment options—instead of just offering “cloud-washed” or “simply containerized” solutions. In addition, all of TIBCO’s components are highly integrated so they work together to reduce complexity and efforts for the whole team, including integration specialists, ad-hoc integrators, and citizen integrators (aka business users).

Are you just beginning your journey with a hybrid integration architecture? Feel free to contact me to discuss your architecture, challenges, and questions. If you want to discover some components on your own, then check our new and growing TIBCO Community Wiki. It already contains plenty of information about the discussed components. You can also ask questions in the answers section to get a response by a TIBCO expert or other community members.
