Jean-Baptiste Onofré's Blog

Dell Vostro with Ubuntu: use the AMD Catalyst drivers


I have had a Dell Vostro 3550 running Ubuntu 12.04 for around a year now.

The laptop worked fine: it was pretty fast, able to build a lot of projects at the same time, etc.

However, I had a few complaints:

  • the temperature was really high (sensors reported between 80°C and 95°C all of the time); sometimes the temperature became critical and the system shut down
  • due to the previous point, the fan was very noisy
  • the battery life was average

This laptop uses dual graphics cards: an Intel integrated graphics card (to reduce energy consumption) and an AMD Radeon HD 6600M (for enhanced 3D graphics and HD videos).

When I got this laptop, I tried to install the fglrx open source drivers a couple of times, without success: it seemed that the Radeon card was not fully supported.

As the laptop otherwise ran really fine (Unity was fast, VLC was able to play HD videos without problems, etc.), I stayed with the Intel Xorg driver.

Yesterday evening, while watching a movie, the laptop was so hot that I had to use a pillow to avoid burning my legs ;)

So today, I decided to find a solution.

As the fglrx open source drivers didn’t work, I tried the proprietary AMD Catalyst drivers.

And now, ALL IS DIFFERENT ;)

The temperature stays around 70/80°C, so the fans are pretty quiet, and the system is even faster than before.

Moreover, amdcccle (the Catalyst Control Center) allows you to tweak the graphics card configuration. You can also choose and switch between the Intel card and the Radeon one (a reboot is required), as shown below.
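
For instance, the switch can also be done from the command line with the amdconfig tool shipped with the Catalyst drivers (a minimal sketch; the exact options may vary depending on the Catalyst version):


# switch to the discrete Radeon GPU
sudo amdconfig --px-dgpu
# or switch back to the integrated Intel GPU
sudo amdconfig --px-igpu
# reboot (or at least restart X) for the change to take effect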

To summarize, for all Dell Vostro users on Ubuntu (and more generally on Linux): install the AMD Catalyst graphics drivers ;)


Apache Karaf 2.2.9


The Apache Karaf team is proud to announce a new version on the Karaf 2.2.x branch.

The 2.2.9 version is mainly a bug fix release.

We fixed 52 issues in this release, and we are in line with our release cycle (around every 2 months).

Especially, it includes:

  • Work with the latest JRE/JDK 1.6.0 and 1.7.0 updates: some changes in the Java Virtual Machine resulted in issues in previous Karaf releases. This is now fixed in Karaf 2.2.9 (both in Karaf itself and through an upgrade to Aries Proxy 0.3.1), allowing Karaf to work with Java 1.6.0_33 or 1.6.0_34, for instance.
  • New OSGi frameworks: this release upgrades to the latest minor versions of the OSGi frameworks, especially Apache Felix Framework 3.2.2.
  • Dependencies minor version updates: in order to fix several minor issues, some dependencies have been upgraded (javax.mail 1.4.5, ServiceMix bundles, etc).
  • Stopping the shell console doesn’t stop the OSGi framework: previously, when you stopped the Karaf shell console bundle, it also stopped the OSGi framework. Now the bundles are no longer coupled, allowing you to stop the shell console but leave your application bundles running in the OSGi framework.
  • MBeans and shell commands are consistent: some options present in the shell commands (for instance, the -c and -r options of the features:install command) were not available for the same action using MBeans. Now you can find the same options in both the commands and the MBeans (see the example after this list).
  • Features installation optimisation: Karaf now avoids resolving the same bundles several times when installing features in cascade, which improves the features installation time.
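
As an illustration of this consistency, here is the shell side with a hypothetical my-feature feature (-c avoids cleaning up the installed bundles on failure, -r avoids the automatic bundle refresh); the corresponding MBean operation now exposes the same options:


karaf@root> features:install -c -r my-feature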

You can have more information and download this release here:

http://karaf.apache.org/index/community/download/karaf-2.2.9-release.html

Now, we are working on Karaf 2.3.0 and 3.0.0-RC1. Karaf 2.3.0 should arrive very soon.

Apache Karaf 2.3.0 released !


While waiting for Karaf 3.0.0, the Karaf team worked hard to provide Apache Karaf 2.3.0.

The Karaf 2.2.x branch is now only in maintenance mode: it means that no new features will be implemented in this branch, only major bug fixes.

The new “stable” branch is now Karaf 2.3.x, which is a perfect transition branch between Karaf 2.2.x (heavily used) and the future Karaf 3.x (which should arrive very soon).

What’s new in this 2.3.0 release:

* OSGi r4.3: the Karaf 2.2.x branch was powered by OSGi frameworks implementing the OSGi r4.2 specification. Karaf 2.3.0 is now powered by the new OSGi r4.3 frameworks (Apache Felix 4.0.3 and Equinox 3.8.x), for both OSGi core and compendium. It provides new features like weaving, etc.
* Aries Blueprint 1.0.x: Karaf 2.3.0 uses the new Aries Blueprint version at different levels (core, JMX, etc).
* Update to ASM 4.0: in order to work with Aries proxies, we updated to the new ASM bundle. We also provide configuration that allows you to enable/disable weaving.
* OSGi Regions and SCR support: Karaf 2.3.0 provides both Regions and SCR support.
* JMX improvement: the previous MBeanRegistrer from Karaf 2.2.x has been removed and replaced by Aries JMX. It provides an easier way to integrate MBeans, by registering OSGi services. The MBeans have been improved to provide new operations and options corresponding to what you can do using the shell commands.
* Complete itest framework: Karaf 2.3.0 provides a new tool: Karaf exam. This tool provides a framework to very easily implement integration tests. It’s able to download and bootstrap a Karaf version on which you can run your commands, deploy your features and bundles, etc. It allows you to run a complete integration test chain from Maven.
* Dependencies upgrade: a lot of dependencies have been updated. Karaf 2.3.0 uses Pax Logging 1.7.0 (including bug fixes and SLF4J 1.7.1 support), new Pax Web and Jetty versions for the web container, and new JLine, SSHD and Mina versions which especially fix weird behavior on Windows for certain keys, etc.
* KAR improvements: while Karaf 3.x will provide a lot of enhancements around the KAR files, Karaf 2.3.0 already provides fixes in the KAR lifecycle.
* JAAS commands improvements: the jaas:* commands have been enhanced to allow fine-grained management of the realms and login modules.

You can find the Karaf 2.3.0 content details on the Release Notes.

The Karaf team is proud to provide this release to you. We hope you will enjoy it !

Apache Karaf Cellar 2.2.5 released !


During the ApacheCon EU, I made a demo of Karaf and Cellar all together. During this demo, I used Cellar 2.2.5-SNAPSHOT.

Now, Cellar 2.2.5 is released ! But, what’s new in this version ?

Groups are now persistent

In Cellar 2.2.4, empty groups disappeared after a restart.

For instance, you could create a new cluster group without any member (an empty group) with:


karaf@root> cluster:group-create foobar
karaf@root> cluster:group-list|grep -i foobar
foobar []

If you restarted Cellar (or Karaf), the empty groups were lost:


karaf@root> cluster:group-list|grep -i foobar

To avoid this, in Cellar 2.2.5, the cluster groups are now persistent on each node. We introduced a new groups property in etc/org.apache.karaf.cellar.groups.cfg to store the list of groups. Cellar now reads this property at startup to populate the cluster groups that are not yet present on the cluster.

On the other hand, the groups property in etc/org.apache.karaf.cellar.node.cfg defines the group membership of the local node.

Thanks to this persistence, the empty groups no longer disappear when you restart Karaf (or Cellar).

Cluster producers, consumers, and handlers persistency

Like the groups, in Cellar 2.2.4, the status of the cluster event producers, consumers, and handlers was not persistent. For instance, if you stopped the cluster event producer, it was started again after a restart: the status set before the restart was lost.

To avoid that, in Cellar 2.2.5, the status of the cluster event producers, consumers, and handlers is now persistent in etc/org.apache.karaf.cellar.node.cfg (it’s the current status on the local node). Cellar now reads the properties from this file at startup to restore the status as it was before the restart.

Bundles blacklist and whitelist

In Cellar 2.2.4, the default bundles blacklist and whitelist in etc/org.apache.karaf.cellar.groups.cfg were not correct: all bundles were blocked (inbound and outbound). If you tried to install a bundle on the cluster, you saw a “Bundle xxxx is BLOCKED …” message in the log.

We changed the default setup to allow all bundle cluster events.

Config sync enhancement

In Cellar 2.2.4, to avoid an infinite loop, we introduced a karaf.cellar.sync property appended to all synchronized configuration PIDs. This property contained the timestamp of the last Cellar configuration synchronization. This mechanism had two issues:

  • it pollutes the configuration PID (it can be confusing for the users to see a “not usable” property)
  • if a configuration change occurs between the timestamp and the Cellar configuration timeout, it’s not synchronized on the cluster

We changed the configuration synchronization mechanism in Cellar 2.2.5. The karaf.cellar.sync property has been removed. Now we compare the dictionary of the configuration PID on the cluster (distributed map) with the local one.

Bundle state, name, and symbolic name

The bundle distributed map stored only the bundle name, which was a little bit restrictive.

In Cellar 2.2.5, both bundle name and symbolic name are stored in the cluster distributed map.

It allows users to select a bundle (on the cluster) using either the name or the symbolic name.

Improvements on the cluster:* commands and MBeans

In order to mimic the Karaf core commands and MBeans, the Cellar commands and MBeans have been improved.

The cluster:feature-install command (and the corresponding MBean) now supports the norefresh and noclean options, as supported by the features:install Karaf command.

The cluster:bundle-list command supports the -l option (to display the bundle location) and the -s option (to display the bundle symbolic name), like the bundles:list/osgi:list Karaf command.

The cluster:config-list command now allows you to directly display the dictionary of a configuration PID.

A new command has been introduced in Cellar 2.2.5: cluster:sync. This command forces a synchronization on the local node. It’s particularly interesting when the node has lost communication with the other nodes (for instance, due to a network issue): cluster:sync forces the resynchronization of the node with the cluster (in both directions).
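
For instance, a session using these enhanced commands could look like this (a sketch, assuming a cluster group named default; the exact arguments may differ slightly depending on the Cellar version):


karaf@root> cluster:feature-install default eventadmin
karaf@root> cluster:bundle-list -s default
karaf@root> cluster:sync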

Restart issues

Cellar uses a LocalBundleListener to listen for changes on the local bundles, and broadcast these changes as a bundle cluster event.

In Cellar 2.2.4, this listener was a simple BundleListener. The problem was that this listener got the “bundle stop” local event when the framework was stopping, and broadcast it to the cluster (including to the local node). It meant that the “latest” state of the bundle was “stopped”. At restart, the OSGi framework reset the bundle to the “stopped” status (instead of “started”).

In Cellar 2.2.5, the listener has been changed to a SynchronousBundleListener. Thanks to this listener, we are able to get the stopping event from the OSGi framework. When the framework stops, Cellar disables the bundle listener in order to avoid changing the bundle states.

This way, the bundles are restarted in the correct state.

We hope that you will like this new Cellar 2.2.5 release. We mostly focused on bug fixes to provide a more stable clustering solution for Karaf.

Now, we are preparing Cellar 2.2.6 with new bug fixes, new features, etc. In the meantime, Cellar 2.3.0 is in preparation, supporting Karaf 2.3.x and a new Hazelcast version.

How to enable HTTPS certificate client auth with Karaf


I have received many messages from users asking how we can “trust” HTTP clients in Karaf.

The purpose is to exchange certificates and allow only “trusted” clients to use the Karaf HTTP service.

Enable HTTP client auth

First of all, we have to enable the HTTP client auth support in Karaf.

When you install the http feature, Karaf leverages Pax Web to provide the HTTP OSGi service:


karaf@root> features:install http

Now, we have to add a custom etc/org.ops4j.pax.web.cfg file:


org.osgi.service.http.port=8181

org.osgi.service.http.port.secure=8443
org.osgi.service.http.secure.enabled=true
org.ops4j.pax.web.ssl.keystore=./etc/keystores/keystore.jks
org.ops4j.pax.web.ssl.password=password
org.ops4j.pax.web.ssl.keypassword=password
#org.ops4j.pax.web.ssl.clientauthwanted=false
org.ops4j.pax.web.ssl.clientauthneeded=true

NB: the clientauthwanted and clientauthneeded properties are valid for Karaf 2.2.x, which uses Pax Web 1.0.x.

Thanks to the clientauthneeded property, we “force” the client to be trusted.

Create the trusted client certificate

We are going to use keytool (provided with the JDK) to manipulate the keys and certificates.

The first step is to create two key pairs:

  • one for the server side (used for SSL)
  • one as an example for the client side (used for “trust”; this should be done for each client, on the client side)


mkdir -p etc/keystores
cd etc/keystores
keytool -genkey -keyalg RSA -validity 365 -alias serverkey -keypass password -storepass password -keystore keystore.jks
keytool -genkey -keyalg RSA -validity 365 -alias clientkey -keypass password -storepass password -keystore clientKeystore.jks

NB: these keys are self-signed. In a production system, you should use a Certificate Authority (CA).

Now, we can export the client certificate to be imported in the server keystore:


keytool -export -rfc -keystore clientKeystore.jks -storepass password -alias clientkey -file client.cer
keytool -import -trustcacerts -keystore keystore.jks -storepass password -alias clientkey -file client.cer

We can now check that the client certificate is trusted in our keystore:


keytool -list -v -keystore keystore.jks
...
Alias name: clientkey
Creation date: Dec 12, 2012
Entry type: trustedCertEntry
...

and we can now remove the client.cer certificate.

Start Karaf and test with WebConsole

Now we can start Karaf:


bin/karaf

and install the WebConsole feature:


karaf@root> features:install webconsole

If we try to access the WebConsole (using a simple browser) at https://localhost:8443/system/console, we get:


An error occurred during a connection to localhost:8443.

SSL peer cannot verify your certificate.

(Error code: ssl_error_bad_cert_alert)

which is normal as the browser doesn’t have any trusted certificate.

Now, we can add the client certificate in the browser.

Firefox supports the import of PKCS12 keystore. So, we are going to “transform” the JKS keystore into a PKCS12 keystore:


keytool -importkeystore -srckeystore clientKeystore.jks -srcstoretype JKS -destkeystore client.pfx -deststoretype PKCS12
Enter destination keystore password:
Re-enter new password:
Enter source keystore password:
Entry for alias clientkey successfully imported.
Import command completed: 1 entries successfully imported, 0 entries failed or cancelled

Now, we can import the client certificate into Firefox. To do so, open the Preferences window (in the Edit menu) and click on the Advanced tab.
Then go to the Encryption tab and click on the “View Certificates” button.

In “Your Certificates” tab, you can click on the Import button and choose the client.pfx keystore file.

If you try to access https://localhost:8443/system/console again, you will now have access as a trusted client.
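
The same kind of check can be done from the command line with OpenSSL and curl (a sketch: the file names follow the examples above, -k is only used because the server certificate is self-signed, and karaf/karaf are the default WebConsole credentials):


# convert the PKCS12 keystore into a PEM file usable by curl
openssl pkcs12 -in client.pfx -out client.pem -nodes
# call the WebConsole, presenting the client certificate
curl -k --cert client.pem -u karaf:karaf https://localhost:8443/system/console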

Conclusion

It’s the same for any kind of HTTP client that tries to use the HTTPS layer of Karaf.

Now, we can disable the plain HTTP support in Karaf (to force the usage of HTTPS), and we can allow only “trusted” clients to use the HTTPS layer of Karaf.

It’s a simple mechanism if you want to limit access to HTTP resources to trusted clients only.

Create custom log4j appender for Karaf and Pax Logging


Karaf leverages Pax Logging for the logging layer. Pax Logging provides an abstraction service over the most popular logging frameworks, like SLF4J, Log4j, Commons Logging, etc.

Karaf provides a default logging configuration in etc/org.ops4j.pax.logging.cfg file.

By default, all INFO log messages (rootLogger) are sent to a file appender (data/log/karaf.log). The file appender “maintains” one file of 1MB and stores up to 10 backup files.

Adding a new appender configuration, example with Syslog appender

We can add a new appender configuration to the Karaf logging configuration.

For instance, we can add a syslog appender in etc/org.ops4j.pax.logging.cfg:


log4j.rootLogger = INFO, out, syslog, osgi:*
...
# Syslog appender
log4j.appender.syslog=org.apache.log4j.net.SyslogAppender
log4j.appender.syslog.layout=org.apache.log4j.PatternLayout
log4j.appender.syslog.layout.ConversionPattern=[%p] %c:%L - %m%n
log4j.appender.syslog.syslogHost=localhost
log4j.appender.syslog.facility=KARAF
log4j.appender.syslog.facilityPrinting=false
...

We create the syslog appender configuration, and we use this appender for the rootLogger.

Pax Logging provides all default Log4j appenders.

Creating a custom appender

It’s also possible to create your own appender.

For instance, say you want to create MyJDBCAppender, extending the standard Log4j JDBCAppender. MyJDBCAppender handles the quoting in the SQL query in a better way, for a DB2 backend for instance:


package org.apache.karaf.blog.logging.appender;

import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.jdbc.JDBCAppender;

/**
 * Override of the Apache Log4j JDBCAppender for DB2 use (escaping of the ' char in data).
 * Needs proper substitution of the ' char by the {@code {sql_apos}} placeholder when
 * writing the log4j sql property.
 */
public class MyJDBCAppender extends JDBCAppender {

    /** Placeholder used in the log4j sql property to represent a literal ' char. */
    private static final String SQL_APOS = "{sql_apos}";
    /** Entity used to neutralize ' chars found in the logged data. */
    private static final String XML_APOS = "&apos;";

    /** {@inheritDoc} */
    @Override
    protected String getLogStatement(LoggingEvent event) {
        String sqlLayout = getLayout().format(event);
        // escape ' in the logged data as the &apos; entity in the sql statement after layout
        sqlLayout = sqlLayout.replace("'", XML_APOS);
        // revert the {sql_apos} placeholder to ' to get the final executable sql statement
        sqlLayout = sqlLayout.replace(SQL_APOS, "'");
        return sqlLayout;
    }

}

We put the MyJDBCAppender Java file in the src/main/java/org/apache/karaf/blog/logging/appender folder.

We package this appender as an OSGi bundle. This bundle is a fragment of the Pax Logging service bundle:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>org.apache.karaf.blog.logging.appender</groupId>
  <artifactId>org.apache.karaf.blog.logging.appender.jdbc</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>bundle</packaging>

  <dependencies>
    <dependency>
      <groupId>org.ops4j.pax.logging</groupId>
      <artifactId>pax-logging-service</artifactId>
      <version>1.6.9</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <version>2.3.7</version>
        <extensions>true</extensions>
        <configuration>
          <instructions>
            <Bundle-SymbolicName>org.apache.karaf.blog.logging.appender.jdbc</Bundle-SymbolicName>
            <Export-Package>org.apache.karaf.blog.logging.appender</Export-Package>
            <Import-Package/>
            <Private-Package>org.apache.log4j.jdbc</Private-Package>
            <Fragment-Host>org.ops4j.pax.logging.pax-logging-service</Fragment-Host>
            <_failok>true</_failok>
          </instructions>
        </configuration>
      </plugin>
    </plugins>
  </build>

</project>

We can use our appender in etc/org.ops4j.pax.logging.cfg file, for instance:


log4j.rootLogger = INFO, out, myappender, osgi:*
...
log4j.appender.myappender=org.apache.karaf.blog.logging.appender.MyJDBCAppender
log4j.appender.myappender.url=jdbc:db2:....
log4j.appender.myappender.driver=com.ibm.db2.jcc.DB2Driver
log4j.appender.myappender.user=username
log4j.appender.myappender.password=password
log4j.appender.myappender.sql=insert into logs values({sql_apos}%x{sql_apos}, {sql_apos}%d{sql_apos}, {sql_apos}%C{sql_apos}, {sql_apos}%p{sql_apos}, {sql_apos}%m{sql_apos})
log4j.appender.myappender.layout=org.apache.log4j.PatternLayout

In order to be loaded very early in the Karaf bootstrap, our appender bundle should be present in the system folder and defined in etc/startup.properties.

The system folder has a “Maven repository like” structure, so you have to copy the bundle to:


system/groupId/artifactId/version/artifactId-version.jar

In our example, it means:


mkdir -p $KARAF_HOME/system/org/apache/karaf/blog/logging/appender/org.apache.karaf.blog.logging.appender.jdbc/1.0-SNAPSHOT
cp target/org.apache.karaf.blog.logging.appender.jdbc-1.0-SNAPSHOT.jar $KARAF_HOME/system/org/apache/karaf/blog/logging/appender/org.apache.karaf.blog.logging.appender.jdbc/1.0-SNAPSHOT/org.apache.karaf.blog.logging.appender.jdbc-1.0-SNAPSHOT.jar

and in etc/startup.properties, we define the appender bundle just after the pax-logging-service bundle:


...
org/ops4j/pax/logging/pax-logging-api/1.6.9/pax-logging-api-1.6.9.jar=8
org/ops4j/pax/logging/pax-logging-service/1.6.9/pax-logging-service-1.6.9.jar=8
org/apache/karaf/blog/logging/appender/org.apache.karaf.blog.logging.appender.jdbc/1.0-SNAPSHOT/org.apache.karaf.blog.logging.appender.jdbc-1.0-SNAPSHOT.jar=8
...

You can now start Karaf: it will use our new custom appender.

Multiple HTTP connectors in Apache Karaf


Installing the http feature in Karaf leverages Pax Web to embed a Jetty webcontainer.

By default, Karaf creates a Jetty connector on HTTP port 8181 (and 8443 for HTTPS). You can change these port numbers by providing an etc/org.ops4j.pax.web.cfg file.

But you can also create new connectors in the embedded Jetty.

There are several advantages to multiple connectors:

  • you can isolate a set of applications, CXF services, Camel routes on a dedicated port number
  • you can set up a different configuration for each connector. For instance, you can create two SSL connectors, each with a different keystore, truststore, …

You can use the etc/jetty.xml configuration file to create a custom Jetty configuration.

NB: if you want to have both etc/org.ops4j.pax.web.cfg and etc/jetty.xml, don’t forget to reference jetty.xml in org.ops4j.pax.web.cfg using the org.ops4j.pax.web.config.file property pointing to the jetty.xml file, for instance:


# in etc/org.ops4j.pax.web.cfg
org.ops4j.pax.web.config.file=${karaf.home}/etc/jetty.xml

To configure a new connector, you can add an addConnector call in this configuration. For instance, we can create a new connector on HTTP port 9191 (and HTTPS port 9443):


  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
        <Set name="host">0.0.0.0</Set>
        <Set name="port">9191</Set>
        <Set name="maxIdleTime">300000</Set>
        <Set name="Acceptors">1</Set>
        <Set name="statsOn">false</Set>
        <Set name="confidentialPort">9443</Set>
        <Set name="name">myConnector</Set>
      </New>
    </Arg>
  </Call>

Now, Karaf will listen on 8181 and 9191 (for HTTP), and on 8443 and 9443 (for HTTPS).

You can also define a connector dedicated to HTTPS, with a dedicated configuration for this connector, especially the keystore, truststore, and client authentication:


  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector">
        <Set name="port">9443</Set>
        <Set name="maxIdleTime">30000</Set>
        <Set name="keystore">./etc/keystore</Set>
        <Set name="password">password</Set>
        <Set name="keyPassword">password</Set>
      </New>
    </Arg>
  </Call>

By default, web applications are bound to all connectors. If you want your web application to use a specific connector, you have to define it in the MANIFEST using the following headers:


Web-Connectors: myConnector
Web-VirtualHosts: localhost

If you use CXF services or Camel routes and you use a connector’s hostname and port number in the endpoint, the corresponding connector will be used.

For instance, the following CXF endpoint of a Camel route will use myConnector:


...
  <cxf:cxfEndpoint id="cxfEndpoint" address="http://localhost:9191/services/myservice" wsdlUrl="..."/>
...

Karaf allows fine-grained Jetty configuration. Karaf becomes a real, complete web container, with custom configuration on several connectors. It’s especially interesting for SSL connectors, where each connector can have a dedicated keystore, truststore, and client authentication configuration.

Load balancing with Apache Karaf Cellar, and mod_proxy_balancer


Thanks to Cellar, you can deploy your applications, CXF services, Camel routes, … on several Karaf nodes.

When you use Cellar with web applications, or CXF/HTTP endpoints, a “classic” need is to load balance the HTTP requests on the Karaf nodes.

You have different ways to do that:
- using the Camel Load Balancer EIP: it’s an interesting EIP, working with any kind of endpoint. However, it requires a Karaf instance running the load balancer routes, so it’s not always possible depending on the user’s security policy (for instance, putting it in a DMZ)
- using hardware appliances like F5, Juniper, Cisco: it’s a very good, “classic” solution in network teams. However, it requires expensive hardware that is not easy to buy and set up for tests or “small” solutions.
- using Apache httpd with mod_proxy_balancer: it’s the solution that I’m going to detail. It’s a very stable, powerful, and easy to set up solution. And it costs nothing ;)

For instance, say you have three Karaf nodes, exposing the following services at these addresses:
- http://192.168.134.3:8040/services
- http://192.168.134.4:8040/services
- http://192.168.134.5:8040/services

We want to load balance those three nodes.

On a dedicated server (it could be one of the servers hosting Karaf), we just install Apache httpd:


# on Debian/Ubuntu system
aptitude install apache2


# on RHEL/CentOS/Fedora system
yum install httpd
# enable network connect on httpd
/usr/sbin/setsebool -P httpd_can_network_connect 1

Apache httpd comes with the mod_proxy, mod_proxy_http, and mod_proxy_balancer modules. Just check that those modules are loaded in the main httpd.conf (see below for Debian/Ubuntu).
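
On Debian/Ubuntu, for instance, the modules can be enabled with a2enmod (a sketch; module and service names may vary with the httpd version):


sudo a2enmod proxy proxy_http proxy_balancer
sudo service apache2 restart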

You can now create a new configuration for your load balancer (directly in the main httpd.conf or by creating a conf file in etc/httpd/conf.d):


<Proxy balancer://mycluster>
  BalancerMember http://192.168.134.3:8040
  BalancerMember http://192.168.134.4:8040
  BalancerMember http://192.168.134.5:8040
</Proxy>
ProxyPass /services balancer://mycluster

The load balancer will proxy the /services requests to the different Karaf nodes.

By default, the mod_proxy_balancer module uses the byrequests algorithm: all nodes will receive the same number of requests.
You can switch to bytraffic (by adding ProxySet lbmethod=bytraffic in the proxy configuration): in that case, all nodes will receive the same amount of traffic (in KB).

The mod_proxy_balancer module is able to support session “affinity” if your application needs it: when a request is proxied to a given back-end, all following requests from the same user are proxied to the same back-end.
For instance, you can use a cookie in the header to define the session affinity:


Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://mycluster>
  BalancerMember http://192.168.134.3:8040 route=1
  BalancerMember http://192.168.134.4:8040 route=2
ProxySet stickysession=ROUTEID
</Proxy>
ProxyPass /myapp balancer://mycluster

The mod_proxy_balancer module also provides a web manager allowing you to see whether your Karaf nodes are up or not, the number of requests received by each node, and the current lbmethod in use.

To enable this balancer manager, you just have to add a dedicated handler:


<Location /balancer-manager>
  SetHandler balancer-manager
  Order allow,deny
  Allow from all
</Location>

Point your browser to http://host/balancer-manager and you will see the manager page.

You can find more information about mod_proxy_balancer here: http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html.

Apache httpd with mod_proxy_balancer is an easy and good HTTP load balancer solution in front of Karaf and Cellar.


Upgrade to Ubuntu 13.04


Saturday, I decided to upgrade to Ubuntu 13.04.

I had used Ubuntu 12.04 LTS for a long time (since the release date). So the first step was to upgrade to Ubuntu 12.10: no problem with this upgrade, it worked straight away.

After that I upgraded to 13.04, and I had the following issues.

Upgrade to AMD Catalyst 13.4 driver

Ubuntu 13.04 uses Linux kernel 3.8.0. As I use the AMD Catalyst driver for my Radeon GPU, I had to recompile the kernel module. I first tried the AMD Catalyst 13.1 driver, but it didn’t work, as the kernel headers structure has changed (for instance, the version.h header has changed).
Fortunately, the AMD Catalyst 13.4 driver supports Linux kernel 3.8.0, so I upgraded to this version.

Downgrade Intel video driver

Unfortunately, even with the AMD Catalyst 13.4 driver, X started in low graphics mode or on the Intel GPU, not on the Radeon one.

When I forced the usage of the Radeon GPU (using amdconfig --px-dgpu), X didn’t start at all. The X log file showed an error with the following message:


(EE) fglrx(0): [intel] Failed to allocate video resources for front buffer 1366x768 at depth

It sounded weird to me, as the Intel GPU should not be used at all in Discrete GPU (dgpu) mode.

After digging, I found that the issue was in the Intel Xorg video driver, which “intercepts” all signals to the Intel GPU.

As it worked before (with Ubuntu 12.10), I decided to downgrade the Intel video driver to 2.20.2. I downloaded the deb package from Launchpad and installed it:


dpkg -i xserver-xorg-video-intel_2.20.2-1ubuntu1_amd64.deb

Now, I have my two GPU units working fine.

Reset Unity environment

As I came from Ubuntu 12.04, when I logged into Unity, it looked completely empty: no launcher, no panel bar, nothing. I was able to open a terminal (with CTRL-ALT-T) and launch applications from it, but there was no window decoration either.

I decided to completely reset the Unity environment for my user, using:


dconf reset -f /org/compiz/
unity --reset-icons

Now, my Unity desktop is back to normal (I just had to reconfigure it, but it’s not a big deal ;) ).

I installed unity-tweak-tool to be able to change the font sizes. In order to use a wallpaper, you have to allow icons on the desktop (in the settings).

Downgrade libQtWebKit for Skype crash

The last thing that I fixed is a crash of Skype 4.1.

When I launched Skype 4.1, it crashed with a Segmentation Fault.

Again, as it worked with Ubuntu 12.10, I took a look at the changes. I found that libQtWebKit4 had been upgraded:


aptitude show libQtWebKit4

which provides:


ls -l /usr/lib/i386-linux-gnu/|grep -i libQtWeb
-rw-r--r-- 1 root root 35230640 Mar 28 18:57 libQtWebKit.so.4.10.0

To avoid “unmet dependencies”, I took the previous deb (2.2.1) and uncompressed it into a folder:


dpkg -x libqtwebkit4_2.2.1-4ubuntu1_i386.deb /tmp

I updated the lib folder like this:


jbonofre@vostro:~$ ls -l /usr/lib/i386-linux-gnu/|grep -i libqtwe
lrwxrwxrwx 1 root root 20 Apr 28 08:03 libQtWebKit.so.4 -> libQtWebKit.so.4.9.0
lrwxrwxrwx 1 root root 20 Apr 28 08:03 libQtWebKit.so.4.10 -> libQtWebKit.so.4.9.0
-rw-r--r-- 1 root root 35230640 Mar 28 18:57 libQtWebKit.so.4.10.0
-rw-r--r-- 1 root root 24258276 Apr 28 08:02 libQtWebKit.so.4.9.0

Now, Skype 4.1 works like a charm ;)

Apache Karaf Cellar 2.3.0 released


The latest Cellar release (2.2.5) didn’t work with the new Karaf branch and release: 2.3.0.

While the first purpose of Cellar 2.3.0 is to work with Karaf 2.3.x, it’s actually more than that.

Let’s take a tour of the new Apache Karaf Cellar 2.3.0.

Apache Karaf 2.3.x support

Cellar 2.3.0 is fully compatible with Karaf 2.3.x branch.

Starting from Karaf 2.3.2, Cellar can be installed “out of the box”.
If you want to use Cellar with Karaf 2.3.0 or Karaf 2.3.1, in order to avoid a Cellar bootstrap issue, you have to add the following property in etc/config.properties:


org.apache.aries.blueprint.synchronous=true

Upgrade to Hazelcast 2.5

As you may know, Cellar is a clustered provisioning tool powered by Hazelcast.

We did a big jump: from Hazelcast 1.9 to Hazelcast 2.5.

Hazelcast 2.5 brings a lot of bug fixes and interesting new features. You can find more details here: http://www.hazelcast.com/docs/2.5/manual/multi_html/ch18s04.html.

In Cellar, all Hazelcast configuration is performed using a single file: etc/hazelcast.xml.

Hazelcast 2.5 gives you more properties to configure your cluster and the behavior of the cluster events. The default configuration is more than enough for most use cases, but thanks to this Hazelcast version, you now have the possibility to perform fine tuning.

Moreover, some new features are interesting for Cellar, especially:

  • IPv6 support
  • more complete backup support, when a node is disconnected from the cluster
  • better security and encryption support
  • higher tolerance to connection failures
  • parallel IO support

Cluster groups persistence

In previous Cellar versions, the cluster groups were not stored and relied only on the cluster state. It meant that it was possible to lose an existing cluster group if the group didn’t have any node.

Now, each node stores the cluster groups list, and its membership.

This way, the cluster groups are persistent, and we can restart the cluster without losing the “empty” cluster groups.

Cluster event producers, consumers, handlers status persistence

A Cellar node uses different components to manage cluster events:

  • the producer (one per node) is responsible for broadcasting cluster events to the other nodes
  • the consumer (one per node) receives cluster events and delegates the handling of each event to a handler
  • the handlers (one per resource) handle specific cluster events (features, bundles, etc) and update the node’s local state

The user has complete control over the producer, consumer, and handlers: you can stop or start the node’s producer, consumer, or handlers.

The problem was that the current state of the producer/consumer/handlers was not persistent: a restart of the node reset the producer/consumer/handlers to the default state (and not the previous one).
To avoid this issue, the producer/consumer/handler state is now persistent on the local node.

Smart synchronization

The synchronization of the different resources supported by Cellar is now better than before. Cellar now checks the local state of the node and computes a kind of diff between the local state and the state on the cluster. If the states differ, Cellar updates the local state as described on the cluster.

For configurations especially, to avoid high CPU consumption, some properties are not considered during the synchronization because they are local to the node (for instance, service.factoryPid).

A new command has been introduced (cluster:sync) to “force” the synchronization of the local node with the cluster. It’s interesting when the node has been disconnected from the cluster and you want to re-sync it as soon as possible.

Improvement on Cellar Cloud support

My friend Achim Nierbeck did a great job on the Cellar Cloud support.
First, he fixed some issues that we had in this module.

He gave a great demo during JAX: Integration In the Cloud With Camel, Karaf and Cellar.

Improvement on the cluster:* commands and MBeans

In order to be closer to the Karaf core commands, the cluster:* commands (and MBeans) now provide exactly the same options that you can find in the Karaf core commands.

And more is coming …

The first purpose of Cellar 2.3.0 is to provide a version ready to run on Karaf 2.3.x and to ensure stability. So I postponed some new features and improvements to Cellar 2.3.1.

In the meantime, I also released a new Cellar 2.2.6 version, containing mostly bug fixes (for those who still use Karaf 2.2.x with Cellar 2.2.x).

Apache Hadoop and Karaf, Article 1: Karaf as HDFS client


Maybe some of you remember that, a couple of months ago, I posted some messages on the Hadoop mailing list about OSGi support in Hadoop (http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201202.mbox/%3C4F3285F1.2000704@nanthrax.net%3E).

In order to move forward on this topic, instead of an important refactoring, I started to work on standalone and atomic bundles that we can deploy in Karaf. The purpose is to avoid changing Hadoop core while providing good Hadoop support directly in Karaf.

I worked on Hadoop trunk (3.0.0-SNAPSHOT) and prepared patches (https://issues.apache.org/jira/browse/HADOOP-9706).

I also deployed bundles on my Maven repository to give users the possibility to directly deploy karaf-hadoop in a running Karaf instance.

The purpose is to explain what you can do and the value of this approach, and maybe you will vote to “include” it in Hadoop directly ;)

To explain exactly what you can do, I prepared a series of blog posts:

  • Article 1: Karaf as HDFS client. This is the first post. We will see the hadoop-karaf bundle installation, the hadoop and hdfs Karaf shell commands, and how you can use HDFS to store bundles or features using the HDFS URL handler.
  • Article 2: Karaf as MapReduce job client. We will see how to run MapReduce jobs directly from Karaf, and the “hot-deploy-and-run” of MapReduce jobs using the Hadoop deployer.
  • Article 3: Exposing Hadoop, HDFS, Yarn, and MapReduce features as OSGi services. We will see how to use Hadoop features programmatically thanks to OSGi services.
  • Article 4: Karaf as a HDFS datanode (and eventually namenode). Here, more than using Karaf as a simple HDFS client, Karaf will be part of HDFS acting as a datanode, and/or namenode.
  • Article 5: Karaf, Camel, Hadoop all together. In this article, we will use the Hadoop OSGi services now available in Karaf inside Camel routes (plus the camel-hdfs component).
  • Article 6: Karaf as complete Hadoop container. I will explain here what I did in Hadoop to add a complete support of OSGi and Karaf.

Karaf as HDFS client

Just a reminder about HDFS (Hadoop Distributed FileSystem).

HDFS is composed of:
- a namenode hosting the metadata of the filesystem (directories, block locations, file permissions or modes, …). There is only one namenode per HDFS, and the metadata is stored in memory by default.
- a set of datanodes hosting the file blocks. Files are composed of blocks (like in all filesystems). The blocks are located on different datanodes. The blocks can be replicated.

An HDFS client connects to the namenode to execute actions on the filesystem (ls, rm, mkdir, cat, …).

Preparing HDFS

The first step is to set up the HDFS filesystem.

I’m going to use a “pseudo-cluster”: an HDFS with the namenode and only one datanode on a single machine.
To do so, I configure the $HADOOP_INSTALL/etc/hadoop/core-site.xml file like this:


<configuration>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>

</configuration>

For a pseudo-cluster, we set up only one replica per block (as we have only one datanode) in the $HADOOP_INSTALL/etc/hadoop/hdfs-site.xml file:

<configuration>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

</configuration>

Now, we can format the namenode:


$HADOOP_INSTALL/bin/hdfs namenode -format

and start the HDFS (both namenode and datanode):


$HADOOP_INSTALL/sbin/start-dfs.sh

Now, we can connect to the HDFS and create a first folder:


$HADOOP_INSTALL/bin/hadoop fs -mkdir /bundles
$HADOOP_INSTALL/bin/hadoop fs -ls /
Found 1 items
drwxr-xr-x - jbonofre supergroup 0 2013-07-07 22:18 /bundles

Our HDFS is up and running.

Configuration and installation of hadoop-karaf

I created the hadoop-karaf bundle as standalone. It means that it embeds a lot of dependencies internally (directly in the bundle classloader).

The purpose is to:

  1. avoid altering anything in Hadoop core. Thanks to this approach, I can provide the hadoop-karaf bundle for different Hadoop versions, and I don’t need to alter Hadoop itself.
  2. ship all dependencies in the same bundle classloader. Of course it’s not ideal in terms of OSGi, but to provide a very easy and ready to use bundle, I gathered most of the dependencies in the hadoop-karaf bundle.

I worked on trunk directly (for now, if you are interested I can provide hadoop-karaf for existing Hadoop releases): Hadoop 3.0.0-SNAPSHOT.

Before deploying the hadoop-karaf bundle, we have to prepare the Hadoop configuration. In order to integrate with Karaf, I implemented a mechanism to create and populate the Hadoop configuration from OSGi ConfigAdmin.
The only requirement for the user is to create an org.apache.hadoop PID in the Karaf etc folder containing the Hadoop properties. In practice, it means just creating a $KARAF_INSTALL/etc/org.apache.hadoop.cfg file containing:


fs.default.name = hdfs://localhost/

If you don’t want to compile hadoop-karaf bundle yourself, you can use the artifact that I deployed on my Maven repository (http://maven.nanthrax.net/org/apache/hadoop/hadoop-karaf/3.0.0-SNAPSHOT/hadoop-karaf-3.0.0-20130708.050912-1.jar).

To do this, you have to edit etc/org.ops4j.pax.url.mvn.cfg and add my repository to the org.ops4j.pax.url.mvn.repositories property:


org.ops4j.pax.url.mvn.repositories = \
  http://maven.nanthrax.net/@snapshots@id=maven, \
  http://repo1.maven.org/maven2@id=central, \
  ...

Now, we can start Karaf as usual:


$KARAF_INSTALL/bin/karaf

NB: I use Karaf 2.3.1.

We can now install the hadoop-karaf bundle:


karaf@root> osgi:install -s mvn:org.apache.hadoop/hadoop-karaf/3.0.0-SNAPSHOT
karaf@root> la|grep -i hadoop
[ 54] [Active ] [Created ] [ 80] Apache Hadoop Karaf (3.0.0.SNAPSHOT)

hadoop:* and hdfs:* commands

The hadoop-karaf bundle comes with new Karaf shell commands.

For this first blog post, we are going to use only one command: hadoop:fs.

The hadoop:fs command allows you to use HDFS directly from Karaf (it’s a wrapper around hadoop fs):


karaf@root> hadoop:fs -ls /
Found 1 items
drwxr-xr-x - jbonofre supergroup 0 2013-07-07 22:18 /bundles
karaf@root> hadoop:fs -df
Filesystem Size Used Available Use%
hdfs://localhost 5250875392 307200 4976799744 0%

HDFS URL handler

Another thing provided by the hadoop-karaf bundle is a URL handler to directly support hdfs URLs.

It means that you can use hdfs URLs in Karaf commands, such as osgi:install, features:addurl, etc.

It also means that you can use HDFS to store your Karaf bundles, features, or configuration files.

For instance, we can copy an OSGi bundle in the HDFS:


$HADOOP_INSTALL/bin/hadoop fs -copyFromLocal ~/.m2/repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.commons-lang/2.4_6/org.apache.servicemix.bundles.commons-lang-2.4_6.jar /bundles/org.apache.servicemix.bundles.commons-lang-2.4_6.jar

The commons-lang bundle is now available in the HDFS. We can check that directly in Karaf using the hadoop:fs command:


karaf@root> hadoop:fs -ls /bundles
Found 1 items
-rw-r--r-- 1 jbonofre supergroup 272039 2013-07-07 22:18 /bundles/org.apache.servicemix.bundles.commons-lang-2.4_6.jar

Now, we can install the commons-lang bundle in Karaf directly from HDFS, using a hdfs URL:


karaf@root> osgi:install hdfs:/bundles/org.apache.servicemix.bundles.commons-lang-2.4_6.jar
karaf@root> la|grep -i commons-lang
[ 55] [Installed ] [ ] [ 80] Apache ServiceMix :: Bundles :: commons-lang (2.4.0.6)

If we list the bundle locations, we can see the hdfs URL support:


karaf@root> la -l
...
[ 53] [Active ] [Created ] [ 30] mvn:org.apache.karaf.management.mbeans/org.apache.karaf.management.mbeans.dev/2.3.1
[ 54] [Active ] [Created ] [ 80] mvn:org.apache.hadoop/hadoop-karaf/3.0.0-SNAPSHOT
[ 55] [Installed ] [ ] [ 80] hdfs:/bundles/org.apache.servicemix.bundles.commons-lang-2.4_6.jar
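
In the same spirit, since the hdfs URL handler works with any Karaf command accepting a URL, a features repository XML stored on HDFS could be registered like this (a sketch using a hypothetical /features/my-features.xml file previously copied to HDFS, and a hypothetical my-feature feature defined in it):


karaf@root> features:addurl hdfs:/features/my-features.xml
karaf@root> features:install my-feature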

Conclusion

This first blog post shows how to use Karaf as an HDFS client. The big advantage is that the hadoop-karaf bundle doesn’t change anything in Hadoop core, so I can provide it for Hadoop 0.20.x, 1.x, 2.x, or trunk (3.0.0-SNAPSHOT).
In Article 3, you will see how to leverage HDFS directly as OSGi services (and so use them in your bundles, Camel routes, …).

Again, if you think this article series is interesting and you would like to see Karaf support in Hadoop, feel free to post a comment, send a message on the Hadoop mailing list, or anything else to promote it ;)

Pax Logging: loggers log level


As you probably know, Apache Karaf uses Pax Logging as logging system.

Pax Logging is an OPS4J project (Open Participation Software 4 Java) which provides a fully OSGi compliant framework for logging. Pax Logging leverages a bunch of logging frameworks like SLF4J, Logback, Log4j, Avalon, etc. It gathers all the configuration and the actual logging mechanisms in a central way. It means that, in your applications/bundles, you can use SLF4J or Log4j, it doesn’t matter: under the hood you will use Pax Logging.

Karaf provides a bunch of shell commands and MBeans for logging:

  • log:display to see the log
  • log:display-exception to see only the exceptions
  • log:tail to display and “follow on the fly” the log
  • log:set to change the log level of a particular logger (or the rootLogger)
  • log:get to get the current log level of a particular logger (or the rootLogger)

The default configuration is a log4j configuration described in etc/org.ops4j.pax.logging.cfg. It’s where you define the loggers with their level and the appenders with their conversion pattern.

However, sometimes you may want to disable logging for a particular class or package. A typical example is when you use the Karaf web container (provided by Pax Web), and you have a monitoring tool (like Nagios or Zabbix) which accesses a URL in a “bad manner”. By “bad manner”, I mean that the monitoring tool sends just a “ping” most of the time, not a complete valid HTTP request.

In that case, you may see “WARNING” messages in the log, coming from the Jetty web server. The messages look like:


22:25:20,948 | WARN | tp2029485198-177 | pse.jetty.servlet.ServletHandler 514 | 54 - org.eclipse.jetty.util - 7.6.7.v20120910 | /system/console/bundles
java.lang.reflect.UndeclaredThrowableException
    at org.ops4j.pax.web.service.internal.$Proxy10.service(Unknown Source)[71:org.ops4j.pax.web.pax-web-runtime:1.1.4]
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)[62:org.eclipse.jetty.servlet:7.6.7.v20120910]
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:447)[62:org.eclipse.jetty.servlet:7.6.7.v20120910]
...

As you know the source of this WARN message, you may want to “increase” the log level to ERROR (to avoid seeing the WARN messages), or to completely disable the log messages coming from the Jetty ServletHandler.

To change the log level, in etc/org.ops4j.pax.logging.cfg, you can create a new logger dedicated to Jetty and define the log level for this logger:


log4j.logger.org.eclipse.jetty=ERROR

or you can completely disable the logging coming from the servlet handler:


log4j.logger.org.eclipse.jetty.servlet.ServletHandler=OFF

OFF is a “special” log level which disables logging.
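
Note that, instead of editing the configuration file, you can also tune a logger level at runtime with the log:set and log:get commands mentioned above. For instance (a sketch):


karaf@root> log:set ERROR org.eclipse.jetty
karaf@root> log:get org.eclipse.jetty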

Another “use case” for this is the sshd server embedded in Karaf. You may know that you can access Karaf using a simple ssh client (OpenSSH on Unix, Putty on Windows, or the client provided with Karaf). By default, the Karaf sshd server logs all session connections at DEBUG level. So if you turn the rootLogger to DEBUG, you will see a lot of “noise” in the log. It therefore makes sense to change the sshd server log level to INFO, just for the channel session:


log4j.logger.org.apache.mina.sshd.server.channel.ChannelSession=INFO

Karaf and Pax Web: disabling reverse lookup


Karaf can be a full WebContainer just by installing the war feature:


features:install war

The war feature will install Pax Web and the Jetty web server. You can configure Pax Web using the etc/org.ops4j.pax.web.cfg configuration file. In this configuration, you can point to a Jetty configuration file (like jetty.xml) using the following property:


org.ops4j.pax.web.config.file=${karaf.base}/etc/jetty.xml

Now, using the etc/jetty.xml, you have a complete access to the Jetty configuration, especially, you can define the Connector configuration.

On the “default” connector (bound to port 8181 by default), you can set “advanced” configuration.

An interesting configuration is the reverse lookup. Depending on your network, DNS resolution may not work. By default, Jetty will try to do reverse DNS resolution, and if you can’t reach a DNS server from the machine, you may encounter “bad response times”, because you will have to wait for the timeout on each DNS lookup.
So, in that case, it makes sense to disable reverse lookup. You can disable reverse lookup per Jetty connector, using etc/jetty.xml and adding the resolveNames option on the connector:

  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
        <Set name="host"><Property name="jetty.host" /></Set>
        <Set name="port"><Property name="jetty.port" default="8040"/></Set>
        <Set name="maxIdleTime">300000</Set>
        <Set name="Acceptors">2</Set>
        <Set name="statsOn">false</Set>
        <Set name="confidentialPort">8443</Set>
        <Set name="lowResourcesConnections">20000</Set>
        <Set name="lowResourcesMaxIdleTime">5000</Set>
        <Set name="resolveNames">false</Set>
      </New>
    </Arg>
  </Call>

Apache ActiveMQ 5.7, 5.9 and Master-Slave


With my ActiveMQ friends (especially Dejan and Claus), I’m working on the next ActiveMQ release, 5.9.

Today, I focus on HA with ActiveMQ, and especially on the Master-Slave configuration.

Update of the documentation

The first thing that I noticed is that the documentation is not really up to date.

If you do a search on the ActiveMQ website about Master-Slave, you will probably find these two links:

On the first link (about KahaDB), we can see a note “This is under review – and not currently supported”. It’s confusing for users, as this mechanism is the preferred one!
On the other hand, the second link should be flagged as deprecated, as this mechanism is no longer maintained.

I sent a message on the dev mailing list to update these pages.

Lease Database Locker to avoid “dual masters”

In my test cases, I used a JDBC database backend (MySQL) for HA (instead of using KahaDB).

I have two brokers, that use the following configuration:


  <persistenceAdapter>
    <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#mysql-ds" />
  </persistenceAdapter>

Broker1 starts, connects to MySQL, and acquires the lock. Broker1 is the master.

Broker2 starts, connects to MySQL, and waits for the lock (as the lock is held by Broker1). Broker2 is a slave.

Now, I stop MySQL, for instance to do a cold backup. My backup is very fast, and I start MySQL server again, very quickly.

The lock is available in the database, so Broker2 gets the lock, whereas Broker1 hasn’t released it yet. So I’m in a bad situation where I have two “masters”.

ActiveMQ 5.7.0 introduced a change in the locking strategies for shared storage master/slave topologies. Previously, storage locking (and thus master election) was hard-coded directly in the particular store. So KahaDB only had the option to use a shared file lock, while JDBC was using a database lock.

Now, the storage locking is separated from the store, so you can implement your own locking strategies if necessary (or tune existing ones). Of course, every store has its own default locker.

In our previous use case, to solve the “dual master” issue, we can use a new locker: the lease database locker.

To use it, we update the configuration of each locker like this:


  <persistenceAdapter>
    <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#mysql-ds" lockKeepAlivePeriod="5000">
      <locker>
        <lease-database-locker lockAcquireSleepInterval="10000"/>
      </locker>
    </jdbcPersistenceAdapter>
  </persistenceAdapter>

The lease database locker solves the master/slave problem of the default database locker. The master acquires a lock only for a certain period and must extend its lease from time to time. The slave also checks periodically to see if the lease has expired. The lease can survive a DB replica failover.

The lease based lock is acquired by blocking at start and retained by the keepAlivePeriod. To retain it, the lease is extended by the lockAcquireSleepInterval, so in theory the master is always (lockAcquireSleepInterval - lockKeepAlivePeriod) ahead of the slave with respect to the lease. With the configuration above, for instance, the lease is extended to 10000ms every 5000ms, so the master stays roughly 5 seconds ahead of the slave. It is imperative that lockAcquireSleepInterval > lockKeepAlivePeriod, to ensure the lease is always current.

In the simplest case, the clocks between master and slave must be in sync for this solution to work properly. If the clocks cannot be in sync, the locker can use the system time from the database CURRENT TIME and adjust the timeouts in accordance with their local variance from the db system time. If maxAllowableDiffFromDBTime is > 0 the local periods will be adjusted by any delta that exceeds maxAllowableDiffFromDBTime.

How to know who is the master ?

The “new” mechanism for Master/Slave is great and very easy to set up. You don’t really define who is the master and who are the slaves: the first broker which gets the lock will be the master.

So, a fair question is: how can I know which broker is the master ?

Actually, you already have the answer on the JMX layer.

If you connect a JMX client (for instance jconsole) to the broker and take a look at the org.apache.activemq:BrokerName=Broker2,Type=Broker MBean, you can see the Slave attribute.

If Slave is true, it means that this broker is a slave. If Slave is false, it’s the master.

Another way to get this information is to use the activemq command directly with the bstat argument (instead of JMX):


bin/activemq bstat
...
Connecting to pid: 563
BrokerVersion = 5.9-SNAPSHOT
TempLimit = 53687091200
Persistent = true
MemoryLimit = 67108864
TempPercentUsage = 0
SslURL =
StorePercentUsage = 0
TransportConnectors = {openwire=tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600}
Type = Broker
StompSslURL =
OpenWireURL = tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600
Uptime = 3 minutes
DataDirectory = /home/jbonofre/broker2/data
StoreLimit = 107374182400
BrokerName = broker2
VMURL = vm://broker2
StompURL =
MemoryPercentUsage = 0
Slave = true

You can see the Slave attribute there.

If you want to “script” this and get only the Slave attribute, you can use the query argument:


bin/activemq query --objname Type=Broker --view Slave
...
Slave = true

Coming in Karaf 3.0.0: JAAS users, groups, roles, and ACLs


This week I worked with David Bosschaert. David proposed a patch for Karaf 3.0.0 to add the notion of groups and to use ACLs for JMX.

He posted a blog entry about that: http://coderthoughts.blogspot.fr/2013/10/jmx-role-based-access-control-for-karaf.html.

David’s blog post is very detailed, mostly in terms of implementation, the usage of the interceptor, etc. This post is more about the pure end-user usage: how to configure groups, JMX ACLs, etc.

JAAS users, groups, and roles

Karaf uses JAAS for user authentication and authorisation. By default, it uses the PropertiesLoginModule, which uses the etc/users.properties file to store the users.

The etc/users.properties file has the following format:


user=password,role

For instance:


karaf=karaf,admin

which means we have a user karaf, with password karaf, and the admin role.

Actually, the roles are not really used in Karaf: for instance, when you use ssh or JMX, Karaf checks the principal and credentials (basically the username and password) but it doesn’t really use the roles. All users have exactly the same permissions (basically all permissions): they can execute any shell command, access any MBean, and call any operation on these MBeans.

Moreover, the roles are only assigned per user. It means that we had to define the same roles list for two different users: it was the only way to assign the same roles to different users.

So, in addition to users and roles, we introduced JAAS groups.

A user can be a member of a group or have roles assigned directly (as previously).

A group typically has one or more roles assigned. A user that is part of that group will get these roles too.
Finally, a user has the union of the roles associated with his groups, together with his own roles.

Basically, the etc/users.properties file doesn’t change in terms of format. We just introduced a prefix to identify a group: _g_. A “user” with the _g_: prefix is actually a group.
So a group is defined like a user, and it’s possible to use a group in the list of roles of a user:


# users
karaf = karaf,_g_:admingroup
manager = manager,_g_:managergroup
other = other,_g_:managergroup,otherrole

#groups
_g_\:admingroup = admin,viewer,manager
_g_\:managergroup = viewer,manager

We updated the jaas:* shell commands to be able to manage groups, roles, and users:


karaf@root> jaas:realm-manage --realm karaf
karaf@root> jaas:group-add managergroup
karaf@root> jaas:group-add --help
karaf@root> jaas:user-add joe joe
karaf@root> jaas:group-add joe managergroup
karaf@root> jaas:group-role-add managergroup manager
karaf@root> jaas:group-role-add managergroup viewer
karaf@root> jaas:update
karaf@root> jaas:realm-manage --realm karaf
karaf@root> jaas:user-list
User Name | Group | Role
----------------------------------
karaf | admingroup | admin
karaf | admingroup | manager
karaf | admingroup | viewer
joe | managergroup | manager
joe | managergroup | viewer

Thanks to groups, it’s possible to factorise the roles and easily share the same set of roles between different users.

Define JMX ACLs based on roles

As explained before, the roles were not really used by Karaf. On the JMX layer, for instance, using jconsole with the karaf user, you were able to see all MBeans and perform all operations.

So, we introduced support for ACLs (Access Control Lists) on the JMX layer.

Now, whenever a JMX operation is invoked, the roles of the current user are checked against the required roles for this operation.

The ACLs are defined using configuration files in the Karaf etc folder.

The ACL configuration file is prefixed with jmx.acl and completed with the MBean ObjectName that it applies to.

For example, to define the ACL on the MBean foo.bar:type=Test, you will create a configuration file named etc/jmx.acl.foo.bar.Test.cfg.
It’s also possible to define more generic configuration files: one for the domain (jmx.acl.foo.bar.cfg), applied to all MBeans in this domain, or the most generic one (jmx.acl.cfg), applied to all MBeans.

A very simple configuration file looks like:


# operation = roles
test = admin
getVal = manager,viewer

The configuration file supports different syntaxes to provide fine-grained operation ACLs:

  • Specific match for the invocation, including arguments value:

    test(int)["17"] = role1

    It means that only users with role1 assigned will be able to invoke the test operation with 17 as argument value.
  • Regex match for the invocation:

    test(int)[/[0-9]/] = role2

    It means that only users with role2 assigned will be able to invoke the test operation with argument between 0 and 9.
  • Signature match for the invocation:

    test(int) = role3

    It means that only users with role3 assigned will be able to invoke test operation.
  • Method name match for the invocation:

    test = role4

    It means that only the users with role4 assigned will be able to invoke any test operations (whatever the list of arguments is).
  • A method name wildcard match:

    te* = role5

    It means that only the users with role5 assigned will be able to invoke any operations matching te* expression.

Karaf looks for required roles using the following process:

  1. The most specific configuration file is tried first (etc/jmx.acl.foo.bar.Test.cfg).
  2. If no matching definition is found in the specific configuration file, a more generic configuration file is inspected. In our case, Karaf will use etc/jmx.acl.foo.bar.cfg.
  3. If no matching definition is found in the domain specific configuration file, the most generic configuration file is inspected, etc/jmx.acl.cfg.

The ACLs work for any kind of MBean, including the ones from the JVM itself. For instance, it’s possible to create an etc/jmx.acl.java.lang.Memory.cfg configuration file containing:


gc = manager

It means that only the users with manager role assigned will be able to invoke the gc operation of the JVM Memory MBean.

It’s also possible to define more advanced configurations. For instance, say we want bundles with an ID between 0 and 49 to be stoppable only by an admin, while the other bundles can be stopped by a manager. To do so, we create the etc/jmx.acl.org.apache.karaf.bundle.cfg configuration file containing:


stop(java.lang.String)[/([1-4])?[0-9]/] = admin
stop = manager

The etc/jmx.acl.cfg configuration file is the global configuration for invocations of any MBean that doesn’t have a more specific ACL.
By default, we define the following configuration:


list* = viewer
get* = viewer
is* = viewer
set* = admin
* = admin

We introduced a new MBean: org.apache.karaf:type=security,area=jmx.
The purpose of this MBean is to check whether the current user can access a certain MBean or invoke a specific operation on it.
This MBean can be used by management clients to decide whether to show certain MBeans or operations to the end user.
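
For instance, here is a hedged sketch of how a remote management client could use it. I assume the MBean exposes a canInvoke(String objectName) operation returning a boolean, and I use the default Karaf JMX URL, karaf/karaf credentials, and an example target MBean; adapt these to your installation and check the actual operations in jconsole, as the exact signatures may differ:

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CanInvokeCheck {

    public static void main(String[] args) throws Exception {
        // assumptions: default Karaf JMX URL and karaf/karaf credentials
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root");
        Map<String, Object> env = new HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[]{ "karaf", "karaf" });
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName security = new ObjectName("org.apache.karaf:type=security,area=jmx");
            // assumption: a canInvoke(String objectName) operation returning a boolean
            // example target MBean (adapt to the MBean you want to check)
            Object canInvoke = connection.invoke(security, "canInvoke",
                    new Object[]{ "org.apache.karaf:type=bundle,name=root" },
                    new String[]{ String.class.getName() });
            System.out.println("Current user can access the bundle MBean: " + canInvoke);
        } finally {
            connector.close();
        }
    }

}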

What’s next?

Now, David and I are working on ACL/RBAC for:

  • shell commands: as we have ACLs for MBeans, it makes sense to apply the same to shell commands.
  • OSGi services: the same can be applied to any OSGi service.

I would like to thank David for this great job. It’s a great addition to Karaf and another very strong reason to promote Karaf 3 ;)


Coming in Karaf 3.0.0: subshell and completion mode


If you are a Karaf user, you probably know that Karaf is very extensible: you can add features in Karaf to provide new functionalities.

For instance, you can install Camel, ActiveMQ, CXF, Cellar, etc in your Karaf runtime.

Most of these features provide new commands:
- Camel provides camel:* commands to manipulate the Camel Context, the routes, etc.
- CXF provides cxf:* commands to manipulate the CXF buses, endpoints, etc.
- ActiveMQ provides activemq:* commands to manipulate brokers.
- Cellar provides cluster:* commands to manipulate cluster nodes, cluster groups, etc.
- and so on

If you install several features like these, the number of commands available in the Karaf shell console becomes really impressive, and it’s not always easy to find the one that you need.

That’s why subshell support has been introduced.

Subshell

Karaf now uses the command scope to create subshells “on the fly”: the commands are grouped by subshell. As you will see later, depending on the completion mode that you use, you will be able to see only the commands of the current subshell, and to switch from one subshell to another.

Let’s take an example. In Karaf itself, we have commands to manipulate bundles and commands to manipulate features, for instance:

  • bundle:list lists the bundles
  • bundle:start starts bundles
  • bundle:stop stops bundles
  • feature:list lists the Karaf features
  • feature:repo-list lists the Karaf features repositories

In previous Karaf versions, to list bundles and features, you did something like this:


karaf@root> osgi:list
...
karaf@root> features:list
...

In Karaf 3.0.0, you can still do the same (just using the new command names):


karaf@root()> bundle:list
...
karaf@root()> feature:list
...

But you can also use subshell:


karaf@root()> bundle
karaf@root(bundle)> list
...
karaf@root(bundle)> feature
karaf@root(feature)> list
...

or


karaf@root()> bundle
karaf@root(bundle)> list
...
karaf@root(bundle)> exit
karaf@root()> feature
karaf@root(feature)> list
...

We can note several things here:

  • You have commands to go into a subshell. These commands are created on the fly by Karaf using the scope of the commands. Here, we use the bundle and feature commands to go into the bundle and feature subshell.
  • You can see your current subshell location directly in the prompt:

    karaf@root(bundle)>

    We can see here that we are in the bundle subshell.
  • We can switch directly from one subshell to another using the subshell command:

    karaf@root(bundle)> feature
    karaf@root(feature)>
  • You have a new exit command to get out of the current subshell and return to the root level.

You have the choice between different completion modes, depending on the behaviour that you prefer.

Completion Mode

The completion mode defines the behaviour of the TAB key to complete commands.

You have three different modes available:

  • GLOBAL
  • FIRST
  • SUBSHELL

You can define your default completion mode using the completionMode property in the etc/org.apache.karaf.shell.cfg file. By default, you have:


completionMode = GLOBAL

But, you can also change the completion mode “on the fly” (while using the Karaf shell console) using a new command: shell:completion:


karaf@root()> shell:completion
GLOBAL
karaf@root()> shell:completion FIRST
karaf@root()> shell:completion
FIRST

Without an argument, shell:completion displays the completion mode currently in use. You can also provide the new completion mode that you want to use.

GLOBAL completion mode

GLOBAL completion mode is the default one in Karaf 3.0.0 (mostly for transition purpose).

GLOBAL mode doesn’t really use subshell: it’s the same behavior as in previous Karaf versions.

When you type the TAB key, whichever subshell you are in, the completion displays all commands and all aliases:


karaf@root()> <TAB>
karaf@root()> Display all 273 possibilities? (y or n)
...
karaf@root()> feature
karaf@root(feature)> <TAB>
karaf@root(feature)> Display all 273 possibilities? (y or n)
...

FIRST completion mode

FIRST completion mode is an alternative to the GLOBAL completion mode.

If you type the TAB key on the root level subshell, the completion will display the commands and the aliases from all subshells (as in GLOBAL mode). However, if you type the TAB key when you are in a subshell, the completion will display only the commands of the current subshell:


karaf@root()> shell:completion FIRST
karaf@root()> <TAB>
karaf@root()> Display all 273 possibilities? (y or n)
...
karaf@root()> feature
karaf@root(feature)> <TAB>
karaf@root(feature)>
info install list repo-add repo-list repo-remove uninstall version-list
karaf@root(feature)> exit
karaf@root()> log
karaf@root(log)> <TAB>
karaf@root(log)>
clear display exception-display get log set tail

SUBSHELL completion mode

SUBSHELL completion mode is the real subshell mode (to be honest, it’s my preferred one ;) ).

If you type the TAB key on the root level, the completion displays the subshell commands (to go into a subshell), and the global aliases. Once you are in a subshell, if you type the TAB key, the completion displays the commands of the current subshell:


karaf@root()> shell:completion SUBSHELL
karaf@root()> <TAB>
karaf@root()>
* bundle cl config dev feature help instance jaas kar la ld lde log log:list man package region service shell ssh system
karaf@root()> bundle
karaf@root(bundle)> <TAB>
karaf@root(bundle)>
capabilities classes diag dynamic-import find-class headers info install list refresh requirements resolve restart services start start-level stop
uninstall update watch
karaf@root(bundle)> exit
karaf@root()> camel
karaf@root(camel)> <TAB>
karaf@root(camel)>
backlog-tracer-dump backlog-tracer-info backlog-tracer-start backlog-tracer-stop context-info context-list context-start context-stop endpoint-list route-info route-list route-profile route-reset-stats
route-resume route-show route-start route-stop route-suspend

Tips

The “old” fully qualified command names are still valid, so you don’t have to change anything in your scripts; you can still use:


karaf@root()> feature:install
karaf@root()> ssh:ssh
...

You have the choice: use the completion mode that you prefer; you can always change the mode whenever you want using the shell:completion command.

My preference is for the SUBSHELL completion mode. Using this mode, you don’t see a bunch of commands at the root level, just the subshell switch commands. I think it’s clear and straightforward. When you “extend” your Karaf runtime with a lot of additional features, it’s interesting to have the commands grouped by subshell.

Talend ESB Continuous Integration, part 1: Using the Camel Test Kit


Introduction

In this series of articles, I will show how to set up a Continuous Integration solution mixing Talend ESB tools, Maven, and Jenkins.

The purpose is to decouple the design (performed in the studio), the tests (both unit and integration tests), and the deployment of the artifacts.

The developers that use the studio should never directly upload to the Maven repository (Archiva in my case).

I propose to implement the following steps:

  1. the developers use the studio to design their routes: the metadata (used to generate the code) is stored in Subversion. The studio “only” checks out and commits on Subversion: it never uploads directly to the artifact repository.
  2. a continuous integration tool (Jenkins in my case) uses Maven. The Maven POM leverages the Talend commandline (the studio without the GUI) to check out, generate the code, and publish to the artifact repository. The Maven POM is also used to execute unit tests, possibly integration tests, and to cleanly cut the releases.
  3. the Talend runtimes (Karaf) deploy (using JMX or Talend Administration Center) the routes from the artifact repositories.

With this approach, we have a much cleaner isolation of concerns and tasks.

To demonstrate, I used Talend Enterprise edition 5.3.1, but you can do the same using the Open Studio edition.

In this first part, I will show how to use the Camel Test Kit with routes designed by the Talend studio, and how to periodically execute these tests using Jenkins.
To simplify, I will directly publish the routes on Archiva (my artifacts repository) using the studio. As I said before, it should not be done this way: only the Talend commandline (called from Jenkins) should be able to upload to the artifacts repository.

Camel Test Kit benefits

There are multiple reasons to use the Camel Test Kit and to write unit tests “outside” of the Talend studio:

  • it’s a step forward to continuous integration: the unit tests can be periodically executed by Jenkins. Thanks to that, it’s a good way to detect regressions: some changes performed in the studio may break the routes and so the unit tests.
  • it allows you to test components that you can’t run in the studio: for instance, you can’t run routes using the vm component directly in the studio (well, you can, but it’s not really useful). Thanks to mocks and the producer template, we can test the route and the vm endpoints.
  • it allows you to test even if you don’t have the actual dependent systems: in your route, you will probably use endpoints like CXF (for WebServices), file, FTP, JMS/ActiveMQ, etc. It’s not always easy to test routes using such components directly in the studio: you may not want to really communicate with an FTP server, or to create a local filesystem, etc. The Camel Test Kit allows you to mock some endpoints and mimic the actual endpoints without really having them.
  • Simulate errors: most of the time, in the studio, you test the “happy path”. But, especially when you use “custom” error handling, you may want to see if your error handler reacts correctly. The mock component is a good way to generate errors.

Talend Studio for the design

In the Talend Studio, using the Mediation perspective, you can design Camel routes.

The Studio should be used only for the design: not the deployment, the tests, or the releases (even if you can do all of that in the studio ;) ).

Using the Mediation perspective, I created a simple route:

Talend Studio screenshot

We have two routes and an error handler in this design:

  • from("vm:start").to("log:cLog1").to("direct:start")
  • from("direct:start").to("log:cLog2").choice().when(simple("${in.header.type} == 'region'")).to("vm:region").otherwise().to("vm:zipcode")
  • a DeadLetter ErrorHandler which catches any exception and sends the message to vm:errorhandling

The first step is to publish the route on the artifact repository (Apache Archiva or Sonatype Nexus for instance). You configure the location of the artifact repository in the Talend preferences of the studio.

A right click on the route shows a menu containing the “Publish” button: it uploads (deploys) the route to the artifact repository. The “Publish” button is available only in the Enterprise edition. If you use the Open Studio edition, you have to export the route as a kar file, explode the kar file, and use the Maven deploy plugin to upload the artifacts to the repository.

The publish window allows you to define the Maven groupId, artifactId, version, etc.

The route jar file (which is an OSGi bundle) contains two “special jar files” that you have to upload to the artifact repository. This step has to be done only once per Talend Studio version. The jar files are located in the lib folder of the route jar, so you can do:

jar xvf ShowUnitTest-0.1.0-SNAPSHOT.jar lib
mvn deploy:deploy-file -DgroupId=org.talend -DartifactId=systemRoutines -Dversion=5.3.1 -Dfile=lib/systemRoutines.jar -Dpackaging=jar -Durl=http://tadmin:tadmin@localhost:8082/archiva/repository/repo-release/
mvn deploy:deploy-file -DgroupId=org.talend -DartifactId=userBeans -Dversion=5.3.1 -Dfile=lib/userBeans.jar -Dpackaging=jar -Durl=http://tadmin:tadmin@localhost:8082/archiva/repository/repo-release/

NB: while the systemRoutines artifact doesn’t really change, the userBeans artifact should be uploaded “per route” and updated whenever you modify a bean or create a new bean used in your route.

We have now all the artifacts on our artifact repository to create the unit tests.

Using Camel Test Kit

The Camel Test Kit (provided by the camel-test jar) provides:

  • JUnit extensions: you can very easily create unit tests by extending the CamelTestSupport and CamelSpringTestSupport abstract classes
  • Producer/Consumer templates: you can “inject” exchanges/messages at any point of a route. It allows you to test a route exactly at a given point, and to create messages which mimic the actual ones
  • Mock component: you can mock actual endpoints, simulate errors, and set expectations on the mock.

Now, we can create a Maven project that will gather our unit tests. We start by creating the POM:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>utests</artifactId>
    <version>0.1.0-SNAPSHOT</version>

    <properties>
        <camel.version>2.10.4</camel.version>
        <talend.version>5.3.1</talend.version>
        <commandline.path>/home/jbonofre/Talend/Talend-Studio-r104014-V5.3.1</commandline.path>
    </properties>

    <repositories>
        <repository>
            <id>local.archiva.snapshot</id>
            <name>Local Maven Archiva for Snapshots</name>
            <url>http://localhost:8082/archiva/repository/repo-snapshot/</url>
            <releases>
                <enabled>false</enabled>
            </releases>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>local.archiva.release</id>
            <name>Local Maven Archiva for Releases</name>
            <url>http://localhost:8082/archiva/repository/repo-release/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.example</groupId>
            <artifactId>ShowUnitTest</artifactId>
            <version>${project.version}</version>
            <scope>test</scope>
        </dependency>

        <!-- Talend dependencies -->
        <dependency>
            <groupId>org.talend</groupId>
            <artifactId>systemRoutines</artifactId>
            <version>${talend.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.talend</groupId>
            <artifactId>userBeans</artifactId>
            <version>${talend.version}</version>
            <scope>test</scope>
        </dependency>

        <!-- Camel dependencies -->
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-test-spring</artifactId>
            <version>${camel.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-jdk14</artifactId>
            <version>1.6.6</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

</project>

On this Maven POM, we can note:

  • We define the location of the Maven artifact repositories (Apache Archiva in my case) in the <repositories> element.
  • The first <dependency> is the route jar file itself.
  • We define the “Talend” dependencies, especially systemRoutines and userBeans.
  • Finally, we define the “Camel” dependencies: the Camel Test Kit itself, and a slf4j provider to have the log messages during the execution of the unit tests.

We are now ready to write the unit test itself. To do so, we create the src/test/java folder, and in this folder we directly create the unit test class. In my case, I create the ShowUnitTestTest class:

package test.showunittest_0_1;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

import java.io.IOException;

/**
 * Test on the ShowUnitTest routes
 */
public class ShowUnitTestTest extends CamelTestSupport {

    @Override
    public String isMockEndpoints() {
        return "*";
    }

    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        ShowUnitTest route = new ShowUnitTest();
        route.initUriMap();
        return route;
    }

    @Test
    public void testRegionRouting() throws Exception {
        MockEndpoint regionMock = getMockEndpoint("mock:vm:region");
        MockEndpoint zipcodeMock = getMockEndpoint("mock:vm:zipcode");

        // we expect to receive one message on the JMS queue:region, and no message on the JMS queue:zipcode
        regionMock.setExpectedMessageCount(1);
        zipcodeMock.setExpectedMessageCount(0);

        // send a message with the region header
        template.sendBodyAndHeader("vm:start", "Foobar", "type", "region");

        // check the assertion
        assertMockEndpointsSatisfied();
    }

    @Test
    public void testZipCodeRouting() throws Exception {
        MockEndpoint regionMock = getMockEndpoint("mock:vm:region");
        MockEndpoint zipcodeMock = getMockEndpoint("mock:vm:zipcode");

        regionMock.setExpectedMessageCount(0);
        zipcodeMock.setExpectedMessageCount(1);

        // send a message with the region header
        template.sendBodyAndHeader("vm:start", "Foobar", "type", "zipcode");

        // check the assertion
        assertMockEndpointsSatisfied();
    }

    @Test
    public void testNoHeaderRouting() throws Exception {
        MockEndpoint regionMock = getMockEndpoint("mock:vm:region");
        MockEndpoint zipcodeMock = getMockEndpoint("mock:vm:zipcode");

        regionMock.setExpectedMessageCount(0);
        zipcodeMock.setExpectedMessageCount(1);

        // send a message with the region header
        template.sendBody("vm:start", "Foobar");

        // check the assertion
        assertMockEndpointsSatisfied();
    }

    @Test
    public void testErrorHandler() throws Exception {
        MockEndpoint zipcodeMock = getMockEndpoint("mock:vm:zipcode");
        MockEndpoint errorhandlingMock = getMockEndpoint("mock:vm:errorhandling");

        // raise an exception at the cLog processor step
        zipcodeMock.whenAnyExchangeReceived(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception {
                throw new IOException("Test Error Handler");
            }
        });

        // the error handling route should have received a message
        errorhandlingMock.setExpectedMessageCount(1);

        // send a message, it should call the error handler
        template.sendBody("vm:start", "Foobar");

        // check the assertion
        assertMockEndpointsSatisfied();
    }

}

In this class, we tell Camel to mock any endpoint (by overriding the isMockEndpoints() method). To find the Camel URIs generated by the studio, you can switch to the source tab in the studio and take a look at the initUriMap() method: this method contains all the URIs of the route endpoints.

We also override the createRouteBuilder() method to load the route designed in the studio. To do it, we create the route object, call the initUriMap() method, and finally return this object.

Of course, we created four different tests:

  • the testRegionRouting() test exercises the route, and especially the content based router, when setting the header ‘type’ to ‘region’. We mock the vm:region and vm:zipcode endpoints, and we use the producer template to send a message to the vm:start endpoint.
  • the testZipCodeRouting() test exercises the route, and especially the content based router, when setting the header ‘type’ to ‘zipcode’.
  • the testNoHeaderRouting() test exercises the route, and especially the content based router, when the header ‘type’ is not set.
  • the testErrorHandler() test exercises the route, simulating an error to check that the error handler reacts correctly.

Special cases: JMS, context variables, cTalendJob,…

Depending on the components that you use, the Talend Studio manipulates the CamelContext for you. For instance, when you use the cJMS component, you have to create a cJMSConnectionFactory.

The Talend Studio generates the code to handle the CamelContext and “inject” the JMS connection factory into the Camel JMS component.

Unfortunately, it’s done in a private method, so it’s not callable directly from the test createRouteBuilder() method (as we do with the initUriMap() method).

The workaround is to create the CamelContext in the test and copy the code generated by the studio there. Here’s an example of how to use the “custom” JMS component (as the Studio does):

    @Override
    protected CamelContext createCamelContext() throws Exception {
        DefaultCamelContext camelContext = (DefaultCamelContext) super.createCamelContext();

        RouteName_Registry contextRegister = new RouteName_Registry(camelContext.getRegistry());
        camelContext.setRegistry(contextRegister);

        javax.jms.ConnectionFactory jmsConnectionFactory = new org.apache.activemq.ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        camelContext.addComponent("cJMSConnectionFactory1", org.apache.camel.component.jms.JmsComponent.jmsComponent(jmsConnectionFactory));

        return camelContext;
    }

Another typical use case is about the Talend context variables. Thanks to the Talend Studio, you can define context variables that you can use in any place of your route.

In the route definition (in the studio), you can create multiple contexts.

In the unit test, you can decide which context you want to use for the test. To do so, you can use the readContextValues() method when you instantiate the route:

    @Override
    public RouteBuilder createRouteBuilder() throws Exception {
        RouteToTestName route = new RouteToTestName();
        route.readContextValues("Default");
        route.initUriMap();
        return route;
    }

Another feature provided in Talend ESB is that you can call Data Integration jobs in your Camel routes. To do so, Talend ESB registers a Camel component with “talend:” as URI prefix.
You have to load this component in the test CamelContext:

        TalendComponent talendComponent = new TalendComponent();
        camelContext.addComponent("talend", talendComponent);

Complete test

To summarize, if we take a look at the required resources, we need two things.

The first thing is a Maven POM containing all the resources and artifacts required for the route execution. Here’s a complete example:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
 
  <modelVersion>4.0.0</modelVersion>

  <groupId>org.example</groupId>
  <artifactId>test</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>My Route Test</name>

  <properties>
    <talend.version>5.3.1</talend.version>
    <camel.version>2.10.4</camel.version>
  </properties>

  <dependencies>
    <!-- Route itself -->
    <dependency>
      <groupId>org.example</groupId>
      <artifactId>MyRoute</artifactId>
      <version>1.0-SNAPSHOT</version>
      <scope>test</scope>
    </dependency>
    <!-- Job possibly used in the route (via cTalendJob) -->
    <dependency>
      <groupId>org.example</groupId>
      <artifactId>MyRouteJob</artifactId>
      <version>1.0-SNAPSHOT</version>
      <scope>test</scope>
    </dependency>

    <!-- Camel components possibly used in the route -->
    <!-- camel-ftp -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-ftp</artifactId>
      <version>${camel.version}</version>
      <scope>test</scope>
    </dependency>
    <!-- camel-http -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-http</artifactId>
      <version>${camel.version}</version>
      <scope>test</scope>
    </dependency>
    <!-- camel-xmljson -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-xmljson</artifactId>
      <version>${camel.version}</version>
      <scope>test</scope>
    </dependency>
    <!-- camel-cxf -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-cxf</artifactId>
      <version>${camel.version}</version>
      <scope>test</scope>
    </dependency>
    <!-- camel-jms and dependencies -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-jms</artifactId>
      <version>${camel.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.geronimo.specs</groupId>
      <artifactId>geronimo-jms_1.1_spec</artifactId>
      <version>1.1.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.activemq</groupId>
      <artifactId>activemq-core</artifactId>
      <version>5.7.0</version>
      <scope>test</scope>
    </dependency>
    <!-- camel-mail and mock-javamail -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-mail</artifactId>
      <version>${camel.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.jvnet.mock-javamail</groupId>
      <artifactId>mock-javamail</artifactId>
      <version>1.7</version>
      <scope>test</scope>
      <exclusions>
        <exclusion>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

    <!-- Talend dependencies -->
    <dependency>
      <groupId>org.talend</groupId>
      <artifactId>systemRoutines</artifactId>
      <version>${talend.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.talend</groupId>
      <artifactId>userBeans</artifactId>
      <version>${talend.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.talend.camel</groupId>
      <artifactId>camel-talendjob</artifactId>
      <version>${talend.version}</version>
      <scope>test</scope>
    </dependency>

    <!-- Camel dependencies -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-test-spring</artifactId>
      <version>${camel.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-jdk14</artifactId>
      <version>1.6.6</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

</project>

The second resource is the unit test itself (in src/test/java). Here’s a complete example, including the registration of the “custom” JMS component, the Talend component, and some custom beans:

package main.myroute_1_0;

import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;
import org.talend.camel.TalendComponent;

import java.util.*;

public class MyRoute_Test extends CamelTestSupport {

    @Override
    public String isMockEndpoints() {
        return "cJMSConnectionFactory1:*";
    }

    @Override
    public RouteBuilder createRouteBuilder() throws Exception {
        MyRoute route = new MyRoute();
        route.readContextValues("Default");
        route.initUriMap();
        return route;
    }

    @Override
    public CamelContext createCamelContext() throws Exception {
        DefaultCamelContext camelContext = (DefaultCamelContext) super.createCamelContext();
        MyRoute_Registry contextRegister = new MyRoute_Registry(camelContext.getRegistry());

        // custom MyBean
        beans.MyBean myBean = new beans.MyBean();
        contextRegister.register("myBean", myBean);

        // CXF_PAYLOAD_HEADER_FILTER bean required by cxf endpoint generated by the Studio
        CxfConsumerSoapHeaderFilter cxfConsumerSoapHeaderFilter = new CxfConsumerSoapHeaderFilter();
        registry.register("CXF_PAYLOAD_HEADER_FILTER", cxfConsumerSoapHeaderFilter);

        camelContext.setRegistry(contextRegister);

        // "custom" JMS component as generated by the Studio
        javax.jms.ConnectionFactory jmsConnectionFactory = new  org.apache.activemq.ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        camelContext.addComponent("cJMSConnectionFactory1", org.apache.camel.component.jms.JmsComponent.jmsComponent(jmsConnectionFactory));

        // Talend component
        TalendComponent talendComponent = new TalendComponent();
        camelContext.addComponent("talend", talendComponent);        

        return camelContext;
    }

    @Test
    public void testRouteWithMyHeader() throws Exception {
        MockEndpoint queueMock = getMockEndpoint("mock:cJMSConnectionFactory1:queue:OUTPUT_QUEUE");

        queueMock.setMinimumExpectedMessageCount(1);

        String testHeader = "MyHeader";

        // construct the body
        List<String> body = new ArrayList<String>();
        body.add("foo");
        body.add("bar");

        Map<String, Object> camelHeaders = new HashMap<String, Object>();
        camelHeaders.put("MyHeader", testHeader);
        camelHeaders.put("CamelFileName", "/tmp/foobar.csv");
        template.sendBodyAndHeaders("cJMSConnectionFactory1:queue:INPUT_QUEUE", body, camelHeaders);

        assertMockEndpointsSatisfied();

        assertTrue(queueMock.getExchanges().get(0).getIn().getBody() instanceof List);
    }

    class CxfConsumerSoapHeaderFilter extends org.apache.camel.component.cxf.common.header.CxfHeaderFilterStrategy {
        public boolean applyFilterToCamelHeaders(String headerName, Object headerValue, org.apache.camel.Exchange exchange) {
            if (org.apache.cxf.headers.Header.HEADER_LIST.equals(headerName)) {
                return true;
            }
            return super.applyFilterToCamelHeaders(headerName, headerValue,
                    exchange);
        }

        public boolean applyFilterToExternalHeaders(String headerName, Object headerValue, org.apache.camel.Exchange exchange) {
            if (org.apache.cxf.headers.Header.HEADER_LIST.equals(headerName)) {
                return true;
            }
            return super.applyFilterToExternalHeaders(headerName, headerValue,
                    exchange);
        }
    }

}

Integration with Jenkins

Now, we can periodically execute these unit tests.

To do so, I installed Jenkins in a Tomcat, and set up a Jenkins job using the Maven POM:

(Jenkins job configuration screenshots)
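
As a minimal sketch, the Jenkins job simply checks out the project containing this POM and the unit tests, and runs the standard Maven test phase on a schedule (nightly, for instance); the build step boils down to something like:

mvn clean test

Jenkins can then pick up the Surefire test reports produced by the build to track results and regressions over time.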

Next step

Unit testing is the first step toward a complete continuous integration process using Talend.

In the next article, I will deal with the usage of the Talend commandline via Maven, and integrate this in Jenkins.

Talend ESB Continuous Integration, part 2: Maven and commandline


In the first part of the “Talend ESB Continuous Integration” series, we saw how to test the Camel routes created by the studio by leveraging the Camel Test Kit, and how to automate these tests using Jenkins.

The Maven POM that we wrote assumes that the route has already been deployed (on the local repository or on a remote repository like Apache Archiva).

But it’s not very elegant to have the Studio publish directly to the Archiva repository, especially from a continuous integration perspective.

In this second article, I will show how to use the Talend commandline with Maven, and do nightly builds using Jenkins.

Talend CommandLine

CommandLine introduction

The Talend commandline is the Talend Studio without the GUI. Thanks to the commandline, you can perform a lot of actions, like checking out, exporting a route, publishing a route, or executing a route. Actually, you can do everything except the design itself ;)

You can find commandline*.sh scripts directly in your Talend Studio installation, or you can launch the commandline using:

./Talend-Studio-linux-gtk-x86_64 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace

You can use the commandline in different modes:

  • Shell Mode:
    ./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace shell
    

    Using this mode, the commandline starts a shell. You can execute the actions directly in this shell. Type quit to exit from the commandline.

  • Script Mode:
    ./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace scriptFile /path/to/script
    

    Using this mode, the commandline starts and executes the actions (commands) listed in the script file.

  • Server Mode:
    ./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace startServer -p 8002
    

    Using this mode, the commandline starts a server. You can execute actions (commands) on the commandline using telnet (telnet localhost 8002). Type stopServer (with --force if needed) to exit from the commandline.

The help command provides a list of all commands that you can execute in the commandline.

The first action to perform in the commandline is to initialize the Talend repository (containing the metadata). The repository can be local or remote.

To init a local repository, simply execute the following command in the Talend commandline:

talend> initLocal

To init a remote repository, you have to use the initRemote command, providing the location of a Talend Administration Center:

talend> initRemote http://localhost:8080/org.talend.administrator

As the commandline performs the actions asynchronously, you can see all the commands executed by the commandline (and their status) using listCommand:

talend> listCommand -a
0:COMPLETED InitRemoteCommand initRemote

Once the repository is initialized, we can list the projects in the repository:

talend> listProject
CI (CI) java desc=[Continuous Integration Sample] storage=[Local]

If you don't have an existing project, you can create a new one:

talend> createProject -pn "CI" -pd "Continuous Integration Sample" -pl java -pa "jbonofre@talend.com"
talend> listCommand -a
1:COMPLETED CreateProjectCommand createProject -pn 'CI' -pd 'Continuous Integration Sample' -pl 'java' -pa 'jbonofre@talend.com'  name CI description Continuous Integration Sample language java author jbonofre@talend.com

Now, you can log on to a project:

talend> logonProject -pn CI -ul "jbonofre@talend.com" [-up "password"]
talend> listCommand -a
2:COMPLETED LogonProjectCommand log on CI

Once logged on to a project, you can list the routes, jobs, and services in this project:

talend> listRoute
talend> listJob
talend> listService

If you use a remote repository, once logged on to the project, all jobs, routes, and services are checked out from the SVN.

If you initialized a local repository, you may want to import items (jobs, routes, services) that you exported from a studio.

talend> importItem /home/jbonofre/MyRoute.zip
talend> listCommand -a
3:COMPLETED ImportItemsCommand

Now, you can see the items that you imported:

talend> listRoute
[Samples]
  MyRoute

Now, we can use the commandline to create the route kar file:

talend> exportRoute MyRoute -dd /home/jbonofre
talend> listCommand -a
4:COMPLETED ExportRouteServerCommand exportRoute 'MyRoute' -dd '/home/jbonofre'

We have the MyRoute.kar file created:

jbonofre@vostro:~$ ls -lh|grep -i kar
-rw-r--r--  1 jbonofre jbonofre 231K Oct 24 17:29 MyRoute.kar

Using the Talend Enterprise Edition, instead of creating the kar file locally, we can publish the route features (and all dependencies) directly to a Maven repository (Apache Archiva in my case):

talend> publishRoute MyRoute -pv 0.1.0-SNAPSHOT -g net.nanthrax -a MyRoute -r "http://localhost:8082/archiva/repository/repo-snapshot" -u tadmin -p foo 

We are going to use a combination of these commands in a commandline invoked by Maven.

Prepare the commandline

For our build, we use the script mode on the commandline.

To simplify, we create a commandline-script.sh in the Talend Studio installation directory. The commandline-script.sh contains:

./Talend-Studio-linux-gtk-x86_64 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace scriptFile $1

Publish script

We can now create a publish script called by commandline-script.sh. This script performs the following actions:

  1. Initialize the repository (local or remote, for this example, I use a remote repository)
  2. Log on to the project
  3. Publish a route

This script uses properties that we will filter with Maven using the resource plugin.

We place the script in the src/scripts/commandline folder, named publish:

initRemote ${tac.location}
logonProject -pn ${talend.project} -ul "${tac.user}" -up ${tac.password}
publishRoute ${project.artifactId} -r "${repo.snapshot}" -u ${repo.user} -p ${repo.password} -pv ${project.version} -g ${project.groupId} -a ${project.artifactId}

We are now ready to call the commandline using Maven.

Maven deploy using commandline

To call the commandline with Maven, we use the exec-maven-plugin from codehaus.

Our Maven POM does:

  1. Disable the “default” deploy plugin.
  2. Use the maven-resources-plugin to filter the commandline scripts.
  3. Execute the commandline-script.sh at the deploy phase, using the filtered script files.

Finally, the Maven POM looks like:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>net.nanthrax</groupId>
    <artifactId>MyRoute</artifactId>
    <version>0.1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>MyRoute</name>

    <properties>
        <talend.project>MAIN</talend.project>
        <tac.location>http://localhost:8080/org.talend.administrator</tac.location>
        <tac.user>jbonofre@talend.com</tac.user>
        <tac.password>foobar</tac.password>
        <commandline.location>/opt/talend/commandline</commandline.location>
        <commandline.executable>./commandline-script.sh</commandline.executable>
        <repo.release>http://localhost:8082/archiva/repository/repo-release/</repo.release>
        <repo.snapshot>http://localhost:8082/archiva/repository/repo-snapshot/</repo.snapshot>
        <repo.user>admin</repo.user>
        <repo.password>foobar</repo.password>
    </properties>

    <repositories>
        <repository>
            <id>archiva.repo.release</id>
            <name>Archiva Artifact Repository (release)</name>
            <url>${repo.release}</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>archiva.repo.snapshot</id>
            <name>Archiva Artifact Repository (snapshot)</name>
            <url>${repo.snapshot}</url>
            <releases>
                <enabled>false</enabled>
            </releases>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
    </repositories>

    <build>
        <resources>
            <resource>
                <directory>${project.basedir}/src/scripts/commandline</directory>
                <filtering>true</filtering>
                <includes>
                    <include>**/*</include>
                </includes>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-deploy-plugin</artifactId>
                <version>2.7</version>
                <configuration>
                    <skip>true</skip>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.2.1</version>
                <executions>
                    <execution>
                        <id>export</id>
                        <phase>deploy</phase>
                        <goals>
                            <goal>exec</goal>
                        </goals>
                        <configuration>
                            <executable>${commandline.executable}</executable>
                            <workingDirectory>${commandline.location}</workingDirectory>
                            <arguments>
                                <argument>${project.build.directory}/classes/publish</argument>
                            </arguments>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>

By leveraging the commandline, our Maven POM does:

  1. Check out the Talend metadata repository.
  2. Generate the code using the metadata.
  3. Compile the generated code.
  4. Create the artifact and the Karaf features XML, and deploy them to the Archiva repository.

Nightly builds using Jenkins

Now that we have our Maven POM, we can create the job in Jenkins. This way, we will have nightly builds including the latest changes performed by the developers.

Of course, we can “couple” this deploy phase with the unit tests that we did in the first article. We can merge both in the same Maven POM.

It’s interesting to note that we can leverage Maven features (like pluginManagement, profiles, etc), and especially the reactor (multiple Maven modules), allowing us to build a set of jobs, routes, or services in a row.

Some book reviews: Instant Apache Camel Messaging System, Learning Apache Karaf, and Instant Apache ServiceMix How-To


I’m pleased to have been a reviewer on several new books published by Packt:

I received a “hard” copy from Packt (thanks for that), and I’m now able to do the review.

Instant Apache Camel Messaging System, by Evgeniy Sharapov. Published by Packt publishing in September 2013

This book is a good introduction to Camel. It covers Camel fundamentals.

What is Apache Camel

It’s a quick introduction to Camel, in only four pages. We get a good overview of the Camel basics: what a component is, routes, contexts, EIPs, etc.

We have to take it for what it is: just a quick introduction. Don’t expect a lot of details about the Camel basics; it just provides a very high-level overview.

Installation

To be honest, I don’t like this part. It focuses mostly on using Maven with Camel: how to use Camel with Maven, integrate Camel in your IDE (Eclipse or IntelliJ), and use the archetypes.

I think it’s too restrictive. I would have preferred a quick listing of the different ways to install and use Camel: in a Karaf/ServiceMix container, in a Spring application context, in Tomcat or another application server, etc.

I’m afraid that some users will pick up “bad habits” reading this part.

Quickstart

This part goes a bit deeper into CamelContext and RouteBuilder. It’s a good chapter, but I would have focused a bit more on the DSLs (at least Java, Spring, and Blueprint).

The example used is interesting as it covers different components, transformations, predicates, and expressions.

It’s a really good introduction.

Conclusion

It’s a good introduction book, only for new Camel users. If you already know Camel, I’m afraid that you will be a bit disappointed and you won’t learn a lot.

If you are a rookie Camel rider and you want to move forward quickly, with a “ready to use” example, this book is a good one.

I would have expected more details on some key Camel features, especially the EIPs, and some real use cases combining EIPs with components.

Learning Apache Karaf, by Jamie Goodyear, Johan Edstrom, Heath Kesler. Published by Packt publishing in October 2013

I helped a lot on this book and I would like to congratulate my friends Jamie Goodyear, Johan Edstrom, and Heath Kesler. You did a great job, guys!

It’s the perfect book to start with Apache Karaf. All Karaf features are introduced, and more, like Karaf Cellar.

It’s based on Karaf 2.x (an update will be required for Karaf 3.0.0 as a lot of commands, etc changed).

The overall content is great for beginners. If you already know Karaf, you probably know most of the content; however, the book can be helpful to discover some features like Cellar.

Good job guys !

Instant Apache ServiceMix How-To, by Henryk Konsek. Published by Packt publishing in June 2013

This book is a good complement to the Camel and Karaf ones. Unfortunately, some chapters are a bit redundant: you will find the same information in both books.

However, as Apache ServiceMix is powered by Karaf, starting with Learning Apache Karaf makes sense and gives you details about the core of ServiceMix (the “ServiceMix Kernel”, which is the genesis of Karaf ;) ).

This book is a good jump start for ServiceMix.

I would have expected some details about the ServiceMix NMR (naming, for instance) and the different distributions.

ServiceMix is more than an umbrella project gathering Karaf, Camel, CXF, ActiveMQ, etc. It also provides some interesting features like Naming, etc. It would have been great to introduce this.

Conclusion

These three books are great for beginners, especially the Karaf one.

I was really glad and pleased to review these books. It’s really a tough job to write this kind of book, and we have to congratulate the authors for their work.

It’s a great work guys !

Coming in Karaf 3.0.0: RBAC support for OSGi services and console commands


In a previous post, we saw a new Karaf feature: support of user groups and Role-Based Access Control (RBAC) for the JMX layer.

We extended the RBAC support to the OSGi services and, as a side effect, to the console commands (as a console command is also an OSGi service).

RBAC for OSGi services

The JMX RBAC support uses an MBeanServerBuilder. The KarafMBeanServerBuilder “intercepts” the calls to the MBeans, checks the ACL definitions (in the etc/jmx.acl.*.cfg configuration files), and decides whether the call can be performed or not.

Regarding the RBAC support for OSGi services, we use a similar mechanism.

The Karaf Service Guard provides a service listener which intercepts the service calls and checks whether the call to the service can be performed or not.

The list of “secured” OSGi services is defined in the karaf.secured.services property in etc/system.properties (using an LDAP filter syntax).

By default, we only “intercept” (and so secure) the command OSGi services:

karaf.secured.services = (&(osgi.command.scope=*)(osgi.command.function=*))

The RBAC definitions themselves are stored in etc/org.apache.karaf.service.acl.*.cfg configuration files, similar to the etc/jmx.acl*.cfg configuration files used for JMX. The syntax in these files is the same.
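
As an illustration only (this is a sketch on my side, not taken from the post: the service.guard property name and the file name are assumptions, so double check the Karaf documentation for your version), securing a hypothetical org.example.MyService would mean adding its filter to karaf.secured.services in etc/system.properties, and then creating an etc/org.apache.karaf.service.acl.myservice.cfg file such as:

# hypothetical example: the filter selecting the service this ACL applies to
service.guard = (objectClass=org.example.MyService)
# operation = roles, same syntax as in the jmx.acl*.cfg files
doit = admin
getStatus = manager,viewer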

RBAC for console commands

As the console commands are actually OSGi services, the direct application of the OSGi services RBAC support is to secure the console commands.

By default, we secure only the OSGi services associated with the console commands (as shown earlier with the karaf.secured.services property).

The RBAC definitions for the console commands are stored in the etc/org.apache.karaf.command.acl.*.cfg configuration files.

You can define one configuration file per command scope. For instance, the etc/org.apache.karaf.command.acl.bundle.cfg configuration file defines the RBAC for the bundle:* commands.

For instance, in the etc/org.apache.karaf.command.acl.bundle.cfg configuration file, we can define:

install = admin
refresh[/.*[-][f].*/] = admin
refresh = manager
restart[/.*[-][f].*/] = admin
restart = manager
start[/.*[-][f].*/] = admin
start = manager
stop[/.*[-][f].*/] = admin
stop = manager
uninstall[/.*[-][f].*/] = admin
uninstall = manager
update[/.*[-][f].*/] = admin
update = manager
watch = admin

The format is command[option]=role.

For instance, in this file we:

  • limit bundle:install and bundle:watch commands only for the users with the admin role
  • limit bundle:refresh, bundle:restart, bundle:start, bundle:stop, bundle:uninstall, bundle:update commands with the -f option (meaning executing these commands for “system” bundles) only for the users with the admin role
  • all other commands (not matching the two previously defined rules) can be executed by the users with the manager role

By default, we define RBAC for:

  • bundle:* commands (in the etc/org.apache.karaf.command.acl.bundle.cfg configuration file)
  • config:* commands (in the etc/org.apache.karaf.command.acl.config.cfg configuration file)
  • feature:* commands (in the etc/org.apache.karaf.command.acl.feature.cfg configuration file)
  • jaas:* commands (in the etc/org.apache.karaf.command.acl.jaas.cfg configuration file)
  • kar:* commands (in the etc/org.apache.karaf.command.acl.kar.cfg configuration file)
  • shell:* commands (in the etc/org.apache.karaf.command.acl.shell.cfg configuration file)
  • system:* commands (in the etc/org.apache.karaf.command.acl.system.cfg configuration file)

These RBAC rules apply to both the “local” console and the remote SSH console.

As you don’t really log on to the “local” console, we have to define the “roles” used by the “local” console.

These “local” roles are defined in the karaf.local.roles property in the etc/system.properties configuration file:

karaf.local.roles = admin,manager,viewer

We can see that, when we use the “local” console, the “implicit local user” will have the admin, manager, and viewer roles.
