A few weeks back I switched my OS from Windows 7 to Linux Mint 15 due to unavoidable circumstances :-). While trying to install the "vim" editor from the shell, I got the following output for my command.
eranda@eranda ~ $ sudo apt-get install vim
[sudo] password for eranda:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package vim is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'vim' has no installation candidate
Really...?! "vim" has no installation candidate on Mint? I realized this had to be a problem with my APT setup. So as a first step I ran an update (as anyone else would) and got the following output.
eranda@eranda ~ $ sudo apt-get update
Hit http://dl.google.com stable Release.gpg
Hit http://dl.google.com stable Release
Hit http://dl.google.com stable/main amd64 Packages
Get:1 http://packages.linuxmint.com olivia Release.gpg [198 B]
Hit http://dl.google.com stable/main i386 Packages
Ign http://security.ubuntu.com olivia-security Release.gpg
Get:2 http://packages.linuxmint.com olivia Release [18.5 kB]
Ign http://security.ubuntu.com olivia-security Release
Ign http://archive.ubuntu.com olivia Release.gpg
Ign http://archive.canonical.com olivia Release.gpg
Get:3 http://packages.linuxmint.com olivia/main amd64 Packages [23.5 kB]
Ign http://archive.ubuntu.com olivia-updates Release.gpg
Ign http://archive.canonical.com olivia Release
Ign http://dl.google.com stable/main Translation-en_US
Ign http://dl.google.com stable/main Translation-en
Get:4 http://packages.linuxmint.com olivia/upstream amd64 Packages [9,249 B]
Ign http://archive.ubuntu.com olivia Release
Get:5 http://packages.linuxmint.com olivia/import amd64 Packages [39.3 kB]
Ign http://archive.ubuntu.com olivia-updates Release
Get:6 http://packages.linuxmint.com olivia/main i386 Packages [23.5 kB]
Get:7 http://packages.linuxmint.com olivia/upstream i386 Packages [9,237 B]
Get:8 http://packages.linuxmint.com olivia/import i386 Packages [40.2 kB]
Err http://archive.canonical.com olivia/partner amd64 Packages
404 Not Found [IP: 91.189.92.191 80]
Err http://archive.canonical.com olivia/partner i386 Packages
404 Not Found [IP: 91.189.92.191 80]
Ign http://packages.linuxmint.com olivia/import Translation-en_US
Ign http://packages.linuxmint.com olivia/import Translation-en
Ign http://packages.linuxmint.com olivia/main Translation-en_US
Ign http://packages.linuxmint.com olivia/main Translation-en
Ign http://archive.canonical.com olivia/partner Translation-en_US
Ign http://packages.linuxmint.com olivia/upstream Translation-en_US
Ign http://packages.linuxmint.com olivia/upstream Translation-en
Ign http://archive.canonical.com olivia/partner Translation-en
Err http://security.ubuntu.com olivia-security/main amd64 Packages
404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com olivia-security/restricted amd64 Packages
404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com olivia-security/universe amd64 Packages
404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com olivia-security/multiverse amd64 Packages
404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com olivia-security/main i386 Packages
404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com olivia-security/restricted i386 Packages
404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com olivia-security/universe i386 Packages
404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com olivia-security/multiverse i386 Packages
404 Not Found [IP: 91.189.91.15 80]
Ign http://security.ubuntu.com olivia-security/main Translation-en_US
Ign http://security.ubuntu.com olivia-security/main Translation-en
Ign http://security.ubuntu.com olivia-security/multiverse Translation-en_US
Ign http://security.ubuntu.com olivia-security/multiverse Translation-en
Ign http://security.ubuntu.com olivia-security/restricted Translation-en_US
Ign http://security.ubuntu.com olivia-security/restricted Translation-en
Ign http://security.ubuntu.com olivia-security/universe Translation-en_US
Ign http://security.ubuntu.com olivia-security/universe Translation-en
Err http://archive.ubuntu.com olivia/main amd64 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia/restricted amd64 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia/universe amd64 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia/multiverse amd64 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia/main i386 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia/restricted i386 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia/universe i386 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia/multiverse i386 Packages
404 Not Found [IP: 91.189.92.202 80]
Ign http://archive.ubuntu.com olivia/main Translation-en_US
Ign http://archive.ubuntu.com olivia/main Translation-en
Ign http://archive.ubuntu.com olivia/multiverse Translation-en_US
Ign http://archive.ubuntu.com olivia/multiverse Translation-en
Ign http://archive.ubuntu.com olivia/restricted Translation-en_US
Ign http://archive.ubuntu.com olivia/restricted Translation-en
Ign http://archive.ubuntu.com olivia/universe Translation-en_US
Ign http://archive.ubuntu.com olivia/universe Translation-en
Err http://archive.ubuntu.com olivia-updates/main amd64 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia-updates/restricted amd64 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia-updates/universe amd64 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia-updates/multiverse amd64 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia-updates/main i386 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia-updates/restricted i386 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia-updates/universe i386 Packages
404 Not Found [IP: 91.189.92.202 80]
Err http://archive.ubuntu.com olivia-updates/multiverse i386 Packages
404 Not Found [IP: 91.189.92.202 80]
Ign http://archive.ubuntu.com olivia-updates/main Translation-en_US
Ign http://archive.ubuntu.com olivia-updates/main Translation-en
Ign http://archive.ubuntu.com olivia-updates/multiverse Translation-en_US
Ign http://archive.ubuntu.com olivia-updates/multiverse Translation-en
Ign http://archive.ubuntu.com olivia-updates/restricted Translation-en_US
Ign http://archive.ubuntu.com olivia-updates/restricted Translation-en
Ign http://archive.ubuntu.com olivia-updates/universe Translation-en_US
Ign http://archive.ubuntu.com olivia-updates/universe Translation-en
Fetched 164 kB in 26s (6,152 B/s)
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/olivia-security/main/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/olivia-security/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/olivia-security/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/olivia-security/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/olivia-security/main/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/olivia-security/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/olivia-security/universe/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/olivia-security/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://archive.canonical.com/ubuntu/dists/olivia/partner/binary-amd64/Packages 404 Not Found [IP: 91.189.92.191 80]
W: Failed to fetch http://archive.canonical.com/ubuntu/dists/olivia/partner/binary-i386/Packages 404 Not Found [IP: 91.189.92.191 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia-updates/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia-updates/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia-updates/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia-updates/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia-updates/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia-updates/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia-updates/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/olivia-updates/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
E: Some index files failed to download. They have been ignored, or old ones used instead.
As the output says, the problem should be in my APT source list, whose content was as below.
Active apt sources in file: /etc/apt/sources.list.d/official-package-repositories.list
deb http://packages.linuxmint.com olivia main upstream import backport
deb http://archive.ubuntu.com/ubuntu olivia main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu olivia-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu/ olivia-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ olivia partner
So I looked further for a solution and found the Linux Mint bug "/etc/apt/source.list specifies "olivia" rather than "raring" for all sources: Mint 15 Cinnamon 64-bit No Codecs ISO install". As specified in the bug report, I changed my source list as below.
Active apt sources in file: /etc/apt/sources.list.d/official-package-repositories.list
deb http://packages.linuxmint.com olivia main upstream import backport
deb http://archive.ubuntu.com/ubuntu raring main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu raring-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu/ raring-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ raring partner
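The codename swap above can also be scripted with sed. The following is a sketch demonstrated on a copy in /tmp; on a real Mint 15 install you would point it at /etc/apt/sources.list.d/official-package-repositories.list and run the sed command with sudo.

```shell
# Demonstrate the codename swap on a copy of the sources file in /tmp.
# On the real system, set "$f" to
# /etc/apt/sources.list.d/official-package-repositories.list and use sudo.
f=/tmp/official-package-repositories.list
cat > "$f" <<'EOF'
deb http://packages.linuxmint.com olivia main upstream import backport
deb http://archive.ubuntu.com/ubuntu olivia main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu olivia-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu/ olivia-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ olivia partner
EOF
# Replace the codename only on the Ubuntu/Canonical lines;
# the Mint repository line must keep "olivia".
sed -i -e '/ubuntu/ s/olivia/raring/' -e '/canonical/ s/olivia/raring/' "$f"
cat "$f"
```

After editing the real file, run sudo apt-get update to refresh the package lists.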
After this change I was able to install the usual Linux tools through APT. Hope this helps you as well.
Wednesday, December 25, 2013
Thursday, October 24, 2013
WSO2 Governance Registry 4.6.0 Released...!!!
WSO2 Governance Registry 4.6.0, a comprehensive enterprise governance registry and repository released under the Apache License v2, is out. Compared to the previous release, WSO2 Governance Registry 4.5.3, it comes with a whole lot of new governance-related features and is claimed to be a stable release. As per the 4.6.0 release note, the new features are as follows.
- First-class support for WADL
- REST API for Registry
- CMIS Specification Support
- Enhanced RXT functionalities
- Notification for Approvals
- Asset models for ESB
- Lifecycle state transition through Governance API
- Offline WSDL Validation
- Sample Data Populator
- RXT Lifecycle Workflow Integration
- LifeCycle in RXT Definition
- Pagination for Registry
- Enhanced UDDI Support
- Registry Check In-client Improvements
Wednesday, October 16, 2013
Minimum jars needed to run a WS-API client against WSO2 Governance Registry 4.5.3
We can use WSO2 Governance Registry's WS-API to read, write, delete, and edit resources and artifacts. Normally we run the "ant run" command inside $GREG_HOME/lib and point to $GREG_HOME/repository/lib as the client library. But $GREG_HOME/repository/lib also contains additional jars used by other API clients and the registry checkin client.
Here is the list of the minimal jars needed to invoke the WS-API.
- axiom_1.2.11-wso2v3.jar
- axis2_1.6.1-wso2v7.jar
- commons-codec_1.4.0-wso2v1.jar
- commons-httpclient_3.1.0-wso2v2.jar
- httpcore_4.1.0-wso2v1.jar
- neethi_2.0.4-wso2v4.jar
- org.wso2.carbon.authenticator.stub_4.0.0.jar
- org.wso2.carbon.base_4.0.0.jar
- org.wso2.carbon.core.common_4.0.0.jar
- org.wso2.carbon.governance.api_4.0.5.jar
- org.wso2.carbon.logging_4.0.0.jar
- org.wso2.carbon.registry.api_4.0.0.jar
- org.wso2.carbon.registry.core_4.0.5.jar
- org.wso2.carbon.registry.ws.client_4.0.2.jar
- org.wso2.carbon.registry.ws.stub_4.0.0.jar
- org.wso2.carbon.user.api_4.0.3.jar
- org.wso2.carbon.user.core_4.0.5.jar
- org.wso2.securevault_1.0.0-wso2v2.jar
- woden_1.0.0.M8-wso2v1.jar
- wsdl4j_1.6.2-wso2v4.jar
- XmlSchema_1.4.7-wso2v2.jar
Monday, July 15, 2013
[Java] [Performance] String Concatenation then Replace or Replace then Concatenation: Which is faster?
This post is about Java performance: an answer to a where, which, and what code to use problem. Let me start by introducing the problem.
When coding in Java we routinely use string concatenation and string replacement. But when a solution needs both, we rarely think about the order in which to execute them to maximize performance. So the question is: should we concatenate first or replace first for better performance? (Please note that this is not valid for all cases where you need to concatenate and replace.)
So I came up with a method to measure the performance in each case.
Case 1 - Concatenation then Replace
Here is the method I used to measure the performance. k is the number of concatenations.
public long testConcatReplace(int k) {
    long t = new Date().getTime();
    for (int j = 0; j < 10000; j++) {
        String str = "I am the best";
        StringBuffer s = new StringBuffer(str);
        for (int i = 0; i < k; i++) {
            s = s.append(s);
        }
        s.toString().replaceAll(" ", "");
    }
    return new Date().getTime() - t;
}
Here is the average time value for executing this method for different k values.
k | time (ms) |
---|---|
1 | 11 |
2 | 25 |
3 | 42 |
4 | 60 |
5 | 148 |
6 | 231 |
7 | 565 |
8 | 1001 |
Case 2 - Replace then Concatenation
Here is the method I used to measure the performance. k is the number of concatenations.
public long testReplaceConcat(int k) {
    long t = new Date().getTime();
    for (int j = 0; j < 10000; j++) {
        String str = "I am the best";
        StringBuffer s = new StringBuffer(str.replaceAll(" ", ""));
        for (int i = 0; i < k; i++) {
            s = s.append(str.replaceAll(" ", ""));
        }
    }
    return new Date().getTime() - t;
}
Here is the average time value for executing this method for different k values.
k | time (ms) |
---|---|
1 | 20 |
2 | 26 |
3 | 41 |
4 | 47 |
5 | 57 |
6 | 67 |
7 | 84 |
8 | 92 |
Here is the comparison of both results in a graph.
Observation:
String replacement is an expensive operation, and its cost grows with the length of the string; since the string doubles on every concatenation in Case 1, the total time grows exponentially with k.
Concatenation is also somewhat expensive, but the time to append any two strings is almost constant.
Conclusion:
You can improve the string-handling performance of your Java code by executing string replacements before concatenation.
Environment:
OS - Mint Linux 12
CPU - Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz
RAM - 8 GB
Sunday, June 30, 2013
WSO2 Governance Registry Checkin client - Dump Registry Resources To A File System
In WSO2 Governance Registry you can dump the registry content into a file system or a single file, which you can later use to restore the registry to that snapshot. In addition, when you dump into a file system you can make changes to it and commit them back. Here I am talking about how you can do that.
Dump the registry resources into a file system.
Command -
Linux : sh checkin-client.sh co https://localhost:9443/registry/ -u admin -p admin
Windows : checkin-client.bat co https://localhost:9443/registry/ -u admin -p admin
In the following content I will be describing the additional functionalities we introduced after Governance Registry version 4.5.3.
1. Add - This adds a new resource on the client side; the addition is reflected in the registry when committing. Earlier versions of the checkin client had no such option, and whatever was placed in the checked-out location was committed. Now a resource must be added explicitly to be reflected in the registry.
Command -
Linux - sh checkin-client.sh add x.xml
Windows - checkin-client.bat add x.xml
If you want to specify the mediatype when adding, use the following commands:
Linux - sh checkin-client.sh add x.xml -mediatype application/policy+xml
Windows - checkin-client.bat add x.xml -mediatype application/policy+xml
[NOTE] Even though we specify the mediatype, the resource won't go through the registry handlers when committing to the registry.
2. Delete - This deletes a resource that was checked out using the checkin client. The deletion is reflected in the registry when committing, and the local resource is not deleted until then (to allow reverting). Earlier, a resource could be deleted with an OS delete, but in the current version the resource is not deleted from the registry until it is deleted using the delete command.
Command -
Linux - sh checkin-client.sh delete x.xml
Windows - checkin-client.bat delete x.xml
3. Add/Update property - Using this command you can add or update a resource property. You can set any number of properties in one command as key-value pairs.
Command -
Linux - sh checkin-client.sh propset x.xml property1 value1 property2 value2
Windows - checkin-client.bat propset x.xml property1 value1 property2 value2
4. Delete Property - Using this command you can delete resource properties. You can delete any number of properties in one command by giving the set of keys.
Command -
Linux - sh checkin-client.sh propdelete x.xml property1 property2
Windows - checkin-client.bat propdelete x.xml property1 property2
Check in the changes
Command-
Linux - sh checkin-client.sh ci -u admin -p admin
Windows - checkin-client.bat ci -u admin -p admin
[NOTE] In the current model, checking in does not restore all resources to the registry; only local resources with changes are checked in. A change in content or a property, a resource addition, or a resource deletion is considered a change, and only those changed resources are checked in to the registry.
There are still certain limitations, which we are working on overcoming; they will be fixed in the next release.
Monday, June 17, 2013
Configure WSO2 API Manager to send responses to a client which does not support chunk transfer encoding
Some client applications that invoke APIs through API Manager may not support chunked transfer encoding, and some client applications may need the Content-Length HTTP header. Since the API Manager gateway sends responses with chunked transfer encoding by default, in either of these cases you need to disable chunked transfer encoding in the response. In this post I am going to show you how to do that.
When creating an API, API Manager automatically creates a proxy for it in the API gateway which accepts requests on behalf of the actual API. The configuration of that proxy can be found in $CARBON_HOME/repository/deployment/server/synapse-configs/default/api/${PUBLISHER}--${API_NAME}_v${API_VERSION}.xml. In my example I used an API named MovieAPI, version 1.0.0, published by the publisher eranda, so my proxy configuration file is $CARBON_HOME/repository/deployment/server/synapse-configs/default/api/eranda--MovieAPI_v1.0.0.xml.
If you check the configuration you will see that it is similar to an ESB proxy configuration. To disable chunked transfer encoding, you need to add the relevant properties to the out sequence.
Additionally, if your backend API does not support chunked encoding, you need to add the same two properties to the inSequence.
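The configuration snippet itself did not survive in this copy of the post. As a sketch based on the standard Synapse mediation properties WSO2 documents for this purpose (assuming these are the two properties the post refers to), the outSequence would set DISABLE_CHUNKING and FORCE_HTTP_1.0 in the axis2 scope:

```xml
<outSequence>
    <!-- Buffer the response and send a Content-Length header
         instead of chunked transfer encoding -->
    <property name="DISABLE_CHUNKING" value="true" scope="axis2"/>
    <!-- Optionally force HTTP/1.0, which does not support chunking at all -->
    <property name="FORCE_HTTP_1.0" value="true" scope="axis2"/>
    <send/>
</outSequence>
```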
Thursday, June 6, 2013
How to set port offset of WSO2 API Manager 1.4.0
WSO2 API Manager is a complete solution for managing API through its lifecycle, powered by WSO2 Enterprise Service Bus, WSO2 Identity Server, WSO2 Governance Registry and WSO2 Business Activity Monitor.
Today I am going to talk about changing the port offset of WSO2 API Manager. My requirement here is to start two servers on the same machine without clashing with each other, but anyone who needs a port offset in WSO2 API Manager can follow this.
WSO2 API Manager is a bit different from the other WSO2 servers when setting the offset, since it needs a little more work; in other WSO2 servers we only have to set the offset value in carbon.xml. The following steps show you how to set a port offset in WSO2 API Manager.
- Set the offset in $CARBON_HOME/repository/conf/carbon.xml to any value you want:
<Offset>2</Offset>
- Set the Thrift client and server ports in $CARBON_HOME/repository/conf/api-manager.xml to (10397 + port offset), as shown below for offset 2.
<KeyValidatorClientType>ThriftClient</KeyValidatorClientType>
<ThriftClientPort>10399</ThriftClientPort>
<ThriftClientConnectionTimeOut>10000</ThriftClientConnectionTimeOut>
<ThriftServerPort>10399</ThriftServerPort>
<EnableThriftServer>true</EnableThriftServer>
- This is the most important configuration; without it, the API Store will never work with a port offset. Now change the endpoint configuration in the following files to suit your port offset:
- $CARBON_HOME/repository/deployment/server/synapse-configs/default/api/_LoginAPI_.xml - This configuration handles user login to the API Store.
- $CARBON_HOME/repository/deployment/server/synapse-configs/default/api/_TokenAPI_.xml - This configuration handles generating an OAuth token.
- $CARBON_HOME/repository/deployment/server/synapse-configs/default/api/_AuthorizeAPI_.xml - This configuration handles authorizing an OAuth token.
Here is a sample configuration for _AuthorizeAPI_.xml where my offset is 2.
<endpoint>
    <address uri="https://localhost:9445/oauth2/authorize"/>
</endpoint>
Now you are done, and your WSO2 API Manager is ready to start with the port offset.
Tuesday, May 28, 2013
ERROR 1005 (HY000): Can't create table : How to Tackle
It's a bit hard to identify referential integrity issues in the MySQL InnoDB engine by looking at the error alone. It just throws the abstract error "Can't create table". But when altering a table or adding a table with several foreign key constraints, we are more prone to such errors. So here I am going to explain a method you can use to identify the error without going through all the table syntax.
It's very simple. When you execute a bad foreign key SQL query, the InnoDB engine automatically records it in its activity log. So what you need to do is ask the InnoDB engine to show its activities. Here is the SQL statement to dump the InnoDB engine status.
mysql> SHOW ENGINE INNODB STATUS;
When you execute it you will get a big output. Go through it and find the section "LATEST FOREIGN KEY ERROR", which has a clear description of the error. Here is a sample output, extracted from an error I faced.
----------------------------------------------
LATEST FOREIGN KEY ERROR
----------------------------------------------
130529 3:55:45 Error in foreign key constraint of table mydatabase/mytable:
FOREIGN KEY (`CountryName` )
REFERENCES `mydatabase`.`country` (`countryName` )
ON DELETE NO ACTION
ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=utf8:
Cannot resolve table name close to:
(`Country` )
ON DELETE NO ACTION
ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=utf8
As the description above says, the fault is that the InnoDB engine cannot find a table: the country table, which has a countryName column referenced by one of the columns in mytable, does not exist in the database. Likewise, you can get an idea of any such error from the output of that command and then go for an appropriate solution.
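Since the status dump runs to hundreds of lines, it can help to slice out just this section automatically. The following is a sketch using awk, demonstrated on a captured sample file; on a live server you would pipe mysql -e "SHOW ENGINE INNODB STATUS\G" into the same awk command (the "TRANSACTIONS" end marker is the name of the section that follows in typical output, and the sample content is hypothetical).

```shell
# Extract the "LATEST FOREIGN KEY ERROR" section from an InnoDB status dump.
# Demonstrated on a captured sample; on a live server, replace the sample
# file with the output of:  mysql -e "SHOW ENGINE INNODB STATUS\G"
cat > /tmp/innodb-status.txt <<'EOF'
=====================================
INNODB MONITOR OUTPUT
=====================================
------------------------
LATEST FOREIGN KEY ERROR
------------------------
130529  3:55:45 Error in foreign key constraint of table mydatabase/mytable:
Cannot resolve table name close to:
(`Country` )
------------
TRANSACTIONS
------------
Trx id counter 1000
EOF
# Print from the section title up to (but not including) the next section name.
awk '/LATEST FOREIGN KEY ERROR/{grab=1} grab && /^TRANSACTIONS$/{exit} grab' \
    /tmp/innodb-status.txt > /tmp/fk-error.txt
cat /tmp/fk-error.txt
```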
Hope this helps you tackle your referential integrity issues.
Sunday, May 19, 2013
Exposing JMS queue via JAX-RS Service
Today I am going to show you how to expose a JMS queue as a JAX-RS service. I am using a few tools for this.
- WSO2 Developer Studio 3.0.0
- WSO2 Application Server
- Apache ActiveMQ
Here I assume Apache ActiveMQ is up and running. You can start ActiveMQ with its management console (using the command ./activemq console) and view the queue details from the URL http://localhost:8161/admin/queues.jsp. Here is a screenshot of the queue list in my ActiveMQ server. I have one queue, SampleStockQuoteProvider, with three messages in it.
You can view the content and other metadata of those messages by clicking the queue name. The following figure shows an example.
Now let's start creating the JAX-RS service. I am using WSO2 Developer Studio, which you can download from here.
Unzip the pack and start it as you would a normal Eclipse IDE.
Then click on Developer Studio ---> Open Dashboard to get the Developer Studio dashboard. You can see a dashboard like below.
Now click on JAX-RS Service Project and create a project by filling the details as below.
Developer Studio will create a template project for you. Since we are going to access ActiveMQ within this service we need to add some dependencies.
To do that open the pom.xml file and add the following to the dependencies.
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-core</artifactId>
    <version>5.7.0</version>
</dependency>
<dependency>
    <groupId>org.apache.geronimo.specs</groupId>
    <artifactId>geronimo-jms_1.1_spec</artifactId>
    <version>1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.ws.commons.axiom</groupId>
    <artifactId>axiom-api</artifactId>
    <version>1.2.14</version>
</dependency>
Now we can edit the DequeueService to consume the JMS queue in Apache ActiveMQ. Following is the sample Java code I wrote.
package org.wso2.carbon.sample.dequeu.rest.service;

import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.*;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.xml.stream.XMLStreamException;

@Path("/dequeue/")
public class DequeueService {

    @GET
    @Path("{queueName}")
    @Produces(MediaType.APPLICATION_XML)
    public String dequeue(@PathParam("queueName") String queueName) throws Exception {
        System.out.println(queueName);
        Connection conn = null;
        Session session = null;
        MessageConsumer consumer = null;
        try {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            conn = factory.createConnection();
            conn.start();
            session = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            Destination queue = session.createQueue(queueName);
            consumer = session.createConsumer(queue);
            Message message = consumer.receive(1000);
            if (message == null) {
                throw new Exception("No message available in the queue");
            }
            message.acknowledge();
            return ((TextMessage) message).getText();
        } catch (JMSException e) {
            throw new Exception("Dequeuing not succeeded", e);
        } catch (XMLStreamException e) {
            throw new Exception("Unexpected message found in the queue", e);
        } finally {
            if (consumer != null) {
                consumer.close();
            }
            if (session != null) {
                session.close();
            }
            if (conn != null) {
                conn.close();
            }
        }
    }
}

[NOTE] Here I use XML as the return mediatype since the messages in the queue I am going to pull have XML content. Instead you can use any mediatype listed in javax.ws.rs.core.MediaType.
Now go to the project home and build it with Maven using the command mvn clean install. It will create a web archive (WAR file) in the target folder. Next we need to deploy it in an application server; here I am going to use the WSO2 Application Server. Before deploying it, we need to add the dependency jars. Add the following jars (you can find them inside the ActiveMQ library) to $CARBON_HOME/repository/components/lib. There is no need to restart the Application Server; these jars will be hot deployed.
- activemq-core-5.7.0.jar
- geronimo-j2ee-management_1.1_spec-1.0.1.jar
- geronimo-jms_1.1_spec-1.1.1.jar
- hawtbuf-1.9.jar
Now log into the WSO2 Application Server Management Console at the default URL https://localhost:9443/carbon using the default username and password, admin/admin. Then add the web application we just created. It will take 2-5 seconds, and then it will appear in the deployed web application list as below.
Now the dequeuing JAX-RS service is ready to use. When you send an HTTP GET request to http://localhost:9768/JMSDequeueService/services/dequeue_service/dequeue/SampleStockQuoteProvider, you will get the top message of the SampleStockQuoteProvider queue.
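As a quick smoke test, the service can also be invoked from plain Java. The following is a minimal sketch of my own (the DequeueClient class and buildUrl helper are not part of the service); it assumes the same base URL as above and a running server.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class DequeueClient {

    // Builds the request URL for a given queue name, mirroring the
    // @Path("/dequeue/{queueName}") mapping of the service.
    static String buildUrl(String base, String queueName) {
        return base + "/dequeue/" + queueName;
    }

    public static void main(String[] args) throws Exception {
        String url = buildUrl(
                "http://localhost:9768/JMSDequeueService/services/dequeue_service",
                "SampleStockQuoteProvider");
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // the dequeued XML message
            }
        }
    }
}
```

If the queue is empty, the service throws an exception, so the client will receive an error response instead of a message body.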
Friday, May 17, 2013
Dequeue from a JMS queue and send as a SOAP message using WSO2 ESB
When you are providing a solution for an enterprise, you may need to dequeue from a JMS queue and send the message to a SOAP endpoint. For this solution, the service operation we are going to invoke should be one-way. Here I am explaining how you can do that easily using WSO2 ESB; it is all just a matter of creating a proxy service. You can download WSO2 ESB from here.
Before starting the WSO2 ESB you need to enable the JMS listener by setting the configuration details of the JMSListener in $WSO2ESB_HOME/repository/conf/axis2/axis2.xml. WSO2 ESB supports most JMS message brokers, and it is just a matter of simple configuration. Here are some configuration document links.
Start the WSO2 ESB by executing the following command from the command line inside $WSO2ESB_HOME/bin.
- Windows - wso2server.bat
- Linux - sh wso2server.sh
Now let's see a short diagram of what we are going to do.
Here the red lines show the message flow.
Now let's see how to create the proxy service to achieve this functionality. First go to the WSO2 ESB Management Console, which you can access using the default URL https://localhost:9443/carbon.
You need to log in to the Management Console; the default username and password are admin and admin.
Now select Main ==> Manage ==> Services ==> Add Proxy Service ==> Custom Proxy as shown in the figure below.
On the first page of the proxy configuration, add a proxy name, set the transport to jms, and add the following Service Parameters, categorized under General Settings.
transport.jms.ContentType :
<rules>
<jmsProperty>contentType</jmsProperty>
<default>application/xml</default>
</rules>
transport.jms.ConcurrentConsumers : 1
transport.jms.ConnectionFactory : myTopicConnectionFactory
transport.jms.SessionTransacted : false
transport.jms.Destination : myQueue
After filling it in, the page will look like the following figure.
Now you are done with the first page. Click Next to go to the next page.
Here you need to configure the inSequence to send the SOAP message to your service endpoint. We are going to add it inline: click Edit under Define Inline in Define In Sequence.
Then add a Log mediator by clicking Add Child ==> Core ==> Log and set the log level to full, which will log the messages coming in to this proxy endpoint.
Then add the Send mediator, which invokes the endpoint of our service. This can be done by clicking Add Child ==> Core ==> Send. Then edit the Send mediator configuration by defining the inline endpoint, selecting the address endpoint type, and adding your service endpoint there.
Now it will look like the following figure.
Save and Close the In sequence, then click Next.
Then click Finish and we are done with the configuration.
So if we add a message to myQueue, this proxy will dequeue it and send it to your endpoint.
Here is my full configuration in XML format; my service endpoint is http://localhost:7611/sample/ep1.
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="jmsReaderProxy"
       transports="jms"
       statistics="disable"
       trace="disable"
       startOnLoad="true">
   <target>
      <inSequence>
         <log level="full"/>
         <send>
            <endpoint>
               <address uri="http://localhost:7611/sample/ep1"/>
            </endpoint>
         </send>
      </inSequence>
   </target>
   <parameter name="transport.jms.ContentType">
      <rules>
         <jmsProperty>contentType</jmsProperty>
         <default>application/xml</default>
      </rules>
   </parameter>
   <parameter name="transport.jms.ConcurrentConsumers">1</parameter>
   <parameter name="transport.jms.ConnectionFactory">myTopicConnectionFactory</parameter>
   <parameter name="transport.jms.SessionTransacted">false</parameter>
   <parameter name="transport.jms.Destination">myQueue</parameter>
   <parameter name="transport.jms.MaxConcurrentConsumers">1</parameter>
   <description/>
</proxy>
Exception sending context initialized event to listener instance of class org.springframework.web.util.Log4jConfigListener
There can be special cases where you are not able to deploy a web application in your application server because of the $subject exception, even though it worked in Apache Tomcat. You may conclude that it is a bug in your web app or in the application server. Fortunately it is neither; it is a problem with your configuration. I will explain the solution later, but first let's see how I got to it.
At first I had no idea about this, so I dug in a bit further for a solution. As the exception says, it should be a problem with the initialization of log4j. Hmm... what could be the cause of this? In my case the web application was deployed successfully in Apache Tomcat, but not in WSO2 Application Server. Knowing that WSO2 Application Server is built on top of Apache Tomcat, it had to be something WSO2 Application Server does to the web application archive before handing it to Apache Tomcat. So what could it be? Oh... it is exploded...
Now I had found the difference between deploying in Apache Tomcat and deploying in the WSO2 server, so I needed to find out how exploding the archive causes the above problem. I searched and found the following.
In a web archive (.war), Tomcat expects everything to be in the same place. If it is exploded, then all the files have their own paths. Tomcat itself explodes the web archive, but before that it reads some of the configuration from the war file, such as log4j listeners.
When the war file has the following configuration in its web.xml,
<context-param>
    <param-name>log4jConfigLocation</param-name>
    <param-value>/WEB-INF/log4j.xml</param-value>
</context-param>
Apache Tomcat expects log4j.xml at the given location inside the war file. But since the war was exploded, it could not find log4j.xml, so it fails.
What we need to do is change the configuration so that it will read from the file system. For that, change the above configuration to the following.
<context-param>
    <param-name>log4jExposeWebAppRoot</param-name>
    <param-value>false</param-value>
</context-param>
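For context, the listener named in the exception is usually registered in the same web.xml. This is a sketch of the typical Spring registration; your file may look slightly different:

```xml
<listener>
    <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
</listener>
```

It is this listener that reads log4jConfigLocation at startup, which is why the failure appears during context initialization.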
My problem is solved... I hope yours is too...
Saturday, March 16, 2013
Lifecycle transition using WSO2 Governance Registry - Governance API
[NOTE] This article is not recommended for a beginner with WSO2 Governance Registry. If you are a beginner, you can refer to the WSO2 Governance Registry documentation and come back to this article.
In SOA governance, lifecycle management of a SOA artifact plays a major role. In WSO2 Governance Registry 4.6.0, which is the upcoming release, there are several ways of manipulating the lifecycle of an artifact.
- Using Governance Registry Management console
- Using set of administrative services
- LifeCycleManagementService
- CustomLifecyclesChecklistAdminService
- Governance API
Here I am discussing lifecycle manipulation using the Governance API, which is introduced in this release. For all the details related to the Governance API, you can visit the WSO2 Governance Registry documentation here.
Here I use the term artifact for any SOA governance artifact defined in WSO2 Governance Registry, e.g. Service, WSDL, Policy, Schema, WADL, URI, or API, which are there by default, and any other generic artifact defined by the users.
You can use the Governance API to invoke the following operations:
- Associating a lifecycle with an artifact
- Checking associated lifecycle name of an artifact
- Checking the current state of the lifecycle associated with the artifact
- Get the checklist item list
- Checking checklist item
- Checking whether a checklist item is checked
- Unchecking checklist item
- Get voting event list
- Voting for an event
- Checking whether the current user already voted for an event
- Unvoting for an event, current user already voted
- Get all action list
- Invoking action
Now let's look at what each operation does and how it can be done.
Associating a lifecycle with an artifact
This method is used to associate a lifecycle with an artifact. If you want to know about creating a lifecycle, please refer to this.
artifact.attachLifecycle(lifecycleName);
Here you need to pass the lifecycle name as the parameter. This will throw a GovernanceException if the operation fails.
Checking associated lifecycle name of an artifact
This method is used to get the name of the lifecycle associated with an artifact.

String lifecycleName = artifact.getLifecycleName();

Checking the current state of the lifecycle associated with the artifact

This method is used to get the current state of the artifact in its lifecycle.

String lifecycleState = artifact.getLifecycleState();

Get the checklist item list

This method is used to get all the checklist item names of the current state of the associated artifact.
String[] checklistItems = artifact.getAllCheckListItemNames();
Checking checklist item
This method is used to check a checklist item of the current state of the associated artifact.

artifact.checkLCItem(checkListItemIndex);

Here checkListItemIndex is the index of the checklist item name in the list returned by the artifact.getAllCheckListItemNames() method.
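Since the checklist operations are index-based, a small helper that resolves an item name to its index can make calling code more readable. This is a sketch of my own; the item names below are made up, and in real code the array would come from artifact.getAllCheckListItemNames().

```java
public class ChecklistUtil {

    // Returns the index of the given checklist item name, or -1 if absent.
    // The index is what checkLCItem()/uncheckLCItem() expect.
    public static int indexOf(String[] checklistItems, String itemName) {
        for (int i = 0; i < checklistItems.length; i++) {
            if (checklistItems[i].equals(itemName)) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Hypothetical checklist of a lifecycle state.
        String[] items = {"Code Completed", "WSDL Created", "QoS Created"};
        System.out.println(indexOf(items, "WSDL Created")); // 1
        // Then, for example: artifact.checkLCItem(indexOf(items, "WSDL Created"));
    }
}
```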
Checking whether a checklist item is checked
This method is used to check whether a checklist item of the current state is checked or not.

boolean lcItemChecked = artifact.isLCItemChecked(checkListItemIndex);

Here checkListItemIndex is the index of the checklist item name in the list returned by the artifact.getAllCheckListItemNames() method.
Unchecking checklist item
This method is used to uncheck a checked checklist item of the current state of the associated artifact.

artifact.uncheckLCItem(checkListItemIndex);

Here checkListItemIndex is the index of the checklist item name in the list returned by the artifact.getAllCheckListItemNames() method.
Get voting event list
This method is used to get the list of events which need a certain number of votes before they are invoked.
String[] votingEvents = artifact.getAllVotingItems();
Voting for an event
This method is used to vote for an event.

artifact.vote(eventIndex);

Here eventIndex is the index of the voting event in the list returned by the artifact.getAllVotingItems() method.
Checking whether the current user already voted for an event
This method is used to check whether the current user has already voted for an event.

boolean currentUserVoted = artifact.isVoted(eventIndex);

Here eventIndex is the index of the voting event in the list returned by the artifact.getAllVotingItems() method.
Unvoting for an event, current user already voted
This method is used to reverse the current user's vote for an event.

artifact.unvote(eventIndex);

Here eventIndex is the index of the voting event in the list returned by the artifact.getAllVotingItems() method.
Get all action list
This method is used to get all the actions available in the current lifecycle state.
artifact.getAllLifecycleActions();
Invoking action
These methods are used to invoke an action in the current lifecycle state.

artifact.invokeAction(action);
artifact.invokeAction(action, parameterMap);

Here the second method allows you to pass a parameter map which can be used in transition executors.
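A minimal sketch of building such a parameter map. The action name and map keys here are hypothetical; whatever you put in the map is simply made available to the lifecycle's transition executors.

```java
import java.util.HashMap;
import java.util.Map;

public class InvokeActionExample {

    // Builds a parameter map for invokeAction(action, parameterMap).
    // The keys "comment" and "approvedBy" are made-up examples.
    static Map<String, String> buildParams(String comment, String approvedBy) {
        Map<String, String> params = new HashMap<>();
        params.put("comment", comment);
        params.put("approvedBy", approvedBy);
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> params = buildParams("QA sign-off received", "admin");
        System.out.println(params.size()); // 2
        // Then, for example: artifact.invokeAction("Promote", params);
    }
}
```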
So, that's all with the lifecycle transitions using WSO2 Governance Registry Governance API.
Monday, January 21, 2013
Registry based Deployment Synchronizer for WSO2 ESB
The Deployment Synchronizer, or Dep-Sync, is used to synchronize artifacts between the nodes in a cluster. Here we are going to see how we can use the registry based deployment synchronizer to sync artifacts between two WSO2 ESB nodes. There are other synchronization methods available in WSO2 servers, such as the Subversion (svn) based and git based deployment synchronizers, but they need third party software and are a bit harder to configure compared to the registry based deployment synchronizer.
Before doing the actual registry based deployment synchronizer configuration, we first need to mount the ESB's /_system/config collection to a single database. To get a clear idea of registry mounting, read this.
[NOTE] You can't use the embedded H2 database for this, since H2 allows only one connection at a time in its embedded mode. So I am using MySQL here to do the registry mounting.
To make it clearer, here are the configurations I used for my setup.
Node 1: Master
$CARBON_HOME/repository/conf/datasources/master-datasources.xml
Local datasource configuration.
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://x.x.x.x:3306/master?autoReconnect=true</url>
            <username>root</username>
            <password>root123</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Mounted datasource configuration.

<datasource>
    <name>WSO2_CARBON_MOUNTED_DB</name>
    <description>The shared datasource used for registry</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonSharedDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://x.x.x.x:3306/shared?autoReconnect=true</url>
            <username>root</username>
            <password>root123</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

$CARBON_HOME/repository/conf/registry.xml
<remoteInstance url="https://localhost:9443/registry">
    <id>SharedRegistry</id>
    <dbConfig>wso2registry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>root@jdbc:mysql://x.x.x.x:3306/shared</cacheId>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
    <instanceId>SharedRegistry</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
    <instanceId>SharedRegistry</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
Node 2: Worker
$CARBON_HOME/repository/conf/datasources/master-datasources.xml
Local datasource configuration.
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://x.x.x.x:3306/worker?autoReconnect=true</url>
            <username>root</username>
            <password>root123</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
Mounted datasource configuration.
<datasource>
    <name>WSO2_CARBON_MOUNTED_DB</name>
    <description>The shared datasource used for registry</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonSharedDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://x.x.x.x:3306/shared?autoReconnect=true</url>
            <username>root</username>
            <password>root123</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
$CARBON_HOME/repository/conf/registry.xml
<remoteInstance url="https://localhost:9443/registry">
    <id>SharedRegistry</id>
    <dbConfig>wso2registry</dbConfig>
    <readOnly>true</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>root@jdbc:mysql://x.x.x.x:3306/shared</cacheId>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
    <instanceId>SharedRegistry</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
    <instanceId>SharedRegistry</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
[NOTE] If the registry mount is configured correctly, you will see the following type of server startup log.
Now registry mounting is done. So let's do the registry based deployment synchronizer configurations.
In a deployment synchronizer setup, not all nodes are configured the same way: one or two nodes, which we call Master nodes, are allowed to change configurations, while the others, which we call Worker nodes, update their configurations according to the Master nodes.
The registry based dep-sync mechanism uses the mounted registry and the registry check-in/check-out functionality to synchronize artifacts. The following figure shows how the registry based deployment synchronizer works.
[NOTE] Red arrows direct how the data flow to worker node.
[NOTE] Any number of worker nodes can be connected via JDBC mount.
Now let's see how to configure the Master and Worker nodes.
In both nodes we need to enable Tribes clustering, which is used to communicate between nodes. To do that, set the enable attribute of the clustering element to true in $CARBON_HOME/repository/conf/axis2/axis2.xml as follows.
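The relevant element in axis2.xml looks roughly like this (a sketch; the class attribute shown is the Tribes clustering agent that ships with Carbon based servers, and the element body keeps its existing child parameters):

```xml
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <!-- existing membershipScheme, domain, and other parameters stay as they are -->
</clustering>
```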
You can alter the other cluster configuration values to fine-tune the cluster, e.g. membership scheme (multicast or well-known address), domain, and maximum retries. (More details on these can be found in axis2.xml.)
Configure Manager Node
Enable the following configuration with the correct values as follows.
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>false</AutoCheckout>
</DeploymentSynchronizer>
Here Enabled is set to true to indicate that the registry based deployment synchronizer is enabled. AutoCommit is set to true to indicate that this node can commit changes to the registry. AutoCheckout is set to false since this node does not need to check out; it is the only node which makes changes to the configurations.
[NOTE] AutoCheckout should be set to true if there is more than one Manager node.
Configure Worker Node
Enable the following configuration with the correct values as follows.
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
</DeploymentSynchronizer>
Here Enabled is set to true to indicate that the registry based deployment synchronizer is enabled. AutoCommit is set to false to indicate that this node cannot commit changes to the registry. AutoCheckout is set to true since this node needs to check out to stay in sync with the Manager node.
That's it with the configuration... Quite simple.
Now let's see how this works.
First start the Manager node.
If the cluster is configured correctly on this node, a log similar to the following will appear.
Then start the Worker node.
If the cluster is configured correctly on this node, a log similar to the following will appear.
As you can see in the log, the Manager node appears as a Member of the cluster.
Now the servers are up and ready. Let's add an endpoint to the Manager node.
[NOTE]: You should always add/update/delete artifacts using the Manager node.
Every 10 seconds (this is the default and can be configured), the Manager checks whether there are changes in the deployed artifacts. If there are changes, it commits them to the registry and sends a cluster message asking the Worker nodes to synchronize. Following is a log message which is logged when the synchronization message is sent.
When the Worker receives that message, it updates the artifacts from the registry. When updating, the Worker node logs the following messages.
New artifact added
Artifact updated
Artifact deleted
Here I only tried it with endpoints, but you can try other artifacts such as sequences, templates, message stores, etc.
So here is the end result in the worker node......
[NOTE] This is recommended for small deployments only.