Thursday, November 6, 2014

[Solved] External Hard Drive Not Detected on Mac OS X

Today I tried to remove my external hard drive by clicking Eject, but the eject never completed. Afterwards the external hard drive wasn't detected when I plugged it into my MacBook Pro. Then I tried two of my friends' machines. The first behaved the same as mine: the external hard drive was not detected. The second was running Ubuntu, and it detected the external hard drive and showed all the files and directories. That at least made me happy, since it meant the drive itself was working. So I googled around for a solution to this issue, and this is how I ended up solving the problem.



First I went to Disk Utility (Applications -> Utilities -> Disk Utility). There I saw that my disk was detected and listed, but I couldn't do anything with it. The external hard drive was actually shown in yellow.


Then I understood that this was probably an issue with the drive not being unmounted correctly. So I decided to search for how to unmount an external hard drive on a Mac, and I found that some people with the same kind of problem had solved it by executing the commands below. Yes, you need to open a Terminal and run the commands :).

First I used the diskutil list command to list all the drives connected to my machine. The command showed me two drives: the first is my machine's internal hard drive and the second is my external hard drive.

diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                  Apple_HFS Macintosh HD            250.1 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *1.0 TB     disk1
   1:             Windows_FAT_32 Transcend               1.0 TB     disk1s1

Next I executed the diskutil unmountDisk command against /dev/disk1, which the list command identified as my external drive. But it failed with the following message.

diskutil unmountDisk /dev/disk1
Unmount of disk1 failed: at least one volume could not be unmounted

Then, as I had seen in a post, I tried the same command with force. Again it didn't work.

diskutil unmountDisk force /dev/disk1
Forced unmount of disk1 failed: at least one volume could not be unmounted

Then I researched more and found hdiutil, which can be used to manipulate disk images. So I tried to detach the external hard drive using hdiutil.

hdiutil detach /dev/disk1
"disk1" unmounted.
"disk1" ejected.

It did work! At last I had a solution.
You can refer to the manual page of hdiutil here.
Hope this solves your problem. Thanks for reading.

Friday, October 10, 2014

Integrating WSO2 API Manager with BAM and CEP - (Real time data analyzing for WSO2 API Manager) : Part 2 - Configuring CEP to send a mail when there are more than 5 API calls within a minute by a user

I hope you have followed all the instructions given in Part 1.

To be clear about what I am doing next, it is best to take a look at the basic flow of events through the components in CEP, and ideally go through [1].

[Image: flow of events through the CEP components (CEP-Flow-New-Page.png)]

The API Manager stat publisher publishes three event streams:

  1. org.wso2.apimgt.statistics.request
  2. org.wso2.apimgt.statistics.response
  3. org.wso2.apimgt.statistics.fault

Here I am only showing how to configure the org.wso2.apimgt.statistics.request event stream; you can do the same for the other two event streams as well.

Now we need to define two event stream definitions: one to import the stream into the execution plan and one to export it out. You can create an event stream in Main -> Manage -> Event Processor -> Event Streams.

Import Event stream

[Screenshot: import event stream definition (7.png)]


Full definition

{
 "name": "org.wso2.apimgt.statistics.request",
 "version": "1.0.0",
 "nickName": "API Manager Request Data",
 "description": "Request Data",
 "metaData": [
   {
     "name": "clientType",
     "type": "STRING"
   }
 ],
 "payloadData": [
   {
     "name": "consumerKey",
     "type": "STRING"
   },
   {
     "name": "context",
     "type": "STRING"
   },
   {
     "name": "api_version",
     "type": "STRING"
   },
   {
     "name": "api",
     "type": "STRING"
   },
   {
     "name": "resource",
     "type": "STRING"
   },
   {
     "name": "method",
     "type": "STRING"
   },
   {
     "name": "version",
     "type": "STRING"
   },
   {
     "name": "request",
     "type": "INT"
   },
   {
     "name": "requestTime",
     "type": "LONG"
   },
   {
     "name": "userId",
     "type": "STRING"
   },
   {
     "name": "tenantDomain",
     "type": "STRING"
   },
   {
     "name": "hostName",
     "type": "STRING"
   },
   {
     "name": "apiPublisher",
     "type": "STRING"
   },
   {
     "name": "applicationName",
     "type": "STRING"
   },
   {
     "name": "applicationId",
     "type": "STRING"
   }
 ]
}

Now it's all about configuring CEP to send a mail when a user makes more than 5 API calls within a minute.

Let's start by adding an Output Event Adaptor.
Event Adaptor Name : AMEmailEventAdaptor
Event Adaptor Type : email

[Screenshot: output email event adaptor configuration (3.png)]

Now we can create the output email event stream.

{
 "name": "org.wso2.apimgt.statistics.request.email",
 "version": "1.0.0",
 "nickName": "",
 "description": "",
 "payloadData": [
   {
     "name": "myCount",
     "type": "LONG"
   },
   {
     "name": "api_version",
     "type": "STRING"
   },
   {
     "name": "api",
     "type": "STRING"
   },
   {
     "name": "consumerKey",
     "type": "STRING"
   }
 ]
}

[Screenshot: output email event stream definition (13.png)]

Now we have the output event stream, so we need to compose the mail out of this stream. We can do that by defining an Event Formatter, by clicking Out-Flows in the stream list view.

Event Formatter Name : AMStatEmailEventFormatter
Output Event Adaptor Name : AMEmailEventAdaptor
Subject : More than 5 API invocations happen within 1min
Email Address : eranda@abc.com
Output Type : text
Text Mapping :
API name : {{api}},
Version : {{api_version}},
ConsumerKey : {{consumerKey}}

[Screenshot: event formatter configuration (18.png)]

The only thing left to do is to create the Event Execution Plan. That can be done in Main -> Manage -> Event Processor -> Execution Plans.

Execution Plan Name : APIMEmailExecutionPlan

Import Stream
Stream Id : org.wso2.apimgt.statistics.request:1.0.0
As : AMRequest

Siddhi Query :
from AMRequest#window.time(1 min)
select count(consumerKey) as myCount,api_version,api,consumerKey
group by consumerKey,api_version,api
having (myCount > 5)
insert into outStream;

Export Stream
Value Of : outStream
Stream Id : org.wso2.apimgt.statistics.request.email:1.0.0

[Screenshot: execution plan configuration (20.png)]

Done.

References :

Integrating WSO2 API Manager with BAM and CEP - (Real time data analyzing for WSO2 API Manager) : Part 1 - Configuring Servers

First let me introduce the servers.

WSO2 API Manager is a complete solution for designing and publishing APIs, creating and managing a developer community, and for scalably routing API traffic. It leverages proven, production-ready integration, security, and governance components from the WSO2 Enterprise Service Bus, WSO2 Identity Server, and WSO2 Governance Registry. In addition, it leverages the WSO2 Business Activity Monitor for Big Data analytics, giving you instant insight into API behavior.

WSO2 Complex Event Processor identifies the most meaningful events within the event cloud of an organization, analyzes their impacts, and acts on them in real time. Built to be extremely high performing and massively scalable, it offers significant time saving and affordable acquisition.

WSO2 Business Activity Monitor (BAM) is designed to address a wide range of monitoring requirements in business activities and processes. It is a flexible framework that allows you to model your own key performance indicators to suit different stakeholders, be they business users, dev ops, CxOs, etc. WSO2 BAM achieves this level of flexibility while facilitating technologies such as Big Data storage, Big Data analytics, and high-performance data transfer.

Now, with the three components identified, let me explain what I am going to do...
I am going to publish the API Manager API invocation statistics to WSO2 Business Activity Monitor through WSO2 Complex Event Processor, where the Complex Event Processor will send a mail to a given e-mail address if there is a matching pattern in the events. In this post I am taking the matching pattern as "a user invokes an API more than 5 times within a minute".

In order to do this, we first need to do some configuration on each server.

API Manager

Configure the analysed statistics database (BAM API statistic database) for API Manager. 
File : $AM_HOME/repository/conf/datasources/master-datasources.xml

         <datasource>
            <name>WSO2AM_STATS_DB</name>
            <description>The datasource used for getting statistics to API Manager</description>
            <jndiConfig>
                <name>jdbc/WSO2AM_STATS_DB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://localhost:3306/apimstat?autoReconnect=true&amp;relaxAutoCommit=true</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
         </datasource>          
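Before starting the server, it can be worth checking that the apimstat database is actually reachable with these credentials. The following is only a hedged sketch of such a check, assuming a MySQL database named apimstat already exists and the MySQL connector JAR is on the classpath; it is not part of the WSO2 configuration itself.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StatsDbCheck {
    public static void main(String[] args) throws Exception {
        // Same URL and credentials as the WSO2AM_STATS_DB datasource above.
        // Very old MySQL drivers may additionally need Class.forName("com.mysql.jdbc.Driver").
        String url = "jdbc:mysql://localhost:3306/apimstat?autoReconnect=true&relaxAutoCommit=true";
        try (Connection conn = DriverManager.getConnection(url, "root", "root");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            if (rs.next()) {
                System.out.println("apimstat database is reachable, SELECT 1 returned " + rs.getInt(1));
            }
        }
    }
}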

Configure the statistics publishing host/port (CEP) and the analysed statistics datasource.
File : $AM_HOME/repository/conf/api-manager.xml

   
     <APIUsageTracking>
        <!--
            Enable/Disable the API usage tracker.
        -->
        <Enabled>true</Enabled>
        <!--
            API Usage Data Publisher.
        -->
        <PublisherClass>org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher</PublisherClass>
        <!--
            Thrift port of the remote BAM server.
        -->
        <ThriftPort>7613</ThriftPort>
        <!--
            Server URL of the remote BAM/CEP server used to collect statistics. Must
            be specified in protocol://hostname:port/ format.

             An event can also be published to multiple Receiver Groups each having 1 or more receivers. Receiver
            Groups are delimited by curly braces whereas receivers are delimited by commas.
            Ex - Multiple Receivers within a single group
                 tcp://localhost:7612/,tcp://localhost:7613/,tcp://localhost:7614/
            Ex - Multiple Receiver Groups with two receivers each               
                 {tcp://localhost:7612/,tcp://localhost:7613},{tcp://localhost:7712/,tcp://localhost:7713/} 
        -->
        <BAMServerURL>{tcp://localhost:7612/},{tcp://localhost:7613/}</BAMServerURL>
        <!--
            Administrator username to login to the remote BAM server.
        -->
        <BAMUsername>admin</BAMUsername>

         <!--
            Administrator password to login to the remote BAM server.
        -->
        <BAMPassword>admin</BAMPassword>

         <!--
            JNDI name of the data source to be used for getting BAM statistics.This data source should
            be defined in the master-datasources.xml file in conf/datasources directory.
        -->
        <DataSourceName>jdbc/WSO2AM_STATS_DB</DataSourceName>
        <!--
            Google Analytics publisher configuration. Create Google Analytics account and obtain a
            Tracking ID. 
            Reffer http://support.google.com/analytics/bin/answer.py?hl=en&answer=1009694
        -->
        <GoogleAnalyticsTracking>
             <!--
                 Enable/Disable Google Analytics Tracking
             -->
             <Enabled>false</Enabled>
             <!--
                Google Analytics Tracking ID    
             -->
             <TrackingID>UA-XXXXXXXX-X</TrackingID>
       </GoogleAnalyticsTracking>
    </APIUsageTracking>

[NOTE] In the BAMServerURL we have specified two receiver groups: the first (tcp://localhost:7612/) is the BAM server and the second (tcp://localhost:7613/) is the CEP server, since we need to send the statistics to both BAM and CEP. The ports correspond to the offsets of 1 and 2 configured for BAM and CEP below.

CEP
Setting offset to 2
This is needed if you are running all the servers on the same machine, to avoid port conflicts.
File :$CEP_HOME/repository/conf/carbon.xml

 
        <!-- Ports offset. This entry will set the value of the ports defined below to
         the define value + Offset.
         e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445
         -->
        <Offset>2</Offset>

Configuring mail transport
File :$CEP_HOME/repository/conf/axis2/axis2_client.xml

 
    <!--please change the mail configuration, wso2cep.demo@gmail.com is provided for demo purposes only-->
    <transportSender name="mailto"
                     class="org.apache.axis2.transport.mail.MailTransportSender">
        <parameter name="mail.smtp.from">eranda@gmail.com</parameter>
        <parameter name="mail.smtp.user">eranda</parameter>
        <parameter name="mail.smtp.password">eranda@123</parameter>
        <parameter name="mail.smtp.host">smtp.gmail.com</parameter>

         <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>
    </transportSender>
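If the mail never arrives, it helps to rule out the SMTP credentials themselves. The following is a hedged, standalone JavaMail sketch that reuses the same settings as the transport sender above; the addresses and password are placeholders and the javax.mail library must be on the classpath. It is only a verification aid, not part of the CEP configuration.

import java.util.Properties;
import javax.mail.Message;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class SmtpCheck {
    public static void main(String[] args) throws Exception {
        // Same values that go into axis2_client.xml
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.gmail.com");
        props.put("mail.smtp.port", "587");
        props.put("mail.smtp.starttls.enable", "true");
        props.put("mail.smtp.auth", "true");

        Session session = Session.getInstance(props, new javax.mail.Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("eranda", "eranda@123");
            }
        });

        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress("eranda@gmail.com"));
        message.addRecipient(Message.RecipientType.TO, new InternetAddress("eranda@abc.com"));
        message.setSubject("SMTP configuration test");
        message.setText("If you receive this, the mail transport settings are working.");
        Transport.send(message);
    }
}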


BAM
Setting offset to 1
This is needed if you are running all the servers on the same machine, to avoid port conflicts.
File :$BAM_HOME/repository/conf/carbon.xml

 
        <!-- Ports offset. This entry will set the value of the ports defined below to
         the define value + Offset.
         e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445
         -->
        <Offset>1</Offset>

Configuring port offsets
The following two configurations are not adjusted automatically by the offset, so you need to change the ports manually to match it (e.g. default port + offset).
File : $BAM_HOME/repository/conf/datasources/bam-datasources.xml

   
         <datasource>
            <name>WSO2BAM_CASSANDRA_DATASOURCE</name>
            <description>The datasource used for Cassandra data</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:cassandra://localhost:9161/EVENT_KS</url>
                    <username>admin</username>
                    <password>admin</password>
                </configuration>
            </definition>
         </datasource>

File : $BAM_HOME/repository/conf/etc/hector-config.xml

<HectorConfiguration>
    <Cluster>
        <Name>ClusterOne</Name>

        <!-- Node list of Cassandra cluster should be specified as a comma separated list
             eg. <Nodes>192.168.0.2:9160,192.168.0.3:9160,192.168.0.4:9160</Nodes> -->
        <Nodes>localhost:9161</Nodes>

        <!-- Set 'false' to enable autodiscovery of nodes on the ring at startup
             and at intervals (i.e. delay property) thereafter -->
        <AutoDiscovery disable="false" delay="1000"/>
    </Cluster>
</HectorConfiguration>

Configure the analysed statistics database.
This datasource configuration should be the same as the datasource you defined in the API Manager.
File : $BAM_HOME/repository/conf/datasources/master-datasources.xml 

         <datasource>
            <name>WSO2AM_STATS_DB</name>
            <description>The datasource used for getting statistics to API Manager</description>
            <jndiConfig>
                <name>jdbc/WSO2AM_STATS_DB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://localhost:3306/apimstat?autoReconnect=true&amp;relaxAutoCommit=true</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
         </datasource>


Copying the BAM toolbox for API statistics analysis.
Copy the $AM_HOME/statistics/API_Manager_Analytics.tbox toolbox into the $BAM_HOME/repository/deployment/server/bam-toolbox/ directory.

Now you can follow these steps to test the setup (a small client sketch for invoking the API follows the list).

1. Log in to the API Manager. Create and publish an API. Subscribe to it and invoke it.
2. The statistics should show up on the API Manager statistics page.
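For step 1, invoking the published API a few times can be done from any REST client; here is a hedged Java sketch using HttpURLConnection. The gateway URL and the access token are placeholders that depend on the API you created, and invoking the API more than 5 times within a minute will also exercise the CEP mail rule from Part 2.

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ApiInvoker {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholders: replace with your gateway endpoint and a valid OAuth access token
        String endpoint = "http://localhost:8280/myapi/1.0.0/resource";
        String accessToken = "REPLACE_WITH_ACCESS_TOKEN";

        // Call the API six times within a minute
        for (int i = 1; i <= 6; i++) {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Authorization", "Bearer " + accessToken);
            System.out.println("Call " + i + " -> HTTP " + conn.getResponseCode());
            conn.disconnect();
            Thread.sleep(1000); // stay well inside the one-minute window
        }
    }
}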

References : 

Thursday, October 9, 2014

Load Tenant Registry in a Carbon Component

Loading the tenant registry in a Carbon component can be useful when you have to access registries across tenants while tenant unloading is enabled (it is enabled by default). For example, in API Manager you are allowed to access APIs across tenants, and those APIs are saved in the tenant-specific registry.

Let's see how this can be done.
In order to load the tenant registry we need the TenantRegistryLoader OSGi service. To access this service you can use Apache Felix Maven SCR annotations; the following annotations get the service reference injected into your service component class.

/**
 * @scr.reference name="tenant.registryloader"
 * interface="org.wso2.carbon.registry.core.service.TenantRegistryLoader"
 * cardinality="1..1" policy="dynamic"
 * bind="setTenantRegistryLoader"
 * unbind="unsetTenantRegistryLoader"
 */
As defined in the annotations, the setTenantRegistryLoader() method will be invoked when the OSGi service is bound, and the unsetTenantRegistryLoader() method will be invoked when it is unbound. Following are the method implementations.
protected void setTenantRegistryLoader(TenantRegistryLoader tenantRegistryLoader) {
    this.tenantRegistryLoader = tenantRegistryLoader;
}

protected void unsetTenantRegistryLoader(TenantRegistryLoader tenantRegistryLoader) {
    this.tenantRegistryLoader = null;
}
Now we can use this service reference to load the tenant registry inside a Carbon component as follows.
tenantRegistryLoader.loadTenantRegistry(tenantId);
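To put the pieces together, here is a minimal, hedged sketch of a holder class that combines the bind/unbind methods with a small helper; the class name and the exception handling are illustrative assumptions rather than code taken from an actual WSO2 component.

import org.wso2.carbon.registry.core.service.TenantRegistryLoader;

public class TenantRegistryLoaderHolder {

    // Set and unset by the SCR runtime through the bind/unbind methods below
    private static TenantRegistryLoader tenantRegistryLoader;

    protected void setTenantRegistryLoader(TenantRegistryLoader loader) {
        tenantRegistryLoader = loader;
    }

    protected void unsetTenantRegistryLoader(TenantRegistryLoader loader) {
        tenantRegistryLoader = null;
    }

    // Loads the registry of the given tenant so that subsequent registry
    // access for that tenant works even if the tenant had been unloaded.
    public static void loadRegistryOfTenant(int tenantId) {
        if (tenantRegistryLoader == null) {
            return; // service not bound yet
        }
        try {
            tenantRegistryLoader.loadTenantRegistry(tenantId);
        } catch (Exception e) {
            // Depending on the Carbon version this call may throw a registry exception
            throw new RuntimeException("Failed to load registry of tenant " + tenantId, e);
        }
    }
}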
Hope you find this useful when you need to load the tenant registry manually.

Tuesday, April 22, 2014

How to enable debug logs in JSP files in WSO2 Servers

When it comes to debugging a problem in code, logs play a huge role. Here I am going to explain how to enable debug logs in JSP files in WSO2 servers. WSO2 servers use the Apache Commons Logging library for logging. In order to add debug logs, first you need to import the relevant classes into the JSP, which you can do as follows.
 
<%@ page import="org.apache.commons.logging.Log" %>
<%@ page import="org.apache.commons.logging.LogFactory" %>
<%@ page import="org.wso2.carbon.utils.ServerConstants" %>


Then you need to initialize the Log instance to do the logging. As an example, here I am going to log the ADMIN_SERVICE_COOKIE that WSO2 servers store in the session object (hence the extra ServerConstants import above).
 
<%
    String cookie = (String) session.getAttribute(ServerConstants.ADMIN_SERVICE_COOKIE);
    Log log = LogFactory.getLog(this.getClass());
    log.info("Admin Cookie : " + cookie);
%>
Now the JSP page is ready for logging. But we still need to add a log4j property in order for the logs to be printed in the Carbon logs. You can do that by adding the following line to the ${CARBON_HOME}/repository/conf/log4j.properties file.
 
log4j.logger.org.apache.jsp=INFO

You can define the logging granularity by changing INFO to DEBUG, WARN, ERROR, FATAL, or TRACE according to your requirements.
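If what you actually want is debug-level output (as the post title suggests), set log4j.logger.org.apache.jsp=DEBUG and use log.debug() with the usual isDebugEnabled() guard. The snippet below is only an illustrative sketch; the logged message is a placeholder.

<%@ page import="org.apache.commons.logging.Log" %>
<%@ page import="org.apache.commons.logging.LogFactory" %>
<%
    Log log = LogFactory.getLog(this.getClass());
    // Guard the call so the message is only built when DEBUG is actually enabled
    if (log.isDebugEnabled()) {
        log.debug("Rendering page for session : " + session.getId());
    }
%>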

NOTE : You have to rebuild the relevant bundle every time you change the JSP file(s).