Sunday, June 7, 2015

How to persist resolv.conf modifications

The default resolv.conf on my Ubuntu box is as follows:
udara@udara-home:~$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
Assume a scenario where I need to use a different DNS server, e.g. Google's public DNS, 8.8.8.8.

I can achieve this by modifying the above resolv.conf, but that change won't persist after an Ubuntu restart, a networking restart, or a router restart.

Here's how to persist the change:

Open /etc/network/interfaces using your favorite text editor; in my case it's vi :)
sudo vi /etc/network/interfaces
Update /etc/network/interfaces according to your preference. You can append dns-nameservers 8.8.8.8 to the end.


Eg:-
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
dns-nameservers 8.8.8.8
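The append step can be sketched in shell. To stay safe, this works on a scratch copy rather than the real /etc/network/interfaces (editing the real file needs sudo):

```shell
# Work on a scratch copy of the interfaces file for illustration only.
scratch=$(mktemp)
cat > "$scratch" <<'EOF'
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
EOF

# Append the dns-nameservers line, as described above.
echo 'dns-nameservers 8.8.8.8' >> "$scratch"

tail -n 1 "$scratch"    # dns-nameservers 8.8.8.8
```

After editing the real file, the change takes effect the next time the interface is brought up.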

Note:- I tried different alternatives, such as updating resolv.conf with a bash script at server restart and making resolv.conf immutable with the chattr tool, but the above is the only viable solution I can recommend to someone.

Tuesday, June 2, 2015

Git merge to restore lost commit

Let's start from a status like the following:
udara@udara-home:~/workspace/USSD-Reminder$ git status
# On branch new-feature
# Your branch is ahead of 'origin/new-feature' by 1 commit.
#   (use "git push" to publish your local commits)
#
nothing to commit, working directory clean
So here my local branch is one commit ahead of the remote. Assume I mistakenly ran a git reset --hard HEAD^. This removes my last commit, and now I need to find a way to undo my last command.

Luckily, when we issue the git reset command, the commit only goes into a dangling state; it is not removed permanently.

IMPORTANT:

Make sure you don't run git gc until you restore the lost commit. The git gc command triggers the garbage collector, which can prune commits that are in a dangling state.


Let's start the restoring process....

1. We need to find the SHA-1 of the deleted commit so we can bring it back.
git fsck --lost-found
The git fsck command lists all commits which are in a dangling state.
udara@udara-home:~/workspace/USSD-Reminder$ git fsck --lost-found
Checking object directories: 100% (256/256), done.
Checking objects: 100% (16/16), done.
dangling commit 5e3079cc8ac9e15cbfc1f513d249678c0893feab
So we have the SHA-1 of the commit that needs to be restored.

2. Let's merge this commit.
git merge <SHA1>
udara@udara-home:~/workspace/USSD-Reminder$ git merge 5e3079cc8ac9e15cbfc1f513d249678c0893feab
Updating 0ffdab3..5e3079c
Fast-forward
 config.php | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Now if we run the git status command it should mention that my local branch is one commit ahead of the remote.
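The whole recovery can be rehearsed end to end in a throwaway repository; a sketch (the repo location, file name, and commit messages are made up for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name Demo

echo one > config.php
git add config.php && git commit -qm "first commit"
echo two > config.php
git commit -qam "second commit"

git reset --hard -q HEAD^     # oops: "second commit" is now dangling

# Find the dangling commit's SHA-1, then merge it back (fast-forward).
sha=$(git fsck --lost-found 2>/dev/null | awk '/dangling commit/ {print $3}')
git merge "$sha"
git log --oneline | head -n 1   # "second commit" is back at HEAD
```

The merge fast-forwards because the dangling commit is a direct descendant of HEAD, exactly as in the transcript above.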

Friday, May 22, 2015

Deleting a remote branch using command line- Git

If you are familiar with the GitHub UI, after browsing the 'branches' tab you can simply delete a branch (if you have the relevant permissions).


Let's see how we do the same using the command line.
git push <REMOTE_NAME> :<BRANCH_NAME>
Note the space between the remote name and the colon. With this syntax you are asking git to push nothing to the branch, hence git push will delete the branch in the remote repository. (Newer git versions also accept the more readable git push <REMOTE_NAME> --delete <BRANCH_NAME>.)

Eg:-
udara@udara-home:~/workspace/h2-social-adaptor$ git push origin :v2
To git@github.com:udarakr/h2-social-adaptor.git
 - [deleted]         v2
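You can try this safely against a local bare repository standing in for the remote (all paths and the branch name v2 are invented for the demo):

```shell
set -e
work=$(mktemp -d)

# A bare repository playing the role of the remote (e.g. GitHub).
git init -q --bare "$work/remote.git"

git clone -q "$work/remote.git" "$work/clone"
cd "$work/clone"
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m "initial commit"
git push -q origin HEAD

# Create and publish a branch named v2, then delete it on the remote.
git branch v2
git push -q origin v2
git push origin :v2          # push "nothing" to v2 -> remote branch deleted

git ls-remote --heads origin # v2 no longer listed
```

The deletion only touches the remote; any local v2 branch survives until you remove it with git branch -d.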

Wednesday, May 13, 2015

Cross database pagination problem and how we solved it

When we develop a webapp with any type of listing, pagination is a must-have feature. From the UI perspective we can use an infinite-scrolling or a paged approach.

If you are familiar with Google search, this is how Google uses the second approach:



Now the problem is: how do we generate the listing for a particular page?
Let's take a hypothetical library application where we need to list the existing books; assume we need to list 10 books per page.

This is how we query books to generate the 1st page:
SELECT * FROM BOOKS LIMIT 10;
and to generate the 2nd page:
SELECT * FROM BOOKS LIMIT 10 OFFSET 10;
Easy: by computing the offset as (page - 1) * page_size, we can dynamically generate any page.
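As a sketch, the page-to-offset arithmetic can be wrapped in a tiny helper that emits the query text (the helper name, table, and page sizes are assumptions for the demo):

```shell
# Build the LIMIT/OFFSET query for a given 1-based page number.
page_query() {
  page=$1
  page_size=$2
  offset=$(( (page - 1) * page_size ))
  echo "SELECT * FROM BOOKS LIMIT $page_size OFFSET $offset;"
}

page_query 1 10   # SELECT * FROM BOOKS LIMIT 10 OFFSET 0;
page_query 2 10   # SELECT * FROM BOOKS LIMIT 10 OFFSET 10;
```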

BUT can we use the same across database types?

If you know the exact backend DB type you are going to use in production, lucky you!
But if you are developing a product which needs to support multiple DB types (MySQL, Oracle, PostgreSQL, H2, ...), the above query will fail on some of them. For example, it will work on MySQL, H2 and PostgreSQL but will not work on Oracle.

In Oracle 11g you have to fall back to a nested ROWNUM query like the following (Oracle 12c also adds a native OFFSET ... FETCH syntax). Note that the inner ROWNUM bound must be the upper row of the page; for the 2nd page of 10 rows that is 20:
SELECT *
FROM (
  SELECT b.*, ROWNUM RN
  FROM (
    SELECT *
    FROM BOOKS
    ORDER BY ID ASC
  ) b
  WHERE ROWNUM <= 20
)
WHERE RN > 10
 
Now let's discuss how we solved this problem. The following is the high-level architecture of the solution.







In this scenario we have exposed all CRUD operations as OSGi services to the outside world. This component may or may not have DB-specific logic, but we definitely don't keep any DB-type-specific logic within our main OSGi component; we have delegated all of it to the DB adapter.

What DB-type-specific things can we have?

1. The pagination-related syntax above (LIMIT, OFFSET)
2. Auto-generated key usage (getGeneratedKeys)

How to load the correct DB adapter class through reflection

We can maintain a configuration file where we state the DB adapter class, then read that class name within our main component and instantiate/call it through reflection.
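The configuration-lookup half of that idea can be sketched in shell; the property name and adapter class below are purely hypothetical (the real reflection call would be something like Class.forName in the Java component):

```shell
# Hypothetical db.properties naming the adapter class to load.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# DB adapter selection (class name is illustrative)
db.adapter.class=org.example.db.MySQLAdapter
EOF

# Read the class name exactly as the main component would,
# before handing it to reflection.
adapter=$(grep '^db.adapter.class=' "$conf" | cut -d= -f2)
echo "Loading adapter: $adapter"
```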

Reference

http://www.jooq.org/doc/3.5/manual/sql-building/sql-statements/select-statement/limit-clause/

Saturday, April 18, 2015

Removing git Untracked files

You can mess up the git working directory with merges, mistakes, etc. and end up with lots of unwanted untracked files.


You can use .gitignore to keep such files out of the working directory, but that is not the only available option. .gitignore is the perfect option for permanently ignoring .project, target and .swp files. But what if we want to discard a few files just once?

git clean is the solution; you can tune this command with various parameters. In this post I'm focusing only on the -d, -n and -f options.

-n will do a dry run on the working directory, so you can find out which files are going to be removed; add -d to include untracked directories as well.




If you provide the -f option in place of -n, git clean will actually remove the listed files and directories.


Make sure to double-check before performing this trick.
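The dry-run/force pair can be tried safely in a scratch repository (the clutter file names are invented for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Create some untracked clutter.
touch junk.swp
mkdir -p target && touch target/app.jar

git clean -dn              # dry run: only LISTS what would be removed
ls junk.swp > /dev/null    # still there

git clean -df              # force: actually removes files and directories
```

Running the dry run first and reading its output is exactly the double-check recommended above.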

Monday, April 13, 2015

Access Raspberry Pi GPIO over internet

In my previous post I wrote about WebIOPi, a neat framework for the Raspberry Pi. I was able to change the INPUT/OUTPUT direction and set the status of the RPi's GPIO remotely, over the local network.

In order to access WebIOPi over the internet we can use a simple tool called Weaved. You might have already installed Weaved during the WebIOPi installation; otherwise sign up here to get detailed instructions.

Follow instructions here to configure Weaved on top of your Raspberry Pi.




After configuring, you can list all devices as in the above screenshot and select the device you need to control. I only have one device registered here, "media-center". I can get the publicly available URL of my device to control GPIO pins over the internet.



I can still access the WebIOPi interface over the local network too.


I tried my same old example of measuring the voltage difference between the ground pin and pin #12.




WebIOPi - Raspberry Pi IoT framework

I spent hours with WebIOPi framework today and decided to keep this note as a future reference.

A few days back I decided to create an RPi media center that can play media files available on any device connected to my home network. I used WebIOPi to control GPIO and communicate with my RPi back and forth remotely from my mobile device.

How to install WebIOPi

WebIOPi needs Python, so make sure Python is installed before moving forward.

i) You can install it directly from the Pi Store.


ii) I took the second method: downloaded WebIOPi from SourceForge and executed,

tar xvzf WebIOPi-0.7.1.tar.gz
cd WebIOPi-0.7.1
sudo ./setup.sh

webiopi-0.7.1 is the latest version available at the moment.

After installing, run sudo /etc/init.d/webiopi start to start the WebIOPi service.

Let's do some basic checks to verify our installation. If you haven't specified a port number, the WebIOPi service starts on port 8000. So let's find the IP address of the RPi and browse to http://192.168.1.104:8000. Provide webiopi as the username and raspberry as the password.



Click on the GPIO Header link and browse it. I'm using a Raspberry Pi B+, and this is its GPIO pin layout.

Let's use pins 1, 2 and 6 and verify their voltage values. I'm going to use a multimeter, two crocodile probes and two jumper cables for this.

Connect the negative end to the ground pin (6) and the positive end to pin #1. The following is my voltage reading.



Disconnect the positive end from pin #1 and connect it to pin #2.



There is a slight difference in the voltage reading; this may be because I have a wireless network adapter, mouse and keyboard attached to my RPi.

Let's toggle the WebIOPi OUT/IN button to change the GPIO direction and pin output state, and check the multimeter reading one more time. For this I'm going to use pin #12, so I connected the positive probe to pin #12 and then switched the GPIO direction and output status like this.


This is my multimeter reading,



Great! Now I can control my Raspberry Pi's GPIO remotely. But here I accessed the WebIOPi interface over the local network; I will put up a note on "How to access the WebIOPi interface over the internet" very soon.

Wednesday, April 8, 2015

Working with forked git repo- proper way to sync- part2

After writing my previous blog post, I played around with the GitHub UI a bit and found another method to sync a forked repository with the original. So I decided to write down those steps here.

1. Browse your forked repository. I'm using the https://github.com/udarakr/carbon-store/ repository, which I have forked from https://github.com/wso2/carbon-store. You can notice the organization difference by looking at the URL.


You can notice that my carbon-store fork is 71 commits behind the original.

Then press the Pull Request link and you will get something similar to the following.



Since my fork is 1 commit ahead, by default GitHub suggests creating a pull request using that commit. But my intention here is different.

So click the head fork drop-down and select a different fork, other than the one selected at the moment. I chose splinter/carbon-store, and this is my outcome. (Don't worry about this step; you can select whatever you want. It is just a trick that lets you swap the base fork and head fork.)

  
Now click on the base fork and select your forked repository/branch and then change head fork to the original. You will get something similar to the following.


Now press the Create pull request button and open a pull request.




Since I have 71 commits involved in this pull request I have to scroll a bit to find the Merge pull request button.



Press Confirm merge and you are done !!



Now if I browse my forked repository, I can see that I'm no longer behind the original repo :)



Working with forked git repo- proper way to sync

I thought of writing this simple post after seeing one of my colleagues' way of updating a forked git repo. This guy used to delete the forked repo and fork again to get the latest updates (:D Yes, I'm talking about you).

The proper way of syncing is pretty simple (maybe not as simple as the delete/fork method). Let me explain it in two steps.

1. Configure a remote repo for our fork

The git remote -v command lists the existing remote repositories.
udara@udara-home:~/wso2/git/fork/carbon-apimgt$ git remote -v
origin    git@github.com:udarakr/carbon-apimgt.git (fetch)
origin    git@github.com:udarakr/carbon-apimgt.git (push)
You can add a new remote repository with the git remote add command:
udara@udara-home:~/wso2/git/fork/carbon-apimgt$ git remote add upstream git@github.com:wso2/carbon-apimgt.git 
Now if I run the git remote -v command again,
udara@udara-home:~/wso2/git/fork/carbon-apimgt$ git remote -v
origin    git@github.com:udarakr/carbon-apimgt.git (fetch)
origin    git@github.com:udarakr/carbon-apimgt.git (push)
upstream    git@github.com:wso2/carbon-apimgt.git (fetch)
upstream    git@github.com:wso2/carbon-apimgt.git (push)
2. In order to sync your fork, run the git fetch upstream command. upstream is the name you provided earlier while configuring the remote.
udara@udara-home:~/wso2/git/fork/carbon-apimgt$ git fetch upstream
remote: Counting objects: 355, done.
remote: Compressing objects: 100% (173/173), done.
remote: Total 355 (delta 48), reused 5 (delta 5), pack-reused 80
Receiving objects: 100% (355/355), 256.29 KiB | 56.00 KiB/s, done.
Resolving deltas: 100% (53/53), done.
From github.com:wso2/carbon-apimgt
 * [new branch]      master     -> upstream/master
 * [new branch]      release-1.3.0 -> upstream/release-1.3.0
 * [new branch]      release-1.3.1 -> upstream/release-1.3.1
 * [new branch]      release-1.3.2 -> upstream/release-1.3.2
 * [new branch]      release-1.3.3 -> upstream/release-1.3.3
 * [new branch]      release-1.9.0 -> upstream/release-1.9.0
 * [new branch]      release-2.0.0 -> upstream/release-2.0.0
Make sure you switch to the correct branch (in my exercise it's release-2.0.0). You can verify that by running the git branch -a command.

Now run git merge upstream/release-2.0.0. This brings my local release-2.0.0 branch in sync with the upstream repository.
udara@udara-home:~/wso2/git/fork/carbon-apimgt$ git merge upstream/release-2.0.0
Updating a68742d..4000cbd
Fast-forward
 .../org/wso2/carbon/apimgt/api/APIProvider.java    |  29 ++
 .../wso2/carbon/apimgt/impl/APIProviderImpl.java   | 466 ++++++++++++++++++++-
 .../src/main/resources/config/rxts/api.rxt         | 107 ++---
 .../resources/apipublisher/scripts/apipublisher.js |  29 +-
 4 files changed, 548 insertions(+), 83 deletions(-)
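The two steps can be rehearsed offline with a local repository standing in for the upstream project (all paths and commit messages below are made up; the clone's default branch plays the role of release-2.0.0):

```shell
set -e
work=$(mktemp -d)

# "upstream": the original project.
git init -q "$work/upstream"
cd "$work/upstream"
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m "base commit"

# "fork": cloned before upstream moved ahead.
git clone -q "$work/upstream" "$work/fork"

cd "$work/upstream"
git commit -q --allow-empty -m "new upstream commit"

# Step 1: register the original repo as a remote named upstream.
cd "$work/fork"
git remote add upstream "$work/upstream"

# Step 2: fetch upstream and merge its branch into ours.
git fetch -q upstream
branch=$(git rev-parse --abbrev-ref HEAD)
git merge "upstream/$branch"
git log --oneline
```

After the merge, the fork's log contains the new upstream commit, just like the fast-forward output above.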

Friday, April 3, 2015

Simple Optocoupler sample (Introduction to Optocouplers)

What is an optocoupler? It allows you to connect two circuits which do not share a common power source. There is a small LED inside which illuminates to close an internal switch when you apply a voltage, so the optocoupler acts as a switch in the second circuit.

I'm going to use the following simple Arduino sketch to blink an LED connected to a different circuit with the use of a 4N35 optocoupler.
int optocouplerPin = 2;   // Arduino pin driving the 4N35's internal LED

void setup()
{
  pinMode(optocouplerPin, OUTPUT);
}

void loop()
{
  digitalWrite(optocouplerPin, HIGH);  // LED on  -> internal switch closed
  delay(1000);
  digitalWrite(optocouplerPin, LOW);   // LED off -> internal switch open
  delay(1000);
}
I prototyped the following circuit, connecting one end to an Arduino Uno and the other end to a 3V power source.


After uploading the above sketch to my Arduino, the following is my outcome :)

Thursday, March 26, 2015

Set the p2 location during maven clean build

We tend to remove or rename the default local repository when we need a clean build of a Maven component. Rather than doing that, simply create a new directory (maybe within /tmp) and use it as the repository while building.
mvn clean install -Dmaven.repo.local=<PATH_TO_REPOSITORY>

Sunday, February 8, 2015

Invoke Java method from Jaggery.js

Let's assume we have a Java class with login() and logout() methods. login() takes two parameters, username and password, uses the AuthenticationAdmin admin service internally and then returns a session cookie.

So in this case I will have the following package declaration and imports within the class.

package org.wso2.carbon.session.cookie;

import org.apache.axis2.context.ServiceContext;
import org.apache.axis2.transport.http.HTTPConstants;
import org.wso2.carbon.authenticator.stub.AuthenticationAdminStub;
import org.wso2.carbon.authenticator.stub.LoginAuthenticationExceptionException;
import org.wso2.carbon.authenticator.stub.LogoutAuthenticationExceptionException;
import java.rmi.RemoteException;


Then the login and logout methods.

private AuthenticationAdminStub authenticationAdminStub; // shared by login() and logout()

public String login(String username, String password)
        throws RemoteException, LoginAuthenticationExceptionException {
    authenticationAdminStub =
            new AuthenticationAdminStub("https://localhost:9443/services/AuthenticationAdmin");

    String sessionCookie = null;
    if (authenticationAdminStub.login(username, password, "localhost")) {
        System.out.println("Login Successful");
        ServiceContext serviceContext = authenticationAdminStub
                ._getServiceClient().getLastOperationContext().getServiceContext();
        sessionCookie = (String) serviceContext.getProperty(HTTPConstants.COOKIE_STRING);
    }
    return sessionCookie;
}

public void logout() throws RemoteException, LogoutAuthenticationExceptionException {
    authenticationAdminStub.logout();
    System.out.println("Logout successful");
}


In order to manage dependencies and packaging, I'm using Maven in this sample.
The following is the repositories and dependencies section of the project pom.xml.

<repositories>
  <repository>
    <id>wso2-nexus</id>
    <name>WSO2 internal Repository</name>
    <url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
    <releases>
      <enabled>true</enabled>
      <updatePolicy>daily</updatePolicy>
      <checksumPolicy>ignore</checksumPolicy>
    </releases>
  </repository>
  <repository>
    <id>central</id>
    <name>Maven Repository Switchboard</name>
    <layout>default</layout>
    <url>http://repo1.maven.org/maven2</url>
    <releases>
      <enabled>true</enabled>
      <updatePolicy>daily</updatePolicy>
      <checksumPolicy>ignore</checksumPolicy>
    </releases>
  </repository>
</repositories>
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.wso2.carbon</groupId>
    <artifactId>org.wso2.carbon.authenticator.stub</artifactId>
    <version>4.2.0</version>
  </dependency>
</dependencies>
I'm using the WSO2 IS 5.0 distribution to test this. Make sure to set the Maven packaging property as follows:

<packaging>jar</packaging>

You can find the complete project at  https://github.com/udarakr/authenticator

Build org.wso2.carbon.session.cookie.gen using mvn clean install command.

Let's copy the session.cookie.gen-1.0-SNAPSHOT.jar located in the target directory to the <IS_HOME>/repository/components/lib directory.

Then create our Jaggery application. The authenticator Jaggery application consists of one .jag file, index.jag, with the following content:
<%
 var SessionCookieGen = org.wso2.carbon.session.cookie.SessionCookieGen;
 var sessionCookieGen = new SessionCookieGen();
 var sessionCookie = sessionCookieGen.login('admin', 'admin');
 print(sessionCookie);
%>
Then copy authenticator application to the <IS_HOME>/repository/deployment/server/jaggeryapps/ directory.

NOTE:- Since this is for testing purposes only, we are using the same IS node to host our application.

Then start the IS node: go to the <IS_HOME>/bin/ directory and run sh wso2server.sh in your command line. After the server starts, open your web browser and go to https://localhost:9443/authenticator/

You can see,



If you don't have complex logic within the Java class (a simple method without any imports), you can follow this post published by Madhuka as a reference.

Find more information about jaggery.js from here

Thursday, January 29, 2015

Change location of the WSO2 Carbon server logs

I'm using WSO2 AM-1.8.0 in this post. Since AM-1.8.0 is based on Carbon 4.2.0, the same steps apply to all products on this Carbon version.

By default all log files are stored in <CARBON_HOME>/repository/logs/ directory.

Let's assume we need to move all these logs to the /var/logs/wso2 directory.

1. Open the log4j.properties file that resides in the <CARBON_HOME>/repository/conf/ directory and update the following properties as mentioned below.


log4j.appender.SERVICE_APPENDER.File
log4j.appender.TRACE_APPENDER.File
log4j.appender.CARBON_LOGFILE.File
log4j.appender.ERROR_LOGFILE.File
log4j.appender.AUDIT_LOGFILE.File
log4j.appender.ATOMIKOS.File

log4j.appender.SERVICE_APPENDER.File=/var/logs/wso2/${instance.log}/wso2-apigw-service${instance.log}.log
log4j.appender.TRACE_APPENDER.File=/var/logs/wso2/${instance.log}/wso2-apigw-trace${instance.log}.log
log4j.appender.CARBON_LOGFILE.File=/var/logs/wso2/${instance.log}/wso2carbon${instance.log}.log
log4j.appender.ERROR_LOGFILE.File=/var/logs/wso2/${instance.log}/wso2-apigw-errors.log
log4j.appender.AUDIT_LOGFILE.File=/var/logs/wso2/audit.log
log4j.appender.ATOMIKOS.File=/var/logs/wso2/tm.out

2. Configure the HTTP access log file by changing <CARBON_HOME>/repository/conf/tomcat/catalina-server.xml:
<Valve className="org.apache.catalina.valves.AccessLogValve"
directory="/var/logs/wso2"
prefix="localhost_access_log_sample."
suffix=".log"
pattern="%{xxx}i %{xxx}o"
resolveHosts="false"/>
You may face a few hiccups if you need to move ALL logs from the default location.

Eg: - patches.log, wso2carbon-trace-messages.log

At the moment there is no direct way to configure the storage location for the above logs. These properties exist within <CARBON_HOME>/lib/org.wso2.carbon.server-4.2.0.jar, and the patch-applying process takes place even before the Carbon server starts, so a log4j.properties file is bundled inside org.wso2.carbon.server-4.2.0.jar.

I will note down a workaround here.

Go to the <CARBON_HOME>/lib/ directory, open org.wso2.carbon.server-4.2.0.jar with an archive manager, then open its log4j.properties using a text editor and modify the following properties.
log4j.appender.CARBON_LOGFILE.File
log4j.appender.CARBON_TRACE_LOGFILE.File
log4j.appender.CARBON_PATCHES_LOGFILE