John's Technical Blog

Extract the icon from an APK file

2016-08-05

Using the apk-parser library (https://github.com/caoqianli/apk-parser), I developed the following code that will extract the icon from an APK file. Note that there is no guarantee about what size the icon will be. Enjoy!
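A minimal sketch of the approach, assuming the apk-parser API as documented in the project README (the ApkFile class, ApkMeta.getIcon(), and getFileData(); adjust names to match your library version):

```java
import net.dongliu.apk.parser.ApkFile;
import net.dongliu.apk.parser.bean.ApkMeta;

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ApkIconExtractor {
    public static void main(String[] args) throws Exception {
        try (ApkFile apkFile = new ApkFile(new File("app.apk"))) {
            // The manifest's android:icon attribute names a resource path inside the APK
            ApkMeta meta = apkFile.getApkMeta();
            String iconPath = meta.getIcon();

            // Read the raw bytes of that entry and write them out as-is
            byte[] iconData = apkFile.getFileData(iconPath);
            Files.write(Paths.get("icon.png"), iconData);
        }
    }
}
```

Note that the entry may be any of the densities bundled in the APK, which is why there is no guarantee about the icon's size.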

Renaming a dependency JAR before building a WAR in Maven

2016-08-05

I was recently tasked with updating New Relic on our servers. When this was first set up, the server was given a JVM command-line option:
-javaagent:/usr/share/tomcat7/webapps/ROOT/WEB-INF/lib/newrelic-3.9.0.jar
When working on updating the version, I wanted to change this so that
  1. We could update the New Relic Agent whenever a new update is available
  2. We would not need to change the JVM command line each time the Agent was updated
To accomplish this, I wanted the New Relic Agent JAR name to always be the same, regardless of the version. A web search turned up maven-dependency-plugin, but I ran into a problem: the WAR file (built by maven-war-plugin) was being created before the JAR was downloaded and renamed. All the examples I found used <phase>package</phase>, and while that looked right, it was happening at the wrong point in the life cycle for what I wanted to do. I resorted to reading the documentation, and with some experimenting found that <phase>prepare-package</phase> ran at the right part of the life cycle.
Here is the plugins section of the working pom.xml:
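Reconstructed here as a sketch: the dependency plugin's copy goal, bound to prepare-package, drops the renamed JAR into the exploded WAR directory before maven-war-plugin packages it (the agent's groupId, artifactId, and version are illustrative, so substitute your own coordinates):

```xml
<plugins>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
      <execution>
        <id>copy-newrelic-agent</id>
        <!-- prepare-package runs before maven-war-plugin's package phase -->
        <phase>prepare-package</phase>
        <goals>
          <goal>copy</goal>
        </goals>
        <configuration>
          <artifactItems>
            <artifactItem>
              <groupId>com.newrelic.agent.java</groupId>
              <artifactId>newrelic-agent</artifactId>
              <version>3.9.0</version>
              <!-- version-independent name referenced by the -javaagent option -->
              <destFileName>newrelic.jar</destFileName>
            </artifactItem>
          </artifactItems>
          <outputDirectory>${project.build.directory}/${project.build.finalName}/WEB-INF/lib</outputDirectory>
        </configuration>
      </execution>
    </executions>
  </plugin>
</plugins>
```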
You do not need to include this dependency in your dependencies section unless you are actually using it in your code.
We changed the server's JVM command line option to:
-javaagent:/usr/share/tomcat7/webapps/ROOT/WEB-INF/lib/newrelic.jar
And everything worked just great ... once I got the YAML file figured out 😉.

Bonus Notes

  • If you have not worked with YAML before, you will quickly find that indentation matters: it defines the hierarchy of properties
  • The newrelic.yml file included as an example in the newrelic-java.zip file appears to have an error in it. Specifically, the classloader_excludes property values need to be a comma-separated list on the same line. I got parse errors using the example as-is (i.e., with the list indented, each item on a separate line, and an extra comma at the end).
  • If you are reading this and setting up a new configuration based on this article, you will also need the newrelic.yml file to end up in the same folder as the agent JAR. To that end, place the file in /src/main/webapp/WEB-INF/lib in your Maven-based folder structure.
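For illustration, the inline form that parses cleanly looks roughly like this (the property nesting follows the stock newrelic.yml layout; the excluded class names are hypothetical placeholders):

```yaml
common: &default_settings
  class_transformer:
    # all excludes on one line, comma-separated, no trailing comma
    classloader_excludes: groovy.lang.GroovyClassLoader$InnerLoader,org.codehaus.groovy.runtime.callsite.CallSiteClassLoader
```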

IntelliJ IDEA: Can't find gems in Cucumber configurations when using RVM Ruby

2016-07-27

I had updated Ruby on my Mac OS X laptop using RVM:

curl -sSL https://get.rvm.io | bash -s stable --ruby
rvm use 2.3 --default

In IntelliJ IDEA Ultimate 2016, I changed my Cucumber configuration to use the "RVM: ruby-2.3.0" SDK.

I then got a series of errors about gems not being installed: first cucumber, then the other required gems, and finally the debug gems (ruby-debug-ide and debase).
Run Configuration Error: Cucumber Gem isn't installed for RVM

Initially, I had some success getting rid of the errors one by one by manually running "gem install" on the command line for each missing gem. In the end, though, the debug gems still would not install, and I got errors when attempting to have IntelliJ install them itself.

I finally figured out that the Gems bin directory setting was incorrect. I went to

File ➜ Project Structure ➜ SDKs ➜ RVM: ruby-2.3.0

and changed the Gems bin directory to
/Users/[username]/.rvm/gems/ruby-2.3.0

Then things started working just fine.
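For anyone checking the same setting, the directory RVM uses can be confirmed from the command line (the exact ruby version string is whatever your RVM install reports):

```shell
# Activate the RVM ruby in question, then ask RubyGems where
# gems are installed; IntelliJ's "Gems bin directory" for the
# SDK should point at this path.
rvm use ruby-2.3.0
gem env gemdir
# e.g. /Users/[username]/.rvm/gems/ruby-2.3.0
```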

Finagle Filter path with "andThen"

2015-12-11

By way of passing on what I am continuing to learn about chained Finagle Filters in Scala:

Filters can be chained together with the “andThen” function, which determines the order in which the Request (input) is handed off to the next filter. When we normally think about filters, we expect the filter to act on the Request (like a sieve, for example), and indeed it can. However, once the Request reaches the end of the Filter chain, it gets turned into a Response (output), which in turn can also be filtered as it is passed back toward the beginning of the Filter chain.

Here is a ScalaTest that shows how both the inward and outward paths can be used to modify the request and the response, as well as a short-circuit in Filter3 that prevents Filter4 from being run (you can change the condition to true to see the path through all four filters). The example Finagle Service here simply takes an initial value and concatenates the request to make a Response:

import com.twitter.finagle.{Service, SimpleFilter}
import com.twitter.util.Future
import org.scalatest._

class StringService(response: String) extends Service[String, String] {
  override def apply(request: String): Future[String] = Future.value(response + ":" + request)
}

object StringFilter1 extends SimpleFilter[String, String] {
  override def apply(request: String, service: Service[String, String]): Future[String] = {
    val requestUpdate = request.concat(" » enter-1")
    service(requestUpdate).map(futureString => futureString.concat(" » exit-1"))
  }
}

object StringFilter2 extends SimpleFilter[String, String] {
  override def apply(request: String, service: Service[String, String]): Future[String] = {
    val requestUpdate = request.concat(" » enter-2")
    service(requestUpdate).map(futureString => futureString.concat(" » exit-2"))
  }
}

object StringFilter3 extends SimpleFilter[String, String] {
  override def apply(request: String, service: Service[String, String]): Future[String] = {
    val requestUpdate = request.concat(" » enter-3")
    val myCondition = false
    if (myCondition) {
      service(requestUpdate).map(futureString => futureString.concat(" » exit-3"))
    } else {
      Future(requestUpdate.concat(" » short-circuit-3"))
    }
  }
}

object StringFilter4 extends SimpleFilter[String, String] {
  override def apply(request: String, service: Service[String, String]): Future[String] = {
    val requestUpdate = request.concat(" » enter-4")
    service(requestUpdate).map(futureString => futureString.concat(" » exit-4"))
  }
}

class FilterStackTest extends FlatSpec with Matchers {
  "A Filter" should "Operate like a Stack" in {
    val testService = new StringService("Service A")
    val testFilter = StringFilter1 andThen StringFilter2 andThen StringFilter3 andThen StringFilter4

    println(testFilter("start", testService))
  }
}

The console output is:
Promise@71098046(state=Done(Return(start » enter-1 » enter-2 » enter-3 » short-circuit-3 » exit-2 » exit-1)))

And when val myCondition = true:

Promise@380962452(state=Done(Return(Service A:start » enter-1 » enter-2 » enter-3 » enter-4 » exit-4 » exit-3 » exit-2 » exit-1)))

Google API Fusion Table permissioning

2013-06-04

I recently worked on a project where I needed to update an application that leveraged Google Fusion Tables. The Google API changed significantly, and the application did not work anymore. While I found a good Fusion Table coding example of how to get the Java code changed properly, I had a lot of difficulty getting the permissioning set up.

Here is a brief summary of how I got it to work, in the hopes that it might help others who are having similar problems:

Connect the table to the Fusion Table application

  • If the table you are interested in is not already connected to Fusion Tables, click it in your Google Drive, and then click the Connect button.

Turn on the Fusion Table API Service

  • Open the Google API Console
  • Create a new project if you need to
  • In Services, turn on Fusion Tables API

Set up a Service Account

  • In the Google API Console, open API Access and click the Create an OAuth 2.0 client ID button
  • Enter a Product Name and click Next
  • Click the Service Account radio button, and then Create Client ID
  • Download the key file into your project, and rename it to whatever is appropriate for you to use in your application
  • You will also need the "Email Address", which is referred to as the "Application ID" within the API

Set permissions on the table file

This one was really difficult to figure out. If you need to do INSERTs or DELETEs into the Fusion Table, then you will need to set "writer" permissions for the Service Account. If you only need to SELECT from your application, then you can skip this step, of course.

  • Open the Google Drive SDK Permissions Page
  • Turn on the Authorize requests using OAuth 2.0 toggle (you should be prompted to authorize)
  • Enter the following information:
    • fileId: [the fusion table ID]
    • role: writer
    • type: user
    • value: ["Email Address" from the Console]

Java code

That should be it. Following the example code, you will set up a credential:

credential = new GoogleCredential.Builder()
    .setTransport(HTTP_TRANSPORT)
    .setJsonFactory(JSON_FACTORY)
    .setServiceAccountId(config.getAccountId())
    .setServiceAccountScopes(Collections.singleton(FusiontablesScopes.FUSIONTABLES))
    .setServiceAccountPrivateKeyFromP12File(keyFile)
    .build();

Make your Fusion Table object:

fusiontables = new Fusiontables.Builder(HTTP_TRANSPORT, JSON_FACTORY, credential).build();

Run your various SQL statements:

response = fusiontables.query().sql(insertSql).execute();

Hope that helps someone!
