JPA callbacks with Hibernate’s SessionFactory and no EntityManager

I wanted to use JPA callback annotations, such as @PostLoad
and @PostUpdate, but realized that those annotations do not work unless Hibernate is configured to use a JPA EntityManager. My project uses Hibernate’s SessionFactory, so these annotations are not available to me out of the box.

So, how do we configure Hibernate to get the best of both worlds? Here’s how I did it in Hibernate 5. Hibernate 4 can use a very similar approach, but the code is slightly different (just grab it from Hibernate 4’s org.hibernate.jpa.event.spi.JpaIntegrator).

Luckily, Hibernate’s IntegratorServiceImpl uses the java.util.ServiceLoader API, so we can specify an additional list of org.hibernate.integrator.spi.Integrator implementations we want the SessionFactory to use.

All we need to do is specify a service provider for org.hibernate.integrator.spi.Integrator in:

META-INF/services/org.hibernate.integrator.spi.Integrator:

# This allows us to use JPA-style annotation on entities, such as @PostLoad
our.custom.JpaAnnotationsIntegrator

You will also need to ensure that the ‘hibernate-entitymanager‘ jar of the appropriate version is on your classpath.
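If you build with Gradle, for example, that is a one-line dependency (the version shown is illustrative; match it to your Hibernate version):

dependencies {
	compile 'org.hibernate:hibernate-entitymanager:5.2.10.Final'
}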

our.custom.JpaAnnotationsIntegrator (taken from org.hibernate.jpa.event.spi.JpaIntegrator):

package our.custom;

import org.hibernate.annotations.common.reflection.ReflectionManager;
import org.hibernate.boot.Metadata;
import org.hibernate.boot.internal.MetadataImpl;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;
import org.hibernate.integrator.spi.Integrator;
import org.hibernate.jpa.event.internal.core.JpaPostDeleteEventListener;
import org.hibernate.jpa.event.internal.core.JpaPostInsertEventListener;
import org.hibernate.jpa.event.internal.core.JpaPostLoadEventListener;
import org.hibernate.jpa.event.internal.core.JpaPostUpdateEventListener;
import org.hibernate.jpa.event.internal.jpa.CallbackBuilderLegacyImpl;
import org.hibernate.jpa.event.internal.jpa.CallbackRegistryImpl;
import org.hibernate.jpa.event.spi.jpa.CallbackBuilder;
import org.hibernate.jpa.event.spi.jpa.ListenerFactory;
import org.hibernate.jpa.event.spi.jpa.ListenerFactoryBuilder;
import org.hibernate.mapping.PersistentClass;
import org.hibernate.service.spi.SessionFactoryServiceRegistry;

/**
 * This integrator allows us to use JPA-style post op annotations on Hibernate entities.
 * <p>
 * This integrator is loaded by <code>org.hibernate.integrator.internal.IntegratorServiceImpl</code> from
 * the <code>META-INF/services/org.hibernate.integrator.spi.Integrator</code> file.
 * <p>
 * <b>Note</b>: This code is lifted directly from <code>org.hibernate.jpa.event.spi.JpaIntegrator</code>.
 *
 * @author Val Blant
 */
public class JpaAnnotationsIntegrator implements Integrator {
	private ListenerFactory jpaListenerFactory;
	private CallbackBuilder callbackBuilder;
	private CallbackRegistryImpl callbackRegistry;

	@Override
	public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) {
		final EventListenerRegistry eventListenerRegistry = serviceRegistry.getService( EventListenerRegistry.class );

		this.callbackRegistry = new CallbackRegistryImpl();

		// post op listeners
		eventListenerRegistry.prependListeners( EventType.POST_DELETE, new JpaPostDeleteEventListener(callbackRegistry) );
		eventListenerRegistry.prependListeners( EventType.POST_INSERT, new JpaPostInsertEventListener(callbackRegistry) );
		eventListenerRegistry.prependListeners( EventType.POST_LOAD, new JpaPostLoadEventListener(callbackRegistry) );
		eventListenerRegistry.prependListeners( EventType.POST_UPDATE, new JpaPostUpdateEventListener(callbackRegistry) );

		// handle JPA "entity listener classes"...
		final ReflectionManager reflectionManager = ( (MetadataImpl) metadata )
				.getMetadataBuildingOptions()
				.getReflectionManager();

		this.jpaListenerFactory = ListenerFactoryBuilder.buildListenerFactory( sessionFactory.getSessionFactoryOptions() );
		this.callbackBuilder = new CallbackBuilderLegacyImpl( jpaListenerFactory, reflectionManager );
		for ( PersistentClass persistentClass : metadata.getEntityBindings() ) {
			if ( persistentClass.getClassName() == null ) {
				// we can have non java class persisted by hibernate
				continue;
			}
			callbackBuilder.buildCallbacksForEntity( persistentClass.getClassName(), callbackRegistry );
		}
	}

	@Override
	public void disintegrate(SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) {
		if ( callbackRegistry != null ) {
			callbackRegistry.release();
		}
		if ( callbackBuilder != null ) {
			callbackBuilder.release();
		}
		if ( jpaListenerFactory != null ) {
			jpaListenerFactory.release();
		}
	}

}
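
With the integrator registered, JPA-style callbacks now fire on entities mapped through the plain SessionFactory. A minimal sketch (the entity and its fields are made up for illustration):

import javax.persistence.PostLoad;
import javax.persistence.PostUpdate;

public class Account {

	private transient boolean dirty;

	@PostLoad
	public void afterLoad() {
		dirty = false;  // runs right after Hibernate hydrates the entity
	}

	@PostUpdate
	public void afterUpdate() {
		dirty = false;  // runs after a successful UPDATE
	}
}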

How to use JSF libraries without packaging them as JARs during development

Introduction

The JSF spec allows us to place JSF configuration documents, such as faces-config.xml and *.taglib.xml, either inside WEB-INF/ of our WAR, or in META-INF/ of JARs included in WEB-INF/lib of our WAR. JSF annotated classes can live either in WEB-INF/classes, or in the included JARs.

But what if we want all these things to work properly without having to package all our JSF dependency projects as jars? Naturally, we never want to deploy like that, but during development it would be really nice, b/c then we could make changes to any code inside our JSF dependencies with full hot-swap support, without having to package anything or restart the application server! Unfortunately, this is not possible with JSF out-of-the-box…

This article describes a technique I used to work around these limitations of JSF, thus gaining the ability to make direct modifications to my JSF libraries without restarting or repackaging, and achieving the state of coding zen :).

This solution was tested with Mojarra JavaServer Faces 2.1.7, and it is intended to work with Eclipse workspaces. There would probably be small differences in the implementation for other configurations, but the general approach should work everywhere.

Solution

We have 3 problems to solve:

1) Picking up JSF Annotated Classes from other JSF projects in the workspace

This turned out to be the hardest problem to solve.

Normally, JSF annotated classes (such as @FacesComponent, @FacesConverter, @FacesRenderer, etc.) must be inside a JAR, or in /WEB-INF/classes/. What we need is to pick up annotated classes from other Eclipse projects we depend on, which means that they need to be loaded from our Web Project’s classpath.

There is no way to extend JSF to do this, b/c everything inside AnnotationScanTask and ProvideMetadataToAnnotationScanTask is hard coded. In order to make the necessary changes, we’ll need some AspectJ magic.

The idea is to use Load Time Weaving to advise the call to JavaClassScanningAnnotationScanner.getAnnotatedClasses() and merge results from our own annotation scan with the results coming from the stock JSF implementation.

This can be achieved with a simple aspect, and some code to scan for annotated classes, which is the first part of our solution. I am using Google Reflections here to do the annotation scan inside the packages where I know my JSF libraries will be. Modify this for your own needs.

JsfConfigurationShimForEclipseProjectsAspect.aj:

/**
 * This is an AspectJ shim used to find more JSF annotated classes during the setup process.
 * Normally, JSF configuration and JSF annotations are only processed on paths inside our own WAR, and from other jars.
 * However, in development mode we are interested in linking to DryDock dependencies as local Eclipse projects, rather than jars.
 * This shim provides a missing extension point, which scans the DryDock project classpath for JSF annotations.
 * <p>
 * The other part of this solution is found in <code>EclipseProjectJsfResourceProvider</code>.
 * <p>
 * Since we are weaving JSF, Load Time Weaving is required, which means that this aspect must be declared in <code>META-INF/aop.xml</code>.
 * Also, Tomcat must be started with:
 * <pre>
 *  -javaagent:/fullpath/aspectjweaver-version.jar -classpath /fullpath/aspectjrt-version.jar
 * </pre>
 *
 * @see EclipseProjectJsfResourceProvider
 *
 * @author Val Blant
 */
public aspect JsfConfigurationShimForEclipseProjectsAspect {

	pointcut sortedFacesDocumentsPointcut() : execution(* ConfigManager.sortDocuments(..));
	after() returning (DocumentInfo[] sortedFacesDocuments): sortedFacesDocumentsPointcut() {
		System.out.println("\n ====== Augmented list of JSF config files detected with JsfConfigurationShimForEclipseProjectsAspect ====== ");
		for ( DocumentInfo doc : sortedFacesDocuments ) {
			System.out.println(doc.getSourceURI().toString());
		}
		System.out.println("\n");
	}

	pointcut getAnnotatedClassesPointcut(Set<URI> urls) : execution(* JavaClassScanningAnnotationScanner.getAnnotatedClasses(Set<URI>)) && args(urls);
	Map<Class<? extends Annotation>, Set<Class<?>>> around(Set<URI> urls): getAnnotatedClassesPointcut(urls)  {

		Map<Class<? extends Annotation>, Set<Class<?>>> oldMap = proceed(urls);
		Map<Class<? extends Annotation>, Set<Class<?>>> newMap = EclipseJsfDryDockProjectAnnotationScanner.getAnnotatedClasses();
		Map<Class<? extends Annotation>, Set<Class<?>>> mergedMap = new AnnotatedJsfClassMerger().merge(oldMap, newMap);

		return mergedMap;

	}
}

EclipseJsfDryDockProjectAnnotationScanner.java:

/**
 * Scans DryDock project classpath to find any JSF annotated classes. This scanner is activated by 
 * the <code>JsfConfigurationShimForEclipseProjectsAspect</code>, which requires Load Time Weaving.
 * <p>
 * This class should only be used in development! It is part of a solution that allows us to run the app
 * against locally imported DryDocked projects.
 *
 * @see JsfConfigurationShimForEclipseProjectsAspect
 * @see EclipseProjectJsfResourceProvider
 *
 * @author Val Blant
 */
public class EclipseJsfDryDockProjectAnnotationScanner extends AnnotationScanner {
	
	private static final Log log = LogFactory.getLog(EclipseJsfDryDockProjectAnnotationScanner.class);
	
	
	
	private static Reflections reflections = new Reflections( 
			new ConfigurationBuilder()
				.addUrls(ClasspathHelper.forPackage("ca.gc.agr.common.web.jsf"))
				.addUrls(ClasspathHelper.forPackage("ca.ibm.web"))
	);


	public EclipseJsfDryDockProjectAnnotationScanner(ServletContext sc) {
		super(sc);
	}
	
	
	public static Map<Class<? extends Annotation>, Set<Class<?>>> getAnnotatedClasses() {
		Map<Class<? extends Annotation>, Set<Class<?>>> annotatedClassMap = new HashMap<>();
		
		for ( Class<? extends Annotation> annotation : FACES_ANNOTATION_TYPE ) {
			Set<Class<?>> annotatedClasses = reflections.getTypesAnnotatedWith(annotation);
			
			if ( !annotatedClasses.isEmpty() ) {
				Set<Class<?>> classes = annotatedClassMap.get(annotation);
				if ( classes == null ) {
					classes = new HashSet<Class<?>>();
					annotatedClassMap.put(annotation, classes);
				}
				
				classes.addAll(annotatedClasses);
			}
		}
		
		log.info(" ====== Found additional JSF annotated classes from Eclipse classpath ====== \n" + annotatedClassMap);
		
		return annotatedClassMap;
	}

	@Override
	public Map<Class<? extends Annotation>, Set<Class<?>>> getAnnotatedClasses(Set<URI> urls) {
		return getAnnotatedClasses();
	}

}

AnnotatedJsfClassMerger.java:

/**
 * Merges 2 maps of JSF annotated classes into one map.
 * 
 * This class should only be used in development! It is part of a solution that allows us to run the app
 * against locally imported DryDocked projects.
 * 
 * @see JsfConfigurationShimForEclipseProjectsAspect
 * @see EclipseProjectJsfResourceProvider
 *
 * @author Val Blant
 */
public class AnnotatedJsfClassMerger {
	
	public Map<Class<? extends Annotation>, Set<Class<?>>> merge(
				Map<Class<? extends Annotation>, Set<Class<?>>> oldMap,
				Map<Class<? extends Annotation>, Set<Class<?>>> newMap) {
		
		
		Set<Class<? extends Annotation>> annotations = new HashSet<>();
		annotations.addAll(oldMap.keySet());
		annotations.addAll(newMap.keySet());
		
		Map<Class<? extends Annotation>, Set<Class<?>>> mergedMap = new HashMap<>();
		for ( Class<? extends Annotation> annotation : annotations ) {
			Set<Class<?>> classes = new HashSet<>();
			
			Set<Class<?>> oldClasses = oldMap.get(annotation);
			Set<Class<?>> newClasses = newMap.get(annotation);
			
			if ( oldClasses != null ) classes.addAll(oldClasses);
			if ( newClasses != null ) classes.addAll(newClasses);
			
			mergedMap.put(annotation, classes);
		}
		
		return mergedMap;
	}

}

Next, we need to properly set up the Load Time Weaver.

First we create src/main/resources/META-INF/aop.xml in our Web Project.

META-INF/aop.xml:

<!--
  This file is read by the AspectJ weaver java agent. Make sure you specify the following
  on the server startup command line:

      -javaagent:/fullpath/AgriShare/aspectjweaver-version.jar -classpath /fullpath/AgriShare/aspectjrt-version.jar

  Also, make sure that you actually compile the aspects specified below. Eclipse can't do it!
  You'll have to use Gradle for that.
-->

<aspectj>
 <aspects>
   <aspect name="ca.gc.pinss.web.jsf.drydock.eclipse.JsfConfigurationShimForEclipseProjectsAspect"/>
 </aspects>
 <weaver options="-verbose -showWeaveInfo -XnoInline">
 	<include within="com.sun.faces.config.*"/>
 </weaver>
</aspectj>

Now we need to make sure that we start our application with the AspectJ weaver.

  • Append the following to your Application Server’s startup JVM parameters:
-javaagent:/home/val/.gradle/caches/modules-2/files-2.1/org.aspectj/aspectjweaver/1.7.4/d9d511e417710492f78bb0fb291a629d56bf4216/aspectjweaver-1.7.4.jar

Note: Use the correct path for your machine!

  • Make sure that this jar is first on your Application Server’s classpath:
/home/val/.gradle/caches/modules-2/files-2.1/org.aspectj/aspectjrt/1.7.4/e49a5c0acee8fd66225dc1d031692d132323417f/aspectjrt-1.7.4.jar

Note: Use the correct path for your machine!

And that’s it – now your annotated JSF classes will be picked up directly from the project classpath!

To make sure that it is working, look for messages from EclipseJsfDryDockProjectAnnotationScanner in the log. It will have the following heading:

 ====== Found additional JSF annotated classes from Eclipse classpath ======

You should also see some messages from the AspectJ weaver:

[WebappClassLoader@6426a58b] weaveinfo Join point 'method-execution(
com.sun.faces.config.DocumentInfo[] com.sun.faces.config.ConfigManager.sortDocuments(com.sun.faces.config.DocumentInfo[], com.sun.faces.config.FacesConfigInfo))'
in Type 'com.sun.faces.config.ConfigManager' (ConfigManager.java:503) 
advised by afterReturning advice from 'ca.gc.pinss.web.jsf.drydock.eclipse.JsfConfigurationShimForEclipseProjectsAspect' (JsfConfigurationShimForEclipseProjectsAspect.aj:36)
[WebappClassLoader@6426a58b] weaveinfo Join point 'method-execution(
java.util.Map com.sun.faces.config.JavaClassScanningAnnotationScanner.getAnnotatedClasses(java.util.Set))' 
 in Type 'com.sun.faces.config.JavaClassScanningAnnotationScanner' (JavaClassScanningAnnotationScanner.java:121) 
 advised by around advice from 
'ca.gc.pinss.web.jsf.drydock.eclipse.JsfConfigurationShimForEclipseProjectsAspect' (JsfConfigurationShimForEclipseProjectsAspect.aj:45)

2) Picking up Taglibs from other JSF Projects in the Workspace

This one is easy in comparison.

All we need to do here is to specify an additional custom FacesConfigResourceProvider.

EclipseProjectJsfResourceProvider.java:

/**
 * This custom resource provider is used for finding JSF Resources located in other Eclipse Projects, rather 
 * than jars. JSF spec does not support this, but it is very useful for running DryDocked projects inside the local Eclipse workspace.
 * <p>
 * In order to enable this resource provider, this class's name must be specified in 
 * <code>META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider</code>
 * <p>
 * <b>NOTE:</b> The Gradle build will not include the com.sun.faces.spi.FacesConfigResourceProvider file, b/c we never want this 
 * customization to be deployed - it's for development only.
 * 
 * @see JsfConfigurationShimForEclipseProjectsAspect
 *
 * @author Val Blant
 */
public class EclipseProjectJsfResourceProvider implements FacesConfigResourceProvider {
	
	private static final Log log = LogFactory.getLog(EclipseProjectJsfResourceProvider.class);
	
	
	
	@Override
	public Collection<URI> getResources(ServletContext context) {
		
		List<URI> unsortedResourceList = new ArrayList<URI>();

        try {
            for (URI uri : loadURLs(context)) {
            	if ( !uri.toString().contains(".jar!/") ) {
                   unsortedResourceList.add(0, uri);
            	}
            }
        } catch (IOException e) {
            throw new FacesException(e);
        }

        List<URI> result = new ArrayList<>();
        
        // Load the unsorted resources
        result.addAll(unsortedResourceList);
        
		log.info(" ====== Found additional JSF configuration resources on Eclipse classpath ====== \n" + result);

        return result;
	}
	
	
    private Collection<URI> loadURLs(ServletContext context) throws IOException {

        Set<URI> urls = new HashSet<URI>();
        try {

// Turns out these are already grabbed by MetaInfFacesConfigResourceProvider, so we don't need to do it again	
//            for (Enumeration<URL> e = Util.getCurrentLoader(this).getResources("META-INF/faces-config.xml"); e.hasMoreElements();) {
//                    urls.add(new URI(e.nextElement().toExternalForm()));
//            }
            URL[] urlArray = Classpath.search("META-INF/", ".taglib.xml");
            for (URL cur : urlArray) {
                urls.add(new URI(cur.toExternalForm()));
            }
        } catch (URISyntaxException ex) {
            throw new IOException(ex);
        }
        return urls;
        
    }
	

}

To register this provider, we add the following into our Web Project:

src/main/resources/META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider:

ca.gc.agr.common.web.jsf.drydock.eclipse.EclipseProjectJsfResourceProvider

Note: Use the correct package name for your project!

3) Picking up Facelet Includes and Resources from other JSF Projects in the Workspace

This one is also easy.

We create a custom Facelets ResourceResolver.

ClasspathResourceResolver.java:

/**
 * This is a special Facelets ResourceResolver, which allows us to ui:include resources from
 * the classpath, rather than from jars. This is necessary for the Incubator to see stuff
 * in other projects under META-INF/resources/
 * 
 * @author Val Blant
 */
public class ClasspathResourceResolver extends DefaultResourceResolver {
	/**
	 * First check the context root, then the classpath
	 */
    @Override
    public URL resolveUrl(String path) {
        URL url = super.resolveUrl(path);
        if (url == null) {
            
            /* classpath resources don't start with /, so this must be a jar include. Convert it to classpath include. */
            if (path.startsWith("/")) {
                path = "META-INF/resources" + path;
            }
            url = Thread.currentThread().getContextClassLoader().getResource(path);
        }
        return url;
    }
}

Now we register it in our web.xml:

	<!-- This allows us to "ui:include" resources from the classpath, rather than from jars,
	     which is important for working with DryDocked projects directly from our Eclipse workspace -->
	<context-param>
		<param-name>facelets.RESOURCE_RESOLVER</param-name>
		<param-value>ca.gc.agr.common.web.jsf.ClasspathResourceResolver</param-value>
	</context-param>	

And that’s it! We now have everything we need to load all JSF resources from Eclipse projects instead of JARs.

Eclipse Project Setup

All that remains is to reconfigure the Eclipse workspace to start using our new capabilities.

  1. Import your JSF library projects and all their dependencies into your Eclipse workspace together with the Web Application you are working on.
  2. Go to all projects that have dependencies on common component jars, delete the jar dependencies, and replace them with project dependencies that are now in your workspace.
  3. Get rid of any test related project exports from the library projects that might interfere with the running of the app. This may not be necessary depending on your configuration.
  4. Configure your Application Server classpath to use the Eclipse Projects instead of JARs.
  5. Configure your build scripts to turn off these modifications, so they don’t get deployed anywhere past your development machine. This is as simple as not including META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider and META-INF/aop.xml in your WAR.

And that’s it.

How to Save HDS Flash Streams from any web page


I came across a Flash video that I was not able to save with any Video Downloader app, including the ones that actually sniff traffic on your network adapter, such as Replay Media Catcher and many others.

Turns out that this particular page was using the new Adobe HTTP Dynamic Streaming (HDS) technology. With HDS, the original MP4 or FLV file is split up into many F4F segments, which are then served to the media player on the page one after the other, so there is no single video file to download like with most other video streaming technologies.

You can easily check whether HDS is being used by watching the video in Firefox.

  1. Clear Firefox cache (Tools -> Options -> Network, Clear Cached Web Content, Clear User Data)
  2. Load the page with the video
  3. Open a new tab and browse to about:cache?storage=disk
  4. Search for a bunch of files that have the word ‘Frag’ in them. They’ll look something like this:
http://ams-vp11.9c9media.com/hds-vod/ae/2015-01-29/3FA6DB15557BA5F0/CTVNews-546418-29-WPG-WEBPARKOUR08-SOT-Adaptive_08.mp4Seg1-Frag39 
http://ams-vp11.9c9media.com/hds-vod/ae/2015-01-29/3FA6DB15557BA5F0/CTVNews-546418-29-WPG-WEBPARKOUR08-SOT-Adaptive_08.mp4Seg1-Frag38 
http://ams-vp11.9c9media.com/hds-vod/ae/2015-01-29/3FA6DB15557BA5F0/CTVNews-546418-29-WPG-WEBPARKOUR08-SOT-Adaptive_08.mp4Seg1-Frag37 
http://ams-vp11.9c9media.com/hds-vod/ae/2015-01-29/3FA6DB15557BA5F0/CTVNews-546418-29-WPG-WEBPARKOUR08-SOT-Adaptive_08.mp4Seg1-Frag36

These are all the F4F fragments of the video. You could download them all and combine them together, but that’s not the best way to do this.

There is a script called AdobeHDS.php, which can automate the download process for you if you provide it with the F4M Manifest for the stream. You can download the script from https://github.com/K-S-V/Scripts

This manifest file is easy to obtain, b/c it is delivered via a plain GET request that is issued before the video starts playing. To find the URL:

  1. Open Firefox Console (Ctrl+Shift+K) or Tools -> Web Developer -> Web Console
  2. Make sure that “Net” filter is selected
  3. Clear the Console
  4. Open the video page and let the video load
  5. In the Filter text box type “f4m” and you should now see a few F4M requests. You want the first one, which will probably be called “manifest.f4m“. Mine looked like this:
GET http://capi.9c9media.com/destinations/ctvnews_web/platforms/desktop/contents/540901/contentpackages/546418/stacks/1130329/manifest.f4m

Now just run the script with the manifest URL and you should get the re-combined flv file:

$ php AdobeHDS.php --delete --manifest "http://capi.9c9media.com/destinations/ctvnews_web/platforms/desktop/contents/540901/contentpackages/546418/stacks/1130329/manifest.f4m"
 KSV Adobe HDS Downloader

Processing manifest info.... 
Quality Selection: 
 Available: 2048 1856 1536 1280 896 640 480 299
 Selected : 2048 
Fragments Total: 55, First: 1, Start: 1, Parallel: 8 
Downloading 55/55 fragments 
Found 55 fragments 
Finished

You should now have an FLV file waiting for you in the script directory.

For Mac Users

Posting some info from a comment by Eric L. Pheterson below:

To add a few more baby steps (for Mac users) :

  • When you view the AdobeHDS.php file at Sourceforge, copy/paste it into a file, and name it AdobeHDS.php
  • PHP should be installed already on your Mac
  • A dependency of AdobeHDS is not installed, so in Terminal run :
brew install homebrew/php/php55-mcrypt
  • After installing mcrypt, you must open a new terminal window or tab to use it
  • If you don’t have brew installed, in Terminal run :
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  • After installing brew, run
brew update

How to build a WebRTC Controlled RC Car


The Creeper Drone was created from a cheap RC truck, which I modified with an Android phone and a Raspberry Pi so that it can be driven over a WiFi network from any browser that supports WebRTC. The Creeper transmits a video stream, allowing the driver to control it from a remote location. Bi-directional audio is also supported, giving the driver the ability to converse through the Creeper.

This post was more convenient to do as an Instructable, so you can find all the details about the hardware and software, including source code and 3D Models here:

http://www.instructables.com/id/WebRTC-Creeper-Drone-Browser-Controlled-RC-Car/

Video: https://www.youtube.com/watch?v=fUkK5v_VtI0

Hibernate XML Mapping Fragment Re-use

Hibernate mapping files are a frequent source of code duplication. For example, let’s say that all your database tables contain the same set of audit columns. Why should you have to repeat that declaration in every single mapping file? Or maybe you have similarly structured tables with different names, which is also a good opportunity for reuse.

It is possible to reuse the same Hibernate XML mapping snippet from other mapping files by utilizing XML entities.

XML snippet in ca/gc/agr/common/jms/domain/portal/PortalEventMessage.xml:

<!-- This fragment is included from another hbm file -->

	<version name="lockSeqNum" type="int" column="LOCK_SEQ_NUM" />
	
	<property name="partyId" type="string" column="PARTY_ID" length="20" not-null="true" />
	<property name="fromAppNameEnglish" type="string" column="SOURCE_SYSTEM_NAME_ENG" length="100" not-null="true"  />        
	<property name="fromAppNameFrench" type="string" column="SOURCE_SYSTEM_NAME_FR" length="100" not-null="true" />
	
        ... etc ...
	
	<property name="createdDtm" type="timestamp" column="CREATED_DTM" />
	<property name="createdUserOid" type="long" column="CREATED_USER_OID" />
	<property name="updatedDtm" type="timestamp" column="UPDATED_DTM" />
	<property name="updatedUserOid" type="long" column="UPDATED_USER_OID" />

Hibernate mapping:

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping SYSTEM "http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd" [
    <!ENTITY commonMapping SYSTEM "classpath://ca/gc/agr/common/jms/domain/portal/PortalEventMessage.xml">
    ]>

<hibernate-mapping>

    <class name="ca.gc.agr.common.jms.domain.portal.PortalEventMessage" table="PIN_PORTAL_EVENT" dynamic-update="true">

        <id name="oid" type="long" column="PORTAL_EVENT_OID" unsaved-value="0">
            <generator class="sequence">
                <param name="sequence">pin_portal_event_seq</param>
            </generator>
        </id>

		&commonMapping;

    </class>
    
</hibernate-mapping>

Make sure that the XML snippet is on the classpath and you are done.

Secure NFS Shares on Lenovo ix2-dl NAS


Introduction

ix2-dl offers many ways to connect to it, but none of them can provide such a seamless experience for Linux computers as NFS:

[Screenshot: the ix2-dl’s connection protocol options]

The problem with NFS is that without a Domain Controller somewhere on the LAN that can provide Kerberos authentication, NFS is horribly insecure. All you have to do to infiltrate the storage is somehow connect to the LAN. Once you are in, it is trivial to steal everything from unauthenticated NFS shares.

Samba 4

It is possible to set up Samba4 as a Domain Controller that will provide Active Directory and Kerberos services:

http://sector7e.com/setup-of-samba4-4-10-on-ubuntu-server-12-04-lts-and-13-10/
http://wiki.samba.org/index.php/Samba4/HOWTO
https://help.ubuntu.com/community/Kerberos

Unfortunately, the setup procedure is not trivial, and it would have complicated my infrastructure more than I was willing to accept.

Windows File Sharing (CIFS)

CIFS shares are attractive, b/c they have built-in password authentication. I tried using CIFS mounts, but quickly rejected the idea b/c the shares were much slower than NFS, did not allow symlinks, and did not allow fine-grained ownership control of files under one share.

OpenVPN

This ended up being the best and simplest option that allows me to have complete and seamless integration of my shares and best possible security.

The idea is to completely turn off all security on the NFS shares (even allowing no_root_squash), and then export the shares exclusively over the VPN subnet. Here’s an example, with an additional read-only export for the local wired net:

[Screenshot: NFS share exports on the ix2-dl]
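
In /etc/exports terms, the setup looks roughly like this (a sketch; the share paths match the fstab entries below, 192.168.129.0/27 is the VPN subnet from the OpenVPN server config, and 192.168.1.0/24 stands in for the local wired net):

/nfs/music     192.168.129.0/27(rw,no_root_squash,async) 192.168.1.0/24(ro)
/nfs/video     192.168.129.0/27(rw,no_root_squash,async) 192.168.1.0/24(ro)
/nfs/work      192.168.129.0/27(rw,no_root_squash,async)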

OpenVPN Setup

Before you can follow these instructions, you must first enable SSH access to the NAS, connect to package repositories and tie into the boot process. All of this is described in my previous posts:

https://n1njahacks.wordpress.com/2014/02/25/ssh-access-to-lenovo-ix2-dl-nas/
https://n1njahacks.wordpress.com/2014/02/27/setting-up-mysql-server-on-lenovo-ix2-dl-nas/

Install OpenVPN package and dependencies:

# ipkg install openvpn

Open /opt/etc/init.d/S20openvpn:

  • Comment out the tunnel driver insertion and the “return 0” line. It’s important to make sure that this script does not try to insert the module, b/c the tun module is already compiled into the kernel on this distro (see the sketch after this list)
  • Specify the correct file name for --config (lan-server.conf)
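
For reference, after these edits the relevant part of S20openvpn might look something like this (a sketch only; the stock script’s exact contents vary by package version):

#!/bin/sh

# The tun module is compiled into the kernel on this distro,
# so the script must NOT try to insert it or bail out early:
#insmod tun
#return 0

/opt/sbin/openvpn --config /opt/etc/openvpn/lan-server.conf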

Add the startup script to /etc/rc.local:

# Start OpenVPN
echo 'Starting OpenVPN server...'
/opt/etc/init.d/S20openvpn

Note: in order for this to work, you must first modify the distro’s boot process as described in the posts linked above.

OpenVPN Server Configuration

I will provide my config as an example.

/opt/etc/openvpn/lan-server.conf:

# Configure server mode and supply a VPN subnet
# for OpenVPN to draw client addresses from.
# The server will take 192.168.129.1 for itself,
# the rest will be made available to clients.
# Each client will be able to reach the server
# on 192.168.129.1
#
server 192.168.129.0 255.255.255.224

daemon

# Which TCP/UDP port should OpenVPN listen on?
port 1194

# TCP or UDP server?
;proto tcp
proto udp

# By increasing the MTU size of the tun adapter and by disabling
# OpenVPN's internal fragmentation routines the throughput can be
# increased quite dramatically. The reason behind this is that by
# feeding larger packets to the OpenSSL encryption and decryption
# routines the performance will go up. The second advantage of not
# internally fragmenting packets is that this is left to the operating
# system and to the kernel network device drivers.
tun-mtu 9000
fragment 0
mssfix 0

# "dev tun" will create a routed IP tunnel,
dev tun0

# SSL/TLS root certificate (ca), certificate
# (cert), and private key (key).  Each client
# and the server must have their own cert and
# key file.  The server and all clients will
# use the same ca file.
#
# See the "easy-rsa" directory for a series
# of scripts for generating RSA certificates
# and private keys.  Remember to use
# a unique Common Name for the server
# and each of the client certificates.
#
# Any X509 key management system can be used.
# OpenVPN can also use a PKCS #12 formatted key file
# (see "pkcs12" directive in man page).
ca /etc/ssl/certs/VACE-LAN-CA-Chain.crt
cert /etc/ssl/certs/nas-lan-server.crt
key /etc/ssl/private/nas.key

# Diffie hellman parameters.
# Generate your own with:
#   openssl dhparam -out dh1024.pem 1024
dh /etc/ssl/private/dh1024.pem

# Maintain a record of client  virtual IP address
# associations in this file.  If OpenVPN goes down or
# is restarted, reconnecting clients can be assigned
# the same virtual IP address from the pool that was
# previously assigned.
ifconfig-pool-persist /opt/var/openvpn/lan-ipp.txt

# The keepalive directive causes ping-like
# messages to be sent back and forth over
# the link so that each side knows when
# the other side has gone down.
# Ping every 10 seconds, assume that remote
# peer is down if no ping received during
# a 120 second time period.
keepalive 10 120

# Enable compression on the VPN link.
# If you enable it here, you must also
# enable it in the client config file.
comp-lzo

# The maximum number of concurrently connected
# clients we want to allow.
max-clients 3

# It's a good idea to reduce the OpenVPN
# daemon's privileges after initialization.

# The persist options will try to avoid
# accessing certain resources on restart
# that may no longer be accessible because
# of the privilege downgrade.
persist-key
persist-tun

# Output a short status file showing
# current connections, truncated
# and rewritten every minute.
status /opt/var/openvpn/lan-status.log

# By default, log messages will go to the syslog (or
# on Windows, if running as a service, they will go to
# the "\Program Files\OpenVPN\log" directory).
# Use log or log-append to override this default.
# "log" will truncate the log file on OpenVPN startup,
# while "log-append" will append to it.  Use one
# or the other (but not both).
;log         openvpn.log
log-append  /opt/var/openvpn/lan-server.log
writepid    /opt/var/openvpn/lan-server.pid

# Set the appropriate level of log
# file verbosity.
#
# 0 is silent, except for fatal errors
# 4 is reasonable for general usage
# 5 and 6 can help to debug connection problems
# 9 is extremely verbose
verb 4

# Silence repeating messages.  At most 20
# sequential messages of the same message
# category will be output to the log.
mute 20

Pay close attention to the comment on tun-mtu. These settings significantly speed up the tunnel.

OpenVPN Client Configuration

/etc/openvpn/nas-client.conf:

daemon

client

remote nas

dev tun

port 1194
proto udp

# By increasing the MTU size of the tun adapter and by disabling
# OpenVPN's internal fragmentation routines the throughput can be
# increased quite dramatically. The reason behind this is that by
# feeding larger packets to the OpenSSL encryption and decryption
# routines the performance will go up. The second advantage of not
# internally fragmenting packets is that this is left to the operating
# system and to the kernel network device drivers.
tun-mtu 9000
fragment 0
mssfix 0

log-append  /var/log/openvpn/nas-client.log

# Downgrade privileges after initialization (non-Windows only)
user nobody
group nogroup

# Try to preserve some state across restarts.
persist-key
persist-tun

# SSL/TLS parms.
# See the server config file for more
# description.  It's best to use
# a separate .crt/.key file pair
# for each client.  A single ca
# file can be used for all clients.
ca /etc/ssl/certs/VACE-LAN-CA-Chain.crt
cert /etc/ssl/certs/boss-lan-client.crt
key /etc/ssl/private/boss.key

# Enable compression on the VPN link.
# Don't enable this unless it is also
# enabled in the server config file.
comp-lzo

# Set log file verbosity.
verb 4

# Silence repeating messages
mute 20

Mounting NFS shares

That’s pretty much it! Now you can mount the NFS shares from the client like so:
/etc/fstab:

nas_tunnel:/nfs/music    /mnt/nas/music     nfs     rw,auto    0       0
nas_tunnel:/nfs/video    /mnt/nas/video     nfs     rw,auto    0       0
nas_tunnel:/nfs/programs /mnt/nas/programs  nfs     rw,auto    0       0
nas_tunnel:/nfs/work     /mnt/nas/work      nfs     rw,auto    0       0
nas_tunnel:/nfs/pictures /mnt/nas/pictures  nfs     rw,auto    0       0

Where nas_tunnel = 192.168.129.1
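
One simple way to define the nas_tunnel alias is an /etc/hosts entry on each client (an illustration; any name will do):

192.168.129.1    nas_tunnel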

Tunnel Performance Tuning

https://community.openvpn.net/openvpn/wiki/Gigabit_Networks_Linux

Setting up MySQL server on Lenovo ix2-dl NAS


This article will explain how to install a MySQL server on the Lenovo ix2-dl NAS. It will also demonstrate how to customize the boot process.

This MySQL server will be set up as the back-end for my MediaWiki installation running on a different server.
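
On the MediaWiki side, pointing the wiki at the NAS is just a few lines in LocalSettings.php (a sketch; the host name and credentials are illustrative, matching the wikidb/wiki names used later in this post):

$wgDBtype     = "mysql";
$wgDBserver   = "nas";        # hostname or IP of the ix2-dl
$wgDBname     = "wikidb";
$wgDBuser     = "wiki";
$wgDBpassword = "********";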

Enable SSH Access

https://n1njahacks.wordpress.com/2014/02/25/ssh-access-to-lenovo-ix2-dl-nas/

Basic Config

Add the following to /etc/profile:

alias ls='ls --color'

# Set the locale properly
export LANG=en_US.utf8
export LANGUAGE=en_US:en

The locale settings were necessary to properly display Russian file names from a Terminal.

Custom Boot Scripts

One of the difficulties with this box is that it does not respect the startup scripts in the /etc/rc* directories, even though they are there. Instead, boot processes are managed by appmd, which uses an XML config file found here: /usr/local/cfg/sohoProcs.xml. Unfortunately, you can’t modify that file directly.

The /usr directory is actually part of the /boot/images/apps image mounted on /mnt/apps, so if we want to add anything to the startup config, we must modify the image itself.

Here are some scripts to help with that:

/opt/editconfig.sh:

#!/bin/sh
# edit the bootup config of the ix2
# inspired by http://www.chrispont.co.uk/2010/10/allow-startup-daemons-on-storcenter-ix2-200-nas/
# modified from http://techmonks.net/installing-transmission-and-dnsmasq-on-a-nas/
mknod -m0660 /dev/loop3 b 7 3
chown root.disk /dev/loop3
mkdir /tmp/apps
mount -o loop /boot/images/apps /tmp/apps
vi /tmp/apps/usr/local/cfg/sohoProcs.xml
sleep 1
umount /tmp/apps
rm /dev/loop3

/opt/init-opt.sh:

#!/bin/sh
# modified from http://techmonks.net/installing-transmission-and-dnsmasq-on-a-nas/

rm /opt/init-opt.log
echo "Last bootup:" >> /opt/init-opt.log
date >> /opt/init-opt.log
#Add your command below
/etc/init.d/rc.local start >> /opt/init-opt.log
while true; do
        sleep 1d
done

After creating these scripts, you must run /opt/editconfig.sh and make modifications to the opened file. At the end of the <Group Level="2"> section, add:

<Group Level="2">

    ..... Other Program defs .....

    <Program Name="CustomInitScript" Path="sh">
        <Args>/opt/init-opt.sh</Args>
        <SysOption Restart="-1"/>
    </Program>

</Group>

After these modifications, you can place all your startup scripts into /etc/rc.local, which will be executed after you reboot.

svcd Performance Tweak

svcd is some sort of indexing service that tends to take up a lot of CPU. We can renice it though.

Since we now have access to sohoProcs.xml (see previous section), we can set the Nice level in there.

Run /opt/editconfig.sh, find the entry for svcd and add the Nice attribute:

<Program Disable="0" Name="Svcd" Path="/usr/local/svcd/svcd">
        <SysOption MaxMem="96M" Nice="19" Restart="-1"/>
</Program>

Connecting to package (ipkg) repositories

The LifeLine Linux distro in this NAS is based on NSLU2-Linux, so we can make use of their resources.

Open /etc/ipkg.conf and add the following:

src cross http://ipkg.nslu2-linux.org/feeds/optware/cs08q1armel/cross/unstable
src native http://ipkg.nslu2-linux.org/feeds/optware/cs08q1armel/native/unstable

Then update the package list:

root@ix2-dl:/# ipkg update

MySQL Installation

root@ix2-dl:/# ipkg install mysql5

This will install MySQL and its dependencies into /opt (aka /mnt/system/opt), but the permissions will be wrong, so the server won’t start after installation. You need to follow these steps:

  • Add mysql user through the Web Console
  • Fix permissions
root@ix2-dl:/# chmod o+w /opt/var
root@ix2-dl:/# chown -R mysql /opt/mysql-test
root@ix2-dl:/# chown -R mysql /opt/var/mysql
  • In /etc/passwd change home directory for ‘mysql’ user to /opt/var/mysql
  • Setup environment
root@ix2-dl:/# su - mysql
mysql@ix2-dl:/# vi .bashrc

Add the following:

export PATH=$PATH:/opt/bin
  • Start MySQL. As root:
root@ix2-dl:/# /opt/share/mysql/mysql.server start
Starting MySQL..
  • Configure the server. Follow the wizard and change the root password.
root@ix2-dl:/# su - mysql
mysql@ix2-dl:/# /opt/bin/mysql_secure_installation
  • Log in:
root@ix2-dl:/# su - mysql
mysql@ix2-dl:/# mysql -u root -p
Enter password: *****
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 5.0.88 optware distribution 5.0.88-1

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema | 
| lib                | 
| log                | 
| mysql              | 
| test               | 
+--------------------+
5 rows in set (0.00 sec)
  • To start the server on reboot, open /etc/rc.local and add:
# Start MySQL server
/opt/share/mysql/mysql.server start

Note: This last step will only work if you followed the instructions in the “Custom Boot Scripts” section.

You are done!

Importing the Wiki Database

mysql@ix2-dl:/# mysql -u root -p
mysql> create database wikidb;
mysql> CREATE USER 'wiki'@'%' IDENTIFIED BY '********';
mysql> GRANT ALL PRIVILEGES ON wikidb.* TO 'wiki'@'%';
mysql@ix2-dl:/# mysql -u wiki -p wikidb < wikidb-db-backup.sql

Daily Backups of the Wiki Database

The wiki database is backed up and versioned with RCS daily. Here is the setup:

  • Install RCS:
root@ix2-dl:/# ipkg install rcs
  • Backup script (/opt/var/mysql/mysqlbackup.cron.sh):
#!/bin/bash

# DATABASE DEFINITION SECTION
# Database specified with a "dbname user password" triple
databases=("wikidb wiki ******")
# END DATABASE DEFINITION SECTION

WD="/nfs/backups/wiki"
MYSQLDUMP="/opt/bin/mysqldump"
CI="/opt/bin/ci"
AWK="/usr/bin/awk"

numdb=${#databases[@]}

cd $WD

for database in "${databases[@]}"; do
 db=$(echo $database   | $AWK '{print $1}')
 user=$(echo $database | $AWK '{print $2}')
 pass=$(echo $database | $AWK '{print $3}')

 filename=${db}-db-backup.sql

 echo "Backing up database $db..."
 $MYSQLDUMP -u $user --password=$pass $db > $filename 2> MY_SQL_DUMP_ERROR_$db
 if [[ $? -ne 0 ]] ; then
   # The backup has failed. Send a notification e-mail
   #
   echo "WIKI BACKUP FAILURE!"
 else
   # Success. Delete the error file if any and check in the new backup into RCS
   #
   echo "Creating an RCS version for $db..."
   rm MY_SQL_DUMP_ERROR_$db > /dev/null 2>&1
   export TMPDIR=$WD
   echo . | $CI -l -d"`date`" $filename
 fi

done

Cron Job

/etc/cron.daily/mysql_backup:

#!/bin/sh
/opt/var/mysql/mysqlbackup.cron.sh

Credits

http://vincesoft.blogspot.ca/2012/01/how-to-run-program-at-boot-on-iomega.html
http://iomega.nas-central.org/wiki/Hacking_(Home_Media_CE)
http://www.nslu2-linux.org/
http://techmonks.net/installing-transmission-and-dnsmasq-on-a-nas/

SSH access to Lenovo ix2-dl NAS

I recently purchased the Lenovo ix2-dl NAS, b/c it was time to upgrade my storage capacity and I did not want to deal with my current setup anymore. My datahost box runs LVM on top of software RAID 1, on Slackware 10.2, with 5 drives in the machine :).

I was attracted to the Lenovo ix2-dl, b/c it is small, quiet, provides RAID 1 and costs $90 at Tigerdirect, which is significantly cheaper than any other NAS I came across.

This NAS box has a 1.5GHz ARM Feroceon 88FR131 processor and 256MB of RAM, and runs LifeLine Linux, a distro developed by Iomega’s parent company EMC specifically to power their NAS boxes.

The only concern I immediately had with the ix2-dl was the lack of SSH access to the box. A Linux box w/o SSH access is extremely irritating, so I decided to research this further.

Enabling SSH Access

Turns out that there is a hidden Diagnostics page available in the web interface at /manage/diagnostics.html. This page allows the user to set an SSH port and root password. The catch is that the selected password is prefixed by the word ‘soho‘. So if you select ‘GOD’ as your password on the page, the actual password is ‘sohoGOD‘. You can change the password to whatever you want with the ‘passwd’ command.
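
For example, if you set the SSH port to the default 22 and picked ‘GOD’ on the Diagnostics page, the first login would look like this (the hostname is illustrative):

$ ssh root@ix2-dl
root@ix2-dl's password:     <- enter sohoGOD here
root@ix2-dl:/# passwd       <- now set a real root password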

Once you log in, you can work with the drives, software raid, Apache, NFS, etc. just like you are used to on any Linux box.

Credits

Most of the information was obtained from here: https://blog.liftsecurity.io/jon-lamendola


Step By Step Guide to Rooting your Galaxy S4 (SGH-I337M) from Ubuntu


This guide was written by experimenting with the Canadian (Telus) version of Galaxy S4. If you have a different phone, this guide can still be useful for understanding the principles behind the process – you’ll just need to make sure that you get the right bootloader image for your phone.

I am assuming that you are using a Linux computer in this guide.

Before We Begin

The strangest and most stressful thing that happened to me during this process was when the key combination for booting the phone into Recovery mode stopped working. Normally, we boot into Recovery by turning off the phone and holding down the Vol Up & Home & Power buttons. This worked fine for a while, and then suddenly stopped working. If this happens to you, check out the Troubleshooting section below for a solution.

Install ClockworkMod (CWM) Recovery Bootloader

  • Install a firmware flash utility that speaks the Odin protocol (the protocol used by Samsung’s proprietary firmware flash software)
	sudo add-apt-repository ppa:modycz/heimdall
	sudo apt-get update
	sudo apt-get install heimdall
  • Power off the Galaxy S4 and connect the USB adapter to the computer but not to the Galaxy S4.
  • Now boot the Galaxy S4 into download mode by holding down Vol Down & Home & Power. Accept the disclaimer. After this, insert the USB cable into the device. Your phone is now ready to flash a new Recovery bootloader via the Odin protocol.
  • On the computer, open a terminal and run the following command from the directory containing the recovery image:
    sudo heimdall flash --RECOVERY recovery-clockwork-6.0.3.2-jfltecan.img --no-reboot

    A blue transfer bar will appear on the device showing the recovery image being transferred.

  • Turn off the phone
  • Boot the phone again by holding Vol Up & Home & Power. If you find that your phone just keeps rebooting instead of going into CWM Recovery, please read the Troubleshooting section for a solution.
  • CWM Recovery will present you with a text menu that you can navigate with the Volume keys, and select with Power key. Select the first option: “Reboot System Now”
  • The Galaxy S4 now has ClockworkMod Recovery installed!

Backup the Stock Image

This is a good time to make a backup of your entire phone, just in case you need to get back to the stock configuration later. DO NOT SKIP THIS STEP!

  • Reboot back into CWM Recovery by holding Vol Up & Home & Power during startup.
  • Go to “backup and restore” -> “backup to /sdcard”. This will take a while, so just wait. At the end of this process, your backup will be stored in “/mnt/shell/emulated/clockworkmod/backup/” on the phone’s file system. You can’t access that from your phone directly yet, but you can use “adb pull” (https://developer.android.com/tools/help/adb.html) to transfer it to your PC, as shown below. You’ll also be able to do it easily after we finish rooting the phone, so there is no need to do it now.
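
If you do want the backup on your PC right away, adb can pull the whole directory (the dated folder name inside backup/ will differ on your phone):

$ adb pull /mnt/shell/emulated/clockworkmod/backup/ ./s4-stock-backup/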

Rooting The Phone

NOTE: I had a lot of trouble with this ROM as of November 29th, 2013. The author told me that he’ll fix it, so it is likely that you will not experience any problems now. However, if you find that you follow the instructions, yet your phone is not getting rooted, see the Troubleshooting section for a solution.

  • Copy “superuser.zip” into the root of your phone’s internal file system (by that I mean what the phone shows you as a root – in reality the root directory you see from the phone is actually mounted here: /mnt/shell/emulated/0). There are many ways to do this, such as mounting the phone over USB, over the network, using adb, etc. There are many tutorials out there that show you how to copy files from your computer to your phone.
  • Shut down again. Boot into CWM Recovery by holding Vol Up & Home & Power.
  • Navigate to “install zip from sdcard” -> “choose zip from sdcard” -> “0/”. You will find your ‘superuser.zip‘ here. Select it and confirm.
  • You’ll get some text at the bottom and a Success message. Click ‘Back’ and select ‘Reboot’
  • Your phone is now rooted! See next section for making sure that everything worked correctly.

Confirming Correct Operation

  • You should have a new app installed called Superuser. This is where you can configure how other apps get access to root, as well as see the log of apps that requested root.
  • Use the app to make sure that root access is granted. If it isn’t, see the Troubleshooting section.

Install ROM Manager

ROM Manager is an extremely useful app that makes a lot of the operations we just did possible from a single click. It will also manage your backups, keep your CWM Recovery install up to date, and keep track of new ROMs, so you should install it:
https://play.google.com/store/apps/details?id=com.koushikdutta.rommanager

Remember that backup we took in the beginning from CWM Recovery? Go to “Manage and Restore Backups”, and you’ll see your backup in the list. Select “Download Backups”, and you’ll be offered a download link to transfer your backup to your PC for safe keeping.

Troubleshooting

Recovery Boot Loop

Many S4 owners have a problem with their phones going into an endless loop of restarts when trying to boot into Recovery Mode.

Do the following: with the phone off, press the VOLUME UP and POWER buttons at the same time, and keep holding them. Do not let go when the little message shows up in the upper left of the screen; keep holding until the actual recovery options appear. If you see the phone going into another restart without the options appearing, just keep holding the VOLUME UP button until the recovery options finally show up on your screen.

superuser ROM failing to root the phone

I had this problem after downloading http://download.clockworkmod.com/superuser/superuser.zip on November 29th, 2013. Although it is very likely fixed now, the fact that you are reading this section suggests otherwise, so let’s give this a try.

First, let’s take a look at exactly what changes superuser.zip ROM makes to the file system in order to root the phone:

  1. It replaces the ‘su‘ binary with one that has some added functionality and has the setuid bit [http://en.wikipedia.org/wiki/Setuid] set on it. This is what allows apps to elevate privileges.
  2. It installs an Android app that acts as a front end to ‘su‘, and keeps track of which apps are allowed to use it, and which ones are not.

It appears that on Galaxy S4 (Canadian) with Android 4.2+ installed, there have been some kernel changes that make the seteuid system call fail like this:

 seteuid (root) failed with 13: Permission denied

You can see this message if you use adb logcat while trying to elevate privileges.

As a result of this error, the phone does not get rooted. This problem is easy to fix, but it requires some code changes. There is some detailed info about this problem and the fix for it here: https://github.com/koush/Superuser/issues/196

The problem for me was that the official version of superuser.zip had not yet been updated with the fix, for some reason. In any case, I have taken the patch from GitHub and updated the ROM. You can get the fixed version here: http://vace.homelinux.com/unprotected/superuser/fixed-superuser.zip

Follow exactly the same steps with this file as described above and everything should work out.